
Getting back into my bbXP project this morning, I ran into issues with VS 15 Preview 3 that kept the bbXP ASP.NET Core project from loading. I kept getting:

The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v15.0\DotNet.Web\Microsoft.DotNet.Web.targets" was not found. Confirm that the path in the declaration is correct.

Via the following popup:

Knowing VS 2015 Update 3 had just come out and that a new tooling update for .NET Core was available here, I downloaded that update hoping the installer would place the files VS 15 Preview required into the v14.0 folder of that same path.

Lo and behold, it did:

So to resolve this issue, simply copy the C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\DotNet.Web folder to C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v15.0. After reloading the project I had zero issues working with VS 15 Preview on my ASP.NET Core project. I hope that helps out someone in the same boat.
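If you prefer the command line, the copy can be done in one step from an elevated command prompt (the paths are the ones from the error message above; xcopy's /I flag creates the destination folder if it does not exist):

```shell
xcopy /E /I "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\DotNet.Web" "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v15.0\DotNet.Web"
```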
Over the past couple of weeks I have been taking a deep dive back into game development, specifically VR game development. With the Rift, Vive, HoloLens and HDK (to name a few) either on the market or arriving within the next few months, I wanted to be at the forefront like I was with IoT. As with any endeavor, I took a look at the two main approaches presented to developers: an existing engine such as Unity or Unreal, or a homebrew engine.

Existing Engine Approach

The first approach is clearly the most bang for the buck as far as investment; both Unity and Unreal Engine offer a ton of functionality out of the box, have large communities and lots of documentation. I personally tried both Unity and Unreal over a weekend to get acquainted, with some success (more so on the Unity side). The main problem with both of these approaches is that I felt like I was on rails, constrained by someone else's code, and having used frameworks in my day job from Xamarin and Telerik, updates always proved problematic.


Homebrew Engine Approach

For those that have not followed this blog for long, I have ventured off and on into OpenGL, Glide and XNA since 1998. Knowing that OpenGL and DirectX shifted off the fixed-function pipeline in favor of a programmable pipeline, I knew I would be in for a learning curve no matter which route I went down. Looking into Vulkan one evening, I uncovered a huge lack of documentation on the API and how you are supposed to use it, so I started down the DirectX path for the first time since DirectX 8. For those unaware, SharpDX provides a really lightweight wrapper for DirectX 12 in C#. Deep diving into the API, everything clicked, as most concepts were familiar from my day job (command lists are really similar to a Service Bus/messaging architecture) or from past graphics work (a SwapChain is double buffering).

As one could infer, I went with a homebrew DirectX 12 approach. To follow the Unity approach of a launcher Win32 app that then directly launches the game, I utilized my WPF/XAML skills to throw together a quick frontend to manage graphics adapter, resolution, FSAA and full-screen options. This ended up being a great introduction to the DXGI namespace in SharpDX, querying all of the supported resolutions and graphics adapters. That in turn led to abstracting out the input handling, sound handling, graphics renderer and level loading. As of right now, basic sound playback and keyboard input are working, all utilizing DirectX 12.
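For anyone curious what that DXGI querying looks like, here is a minimal sketch against SharpDX's DXGI wrapper (requires the SharpDX.DXGI NuGet package; this is illustrative, not the exact code from HVR.DX12):

```csharp
// Sketch: enumerating graphics adapters and their supported display modes
// via SharpDX.DXGI - the basis for a launcher's adapter/resolution dropdowns.
using System;
using SharpDX.DXGI;

public static class AdapterQuery {
    public static void ListAdaptersAndModes() {
        using (var factory = new Factory1()) {
            foreach (var adapter in factory.Adapters1) {
                Console.WriteLine(adapter.Description1.Description);
                foreach (var output in adapter.Outputs) {
                    // Every resolution the output supports for a common back buffer format
                    foreach (var mode in output.GetDisplayModeList(
                                 Format.R8G8B8A8_UNorm, DisplayModeEnumerationFlags.Interlaced))
                        Console.WriteLine($"  {mode.Width}x{mode.Height}");
                    output.Dispose();
                }
                adapter.Dispose();
            }
        }
    }
}
```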

The big element remaining to get a basic "game" up and running is writing an export from Blender to a series of triangle strips to then be read by my renderer code, and lastly getting the virtual camera working with keyboard and mouse input. Once those elements are in there, a lot remains, such as collision detection, physics, texture mapping, 2D UI and, most importantly, the VR component. I recently purchased 2 AMD Radeon RX-480 cards (another blog article is coming on those in particular) and will be getting the Razer HDK v2 as soon as it is available.

For those curious, all of this work is being open sourced like everything I do and is available on my GitHub in the HVR.DX12 project.
As a frequent or infrequent visitor might have noticed, the site has undergone a slight refresh. I kept the Bootstrap menu, but stripped out the right-hand panel (though it might come back in a different form). The biggest undertaking was the complete rewrite of the codebase from ASP.NET MVC 5 and Entity Framework 6.1 to ASP.NET Core and Entity Framework Core. The codebase is now considerably smaller and, as you can tell, faster even without the caching that was turned on before.

As my time has shifted from blogging to my GitHub projects, I wanted to shift the focus of my blog to those projects as well. Over the next couple of weeks I will be adding feeds and milestones for the projects I am working on in the header area. This way a visitor might notice I had not blogged in a while, but can see active progress on GitHub.

That being said, I have been dividing my time between a couple of different projects: one being bbXP, the codebase that powers this blog; the other being jcFUS, a collaboration tool for businesses and consumers. In the coming weeks expect a lot of coverage of both.

One might be asking: where is the updated code for bbXP? I will be polishing it up today and checking it into the GitHub repository.

Some other features coming back at some point:
  • Archives
  • Content Search
  • White Papers
  • My Computers
So stay tuned for more updates and some other posts on the hardware side of my passion.
For some time now, going back several years, I have been attempting to figure out a way to make data smaller without degradation. On the flight back from a Labor Day weekend trip to Silicon Valley I started writing out some ideas for compression. What I came to realize is that to be efficient, the compression should adjust to the data inside each popular file type. Take XML, for example: there is a lot of duplication of data by design. For each element in an XML document you essentially have the same text again to close the element. Dictionary compression would reduce the file size immensely; conversely, a JSON file of the same data wouldn't benefit as much.
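The duplication XML carries is exactly what dictionary-style compressors exploit. As a rough illustration using nothing but stock GZip from the BCL (not the adaptive, per-file-type scheme described above), a highly repetitive XML fragment collapses to a fraction of its size:

```csharp
// Measures the GZip-compressed size of a string; repetitive markup like XML
// compresses dramatically because the closing tags repeat the opening tags.
using System.IO;
using System.IO.Compression;
using System.Text;

public static class CompressionDemo {
    public static int GzipSize(string text) {
        var bytes = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream()) {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(bytes, 0, bytes.Length);
            // gzip is disposed (and flushed) before we read the length
            return (int)output.Length;
        }
    }
}
```

A string of 100 identical `<Person>` elements compresses to well under a tenth of its original size, while denser formats of the same data have less redundancy to exploit.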

An hour or so later I started picking apart JSON, thinking about making a transport that would be even more efficient. My solution: JCON.

While not for everyone, the solution I came up with does away with the property names and braces entirely - reducing the size even more than JSON.

For instance take the following class:
public class Person {
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Assume you initialize the class like so:

var person = new Person { FirstName = "John", LastName = "Doe" };
With that class serialized, you get the following sizes in bytes:

XML: 218
JSON: 37
JCON: 12

JCON ends up being almost 20 times smaller than XML and 3 times smaller than JSON. Wanting to make sure JCON continued to be efficient when dealing with collections of objects, I assumed 100 Person objects (in bytes):

XML: 8973
JSON: 3801
JCON: 1000

Not as huge of an impact, but still almost 9 times smaller than XML and almost 4 times smaller than JSON.
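The post doesn't spell out JCON's exact wire format, so here is only a hypothetical sketch of the core idea - keep the property values in declaration order and drop the names and braces entirely (the pipe delimiter is my own placeholder, not necessarily what JCON uses):

```csharp
// Hypothetical values-only serializer: both sides must share the class
// definition, since the property names are implied by position, not sent.
using System;
using System.Linq;
using System.Reflection;

public static class ValuesOnlySerializer {
    public static string Serialize(object obj) =>
        string.Join("|", obj.GetType()
                            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
                            .Select(p => p.GetValue(obj)));
}

public class Person {
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```

`Serialize(new Person { FirstName = "John", LastName = "Doe" })` yields `John|Doe` - 8 bytes, versus 37 for the equivalent JSON - which is the kind of saving the numbers above come from.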

Knowing data duplication can exist in the data itself, outside of the transport definition, I will be working on adding a compression flag so no other libraries are needed on either the client or server side. For example, if there are 2 or more people with the same last name or first name, you would save a couple of bytes.

As with any of my projects, this project is completely open source on my GitHub profile.

Going back to 11/26/1997 (scary that it has been almost 18 years), I've been fascinated by artificial intelligence. In November 1997 I was still pretty much doing QBasic with a little Visual Basic, so it is not surprising that the sole surviving code snippet I have from my Jabba Chat application (yes, I was a huge Star Wars fan even back then) was in QBasic:

PRINT "Your Jabba friend, Karen"
INPUT "What is your first name"; name$
PRINT "Well HI There "; name$
PRINT "It sure is neat to have You Drop by"
PRINT "Press space bar when you're ready to start"

As simple as it may be, this is at a basic level following a pre-programmed path, taking input and "learning" a person's name. Nowadays, with programming languages better structured to handle AI, more processing power and overall a better understanding of how we humans think, there has never been a better time to dive into this area.

As stated in a previous post back in July 2014, I've become heavily invested in making true artificial intelligence work in C#. While working on a fully automated scheduling system at work, I ran into a lot of big questions I had never encountered in my 15 years of professional development, with one in particular:

How do you not only replicate a human's job, but also make it better by taking advantage of one of the biggest advantages a computer has over a human being: the ability to process huge data sets extremely fast (and consistently produce results)?

The answer wasn't the solution; instead I realized I was asking the wrong question, and only after really deep diving into the complexities of the rather small "artificial intelligence" engine did I come to realize this. The question should have been: what drives a human to make decisions? The simple programmatic answer is to go through and apply conditionals for every scenario. Depending on the end goal of the project that may be a good choice, but if it is a more complex solution, or hits one of the most common events in a computer program - the unexpected - that approach can't be applied.

This question drove me down a completely different path, thinking about how a human being makes decisions when he or she has yet to encounter a scenario - an unhandled exception, if you will. Thinking about decisions I have had to make throughout my life, big or small, I have relied on past experience. An application just going to production has no experience; each nanosecond is a new experience, for all intents and purposes a human infant. Remembering back as far as I can to when I was 4: sometimes you would fail or make a mistake, as we all have, and the key, as our parents instilled in us, was to learn from the mistake or failure so we wouldn't make it again.

Applications for the most part haven't embraced this. Most of the time a try/catch is employed, with an even less likely alert to a human notifying them that their program ran into a new experience (or possibly a repeated one, if the error is caught numerous times before being patched, if ever). The human learns of the "infant's" mistake and hopefully corrects the issue. The problem here is that the program didn't learn; it simply was told nothing more than to check for a null object or the appropriate way to handle a specific scenario, i.e. a very rigid form of advancement. This has been the accepted practice for as long as I have programmed: a bug arises, a fix is pushed to staging, tested and pushed to production (assuming it wasn't a hotfix). I don't think this is the right approach any longer. Gone are the days of extremely slow x86 CPUs and limited memory. In today's world we have access to extremely fast GPUs and vast amounts of memory that largely go unused, coupled with languages that facilitate anything we as programmers can dream up.

So where does the solution really reside?

I believe the key is to architect applications to become more organic, in that they should learn from paths taken previously. I have been bouncing around this idea for the last several years, looking at self-updating applications where metrics captured during use could be used to automatically update the application's own code. The problem with this is that you're then relying on the original programming both to affect the production code and to manage the code it would be updating, let alone also ensuring that automatic changes are tracked and reported appropriately. I would venture most larger applications would also need projections to be performed prior to any change, along with the scope of what was changed.

Something I added into the same platform at work was the tracking of every request: which user, what he or she was requesting, the timestamp and the duration from the initial request to the return of information or processing of the request. To me this not only provided the audit trail that most companies desire, and thereby the ability to add levels of security to specific pieces of information regardless of the platform, but also the ability to detect patterns like "John Doe requests this piece of information and then this other piece of information a few seconds later, every time". At that point the system would look for these patterns and alert the appropriate party. In that example, does the user interface for what John Doe needs require two different pages to access, or was he simply investigating something? Without this level of granularity you are relying on the user to report these "issues", which rarely happens, as most users get caught up in simply doing their tasks as quickly as possible.
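A minimal sketch of the kind of audit record and pattern query described above might look like the following (the names are illustrative, not those of the production system):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class RequestAudit {
    public string UserName { get; set; }
    public string Resource { get; set; }
    public DateTime Timestamp { get; set; }
    public TimeSpan Duration { get; set; }
}

public static class PatternFinder {
    // Finds (resource A, resource B) pairs a user hits back-to-back within a
    // short window - a hint that one screen should surface both pieces of data.
    public static IEnumerable<(string First, string Second)> BackToBackRequests(
        IEnumerable<RequestAudit> audits, string userName, TimeSpan window) {
        var ordered = audits.Where(a => a.UserName == userName)
                            .OrderBy(a => a.Timestamp)
                            .ToList();
        for (var i = 1; i < ordered.Count; i++)
            if (ordered[i].Timestamp - ordered[i - 1].Timestamp <= window
                && ordered[i].Resource != ordered[i - 1].Resource)
                yield return (ordered[i - 1].Resource, ordered[i].Resource);
    }
}
```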

Going forward I hope to add automatic reporting of trends and take a proactive approach to performance-ridden areas of the system (if metrics for the last 6 months return a particular request in 0.13 seconds on average, and over the last week it jumps to 3.14 seconds on average, the dev team should be alerted so they can investigate the root cause). However, this is far from my longer-term goal of designing a system that truly learns. More on this in the coming months as my next generation of ideas comes to fruition.
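The trend check itself can be as simple as comparing a long-window average against a recent one; a sketch, with an arbitrary placeholder threshold:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TrendAlert {
    // Flags a request type when the recent average duration is well above the
    // baseline, e.g. a 6-month baseline of 0.13s jumping to a 1-week 3.14s.
    public static bool ShouldAlert(IEnumerable<double> baselineSeconds,
                                   IEnumerable<double> recentSeconds,
                                   double slowdownFactor = 3.0) {
        var baseline = baselineSeconds.Average();
        var recent = recentSeconds.Average();
        return recent > baseline * slowdownFactor;
    }
}
```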


Yesterday I posted about my GitHub repo work over the last couple of weeks. One project I intentionally did not include was jcLSL. Followers of my blog know I had been toying with writing my own scripting language for quite some time, but kept hitting roadblocks in parsing and handling anything somewhat complex. This led me to think about what I would really want in a scripting language and what value it would bring to the already feature-rich C# ecosystem that I live and work in every day.

The Problem

At the end of the day I found that I really only want a scripting language for doing mail merges on strings. Expanding on that, how many times have you needed to take a base template and populate it with a class object, or even apply some business logic? I'm sure we've all done something like:

public string ParseString(string sourceString, string stringToReplace, string stringValue) {
    return sourceString.Replace(stringToReplace, stringValue);
}
While at a simplistic level this is acceptable, and a better approach than not wrapping the Replace call, it isn't ideal, especially if you are working with POCOs (Plain Old CLR Objects); then your code becomes a lot dirtier. Assuming you wrapped the calls like in the above function, let's say you have a basic User class definition like so:

public partial class User {
    public int ID { get; set; }
    public string Name { get; set; }
}
And then in your parsing code, assuming it also lives in the User class:

public string ToParsedString(string sourceString) {
    sourceString = sourceString.Parse("ID", this.ID);
    sourceString = sourceString.Parse("Name", this.Name);
    return sourceString;
}
As you could guess, not only is that code an eyesore, it doesn't scale when more properties are added to your class, and you'd end up adding something similar to each POCO in your code base - not ideal. This brought me to my first objective: solving this problem in an extremely clean fashion.

Earlier this week I got it to where any POCO property decorated with a jcLSLMemberAttribute will be automatically parsed, handling the situation of code changing over time (some properties could go away, be renamed or added). With my current implementation, all you need is to define a jcLSLGenericParser and then call the Run method, passing in the string to mail merge and the class object you wish to merge from, like so:

var TEST_STRING = "Hello {Name} This is a test of awesomeness with User ID #{ID}";
var user = new User { ID = 1, Name = "Testing" };
var gParser = new jcLSLGenericParser();
var parsedString = gParser.Run(TEST_STRING, user);
After running that block of code, parsedString will contain: Hello Testing This is a test of awesomeness with User ID #1. There is also an optional event on the jcLSLGenericParser class if more custom parsing needs to be achieved.
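jcLSL's internals aren't shown in this post, but the attribute-driven merge can be approximated with a few lines of reflection. The following is a simplified stand-in, not the actual jcLSL implementation (MergeMemberAttribute plays the role of jcLSLMemberAttribute):

```csharp
using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class MergeMemberAttribute : Attribute { }  // stand-in for jcLSLMemberAttribute

public class User {
    [MergeMember] public int ID { get; set; }
    [MergeMember] public string Name { get; set; }
}

public static class GenericParser {
    // Replaces {PropertyName} tokens with values of decorated properties only,
    // so renamed or removed properties simply stop merging instead of breaking callers.
    public static string Run(string source, object obj) {
        foreach (var prop in obj.GetType().GetProperties()
                 .Where(p => p.IsDefined(typeof(MergeMemberAttribute), false)))
            source = source.Replace("{" + prop.Name + "}",
                                    prop.GetValue(obj)?.ToString() ?? string.Empty);
        return source;
    }
}
```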

Going forward

One of the first things to do is to add some more options for different scenarios. Maybe you've got a large code base and going through and adding the jcLSLMemberAttribute decoration would be a task in itself. One approach for this scenario is to add an optional parameter to the jcLSLGenericParser constructor to simply iterate through all the properties. This takes a performance hit, as one would expect, but it keeps coupling to this library to a minimum. If you have thoughts on this, please post a comment.

On the larger scale, my next major goal is to add support for Streams and other output options. Let's say you had a static HTML template that needed to be populated with a news post, for instance. The HTML template could be read in, mail merged against the database and then output to a string, streamed to a file or returned as binary. Trying to handle every scenario isn't realistic or feasible, but handling the 90% scenario or better is my goal.

Another item that I hate implementing with mail merge fields is error handling. One approach is to simply return the exception or error in the mail merge itself. This isn't good when you have external customers and they see something like Object Reference Exception, or worse yet a full stack trace - the credibility of your product will go down quickly. However, I think a standardized merge field to store the error for display on an exception page or email would make sense, handling both objectives of error handling: being aware of the error, but also handling it gracefully.

Further down the road I hope to start adding in support for "true" scripting so you could have conditionals within the merges or other logic you would want to be able to change on the fly without having to deploy an updated Mobile App, ASP.NET MVC Web App, WebAPI Service or whatever platform you're using with this library.

Where to get it

As mentioned earlier, you can grab the code and samples on GitHub, or grab the binary from NuGet or from the NuGet Console with: PM> Install-Package jcLSL. As I check code into GitHub and get to stable releases, I'll update the NuGet package.

While largely quiet on here since my last post two weeks ago, I have been hard at work on several smaller projects, all of which are on GitHub. As mentioned previously, everything I work on in my free time will be open sourced under the MIT License.


The first item I should mention is some new functionality in my jcAnalytics library. Earlier this week I had some ideas for reducing collections of arbitrary data down to distinct elements. For example, if you had 3 objects of data with 2 of them being identical, my reduction extension methods would return 2 instead of 3. This is one of the biggest problems I find when analyzing data for aggregation or simply reporting, especially when the original amount of data is several hundred thousand elements or more. I attempted the more straightforward single-threaded model first; as expected, its performance as the number of elements increased was dramatically slower than a parallel approach. Wondering if there were any theories on quickly taking a sampling of data that scales as the number of items increases, I was surprised there was not more research on the subject. A Log(n) sample size seemed to be the "goto" method, but I could not find any evidence to support the claim. This is where I think recording patterns of data and then persisting those patterns could actually achieve the goal. Since every problem is unique and every dataset changes over time, the extension methods could in fact learn something along the lines of "I have a collection of 500,000 addresses; the last 10 times I ran I only found 25,000 unique addresses, at an average rate of one every 4 records." On subsequent runs it could adapt per request - maybe assigning a Guid or another unique identifier to each run, with the result patterns stored on disk, in a SQL database or in Azure Cache. For those curious, I did update the NuGet package with these new extension methods. You can download the compiled NuGet package here on NuGet or via the NuGet Console with PM> Install-Package jcANALYTICS.Lib.
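The reduction described above - collapsing a collection to its distinct elements in parallel - can be expressed with PLINQ in a couple of lines. This is a sketch of the idea, not jcAnalytics' actual implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Reduction {
    // Reduces a collection to its distinct elements using all available cores.
    // T needs value-based equality (strings work out of the box; for custom
    // classes, override Equals/GetHashCode or supply an IEqualityComparer<T>).
    public static List<T> ReduceToDistinct<T>(IEnumerable<T> source) =>
        source.AsParallel().Distinct().ToList();
}
```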


A huge topic in my world at work has been offline/online hybrid mobile applications. The idea that one could "sync" and then pull down data for 100% offline use has been on my mind since it was requested several months ago by one of our clients. Knowing the first approach might not be the best, and that I wanted to create a generic portable class library that could be plugged into any mobile application on any platform (iOS, Android, Windows), I figured I would begin my research fully exposed on GitHub and then publish stable releases on NuGet. This project is of a larger nature in that it could quickly blossom into a framework instead of simply a library. As of right now on GitHub I have the GET, POST and DELETE HTTP verbs working to pull/push data, but not yet storing the data for offline purposes. I'm still working out the logistics of how I want to achieve everything, but the ultimate goal is to have any request queued when offline and then, when a network connection is made, automatically sync the data. Handling multiple versions of data is a big question. Hypothetically, if you edited a piece of information and then edited it again, should it send the request twice or once? If you were online it would have sent it twice, and in some cases you would want the full audit trail (as I do in the large enterprise platform at work). Another question that I have not come up with a great answer for is the source-of-truth question. If you make an edit, then come online, I could see a potential race condition between the data syncing back and a request being made on the same data. Handling the push and pull properly will take some extensive logic and will more than likely be a global option, or configurable down to the request-type level. I am hoping to have an early alpha of this working perfectly in the coming weeks.
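One possible shape for that offline queue, purely illustrative (the GitHub project may well settle on something different):

```csharp
using System;
using System.Collections.Generic;

public enum HttpVerb { Get, Post, Delete }

public class PendingRequest {
    public HttpVerb Verb { get; set; }
    public string Url { get; set; }
    public string Body { get; set; }
    public DateTime QueuedAt { get; set; } = DateTime.UtcNow;
}

public class OfflineRequestQueue {
    private readonly Queue<PendingRequest> _pending = new Queue<PendingRequest>();
    public bool IsOnline { get; set; }
    public int PendingCount => _pending.Count;

    // While offline, requests accumulate in order.
    public void Enqueue(PendingRequest request) => _pending.Enqueue(request);

    // When connectivity returns, Flush replays queued requests in order.
    // Collapsing repeated edits to the same resource (keep only the latest,
    // or keep all for a full audit trail) would hook in here.
    public IEnumerable<PendingRequest> Flush() {
        while (IsOnline && _pending.Count > 0)
            yield return _pending.Dequeue();
    }
}
```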


This project came at the request of my wife, who wanted a way to view Trendnet cameras from her Nokia Lumia 1020 Windows Phone. Trendnet only offered apps for iOS and Android and there were no free apps available in the Windows Phone marketplace, so I spent an evening and wrote one last August (2014). Again going with the Windows 10 Universal approach, I began to rewrite the app to take advantage of all the new XAML features and add in functionality I had long since wanted. Going with my open source initiative, all of the code is checked into GitHub. I am hoping to have everything ported from the old Windows Phone 8.1 app, along with all of the new functionality, this summer.


Another older project that I see a need to fulfill going forward. Since Google Reader faded away I switched over to Feedly, but I really don't like their interface nor how slow it is. Originally this project was going to be an ASP.NET MVC/WebAPI project with a Windows Phone/Windows Store app. As with my other projects, I knew I wanted to simply port over the work I had done to a Windows 10 Universal App, but as I got into working on it, there was no reason to tie the apps back to a WebAPI service if I did away with the MVC view. Knowing I was going to be giving this application away freely and didn't want to have ads, I also didn't want to incur massive Azure fees if it were to take off. So for the time being this project will exist as a Windows 10 Universal App with full support for multiple devices (i.e. if you read an article on one device, it will be marked as read on the others). You can check out the code on GitHub. I'm hoping for a release in the coming months.


This was a project I had been slowly designing in my head since the summer of 2012 - a turn-based Star Trek game without microtransactions and with the ability to simply keep playing as long as you want. I started coding this in August 2014 and into September 2014, but put it on hold to work on Windows IoT among other topics of interest. Now with Windows 10's release on the immediate horizon, I figured I should wrap up the game and in turn open source the project. As of now I'm in the process of porting the XAML over to Windows 10, as it originally targeted Windows Phone 8.1. Once that process is complete, I will return to working on the logic and with any luck release it sometime this summer; in the meantime you can check out the code on GitHub.


I originally wrote this "game" for my boss's child since there was not a dot math game in the Windows Phone marketplace. Seeing as how it got zero downloads, I open sourced it. I did start porting it over to a Windows 10 Universal Application, but have not finished yet.


Now that Visual Studio 2015 RC is out, I will more than likely be returning to my open source bbXP project. The only reason I put it on hold was the issues I was running into with NuGet packages in CTP6 of Visual Studio 2015. Coming up in a few weeks is the 20th anniversary of when I wrote my first line of code - expect a retrospective post then.

Silicon Graphics Onyx2


Those following my blog for some time know my passion for Silicon Graphics machines. After having picked up a Silicon Graphics Onyx2 last fall, I finally had some time to get the "big iron" up and running. The Onyx2 is interesting in that it is one of the last "board"-based deskside graphics machines. Following the Onyx2, the only comparable workstation would be the Tezro, offering up to 4 R16000 1GHz CPUs and a 128MB V12 Odyssey graphics card. My specific Onyx2 was nearly maxed out CPU-wise with 4 400MHz R12000 MIPS CPUs and completely maxed out in RAM at 8GB. Graphics-wise it was pretty low end: it came with a DG5-2, a GE14-2 and 1 RM8-16 Raster Manager, effectively making it a Reality graphics system. Fortunately eBay in time had a DG5-8, a GE16-4 and 2 RM10-256 boards for extremely cheap, so after swapping the boards I now have an InfiniteReality3 system. The InfiniteReality4 (the last generation) only differs by offering the RM11-1024 (1GB of texture memory) versus the 256MB per board in my RM10s, in addition to a pixel fill rate of nearly double.

Silicon Graphics Onyx2 - GE14-2
Geometry Engine (GE) 14-2

Silicon Graphics Onyx2 - GE16-4
Geometry Engine (GE) 16-4

Silicon Graphics Onyx2 - RM8-16
Raster Manager (RM) 8-16

Silicon Graphics Onyx2 - RM10-256
Raster Manager (RM) 10-256

Like most of the machines I have gotten second hand, it came with the original, slower SCSI drives. This Onyx2 came with the original 9GB IBM Ultra-Wide SCSI-2 hard drive with IRIX 6.5.8 on it. Knowing from the listing that the CD-ROM drive was faulty, I simply copied all of the IRIX 6.5.30 CDs over NFS to upgrade it, after which I swapped in a faster drive like I had done with my Silicon Graphics Origin 300 back in 2012. For those inquiring, I chose my goto Ultra 320 SCSI drive, the Maxtor Atlas 15K II.

Silicon Graphics Onyx2 - Original Harddrive
Silicon Graphics Onyx2 - Replacement Harddrive

A look at my Onyx 2 all wired up, note this was before I swapped in the DG5-8:

Silicon Graphics Onyx2 - Back wired up

I should note for anyone curious about getting an Onyx2: you should keep it in a cool place and outside of a bedroom, as the fans (which are temperature controlled) are quite loud at full speed.


One of the first things I do after getting a new system up and running is benchmark it with my own cross-platform CPU benchmark, jcBENCH. Wanting to compare the R12000 and R14000 architectures, specifically against my goto 4x R14000 600MHz Silicon Graphics Origin 300, I ran jcBENCH on both. Surprisingly, with the extra 200MHz (a 50% increase) and the enhancements the R14000 MIPS CPU brought, my Origin 300 is over 3 times faster in both the integer and floating point tests.

Since I had never had an InfiniteReality system before, I wanted to test it with something semi-recent such as Quake 3. Knowing it was not optimized for gaming, let alone for IRIX, I was still intrigued.

For my Quake 3 benchmarks I used the neko_quake3-1.36.tardist release, leaving everything on the highest settings except filtering, which I left on bilinear. For each test the only things I changed were the resolution and bit depth. No other processes were running.

Silicon Graphics Onyx2 - Quake 3 Benchmarks

The results were pretty interesting. With a CPU just a step down from the highest end, I figured the performance might actually be better with the InfiniteReality3 installed. If anyone reading this has an IR3 and 4x R14k Onyx2, please run the same tests and let me know your results. Overall the biggest jump came from swapping the RM8-16 for an RM10-256, especially at 32-bit depth. What I found most interesting is that adding a 2nd RM10-256 and swapping the GE14-2 for a GE16-4 brought diminishing returns. This leads me to believe that at that point Quake 3 became CPU-limited with my 400MHz R12000s. Knowing that this particular build is single threaded, I am curious how my 600MHz R14k Fuel with a V10 would perform in comparison (a test I will do in the coming weeks).

Closing Thoughts

For a machine that, if bought new in May 2001, would have cost $252,536 (per this pricing sheet), I feel as though I have a piece of history that for a time blew away what was delivered by the PC and Mac worlds. Based on my own research comparing systems of the time (including other workstation manufacturers like DEC and Sun), the Onyx2 was one of the last extremely competitive offerings Silicon Graphics had; one could argue the Octane2 was the last. Companies like 3dfx (which, interestingly enough, had several former Silicon Graphics employees) and AMD drove the PC industry forward with their Voodoo and Athlon products respectively - the "death of the workstation," so to speak.

Going forward with the Onyx2, I hope to add some Onyx2-specific optimizations to the ioquake3 project, taking advantage of the Silicon Graphics OpenGL extensions that could speed up rendering. Along this path I would also focus on V10/V12 optimizations to bring Fuel and Tezro machines a more optimized experience.


Taking a break from ASP.NET 5 (and Visual Studio 2015 CTP6) until the next CTP release, I've gone back to working on a project I started, for all intents and purposes, back in 1995: the MODEXngine. As mentioned back on August 8th, 2013, the MODEXngine is intended to be not only a game engine with the usual graphics, audio and input handling, but also a cloud platform. With cell phones and tablets having established themselves as viable platforms over the last several years, one can no longer simply focus on the PC (Win32/Linux/Mac OS X). Mobility also brings things a traditional "PC" developer wouldn't run into: vastly different platform APIs, network drops, "5 minutes or less" gameplay and supporting an ecosystem that crosses all of those platforms.

Coming at this as an enterprise developer who actively develops on virtually every platform (Windows, the web, Android, iOS, Windows Phone and Windows Store), I feel as though I bring a fairly unique perspective on how everything can exist and how to bring the experience to each platform natively - putting myself in the shoes of someone who wants to develop a game for every platform but wants full native control, as opposed to an "Apache Cordova" approach, which solves the bigger problem of quickly delivering to multiple platforms but stalls when you have a feature that needs more native functionality (let alone speed). Another advantage I hope to bring is ease of use. By wrapping native-level calls with generic wrappers across the board, it should cut down on the issues of "how do I do that on platform XYZ", similar to how Xamarin Forms has made wrappers for iOS, Android and Windows Phone, but hopefully with fewer issues.

With the introduction and overall goals out of the way, let's dive into the details.

Language and Tech Details

A big decision (one that I am still not 100% decided on) is the overall language used for the platform. Keeping to just one language has the advantage that if someone knows the language I choose, he or she can develop against the entire platform. However, one language for a project of this scope goes against my "use the best tool for the job" principle. By utilizing my language of choice, C#, I would be committing people to utilizing Xamarin for iOS and Android deployments. For smaller projects they could simply get away with the free license, but that would be putting an extra burden on the developer (or development team), who also has to incur the costs of simply getting into the various stores for iOS, Android and Windows. In the same breath, with Microsoft's big push for cross-platform development to Linux and Mac OSX, this might be overlooked (I am hoping that one day that license is just bundled with Visual Studio, so this point would be moot for the most part).

The bigger question that has been pestering me for quite some time is the graphics API to use. When Carmack gave his observations on Direct3D back in the mid 90s, when asked why there was only an OpenGL port of Quake, I chose to follow his path of using OpenGL. It made sense at the time and still does. It is supported by almost every platform, and its focus on graphics alone is something I appreciated far more (and still do) than the "I can do everything" model that DirectX followed. While it might be unfair to continue that mentality almost 20 years later, the idea still holds true. I can utilize OpenGL on Linux, iOS, Android, Mac OSX and regular Windows desktop applications, all in C#. The only platforms I would be excluding would be Windows Phone and Windows Store, which followers of this blog know I love from both a consumer's and a developer's perspective in every aspect but Microsoft's stance on not supporting OpenGL natively like they have done since Windows NT. Doing some research into this issue, I came across the ANGLE (Almost Native Graphics Layer Engine) project, which translates OpenGL ES calls to DirectX 9 or 11 calls for Windows Phone and Windows Store apps. As of right now I haven't dived into this library to see its full capabilities, but from the MSDN blog posts on it, this approach has been used in production-grade apps.

For the time being, I think utilizing C# across the board is the best approach. Web Developers who know ASP.NET would find the WebAPI service and libraries accessible, while Windows Phone/Store developers would find the engine libraries no different than utilizing a NuGet package.

The area where I want to be a lot more flexible is in the CRUD operations on data locally and in the cloud. In my mind, whether the data is on a device or in the cloud, retrieval should make no difference, akin to how easy Quake III made it to download levels from the game's server without having to leave and come back (as was the case in other games of that era). Obviously, if one isn't connected to the internet or drops the connection, handling needs to be in place for that hybrid situation, but for all intents and purposes such an implementation really isn't a huge endeavor if one designs the architecture with it in mind.

Along those same lines, a big question in my mind is the storage of user data, statistics, levels and other game content. A traditional .NET developer approach would be to utilize SQL Server 2014 and possibly Azure File Storage for the content (textures, audio files etc.). Open source developers coming from Python or PHP might be drawn to MySQL or MongoDB in place of SQL Server. My goal is to make the calls abstract so that you, the developer, can utilize whatever you wish. I more than likely will be using SQL Server for user data at the very least, but planning ahead for potentially billions of concurrent users, storing the rest of the data in that fashion would be extremely inefficient. Databases like Redis or ArangoDB might be a better choice for concurrent data, or perhaps even my own distributed key/value database, jcDB. Seeing as I am still setting up the overall architecture, this will evolve, and it will be interesting to start doing simulated performance tests while also taking into account how easy it is to interact with each of the databases for CRUD operations.
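To make the idea concrete, a minimal sketch of such an abstraction might look like the following. The interface and class names here are purely my illustration of the pattern, not a final MODEXngine API; each backend (SQL Server, MySQL, Redis, jcDB) would supply its own implementation while game code only ever sees the interface.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical storage abstraction: game code depends on this interface
// only, never on a specific database or cloud provider.
public interface IDataStore<T> where T : class
{
    Task<T> GetAsync(string key);
    Task SaveAsync(string key, T item);
    Task DeleteAsync(string key);
}

// A trivial in-memory implementation, useful for unit tests or offline play.
public class InMemoryDataStore<T> : IDataStore<T> where T : class
{
    private readonly Dictionary<string, T> _items = new Dictionary<string, T>();

    public Task<T> GetAsync(string key)
    {
        T item;
        _items.TryGetValue(key, out item);
        return Task.FromResult(item);  // null when the key is absent
    }

    public Task SaveAsync(string key, T item)
    {
        _items[key] = item;
        return Task.FromResult(0);
    }

    public Task DeleteAsync(string key)
    {
        _items.Remove(key);
        return Task.FromResult(0);
    }
}
```

With this shape, a hybrid online/offline store is just another implementation that tries the cloud first and falls back to the local copy.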


Even before my announcement in August of 2013, the year prior in August of 2012 I had seen a huge disconnect between mobile/console games and PC games: the ability to mod. For myself, and I imagine others back in the 90s, the moddability of Doom and Quake (among others) expanded those games' communities. Whether it was as "simple" as a new deathmatch level or as extravagant as mods like Quake Rally, it made a huge difference in between major game releases. To this day I am not aware of any cross-platform games that support modding like id Software provided back in the 90s. Since coming up with that idea the technology has changed dramatically, but the idea is the same. Instead of a WCF Service and thinking small scale, I would use a WebAPI service hosted on Azure, using Azure Storage with containers for each game. Security being an even bigger issue now than it was almost 3 years ago, I would more than likely employ a human element of reviewing submitted mods prior to implementing a fully automated security scan.
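As a rough sketch of how that per-game catalog with a review gate might work (the class names and flow here are my own illustration, not a final design), here is the server-side logic a WebAPI controller could wrap; in a real deployment the mod packages themselves would live in per-game Azure Storage containers:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical mod metadata stored alongside the package.
public class ModInfo
{
    public string Name { get; set; }
    public string GameId { get; set; }
    public bool Approved { get; set; }  // flipped only after the human review pass
}

// Catalog logic a WebAPI controller could expose over HTTP.
public class ModCatalog
{
    private readonly List<ModInfo> _mods = new List<ModInfo>();

    public void Submit(ModInfo mod)
    {
        mod.Approved = false;  // nothing is public until reviewed
        _mods.Add(mod);
    }

    public void Approve(string name)
    {
        var mod = _mods.FirstOrDefault(m => m.Name == name);
        if (mod != null) { mod.Approved = true; }
    }

    // Only approved mods for the requested game are ever served to clients.
    public IEnumerable<ModInfo> GetApproved(string gameId)
    {
        return _mods.Where(m => m.GameId == gameId && m.Approved);
    }
}
```

The point of the sketch is the gate: submissions default to unapproved, so an automated scan (or a human) has to promote them before any client can download them.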

Release and what to look forward to

Those are the main talking points at this point in my mind, but as I get further in the development these more than likely will expand and the "features" list will need its own index.

I imagine at this point a big question on your mind is how soon this will be made available, even in an alpha state. The good news is that as I develop the engine, I am committing all my code to GitHub under the MIT License (meaning you can use the code freely, but it comes without any warranty). Later on, when it is further along, if you do find it useful, a ping back would be appreciated, especially if you have ideas for ways to make it better.

As for a specific release date: knowing my free time is extremely unstable and I still have to dive into OpenGL ES far more than I have, I would not expect this to come to fruition until much later this year, especially with my bbXP project also competing for my free time (not to mention my master's program).

Any questions, comments or suggestions please leave them in the comments section below or email me at jarred at jarredcapellman dot com.

Continuing my deep dive into ASP.NET 5 (vNext), I started down the path of EntityFramework 7, which, similarly to ASP.NET 5, is like a reboot of the framework itself. For readers interested in diving in, I highly suggest watching the MVA video called What's New with ASP.NET 5, which goes over all of the changes in pretty good detail (though I have a running list of questions to ask at BUILD in a few weeks).

Noting that the EntityFramework 7 beta was included in my ASP.NET 5 project, I hit a roadblock trying to find it through the usual method in the NuGet Package Manager; as of this writing, only 6.1.3 was available. In looking around, the answer is to add another NuGet package source. I had done this previously when I set up a private NuGet package server at work to host common libraries used throughout all of our projects. For those unaware, go to Tools->NuGet Package Manager->Package Manager Settings.

Once there, click on Package Sources and then the + icon, enter a descriptive name, paste the following url for the source: and click Update. After you're done, you should have something similar to this:

You can now close out that window and return to the NuGet Package Manager and upon switching the Package Source dropdown to be ASP.NET vNext (or whatever you called it in the previous screen) you should now see EntityFramework 7 (among other pre-release packages) as shown below.

Hopefully that helps someone out there wanting to deep dive into EntityFramework 7.

Per my announcement on Sunday, I'm working on making bbXP (the CMS that runs this site) generic to the point where anyone could just use it with minimal configuration/customizations. Along this journey, I'm going to be utilizing all of the new ASP.NET 5 (vNext) features. This way I'll be able to use my platform as a test bed for all of the new features of ASP.NET 5 and then apply them to production products at work, much like what Version 1 was back in the day when I wanted to deep dive into PHP and MySQL in 2003, and MVC in general almost 2 years ago.

Tonight's deep dive was into the new configuration model. If you've been developing for ASP.NET or .NET in general, you're probably accustomed to storing your settings in either the app.config or the web.config.

And then in your app you would do something like this:

var siteName = ConfigurationManager.AppSettings["SITE_NAME"];
And if you got a little fancier, you would add a wrapper in your base page or controller to return typed properties for booleans or integers.
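That classic wrapper pattern might look something like this; a sketch of the general idea rather than code from bbXP, with the class name and keys made up, and taking the settings collection as a constructor argument so it works the same whether backed by ConfigurationManager.AppSettings or anything else:

```csharp
using System.Collections.Specialized;

// Sketch of the classic typed wrapper over AppSettings-style key/value
// configuration. The names here are illustrative only.
public class AppConfig
{
    private readonly NameValueCollection _settings;

    public AppConfig(NameValueCollection settings) { _settings = settings; }

    public string GetString(string key)
    {
        return _settings[key] ?? string.Empty;
    }

    public bool GetBool(string key)
    {
        bool value;
        // Missing or malformed values fall back to false.
        return bool.TryParse(_settings[key], out value) && value;
    }

    public int GetInt(string key, int fallback)
    {
        int value;
        return int.TryParse(_settings[key], out value) ? value : fallback;
    }
}
```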

With ASP.NET 5, configuration is completely new and extremely flexible, but with the same end result. I will assume you have at the very least downloaded and installed the Visual Studio 2015 CTP in addition to launching the ASP.NET 5 template to at least get somewhat comfortable with all the changes. If you are just starting, I highly suggest watching Daniel Roth's introduction video.

To dive into the configuration specifically, you will want to open the Startup.cs. You will notice at the top of the class is the Startup constructor. For bbXP I wanted to add my own JSON configuration file, so my constructor looks like:
public Startup(IHostingEnvironment env)
{
    Configuration = new Configuration()
        .AddJsonFile("config.json")
        .AddJsonFile("bbxpconfig.json");
}
Knowing I would not have the same ConfigurationManager.AppSettings access I am used to, I wanted a clean method for accessing these configuration options, and to go one step further by making it strongly typed and utilizing dependency injection. So I came up with a quick approach to dynamically populate a class and then use DI to pass the class to my controllers. To get started I wrote a quick function to populate an arbitrary class:

private T readConfig<T>() where T : new()
{
    var tmpObject = Activator.CreateInstance<T>();
    var objectType = tmpObject.GetType();
    IList<PropertyInfo> props = new List<PropertyInfo>(objectType.GetProperties());
    var className = objectType.Name;

    foreach (var prop in props)
    {
        var cfgValue = Configuration.Get(String.Format("{0}:{1}", className, prop.Name));
        prop.SetValue(tmpObject, cfgValue, null);
    }

    return tmpObject;
}
And then my arbitrary class:

public class GlobalVars
{
    public string SITE_NAME { get; set; }
}
Scrolling down to the ConfigureServices function also in the Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddWebApiConventions();

    var options = readConfig<GlobalVars>();
    services.AddSingleton(a => options);
}
In this method the first 2 lines are unchanged, but the last 2 add my GlobalVars to the DI list and initialize it with the options from my file. Now to see it in action inside a controller:

[Activate]
private GlobalVars _globalVars { get; set; }

public IActionResult Index()
{
    ViewBag.Title = _globalVars.SITE_NAME;

    return View();
}
Notice how clean the access to the option is now, simply using the new Activate attribute on top of the GlobalVars property. Something I'll be adding to this helper method going forward is type inference, so the readConfig method would typecast to the type of the property in your arbitrary class.
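One possible way to implement that type inference (my own sketch, not bbXP code; the class and method names are hypothetical) is to lean on Convert.ChangeType against each property's declared type, unwrapping nullable types first:

```csharp
using System;

// Sketch: set a property from its string representation, converting the
// raw value to the property's declared type (nullable types included).
public class TypedSetter
{
    public static void SetFromString(object target, string propertyName, string raw)
    {
        var prop = target.GetType().GetProperty(propertyName);

        // For bool?/int? etc. convert to the underlying type; reflection
        // happily assigns the boxed value to the nullable property.
        var type = Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType;

        prop.SetValue(target, Convert.ChangeType(raw, type), null);
    }
}

// Hypothetical settings class to demonstrate the conversion.
public class SampleSettings
{
    public int CACHE_TIMEOUT { get; set; }
    public bool? ENABLE_CACHE { get; set; }
}
```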

Hopefully this helped someone out there in diving into the next version of ASP.NET, more to come for sure.

As some may have noticed, my site has undergone a slight revamp. Since going live with my ASP.NET MVC 4 based blog in April 2013, making the site responsive from a mobile phone up to a desktop had been one of my top priorities when feeling the desire to do web development outside of work. A project at work on the horizon a few months from now demanded I invest the time in my off-hours to get 100% comfortable with the latest techniques. Along with the responsive design, I redid the routing to take advantage of MVC 5's attribute-based routing and did away with the WCF Service that had been the backbone of my blog for nearly 2 years. The idea being that now that you can have MVC and WebAPI services hosted in one solution (ideal for my blog - still not convinced that's a good separation of concerns for larger enterprise projects), there is no reason for my blog to have 2 separate solutions.

Along those same lines, my recent presentation at the Blue Ocean Competition on Xamarin Forms and C# in general made me turn a new leaf in regards to my side projects. Starting a week or so ago, every day I've been checking in the complete source code to a project from the private Subversion repository I've kept for 8 years now. My thought process is that if even 1 person finds even 1 thing they didn't know or could use, that's 1 more person who got use out of it than it simply sitting in my SVN repository until I went back around to work on it. As anyone who's followed my blog for any period of time knows, I pick up a project, work on it for a while and then pick it back up a few months (or years) later.

That being said, in the coming weeks the platform that powers my blog, bbXP, will also be made available on my GitHub account. There's some work involved to get it to a more generic place, along with cleaning up some of the code now that I've got another 2 years of MVC development under my belt.

Lastly, content wise I finally cleaned up my White Papers so everything is formatted properly now. I also began to fill in the gaps on the About Me page, still a lot of gaps in my development history that I want to document if only for myself.
Yesterday I presented two sessions at the Blue Ocean Competition on Xamarin, Azure and the C# ecosystem in general. From the reaction received after my sessions the students seemed to have really been captivated by the possibilities afforded them by using Xamarin Forms, Azure and C# to turn their ideas into real products.

As mentioned during my presentation, I created a full Xamarin Forms demo for Android, iOS and Windows Phone tied to Azure Mobile Services. The full source code I put up on my GitHub account here. For those not used to Git, I also packaged all of the code into a zip file here.

I developed the demo in Visual Studio 2013 Update 4, but it should work in the 2015 preview as well.

Those looking for the Powerpoint presentation you can download it here.
A little over a year ago at work I started diving into my next generation platform. I knew it would have the usual specifications: super fast and efficient, full audit handling etc. The crown jewel was to be a fully automated scheduling engine, but over time the platform itself evolved into one of the most dynamic platforms I know of. Fast forward a few months as the system was in alpha: I started having questions about how to predict events based on data points now that I was storing much more data. With several other requirements taking precedence, I put it on the back burner, knowing the "brute force" method of comparing every value of every object per class or report was neither ideal nor viable given the number of reporting possibilities.

Last Sunday afternoon, after playing around with Microsoft's Azure Machine Learning platform, I came to the realization that while their solution is great for non-programmers, I was left feeling I could do the same thing, but better, in C#, and because of that I could integrate it across every device, even for offline native mobile experiences.

Thinking back to the question I had a while back at work: how can I predict things based on one (or maybe no) data points?

Breaking down the problem, the first thing is to reduce the data set. Given thousands or millions of rows, chances are there are patterns in the data. Reducing the individual rows to what is more than likely a considerably smaller set would make additional processing and reporting much easier and more meaningful. Give a million rows to a C-level executive and they'll be left wondering what they are even looking at, but group the data points into maybe the top 10 most common scenarios and immediately he or she will see value.
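That reduction step can be sketched in a few lines of LINQ, grouping identical rows and keeping one representative plus a count; this is my illustration of the idea, not jcANALYTICS internals:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class DataReducer
{
    // Collapses duplicate rows into (representative, count) pairs sorted by
    // frequency, so a million rows becomes a short list of common scenarios.
    public static List<KeyValuePair<T, int>> Reduce<T>(IEnumerable<T> rows)
    {
        return rows.GroupBy(r => r)
                   .OrderByDescending(g => g.Count())
                   .Select(g => new KeyValuePair<T, int>(g.Key, g.Count()))
                   .ToList();
    }
}
```

For real objects (rather than strings or value types) the grouping key would be the combination of tallied properties, but the shape of the reduction is the same.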

At this point this is where I opened Visual Studio 2013 and dove into the problem at hand. An hour later I had a working reusable library: jcANALYTICS. As of right now the .NET 4.5 library does the following:

1. Reduces large datasets into groups of datasets
2. Helper methods to return the most common and least common data rows
3. Solution completer - given an object, fill in the blanks based on the larger dataset

As of right now it is also multi-threaded (which can be turned off if desired). Given 100,000 objects on my AMD FX-8350 (8x4ghz) desktop it processes in just under 5 seconds - though there is certainly room for optimization improvements.

The next big question I imagine is: how will this work with my existing application? I know utilizing an external library takes some serious consideration (will the company cease to exist, thereby eliminating support? how much in bed will I be after integrating the framework or library? etc.). Well, good news: at a minimum, processing a dataset takes just 2 lines of code.

Let's assume you've got some class object like the following:

[Serializable]
public class Users
{
    public string Username { get; set; }

    public bool? HasIOS { get; set; }

    public bool? HasAndroid { get; set; }

    public bool? HasWinPhone { get; set; }

    public bool? LivesInMaryland { get; set; }
}
To make it work with jcANALYTICS, just inherit from the jcAnalyticsBaseObject class and mark the properties to analyze with the Tally attribute like so:

[Serializable]
public class Users : jcAnalyticsBaseObject
{
    public string Username { get; set; }

    [Tally]
    public bool? HasIOS { get; set; }

    [Tally]
    public bool? HasAndroid { get; set; }

    [Tally]
    public bool? HasWinPhone { get; set; }

    [Tally]
    public bool? LivesInMaryland { get; set; }
}
That's it. Then, assuming you've got a List<Users> collection, you would simply use the following lines of code to process the data:

var engine = new jcAEngine<Users>();
engine.AnalyzeData(users);
After that you've got 4 helper methods to access the analyzed data:

1. GetMostCommon - Returns the most common data row
2. GetLeastCommon - Returns the least common data row
3. GetGroupItems - Returns the analyzed/reduced data points
4. GetCompleteItem - Given an object T, fill in any properties based on the most common data that fits what was passed

I think the top 3 are self-explanatory, but here's an example of the last function. Using the Users class above, assume you knew that a user lived in Maryland and had a Windows Phone, but you wanted to know, based on the other data, whether they had an iOS and/or Android device as well, like so:

var incompleteItem = new Users { LivesInMaryland = true, HasWinPhone = true };
var completeItem = engine.GetCompleteItem(incompleteItem);
The engine would proceed to look at all of the data points and use the most probable values.
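Conceptually, that fill-in is a majority vote among the historical rows that match the known values. A simplified sketch of the idea follows; this is not the engine's actual implementation, and the pared-down Device class here is hypothetical, used only to keep the example short:

```csharp
using System.Collections.Generic;
using System.Linq;

// Pared-down stand-in for an analyzed object with one known and one
// unknown flag.
public class Device
{
    public bool? HasWinPhone { get; set; }
    public bool? HasIOS { get; set; }
}

public static class Completer
{
    // Fills the unknown HasIOS flag by majority vote among historical
    // rows that share the known HasWinPhone value.
    public static Device Complete(Device partial, List<Device> history)
    {
        if (partial.HasIOS == null)
        {
            var matches = history.Where(h => h.HasWinPhone == partial.HasWinPhone).ToList();

            // True wins ties; a real engine would weigh all tallied
            // properties together rather than one at a time.
            partial.HasIOS = matches.Count(h => h.HasIOS == true) * 2 >= matches.Count;
        }

        return partial;
    }
}
```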

Going forward, my immediate objectives are to optimize the AnalyzeData method and make the library into a Portable Class Library so I can use it on any platform. Longer term I hope to continue to add more methods to analyze and produce data.

You can download the library from the NuGet Console or from here:

[bash] PM> Install-Package jcANALYTICS.Lib [/bash]


A while back in Summer 2011, I had this strange desire to obtain "classic" computers from every manufacturer that I had grown up hearing things about but obviously couldn't afford. Anyone following my blog knows I have amassed quite a collection of Silicon Graphics machines along with a few DEC, Sun and late PowerPC-era Apple machines (G4 and G5). One manufacturer that had previously eluded me was NeXT. While still living at my parents' house back in 2004 I had an opportunity to purchase a NeXTcube, the pinnacle of NeXT computers. Unfortunately, as an undergrad college student, I just couldn't justify the $300 at the time for a 33mhz computer that I might turn on every now and then. Fast forward a few years to 2011: shortly after Steve Jobs passed away, the prices on eBay for NeXT computers skyrocketed. It could have just been a coincidence, coupled with the fact that there were supposedly only 50,000 NeXT computers built between 1989 and 1993. Fast forward another couple years, prices seemed to have almost hit rock bottom for the NeXTstation Turbo, so I finally broke down and got one in near mint condition.

Inside Look

For those looking into purchasing a NeXTstation computer, a couple things to know before diving in:

-There are 4 models of the NeXTstation: regular (mono), Color, Turbo and Color Turbo. The main differences, outside of the Color models supporting color (4096 colors btw), are the speed of the Motorola 68040 (33mhz for Turbo models, 25mhz for non-Turbo) and the maximum amount of RAM the motherboard can take. The Turbo models support 128mb of RAM (72pin SIMMs, also compatible with the later 68040 Macintosh models). The non-Turbo models are limited to 32mb of RAM.

-The power supply is extremely picky about how much power your hard drive can draw. I thankfully read a ton of posts regarding the maximum power draw, which is extremely low (no Maxtor Atlas 15k II Ultra 320 drives, unfortunately). By default, any drive with the older 50pin SCSI I internal connector should be fine. Another element is the heat output of a 15k drive, for instance; there is only a single fan in the NeXTstation. Also, partitions have a 4gb limit, so if you have a larger drive, just remember to partition it in 4gb blocks.

-By default there is no "VGA" connector; thankfully I was able to procure a Y-VGA cable (one end to the NeXTstation, one to my monitor and one to the Soundbox) so I could use an Acer LED monitor I had laying around.

Onto my NeXTstation Turbo machine specifically:

-Motorola 68040 33mhz CPU
-128MB of RAM
-2gb Quantum Fireball SCSI I Hard Drive
-2.88MB 3.5" Floppy Drive

NeXTstation Turbo - Inside
NeXTstation Turbo Rom
NeXTstation Turbo - RAM
Specifically for the Turbo models you'll need 72pin 70ns SIMMs (on a historical side note, in December 1992 an 8MB SIMM went for $225). On eBay right now you can get 4x32mb SIMMs for $40 shipped, so keep an eye out. A quick way to know whether a board is a Turbo board is that it has only 4 RAM sockets instead of the 8 on non-Turbo boards.

NeXTstation Turbo - Back
Note the 8P8C 10baseT ethernet connector - very handy to have it built-in compared to other machines of this era that still only had AUI or BNC connectors.
NeXTstation Turbo - Hard Drive
50pin SCSI I drives are used, which aren't anywhere close to the speed I'm used to in my SGI or Sun machines with a Maxtor Atlas 15k II Ultra 320 drive.
NeXTstation Turbo - Power Supply


Having never used NeXTstep, I didn't know what to expect. The only operating systems from that era I've used are Windows 3.x, MS-DOS and IRIX 5.3. I know 3.3 came out in February 1995, but it seems as though that release really only updated the CPU architecture support rather than being a feature release, so I'll be comparing it to operating systems released around October 1993 (when 3.2 was released).

Turning on the NeXTstation via the power button on the keyboard (shown below), you're presented with a few startup screens before the login prompt:

NeXTstation Turbo - Power Button

NeXTstation Turbo - Boot Sequence

NeXTstation Turbo - NeXTstep Boot Sequence

NeXTstation Turbo - NeXTstep Login Prompt

Immediately I came to the realization I had never used a monochrome computer, as the first computer I ever used (a Tandy 1000) had 16 color support, though I remember playing games like Falcon and Test Drive in black and white. Upon logging in, I noticed the computer, as Jobs had wanted, was silent outside of the hard drive spinning. The interface offered something similar to IRIX's Toolchest, but with different functionality mapped to keyboard shortcuts. Something I didn't really note at first was the command key below the space bar on the keyboard. Holding down the command key and pressing S in the text editor, for instance, would save. Similar to the Control+S we're used to today, but easier to execute in my opinion with your left thumb.

One of the first things I did was try NFS mounting. I created a mount on my Windows Server 2012 R2 NAS with semi-recent ports of GCC, BASH and other "standard" software. Sure enough it was picked up by NeXTstep and I was copying files between the machines. Kind of interesting that machines of different architectures (x86-64 vs m68k), 20 years apart, communicate over the same protocol with 0 issues.

For those curious, I've made local copies of the latest versions of GCC and Bash available here:
CC Tools for GCC
GCC 3.4.6
Updated Headers for GCC

Installing packages is a bit more modern than I was used to with IRIX. Each "package" has a modern installer, as opposed to tardists in IRIX.

NeXTstation Turbo - Installing BASH

One thing to note when installing Bash (or any other shell), update the /etc/shells file with the path to the new shell like so:
NeXTstation Turbo - Adjusting Shells

Without updating that file, you won't be able to utilize the new shell when adding/modifying users in the GUI.

What's NeXT?

Yesterday I started working on the jcBENCH port for NeXTstep/m68k; I should have that wrapped up within a week. I am very interested in seeing how it compares to a 33mhz MIPS R3000 Silicon Graphics Indigo, which came out a year before the NeXTstation (July 1991), albeit at a slightly higher price ($7,995 vs $6,500), but offered color and additional video options.


After having long been on my "todo list" for a Saturday deep dive, ASP.NET SignalR finally got some attention yesterday. For those unaware, SignalR offers bi-directional communication between clients and servers over WebSockets. Longtime readers of my blog may recall I did a deep dive into WebSockets back in August 2012 with a WCF service and console app, though, lost in the mix of several other technologies (MVC, WebAPI, OpenCL etc.), I had forgotten how powerful the technology was. Before I go any further, I highly suggest reading the Introduction to SignalR by Patrick Fletcher.

Fast forward to January 2015: things are even more connected to each other, with REST web services like WebAPI, the Internet of Things and mobile apps on Android, iOS and Windows Phone exploding over the last couple of years. A specific need for real-time communication from server to client came up last Friday night: a dashboard for the next big revision of the architecture at work. The idea behind it is to show how every request is truly being tracked: who made it, when, and what they requested or submitted. This type of audit trail I hadn't implemented in any of the previous three major service-oriented architectures. In addition, presenting the data with Telerik's Kendo UI data visualization controls would be a nice way to show the audit trail functionality visually outside of the grid listing, and also to show graphs (pictures still tell a thousand words).

As I dove in, the only examples/tutorials I found showed a simple chat: a user enters his or her name and messages, and all the while, without any postback, the unordered list dynamically updates as new messages come in. Pretty neat - but what I was curious about was how one would execute a server-side trigger to all the clients. Going back to my idea for enhancing my work project, it would need to be triggered by the WebAPI service and passed to the SignalR MVC app, with the main MVC app acting as the client displaying anything triggered originally from the WebAPI service. So I started diving further into SignalR, and in this post I go over what I did (if there is a better way, please let me know in the comments). In the coming weeks I will do a follow-up post as I expand the functionality to a basic three-project setup like the one I will eventually implement at work.

MVC SignalR Server Side Trigger Example

The following code/screenshots all tie back to an example I wrote for this post, you can download it here.

To begin I started with the base Visual Studio 2013 MVC Project (I will assume from here on out everyone is familiar with ASP.NET MVC):
ASP.NET Web Application Visual Studio 2013 Template
Then select the MVC Template:
ASP.NET MVC Template

Add the NuGet package for SignalR (be sure to get the full package as shown in the screenshot, not just the client):
NuGet Manager with SignalR

Upon the NuGet Package completing installation, you will need to add an OWIN Startup File as shown below:
OWIN Startup Class - Visual Studio 2013

This is crucial to SignalR working properly. For posterity here is the Startup.cs in the project I mentioned above:
using Microsoft.AspNet.SignalR;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(SignalRMVC.Startup))]

namespace SignalRMVC
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HubConfiguration { EnableJavaScriptProxies = true };

            app.MapSignalR(config);
        }
    }
}
Also new for MVC developers is the idea of a SignalR Hub. You will need to add at least one SignalR Hub class to your project: go to Add and then New Item, scroll down to the SignalR grouping and select the SignalR Hub Class (v2) option as shown in the screenshot below:
OWIN Startup Class - Visual Studio 2013

In the Hub class you define the endpoint(s) for your SignalR Server/Client relationship. For this example, I wrote a simple SendMessage function that accepts a string parameter like so:
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

namespace SignalRMVC
{
    [HubName("systemStatusHub")]
    public class SystemStatusHub : Hub
    {
        internal static void SendMessage(string logEntry)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<SystemStatusHub>();

            context.Clients.All.sendData(logEntry);
        }
    }
}
To make things a little cleaner for this example, I added a BaseController with a wrapper around the SignalR hub (mentioned above), adding a timestamp along with the string passed from the MVC action like so:
using System;
using System.Web.Mvc;

namespace SignalRMVC.Controllers
{
    public class BaseController : Controller
    {
        internal void RecordVisit(string actionName)
        {
            SystemStatusHub.SendMessage(String.Format("Someone checked out the {0} page at {1}", actionName, DateTime.Now));
        }
    }
}
With the static wrapper function in place, let's look at the actual MVC controller, HomeController:
using System.Web.Mvc;

namespace SignalRMVC.Controllers
{
    public class HomeController : BaseController
    {
        public ActionResult Index()
        {
            RecordVisit("home");

            return View();
        }

        public ActionResult About()
        {
            RecordVisit("about");

            return View();
        }

        public ActionResult Contact()
        {
            RecordVisit("contact");

            return View();
        }
    }
}
Nothing unusual for a C# developer, simply passing an indicator based on the title of the Action.

And then the Index.cshtml contains the reference to the dynamically generated /signalr/hubs JavaScript file, the JavaScript connection to the hub and the handler for what should happen when it receives a message:

<h2>Site Activity</h2>

<ul id="activityLog"></ul>

<script src="~/signalr/hubs"></script>
<script>
    $(function () {
        var hub = $.connection.systemStatusHub;

        hub.client.sendData = function (message) {
            $('#activityLog').append('<li>' + message + '</li>');
        };

        $.connection.hub.start();
    });
</script>

Pretty simple: as the messages come in, append an li to the activityLog ul.

    Finished Product

    Below is a screenshot after clicking around from another browser:

    SignalR in Action

    Again, if you wish to download the complete example you can download it here. In the coming weeks expect at least one more SignalR post detailing a possible solution for common Service Oriented Architectures (separate MVC, WebAPI and SignalR hosted apps). I hope this helps someone out in the beginning of their SignalR journey.

    Some time ago (I want to say 2005) while working on my Infinity Project I was having issues with the large pre-rendered animations I was rendering out in 3ds max (funny to think that 1024x1024@32bpp was at one point huge) akin to the Final Fantasy VII-IX style. After a few animations not only were the files huge, the number of files (they were rendered out to individual frame files) got out of control. Thinking outside the box I started working on my own image format that could contain multiple frames of animation akin to a gif, while also applying what I thought was an ingenious compression approach. Not a hard problem, but when it came to writing a 3ds max or Photoshop plugin to convert targa files to the new format I was overwhelmed (surprisingly I never thought to just write a command line app to batch convert them).

    Fast forward 9 years to last April/May while watching the HBO show Silicon Valley, I was reminded of my earlier compression idea that I never brought to life. Approaching it as a 28-year-old having spent the last 8 years doing professional C#, as opposed to the 19-year-old version of myself, proved to be a huge help. I was able to finally implement it to some degree, however any file over 5mb would take several minutes even on my AMD FX-8350, not exactly performance friendly. So I shelved the project until I could come up with a more performance-friendly approach and at the same time write it to be supported across the board (Windows Store, Win32, Windows Phone, Android, Linux, MacOSX, FreeBSD).

    Fast forward again to December 21st, 2014 - I had a hunch a slightly revised approach might satisfy the requirements I laid out earlier and finally gave it a name, jcCCL (Jarred Capellman Crazy Compression Library). Sure enough it was correct, and at first I was seeing a 10-15X compression level over zip or rar on every file format (png, mp4, mp3, docx, pdf) I tried. As is the case most of the time, this was due to an error in my compression algorithm not outputting all of the bytes that I uncovered when getting my decompression algorithm working.

    As it stands I have a WPF Win32 application compressing and decompressing files with drag and drop support, however the compression varies from 0 to 10% - definitely not a world changing compression level. However, now that all of the architecture (versioning support, wrappers etc.) are all complete I can focus on optimizing the algorithm and adding in new features like encryption (toying with the idea that it will be encrypted by default). More to come in time on this project, hoping to have a release available soon.
    For as long as I can remember since C# became my language of choice, I've been yearning for the cleanest and most efficient way of getting data from a database (XML, SQL Server etc.) into my C# application, whether it was in ASP.NET, WinForms or a Windows Service. For the most part I was content with Typed DataSets back in the .NET 2.0 days: creating an XML file by hand with all of the different properties, running the command line xsd tool on the XML file and having it generate a C# class I could use in my WinForms application. This approach ran into problems later on down the road when Portable Class Libraries (PCLs) became available, eliminating code duplication, because PCLs to this day lack Typed DataSet support; not to mention Client<->Web Service interactions have changed greatly since then, switching to a mostly JSON REST infrastructure.

    Ideally I wish there was a clean way to define a Class inside a Portable Class Library (made available to Android, iOS, Windows Store, Windows Phone and .NET 4.5+) and have Entity Framework map to those entities. Does this exist today? To the best of my research it does not without a lot of work upfront (more on this later). So where does this lead us to today?

    For most projects you probably see a simple Entity Framework Model mapped to SQL Tables, Views and possibly Stored Procedures living inside the ASP.NET WebForms or MVC solution. While this might have been acceptable 4 or 5 years ago, in the multi-platform world we live in today you can't assume you'd only have a Web client. As I've stated numerous times over the years, investing a little bit more time in the initial development of a project to plan ahead for multiple platforms is key today. Some of you might be saying "Well I know it'll only be a Web project, the client said they'd never want a native mobile app when we asked them." Now ask yourself how many times a client came back and asked for what they said they never wanted when you or the PM asked. Planning ahead not only saves time later, but delivers a better product for your client (internal or external), bringing a better value add to your service.

    Going back to the problem at hand: an Entity Framework model highly coupled to the rest of the ASP.NET application. Below are some possible solutions (not all of them, but in my opinion the most common):

    1. Easy way out

    Some projects I have worked on had the Entity Framework model (and associated code) in its own library, with the ASP.NET (WebForms, MVC, WebAPI or WCF service) project simply referencing the library. This is better in that if you migrate from the existing project or want a completely different project to reference the same model (a Windows Service perhaps), you don't have to invest the time in moving all of the code and updating all of the namespaces in both the new library and the project(s) referencing it. However you still have the tight coupling between your projects and the Entity Framework model.

    2. POCO via Generators

    Another possible solution is to use the POCO (Plain Old CLR Object) approach with Entity Framework. There are a number of generators (Entity Framework Power Tools or the EntityFramework Reverse POCO Generator), but both leave the clients that reference the POCO classes with a dependency on Entity Framework, negating the idea that you'd be able to have one set of classes for both the clients of your platform and Entity Framework.

    3. POCO with Reflection

    Yet another possible solution is to create a custom attribute and, via reflection, map a class object defined in your PCL. This approach has the cleanness of the following possible POCO class with custom attributes:
    [POCOClass]
    [DataContract]
    public class UserListingResponseItem {
        [DataMember]
        [POCOMember("ID")]
        public int UserID { get; set; }

        [DataMember]
        [POCOMember("Username")]
        public string Username { get; set; }

        [DataMember]
        [POCOMember("FirstName")]
        public string FirstName { get; set; }

        [DataMember]
        [POCOMember("LastName")]
        public string LastName { get; set; }
    }
    The problem with this solution is that, as any seasoned C# developer knows, reflection is extremely slow. If performance weren't an issue (very unlikely) then this could be a possible solution.
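    To make the performance concern concrete, here is a minimal sketch of what the custom attribute and reflection-based mapper from option 3 might look like. POCOClassAttribute, POCOMemberAttribute and MapTo are hypothetical names purely for illustration, not part of any shipped library:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical marker attributes matching the POCO class above
[AttributeUsage(AttributeTargets.Class)]
public class POCOClassAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Property)]
public class POCOMemberAttribute : Attribute {
    public string ColumnName { get; private set; }

    public POCOMemberAttribute(string columnName) {
        ColumnName = columnName;
    }
}

public static class POCOMapper {
    // Copies values from a source object (e.g. an Entity Framework entity)
    // onto a new T, matching each [POCOMember("X")] property on T to the
    // source property named X via reflection
    public static T MapTo<T>(object source) where T : new() {
        var result = new T();

        foreach (var property in typeof(T).GetProperties()) {
            var attribute = property.GetCustomAttributes(typeof(POCOMemberAttribute), false)
                                    .Cast<POCOMemberAttribute>()
                                    .FirstOrDefault();

            if (attribute == null) {
                continue;
            }

            var sourceProperty = source.GetType().GetProperty(attribute.ColumnName);

            if (sourceProperty != null) {
                property.SetValue(result, sourceProperty.GetValue(source, null), null);
            }
        }

        return result;
    }
}
```

With this in place, a call like POCOMapper.MapTo<UserListingResponseItem>(entityFrameworkUser) would hydrate the PCL class, but every property pays the reflection cost on every row, which is exactly the performance problem described above.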

    4. DbContext to the rescue

    In doing some research on POCO with Entity Framework I came across one approach in which you can retain your existing Model untouched, but then define a new class inheriting from DbContext like so:
    public class TestModelPocoEntities : DbContext {
        public DbSet<UserListingResponseItem> Users { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder) {
            // Configure Code First to ignore the PluralizingTableName convention
            // If you keep this convention then the generated tables will have pluralized names.
            modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();

            modelBuilder.Entity<UserListingResponseItem>().ToTable("Users");
            modelBuilder.Entity<UserListingResponseItem>().Property(t => t.UserID).HasColumnName("ID");
            modelBuilder.Entity<UserListingResponseItem>().HasKey(t => t.UserID);
        }
    }
    What this code block does is map the Users table to a POCO class called UserListingResponseItem (the same definition as above). By doing so you can then do the following in your code:

    using (var entity = new TestModelPocoEntities()) {
        return entity.Users.ToList();
    }
    As one can see this is extremely clean on the implementation side, albeit a bit tedious on the backend side. Imagining a recent project at work with hundreds of tables, this could be extremely daunting to maintain, let alone implement in a sizeable existing project.

    Unsatisfied with these options I was curious how a traditional approach would compare performance-wise to Option 4 above, given that it satisfied the requirement of a single class residing in a PCL. For comparison, assume the table is defined as such:

    Data Translation User Table

    The "traditional" approach:
    using (var entity = new Entities.testEntities()) {
        var results = entity.Users.ToList();

        return results.Select(a => new UserListingResponseItem {
            FirstName = a.FirstName,
            LastName = a.LastName,
            Username = a.Username,
            UserID = a.ID
        }).ToList();
    }
    This returns a List of User Entity Framework objects, then iterates over every item and sets the equivalent property on a UserListingResponseItem before returning the result.

    The Benchmark

    For the benchmark I started with the MVC Base Template in Visual Studio 2015 Preview, removed all the extra Views, Controllers and Models and implemented a basic UI for testing:

    Data Translation Base UI

    A simple population of random data for the Users table and deletion of records before each test run:
    private void createTestData(int numUsersToCreate) {
        using (var entity = new Entities.testEntities()) {
            entity.Database.ExecuteSqlCommand("DELETE FROM dbo.Users");

            for (var x = 0; x < numUsersToCreate; x++) {
                var user = entity.Users.Create();
                user.Active = true;
                user.Modified = DateTimeOffset.Now;
                user.Password = Guid.Empty;
                user.LastName = (x % 2 == 0 ? (x * x).ToString() : x.ToString());
                user.FirstName = (x % 2 != 0 ? (x * x).ToString() : x.ToString());
                user.Username = x.ToString();
                entity.Users.Add(user);
                entity.SaveChanges();
            }
        }
    }

    Benchmark Results

    Below are the results of running the test 3 times for each size data set. For those that are interested, I was running the benchmark on my AMD FX-8350 (8x4ghz), VS 2013 Update 4 and SQL Server 2014, with the database installed on a Samsung 840 Pro SSD.

    Data Translation Performance Results

    The results weren't too surprising; I figured the "traditional" approach would be a factor or so slower than the DbContext approach, but I didn't expect larger datasets to be considerably slower. Granted we're talking fractions of a second, but multiply that by hundreds of thousands (or millions) of concurrent connections and it is considerable.

    Closing thoughts

    Having spent a couple hours deep diving into the newer features of Entity Framework 6.x hoping that the golden solution would exist today, I'm having to go back to an idea I had several months ago, jcENTITYFRAMEWORK, in which at compile time the associations would be created mapping the existing classes to the equivalent Tables, Views and Stored Procedures, in addition to utilizing the lower-level ADO.NET calls instead of simply making Entity Framework calls. Where I left off I was still hitting an ASP.NET performance hit on smaller data sets (though on larger data sets my implementation was several factors better). More to come on that project in the coming weeks/months, as Database I/O with C# is definitely not going away for anyone and there is clearly a problem with the possible solutions today. At the very least, coming up with a clean and portable way to allow existing POCOs to be mapped to SQL Tables and Stored Procedures is a new priority for myself.

    For those interested in the ASP.NET MVC and PCL code used in benchmarking the two approaches, you can download it here. Not a definitive test, but real-world enough. If for some reason I missed a possible approach, please comment below; I am very eager to see a solution.
    After a failed attempt earlier this week (see my post from Tuesday night) to utilize the CLR that is bundled with Windows IoT, I started digging around for an alternative. Fortunately I was on Twitter later that night and Erik Medina had posted:

    Not having any time Wednesday night to dive in, I spent a few hours Thursday night digging around for the Mono for Windows IoT. Fortunately, it was pretty easy to find Jeremiah Morrill's Mono on Windows IoT Blog post, download his binaries and get going.

    For those wanting to use Xamarin Studio or MonoDevelop instead of Visual Studio: after downloading the binary from Jeremiah, you'll need to add mcs.bat to your bin folder, with the following line (assuming you've extracted the zip file to mono_iot in the root of your C drive):
    [bash]
    @"C:\mono_iot\bin\mono.exe" %MONO_OPTIONS% "C:\mono_iot\lib\mono\4.5\mcs.exe" %*
    [/bash]
    For whatever reason this wasn't included and without it, you'll receive:
    [bash]
    Could not obtain a C# compiler. C# compiler not found for Mono / .NET 4.5.
    [/bash]
    Inside of Xamarin Studio, go to Tools -> Options and then scroll down to Projects -> .NET Runtimes and click Add, selecting the root of the mono_iot folder. After you've added it, it should look like the following (ignoring the Mono 3.3.0 that I installed separately):

    Xamarin Studio with Mono for Windows IoT setup

    In addition you'll need to copy the lib folder to your Galileo, along with at least mono.exe and mono-2.0.dll (both found in the bin folder of Jeremiah's zip file), to the folder where you intend to copy your C# executable. Alternatively, after copying over the entire mono_iot folder structure, you can add it to the path like so (assuming once again you've extracted to c:\mono_iot) over a Telnet session:
    [bash]
    C:\mono_iot\bin>setx PATH "%PATH%;c:\mono_iot\bin"

    SUCCESS: Specified value was saved.
    [/bash]
    In order for the path variable to update, issue a shutdown /r.

    If you want to see the existing variables and their values you can issue a set p which will list the following after you've rebooted your Galileo:
    [bash]
    Path=C:\windows\system32;C:\windows;C:\wtt;;c:\mono_iot\bin
    PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
    PROCESSOR_ARCHITECTURE=x86
    PROCESSOR_LEVEL=5
    PROCESSOR_REVISION=0900
    ProgramData=C:\ProgramData
    ProgramFiles=C:\Program Files
    PROMPT=$P$G
    PUBLIC=C:\Users\Public
    [/bash]
    With the environment variable updated, you'll no longer have to either copy the mono executable to the folder of your app or include the full path over Telnet - definitely a time saver in my opinion.

    Now back to deploying my little WebClient test from Tuesday with Mono...

    From Visual Studio you don't need to set anything up differently; however, I was running into issues with the app.config when compiling from Visual Studio and deploying a little test app to my Galileo:
    [bash]
    System.AggregateException: One or more errors occurred ---> System.TypeInitializationException: An exception was thrown by the type initializer for System.Net.HttpWebRequest ---> System.Configuration.ConfigurationErrorsException: Error Initializing the configuration system. ---> System.Configuration.ConfigurationErrorsException: Unrecognized configuration section
    [/bash]
    So I went back to using Xamarin Studio, but received the following:
    [bash]
    System.Net.WebException: An error occurred performing a WebClient request. ---> System.NotSupportedException:
      at System.Net.WebRequest.GetCreator (System.String prefix) [0x00000] in :0
      at System.Net.WebRequest.Create (System.Uri requestUri) [0x00000] in :0
      at System.Net.WebClient.GetWebRequest (System.Uri address) [0x00000] in :0
      at System.Net.WebClient.SetupRequest (System.Uri uri) [0x00000] in :0
      at System.Net.WebClient.OpenRead (System.Uri address) [0x00000] in :0
      --- End of inner exception stack trace ---
      at System.Net.WebClient.OpenRead (System.Uri address) [0x00000] in :0
      at System.Net.WebClient.OpenRead (System.String address) [0x00000] in :0
      at (wrapper remoting-invoke-with-check) System.Net.WebClient:OpenRead (string)
    [/bash]
    Not a good sign - essentially saying WebClient isn't supported. That got me thinking to verify the version of Mono from Jeremiah:
    [bash]
    C:\>mono -V
    Mono JIT compiler version 2.11 (Visual Studio built mono)
    Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors.
            TLS:           normal
            SIGSEGV:       normal
            Notification:  Thread + polling
            Architecture:  x86
            Disabled:      none
            Misc:          softdebug
            LLVM:          supported, not enabled.
            GC:            Included Boehm (with typed GC)
    [/bash]
    This is from the 2.x branch, not the newer 3.x branch like what I utilize at work for my iOS and Android development. Not wanting to go down the path of creating my own 3.x port, I kept diving in - attempting to try the HttpClient that I knew wasn't supported by Windows IoT's CLR. I threw together a quick sample to pull down the compare results from jcBENCH to the console:
    public async Task<T> Get<T>(string url) {
        using (var client = new HttpClient()) {
            var result = await client.GetStringAsync(url);

            return JsonConvert.DeserializeObject<T>(result);
        }
    }

    public async void RunHttpTest() {
        var result = await Get<List<string>>("");

        foreach (var item in result) {
            Console.WriteLine(item);
        }
    }
    As far as the project was concerned, I added the .NET 4.5 version of Newtonsoft.Json.dll to the solution via NuGet and made sure it was copied over during deployment. A bit to my surprise, it worked:
    [bash]
    C:\winiottest>mono winiottest.exe
    AMD A10-4600M APU with Radeon(tm) HD Graphics
    AMD A10-7850K Radeon R7, 12 Compute Cores 4C 8G
    AMD A6-3500 APU with Radeon(tm) HD Graphics
    AMD A6-5200 APU with Radeon(TM) HD Graphics
    AMD Athlon(tm) 5150 APU with Radeon(tm) R3
    AMD Athlon(tm) 5350 APU with Radeon(tm) R3
    AMD C-60 APU with Radeon(tm) HD Graphics
    AMD E-350D APU with Radeon(tm) HD Graphics
    AMD E2-1800 APU with Radeon(tm) HD Graphics
    AMD FX(tm)-8350 Eight-Core Processor
    AMD Opteron(tm) Processor 6176 SE
    ARMv7 Processor rev 0 (v7l)
    ARMv7 Processor rev 1 (v7l)
    ARMv7 Processor rev 2 (v7l)
    ARMv7 Processor rev 3 (v7l)
    ARMv7 Processor rev 4 (v7l)
    ARMv7 Processor rev 9 (v7l)
    Cobalt Qube 2
    Intel Core 2 Duo
    Intel Core i5-4300U
    Intel(R) Atom(TM) CPU Z3740 @ 1.33GHz
    Intel(R) Core(TM) i3-2367M CPU @ 1.40GHz
    Intel(R) Core(TM) i7-4650
    Intel(R) Quartz X1000
    Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
    Intel(R) Xeon(R) CPU E5440 @ 2.83GHz
    PowerPC G5 (1.1)
    R14000
    UltraSPARC-IIIi
    [/bash]
    I should note that even with the newly added Sandisk Extreme Pro SDHC card, the total time from execution to returning the results was around 10 seconds, whereas on my AMD FX-8350, also hard wired, it returns in less than a second. Given that the Galileo itself is only 400mhz, you definitely won't be running a major WebAPI service on this device, but there are some definite applications (including one I will be announcing in the coming weeks).

    More to come with the Galileo - I received my Intel Centrino 6235 mPCI WiFi/Bluetooth card yesterday and am just awaiting the half->full length mPCIe adapter so I can mount it properly. With any luck I will receive that today and will post on how to get WiFi working on the Galileo under Windows IoT.

    After doing some research last night, it looks as though there is limited C# support on the Intel Galileo when using Windows IoT (see my Intel Galileo and Windows IoT post for more information on both the Intel Galileo and Windows IoT) per this August 20th, 2014 post from Pete Brown. This got me thinking what exactly is supported?

    Checking the usual location for Microsoft .NET under c:\Windows\Microsoft.Net turned up nothing (the folder was non-existent), so I started poking around. Lo and behold, under the c:\Windows\System32\CoreCLR\v1.0 folder all of the CLR dlls live.

    Windows IoT CoreCLR Dlls
    In looking at the list, Pete Brown was correct: outside of the System namespace we've got zero support, but we do have LINQ, XML support, Collections and IO. What about adding in, let's say, System.Net.Http?

    Trying a simple HttpClient test with System.Net.Http copied over unfortunately gave a System.BadImageFormatException:
    [bash]
    Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'System.Net.Http, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Reference assemblies should not be loaded for execution. They can only be loaded in the Reflection-only loader context. (Exception from HRESULT: 0x80131058) ---> System.BadImageFormatException: Cannot load a reference assembly for execution.
       --- End of inner exception stack trace ---
       at winiottest.Program.Main(String[] args)
       --- End of inner exception stack trace ---
       at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
       at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
       at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
       at AppDomainManager.HostMain(Int32 argc, Char** argv, Char* assemblyToRun, Int32 flags)
    [/bash]
    So I got thinking of using the older WebClient with the following code block:

    static void Main(string[] args) {
        using (var webClient = new WebClient()) {
            using (var stream = webClient.OpenRead(new Uri(""))) {
                using (var streamReader = new StreamReader(stream)) {
                    var result = streamReader.ReadToEnd();

                    var list = JsonConvert.DeserializeObject<List<string>>(result);

                    foreach (var item in list) {
                        Console.WriteLine(item);
                    }
                }
            }
        }
    }
    However, I received a System.IO.FileNotFoundException:
    [bash]
    Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.FileNotFoundException: Could not load file or assembly 'System, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. The system cannot find the file specified.
       at winiottest.Program.Main(String[] args)
       --- End of inner exception stack trace ---
       at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
       at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
       at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
       at AppDomainManager.HostMain(Int32 argc, Char** argv, Char* assemblyToRun, Int32 flags)
    [/bash]
    At this point, I thought: well, what exactly is supported in the System.Net namespace? So I copied over the System.dll from the c$\Windows\System32\CoreCLR\v1.0 path and opened it in the Object Browser inside of Visual Studio 2013, only to see not a lot is implemented...

    Windows IoT System Dll
    So at this point I ran out of ideas as far as getting any sort of System.Net component in C# at this time. Crossing my fingers it is on the horizon as I have a few ideas I'd love to implement in C# as opposed to C++ with the Galileo board.

    Intel Galileo Development Board
    After hearing about and seeing Windows IoT while at BUILD this April, I was waiting for an opportunity to really get started with it. Fortunately, this weekend I was near a Fry's Electronics and picked up an Intel Galileo Development Board for nearly half off.

    Intel Galileo Development Board - All included parts
    Inside the box you'll find the various power plugs used around the world, a USB cable, the development board and the power brick itself.

    After reading through the very thorough (no need to elaborate on the steps) Updating your Intel Galileo guide, I was able to get Windows IoT onto a Micro SDHC card. Make sure to connect to the Client Port (the one closest to the Ethernet Port).

    Installing Windows IoT to Galileo
    30 minutes into the installation:

    Installing Windows IoT to Galileo - 30 minutes in
    The whole process took 45 minutes on my HP dv7 laptop to install to my SDHC card. For those curious I chose to use a Sandisk Ultra Plus 16gb Micro SDHC card, while by no means the fastest (the Sandisk Extreme Pro from what I've read is the fastest), it was the most cost effective at $15.

    Intel Galileo Development Board - All included parts
    After unplugging the power from the Galileo, removing the SDHC card from my PC and popping it into the Galileo I was pleasantly surprised to be able to Telnet in after a minute or two.

    Windows IoT with Telnet into Intel Galileo
    For those curious, as I have said in other posts, I find XShell to be the best SSH/SERIAL/TELNET client for Windows, best of all it is free for non-commercial use.

    After installing WindowsDeveloperProgramforIOT.msi, I started porting jcBENCH to Windows IoT. Since jcBENCH is C++ and written in an extremely portable manner, the only big thing I had to do was recompile the pthreads Win32 library to not take advantage of SSE instructions, as the Galileo does not support them. The other thing to note is if you want to run a program in a more traditional route, simply do the following:

    int _tmain(int argc, _TCHAR* argv[]) {
        ArduinoInit();

        // rest of your program here

        return 0;
    }
    The base template that is installed in Visual Studio 2013 is focused more on applications that loop continuously (which makes sense given the headless nature of Windows IoT).

    So how did the Intel Galileo fare in jcBENCH?

    [bash]
    C:\jcBench>jcBENCH.cmdl
    jcBENCH 1.0.850.0531(x86/WinIoT Intel Galileo Edition)
    (C) 2012-2014 Jarred Capellman

    Usage: jcBench [Number of Objects] [Number of Threads]
    Example: jcBench 100000 4
    This would process 100000 objects with 4 threads

    Invalid or no arguments, using default benchmark of 10000 objects using 1 CPUS

    CPU Information
    ---------------------
    Manufacturer: GenuineIntel
    Model: Intel Quartz X1000
    Count: 1x399.245mhz
    Architecture: x86
    ---------------------
    Running Benchmark....
    Integer: 1
    Floating Point: 1
    Submit Result (Y or N):n
    Results can be viewed and compared on
    [/bash]
    Given that it is a single 400mhz Pentium (essentially a P5) - not bad, but not great. It got a score of 1 in both Floating Point and Integer operations, given the baseline is a dual-core AMD C-60 running at a much faster clock rate. This isn't discouraging, especially given the applications for this platform, including my own ideas from back in March 2014 (at the bottom of the post) of having my own Network Monitoring software. Given the lower power usage and the challenge of making my C++ code fast enough to do those operations, I'm up for the task.

    As for what is next with the Intel Galileo? I have an Intel Centrino 6235 mPCIe 802.11n/Bluetooth 4.0 card coming today for my Galileo, as it is one of the few confirmed working WiFi cards. In addition I have a 16gb Sandisk Extreme Pro SDHC card on its way to test whether or not the "best" card has any impact on a real-world application. For those curious you can now obtain the Windows IoT version of jcBENCH on the official jcBENCH page.

    From a developer perspective, for those needing to make use of Pthreads Library, you can download the source and binary compatible with Windows IoT here.

    A couple of updates since my last "what I am doing" update back in July; I've still been extremely busy with the Baltimore GiveCamp MVC 5 Web Application and work itself. In between everything, however, I have a few announcements.

    First off, jcBENCH has a new compare results feature on the web site where you can compare up to 3 results with a bar graph. This feature will make it to the Windows Store and Windows Phone 8.1 apps in the coming weeks. In addition a new OpenCL enabled version is near BETA, just have to add error handling and do more testing. The OpenCL client will be released for Linux/x86 and Win32/x86 initially with a MacOSX/x86 release to come later.

    A project my wife requested a few weeks ago is now in the Windows Phone Store: jcTRENDNET. This application allows you to remotely view and manage Trendnet IP 751 WIC/WC cameras. More features to come, but it's a pretty feature-filled 1.0 release.

    The larger project I mentioned in my July update is still being worked on. I ran into some huge technical hurdles that, while not impossible, will require some extensive work to even get cranking with something of substance. My latest work has been leading me to design my own language to be interfaced with C++, but I'm still working out the short and long term goals.

    Lastly, for the last week or two I've been attempting to get back into game development. After a good start last August, I lost momentum after 2 weeks or so for two reasons: lack of mapping out what the game would be, and other projects. This new idea is modular in that the base game isn't terribly complex, but future features can be added on to enhance the game. At least initially it will only be a Windows Phone 8.1 game, but more than likely a Windows Store 8.1 game as well. I have yet to find a game that can captivate my time on my Lumia 1520, so this should solve it. More details to come, with a release "when it's done," to quote id Software.

    Just a quick update: I wrapped up the 0.9.850.0531 release of jcBENCH for Mac OS X/x86. Previously there was a much older port going back to January 2012, so it is nice to finally have it running on the same codebase as the other platforms. So if you've got an Apple product, please go run jcBENCH, as I only have an older iMac to compare results with.

    On a somewhat funny note, the iMac I do all my Xamarin work on is several factors slower than the $60 AMD Athlon 5350 I use in my firewall - funny how technology catches up at a fraction of the cost.
    In doing some routine maintenance on this blog, I updated the usual JSON.NET, Entity Framework etc. In doing so and testing locally, I came across the following error:
    ASP.NET Webpages Conflict

    In looking at the Web.config, the NuGet Package did update the dependentAssembly section properly:
    ASP.NET Webpages Conflict

    However, in the appSettings section, it didn't update the webpages:Version value:
    ASP.NET Webpages Conflict

    Simply update the "" to "" and you'll be good to go again.


    For the longest time I’ve had these great ideas only to keep them in my head and then watch someone else or some company turn around and develop the idea (not to say someone stole the idea, but given the fact that there are billions of people on this planet, it is only natural to assume one of those billion would come up with the same idea). Watching this happen, as I am sure other developers have had since the 70s I’ve decided to put my outlook on things here, once a year, every July.

    As one who reads or has read my blog for a decent amount of time knows, I am very much a polyglot of software and enjoy the system building/configuration/maintaining aspect of hardware. For me, they go hand in hand. The more I know about the platform itself (single-threaded performance versus multi-threaded performance, disk IOPS etc.), the better I can program the software I develop. Likewise, the more I know about a specific programming model, the better I will know the hardware it is specialized for. To take it a step further, this makes implementation decisions at work and in my own projects better.

    As mentioned in the About Me section, I started out in QBasic and a little later, when I was 12, I really started getting into custom PC building (which wasn’t anywhere near as big as it is today). Digging through the massive Computer Shopper magazines, drooling over the prospect of the highest end Pentium MMX CPUs, massive (at the time) 8 GB hard drives and 19” monitors. Along with the less glamorous 90s PC issues of IRQ conflicts, pass-through 3Dfx Voodoo cards that required a 2D video card (and yet another PCI slot), SCSI PCI controllers and dedicated DVD decoders. Suffice it to say I am glad I experienced all of that, as it creates a huge appreciation for USB, PCI Express, SATA and, if nothing else, the stability of running a machine 24/7 on a heavy workload (yes, part of that is also software).

    To return to the blog’s title…

    Thoughts on the Internet of Things?

    Universally I do follow the Internet of Things (IoT) mindset. Everything will be interconnected, which raises the question of privacy and what that means for the developer of the hardware, the developer of the software and the consumer. As we all know, your data is money. If the lights in your house, for instance, were WiFi enabled and connected to a centralized server in your house with an exposed client on a tablet or phone, I would be willing to bet the hardware and software developers would love to know the total energy usage, which lights in which rooms were on, what type of bulbs were installed and when the bulbs were dying. Marketing data could then be sold to let you know about bundle deals, new “more efficient” bulbs, and how much time is spent in which rooms (if you are in the home theater room a lot, sell the consumer on Blu-rays and snacks, for instance). With each component of your home becoming this way, more data will be captured, and in some cases it will be possible to predict what you want before you realize it, simply based off your tendencies.

    While I don’t like the lack of privacy in that model (hopefully some laws can be enacted to resolve those issues), as a software developer I would hate to ever be associated with the backlash of capturing that data; still, this idea of everything being connected will create a whole new programming model. With the recent trend towards REST web services returning gzipped JSON with Web API, for instance, the problem of submitting and retrieving data has never been easier or more portable across so many platforms. With C# in particular, in conjunction with the HttpClient library available on NuGet, a lot of the grunt work is already done for you in an asynchronous manner. Where I do see a change is in the standardization of an API for your lights, TV, garage door, toaster, etc., allowing 3rd party plugins and universal clients to be created rather than having a different app to control each element, or one company providing a proprietary API that only works on their devices, forcing the difficult decision for the consumer to either stay with that provider for consistency or mix the two, requiring 2 apps/devices.
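    To illustrate how little plumbing that model requires, here is a minimal sketch of consuming a gzipped JSON endpoint with HttpClient. The endpoint URL and type names are hypothetical; the one real detail worth noting is the handler's AutomaticDecompression flag, which lets the client transparently unzip the response:

    ```csharp
    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class LightsClient
    {
        // Configure the handler to transparently decompress gzipped responses
        public static HttpClientHandler CreateHandler()
        {
            return new HttpClientHandler
            {
                AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
            };
        }

        // Fetch the JSON payload from a (hypothetical) home automation endpoint
        public static async Task<string> GetLightsJsonAsync(string url)
        {
            using (var client = new HttpClient(CreateHandler()))
            {
                return await client.GetStringAsync(url);
            }
        }
    }
    ```

    From there, deserializing the JSON into your model classes is a one-liner with whatever serializer you prefer.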

    Where do I see mobile technology going?

    Much like where mobile devices have headed (as I predicted 2 years ago), apps are becoming ever more integrated into your device (for better or for worse). I don’t see this trend changing, but I do hope from a privacy standpoint the apps have to become more explicit about what they are accessing. I know there is a fine line for the big three (Apple, Google and Microsoft) in becoming overly explicit before any action (remember Vista?), but if an app gets more than your current location, I think the capabilities should be shown in a bolder or larger font to better convey the app’s true access to your device. I don’t see this situation getting better from a privacy standpoint, but I do see more and more customer demand for a “native” experience like that of Cortana on Windows Phone 8.1. She has access to the data you provide her and will help make your experience better. As the phones provide more and more APIs, this trend will only continue until apps are more like plugins to your base operating system’s experience, integrating into services like Yelp, Facebook, Twitter, etc.

    Where do I see web technology going?

    I enjoyed diving into MVC over the last year and a half. The model definitely feels much more in line with an MVVM XAML project, but it still has an overwhelmingly strong tie to the client side, between the heavy use of jQuery and the level of effort in keeping up with the ever changing browser space (i.e. browser updates coming out at an alarming rate). While I think we all appreciate it when we go to a site on our phones or desktop and it scales nicely, providing a rich experience no matter the device, I feel the ultimate goal of trying to achieve a native experience in the browser is a waste of effort. I know just about every web developer might stop reading and be outraged – but what was the goal of the last web site you developed and designed that was also designed for mobile? Was it to convey information to the masses? Or was it simply a stopgap until you had a native team to develop for the big three mobile platforms?

    In certain circumstances I agree with the stance of making HTML 5 web apps instead of native apps, especially when it comes to the cost prohibitive nature of a project. But at a certain point, especially as of late with Xamarin’s first class citizen status with Microsoft, you have to ask yourself: could I deliver a richer experience natively, and possibly faster (especially given the vast range of mobile browsers to contend with on the HTML 5 route)?

    If you’re a C# developer who wants to deliver a native experience, definitely give the combination of MVVM Cross, Xamarin’s framework and Portable Class Libraries a try. I wish all of those tools existed when I first dove into iOS development 4 years ago.

    Where do I see desktop apps going?

    In regards to desktop applications, I don’t see them going away even in the “app store” world we live in now. I do, however, see a demand for a richer experience, expected by customers after having a rich native experience on their phone or after using a XAML Windows 8.x Store App. The point being, I don’t think it will be acceptable for an app to look and feel like the default WinForms grey and black color scheme that we’ve all used at one point in our careers, and where more than likely many of us began our programming (thinking back to classic Visual Basic).

    Touch will also play a big factor in desktop applications (even in the enterprise). Recently at work I did a Windows 8.1 Store App for an executive dashboard. I designed the app with touch in mind, and it was interesting how that changes your perspective on interacting with data. The app in question utilized multi-layered graphs and a Bing Map with several layers (heat maps and pushpins). Gone was the unnatural mouse scrolling; instead there was pinching, zooming and rotating, as if one were in a science fiction movie from just 10 years ago.

    I see this trend continuing, especially as practical general purpose devices like laptops gain touch screens at every price point instead of the premium they previously commanded. All that needs to come about is a killer application for the Windows Store – could your next app be that app?

    Where is programming heading in general?

    Getting programmers out of the single-threaded, top-to-bottom programming mindset. I am hoping that next July when I do a prediction post this won’t even be a discussion point, but sadly I don’t see it changing anytime soon. Taking a step back, what this means generally speaking is that programmers aren’t utilizing the hardware available to them to its full potential.

    Over 5 years ago at this point, I found myself at odds with a consultant who kept asking for more and more CPUs added to a particular VM. When he first asked it seemed reasonable, as considerably more traffic was coming to a particular ASP.NET 3.5 Web Application as a result of a lot of eagerly awaited functionality he and his team had just deployed. Even after the additional CPUs were added, his solution was still extremely slow under no load. This triggered me to review his Subversion check-ins, and I realized the crux of the matter wasn’t the server – it was his single threaded, resource intensive, time consuming code. In this case, the code was poorly written on top of trying to do a lot of work on a single page. For those who remember back to .NET 3.5’s implementation of LINQ, it wasn’t exactly a strong performer in performance intensive applications, let alone when looped through multiple times as opposed to one larger LINQ query. The moral of the story being that the extra CPUs only helped with handling the increased load, not the performance of a user’s experience in a 0% load session.
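    To make the multiple-pass point concrete, here is a contrived sketch (not the consultant's actual code): the first version walks the same collection three times via separate LINQ calls, while the second gathers the same statistics in a single pass. On .NET 3.5 era hardware under load, that difference added up quickly.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class OrderStats
    {
        // Multiple passes: each Count/Sum re-enumerates the entire source
        public static (int open, int closed, decimal total) MultiPass(List<(bool Open, decimal Amount)> orders)
        {
            int open = orders.Count(o => o.Open);
            int closed = orders.Count(o => !o.Open);
            decimal total = orders.Sum(o => o.Amount);
            return (open, closed, total);
        }

        // Single pass: one loop gathers everything at once
        public static (int open, int closed, decimal total) SinglePass(List<(bool Open, decimal Amount)> orders)
        {
            int open = 0, closed = 0;
            decimal total = 0m;
            foreach (var o in orders)
            {
                if (o.Open) open++; else closed++;
                total += o.Amount;
            }
            return (open, closed, total);
        }
    }
    ```

    (The tuple syntax is modern C#; the underlying lesson applies equally to 3.5 era code.)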

    A few months later, when .NET 4 came out of beta, and further still when the Task Parallel Library was released, it changed my view on performance (after all, jcBENCH stemmed from my passion for diving into parallel programming on different architectures and operating systems back in January 2012). No longer was I relying on high single threaded performing CPUs, but instead writing my code to take advantage of the ever-increasing number of cores available to me at this particular client (for those curious, 2U 24 core Opteron HP G5 rackmount servers).

    With .NET 4.5’s async/await I was hopeful that more developers I worked with would take advantage of this easy model and no longer lock the UI thread, but I was largely disappointed. If developers couldn’t grasp async/await, let alone the TPL, how could they proceed to what I feel is an even bigger breakthrough becoming available to developers: heterogeneous programming, or more specifically OpenCL.
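    For reference, the async/await model really is that simple: an async method awaits I/O-bound work, freeing the calling (UI) thread until the result arrives. A minimal sketch with a simulated slow call (the names are mine, purely for illustration):

    ```csharp
    using System;
    using System.Threading.Tasks;

    static class ReportLoader
    {
        // Simulated slow I/O-bound work (stands in for a web or database call)
        private static async Task<string> FetchReportAsync()
        {
            await Task.Delay(100);
            return "report data";
        }

        // The await releases the calling (UI) thread instead of blocking it,
        // and execution resumes here once the work completes
        public static async Task<string> LoadAsync()
        {
            string data = await FetchReportAsync();
            return data.ToUpperInvariant();
        }
    }
    ```

    In a button click handler, `await ReportLoader.LoadAsync()` keeps the UI responsive where a synchronous call would freeze it.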

    With parallel programming comes the need to break down your problem into independent subproblems, all coming together at a later time (like breaking down image processing to look at a range of pixels rather than the entire image, for instance). This is where heterogeneous programming can make an even bigger impact, in particular with GPUs (Graphics Processing Units), which have upwards of hundreds of cores to process tasks.
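    The image example above decomposes naturally: each row of pixels is independent of every other row, so no synchronization is needed. A sketch of that decomposition with the TPL (the filter itself is a made-up brighten operation):

    ```csharp
    using System;
    using System.Threading.Tasks;

    static class ImageFilter
    {
        // Brighten a grayscale image by processing independent rows in parallel
        public static void Brighten(byte[,] pixels, byte amount)
        {
            int height = pixels.GetLength(0);
            int width = pixels.GetLength(1);

            // Each iteration owns its own row, so no two tasks touch the same pixel
            Parallel.For(0, height, y =>
            {
                for (int x = 0; x < width; x++)
                {
                    int v = pixels[y, x] + amount;
                    pixels[y, x] = (byte)Math.Min(v, 255); // clamp to byte range
                }
            });
        }
    }
    ```

    The same row-range decomposition is exactly what an OpenCL kernel would express, with each work item handling one pixel or row instead of one `Parallel.For` iteration.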

    I had dabbled in OpenCL as far back as June 2012, working on the OpenCL version of jcBENCH, and I did some further research back in January/February of this year (2014) in preparation for a large project at work – a project where I ended up using the TPL extensively instead. The problem wasn’t OpenCL’s performance, but my mindset at the time. Before the project began, I thought I knew the problem inside out, but really I only knew it as a human would think about it – not as a machine that only knows 1s and 0s. The problem wasn’t a simple task, nor was it something I had ever attempted previously, so I gave myself some slack when, two months in, it finally hit me what I was really trying to solve: teaching a computer to think like a human. Therefore, when pursuing heterogeneous programming as a possible solution, ensure you have a 100% understanding of the problem and what you are ultimately trying to achieve; only then can you judge whether OpenCL makes sense over a traditional parallel model like the TPL.

    So why OpenCL, outside of the speed boost? Think about the last laptop or desktop you bought; chances are you have an OpenCL 1.x compatible APU and/or GPU in it (i.e. you aren’t required to spend any more money – just utilize what is already available to you). On the portable side in particular, laptops/Ultrabooks already have a lower performing CPU than your desktop, so why burden the CPU when the GPU could offload some of that work?

    The only big problem with OpenCL for C# programmers is the lack of an officially supported interop library from AMD, Apple or any of the other members of the OpenCL group. Instead you’re at the mercy of using one of the freely available wrapper libraries like OpenCL.NET, or simply writing your own wrapper. I haven’t made up my mind yet as to which path I will go down, but I know at some point a middleware makes sense. Wouldn’t it be neat to have a generic work item and be able to simply pass it off to your GPU(s) when you wanted?
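    To make the "generic work item" idea more concrete, here is a purely hypothetical sketch of what such a middleware's surface might look like. The interface and names are mine, not from any real library; the CPU implementation just falls back to the TPL, while a GPU-backed implementation of the same interface would enqueue an OpenCL kernel instead:

    ```csharp
    using System;
    using System.Threading.Tasks;

    // Hypothetical abstraction: a unit of work that could run on the CPU today
    // and be dispatched to an OpenCL device later, behind the same interface
    interface IWorkItem<TIn, TOut>
    {
        TOut Execute(TIn input);
    }

    // CPU fallback implementation: squares every element of the input in parallel
    class SquareWorkItem : IWorkItem<float[], float[]>
    {
        public float[] Execute(float[] input)
        {
            var result = new float[input.Length];
            // A GPU-backed implementation would enqueue a kernel here instead
            Parallel.For(0, input.Length, i => result[i] = input[i] * input[i]);
            return result;
        }
    }
    ```

    The appeal of this design is that calling code never knows (or cares) which device ran the work; swapping the TPL for OpenCL becomes a configuration detail.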

    As far as where to begin with OpenCL in general, I strongly suggest reading the OpenCL Programming Guide. For those who have done OpenGL and are familiar with the “Red Book”, this book follows a similar pattern with a similar expectation and end result.


    Could I be way off? Sure – it’s hard to predict the future while staying grounded in the past that brought us here. It’s hard to let go of how we as programmers and technologists have evolved over the last 5 years to satisfy not only consumer demand but our own, and to anticipate what is next. What I am more curious to hear is from programmers outside of the CLR, in particular the C++, Java and Python crowds – where they feel the industry is heading and how they see their programming language handling the future, so please leave comments.
    Over the weekend I picked up an AMD Athlon 5150 APU (4x1.6 GHz) along with an MSI AM1I motherboard from Fry's, on sale for $59. A month or two ago I purchased the 5350 APU (4x2 GHz) and an ASUS AM1I-A, which has been working great since I set it up, so I was curious how the 5150 performed along with the MSI motherboard.

    MSI AM1I and AMD 5150

    Big differences between the two are the inclusion of a Mini-PCIe slot (for a WiFi or SSD card) along with a PCIe 2.0 x16 slot (x4 mechanical).

    AMD 5150

    For those that have followed my blog for a while, I swapped the motherboard into the Lian Li PC-Q03B case that I bought back in 2012. In the time since I set up that machine, I had installed XBMC to stream TV, movies and music from my NAS (to be written up at a later date). Over time it became apparent the low powered AMD C-60 I had in there wasn't truly up to the task of streaming higher bit-rate 1080p video.

    One thing I wasn't a big fan of after booting up the machine was MSI's choice of BIOS in comparison to my ASUS motherboards:

    BIOS aside, I had no issues with the motherboard, Windows 8.1 updated itself without the need for a reinstall and I was ready to benchmark it.

    Interestingly enough, comparing jcBENCH results, the 5150 is roughly 40% slower in both integer and floating point than my 5350. I included a couple other systems for comparison:

    AMD 5150 Benchmark Results

    First off, things have been a bit hectic since my last posting (sadly almost a month ago). I’ve been doing extensive volunteer work for an all new custom ASP.NET MVC 5 web application for the Baltimore GiveCamp to the point it’s become a second job (for better or for worse).

    In between that work and my day job, I managed to get jcBENCH its own domain along with an all new MVC/Web API site hosted on Azure. Newly available is a Windows Store port along with an updated Android port (the last update to the Android port was 2.5 years ago – good to get that done). Big changes in the 0.9 release are the ability to upload your results on every platform along with a new scoring system to better show performance across all devices. Every platform but the IRIX/MIPS3 and MacOS X/x86 ports has been updated (hoping to get those done sometime this week). More to come on the web site front in regards to graphs and comparisons (along with some responsive UI fixes on lower resolution displays). As of right now, every current major platform outside of iOS and BlackBerry has a port. If someone wishes to waive the iOS Store fee and Xamarin Framework license fee, I would be happy to do the iOS port; for a free app, I just can’t justify the cost.

    In addition, I’ve resurrected a project from July/August 2008 and finally figured out how to actually implement it (it wasn’t a focus of my time for sure – I can only remember one instance since then where I actually opened Visual Studio to work on it). Technology has definitely advanced far beyond where it was then, and I’ve added several extremely neat features (at least to me) purely from how far I’ve come as a programmer in the 6 years since. My goal is to have an alpha released sometime in August. What is neat is that no one else (to my knowledge) has done anything of this scope or functionality – nor at this scale, especially as a single developer.

    Along with this project, two smaller libraries will also be released freely with the hope of some adoption, but at the end of the day I am making them to assist myself in all future projects (creating a jcPLATFORM for lack of a better name). If no one else uses them, there will be no sorrow on my part.

    More to come in the days ahead – two long blog posts and the remaining two ports of jcBENCH in particular.
    Working on the Android port of jcBENCH today and ran into a question:
    How to accurately detect the number of CPUs on a particular Android Device running at least Android version 4.0?

    This led me to search around, and I found a pretty intuitive native Java call:
    Runtime.getRuntime().availableProcessors();
    But this was returning 1 on my dual core Android devices, so I continued searching. One suggestion on Stack Overflow was to count the number of entries in the /sys/devices/system/cpu folder that began with cpu followed by a number. I ported the Java code listed in the Stack Overflow answer into C# and ran it on all 4 Android devices I own – all of them returned 3 (and looking at the actual listing, only a CPU0 was actually found).
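    For those curious, the sysfs counting approach can be sketched in C# roughly like so. The helper names are mine, not from the Stack Overflow answer; the key part is the regex, which must anchor on "cpu" followed only by digits so that entries like cpufreq and cpuidle are not counted:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    static class CpuCounter
    {
        // Matches entries like "cpu0" or "cpu12" but not "cpufreq" or "cpuidle"
        private static readonly Regex CpuEntry = new Regex(@"^cpu[0-9]+$");

        // Core filtering logic, separated out so it can be tested without a real /sys
        public static int CountCpuEntries(IEnumerable<string> entryNames)
        {
            return entryNames.Count(name => CpuEntry.IsMatch(name));
        }

        // Reads the actual sysfs folder (Linux/Android only)
        public static int CountFromSysFs()
        {
            var names = Directory.EnumerateFileSystemEntries("/sys/devices/system/cpu")
                                 .Select(Path.GetFileName);
            return CountCpuEntries(names);
        }
    }
    ```

    Even with correct filtering, this approach proved unreliable on my devices, which is what prompted trying the plain .NET property below.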

    This got me thinking: I wonder if the traditional C# approach would work in this case? Xamarin, after all, is built off of Mono...

    Sure enough:
    System.Environment.ProcessorCount
    Returned properly on each device. Hopefully that helps someone out there.
    Last night I presented at CMAP's main meeting on using SQL Server's Geospatial Data functionality with MVC and Windows Phone with a focus on getting exposed to using the data types and the associated SQLCLR functions.

    As promised, you can download the full Visual Studio 2013 solution (PCL, MVC/WebAPI application and the Windows Phone 8.1 application), in addition to the SQL script to run on your database and the Powerpoint presentation itself here.

    Included in the zip file as well is a Readme.txt with the instructions for what you would need to update before running the solution:
    -Update the Database Connection String in the Web.Config
    -Update the WebAPI Service's connection string in the PCL
    -Update the Bing Maps API Key in the Index/View of the MVC App

    Any questions, concerns or comments leave them below or email me at jarred at jarredcapellman dot com. As stated during the presentation, this is not a production ready solution (there is no caching, error handling etc.), but merely a demo for one who wants to see how you can begin to use Spatial Data in your applications.
    After spending the weekend publishing jcBENCH to both the Windows Store and Windows Phone I figured I would post some notes/findings.

    My first impressions of Universal Windows Applications are mixed. On one hand, I found the starting templates extremely well thought out, especially for someone who hadn't been exposed to MVVM before. In addition, I liked that everything but my View could be shared, given that the Windows Phone 8.1 app is not the Silverlight 8.1 project type. This came with its own negatives, however, as you lose access to DeviceStatus, which as of this writing is not available in Windows Runtime applications. I should note my scenario with jcBENCH is kind of unique: it requires the ability to query the device pretty extensively to get the number of cores, CPU model, speed and architecture. For a typical scenario you wouldn't need this level of detail.

    Going forward I think I might simply keep with my MVVM Cross methodology to create Android and iOS applications with Xamarin for work projects, especially considering Windows Store and Windows Phone are unfortunately not the top requested platforms for clients of my employer. For personal projects I will keep building Universal Windows Applications however, as both of those platforms I enjoy far more than iOS and Android (even with Xamarin).

    In regards to getting access to the QueryPerformanceFrequency, QueryPerformanceCounter and GetNativeSystemInfo inside the Shared project you'll need to do the following:
    #if WINDOWS_APP
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    private static extern bool QueryPerformanceFrequency(out long lpFrequency);

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    internal static extern void GetNativeSystemInfo(ref SYSTEM_INFO lpSystemInfo);

    [DllImport("kernel32.dll")]
    private static extern bool QueryPerformanceCounter(out long lpPerformanceCount);
    #endif

    #if WINDOWS_PHONE_APP
    [DllImport("api-ms-win-core-profile-l1-1-0.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    private static extern bool QueryPerformanceFrequency(out long lpFrequency);

    [DllImport("api-ms-win-core-sysinfo-l1-2-0.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    internal static extern void GetNativeSystemInfo(ref SYSTEM_INFO lpSystemInfo);

    [DllImport("api-ms-win-core-profile-l1-1-0.dll")]
    private static extern bool QueryPerformanceCounter(out long lpPerformanceCount);
    #endif
    More to come on Universal Windows Applications and some big jcBENCH announcements coming later this week.

    I recently needed to migrate a database from SQL Server to MySQL. Knowing the reverse could be easily done with the Microsoft SQL Server Migration Assistant tool, I wasn't entirely sure how to achieve the opposite. Figuring MySQL Workbench (in my opinion the best "SQL Management Studio" equivalent for MySQL) had something similar built in, I was pleasantly surprised to find the Migration menu option. I entered my SQL Server credentials and my local MySQL database, but was presented with a nondescript error, so I proceeded to the logs:
    [bash]
    File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 186, in getCatalogNames
        return [ row[0] for row in execute_query(connection, query) ]
    File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 62, in execute_query
        return get_connection(connection_object).cursor().execute(query, *args, **kwargs)
    pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Could not find stored procedure 'sp_databases'. (2812) (SQLExecDirectW)")
    Traceback (most recent call last):
      File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\workbench\", line 192, in thread_work
        self.func()
      File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 439, in task_fetch_schemata
        self.main.plan.migrationSource.doFetchSchemaNames(only_these_catalogs)
      File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 241, in doFetchSchemaNames
        catalog_names = self.getCatalogNames()
      File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 205, in getCatalogNames
        return self._rev_eng_module.getCatalogNames(self.connection)
    SystemError: ProgrammingError("('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Could not find stored procedure 'sp_databases'. (2812) (SQLExecDirectW)")"): error calling Python module function DbMssqlRE.getCatalogNames
    ERROR: Retrieve schema list from source: ProgrammingError("('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Could not find stored procedure 'sp_databases'. (2812) (SQLExecDirectW)")"): error calling Python module function DbMssqlRE.getCatalogNames
    Failed
    [/bash] Needing to get this migration performed ASAP, I proceeded to line 186 in C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\
    query = 'exec sp_databases'
    Knowing this was wrong I adjusted it to:
    query = 'exec sys.sp_databases'
    Saved, then restarted the Migration Tool – no errors, and my migration proceeded to work perfectly. Hopefully that helps someone else out there. I know versions 6.0 and 6.1 of MySQL Workbench are affected by this bug.

    Saturday morning after resolving my Realtek RTL8111-GR Issues on my new ClearOS box, I ran into yet another error:


    Knowing the AM1 platform that my AMD Athlon 5350 APU/ASUS AM1I-A motherboard runs on more than likely does not support IOMMU like my desktop's 990FX does, I figured it was a detection issue with the Linux kernel that ClearOS 6.5 utilizes.

    Doing some research into the issue, there are a couple adjustments to your GRUB configuration that may or may not resolve it. In my case, adjusting my GRUB arguments upon boot to include iommu=soft resolved the issue. I'm hoping down the road with newer Linux kernels the detection (if that even is the issue) gets better, but those running an AMD "Kabini" APU who hit this issue will at least be able to boot into their Linux distribution without any problems.

    Continuing down the path of securing my home network, I wanted to get some sort of automated reporting of traffic and other statistics. Looking around, I came upon Monitorix, which offered everything I was looking for. Unfortunately, adding Monitorix to my ClearOS 6.5 install wasn't as trivial as a yum install. In addition, there seems to be a huge gap between the version all of the online guides cover (2.5.2-1) and the current version as of this writing, 3.5.1-1. With some work I was able to get the latest installed and running, with 1 caveat.

    Installing 2.5.2-1

    To get started execute the following commands:
    [bash]
    yum-config-manager --enable clearos-core clearos-developer
    yum upgrade
    yum --enablerepo=clearos-core,clearos-developer,clearos-epel install clearos-devel app-devel
    yum install app-web-server rrdtool rrdtool-perl perl-libwww-perl perl-MailTools perl-MIME-Lite perl-CGI perl-DBI perl-XML-Simple perl-Config-General perl-HTTP-Server-Simple
    rpm -ivh
    [/bash] Edit /etc/httpd/conf.d/monitorix.conf and update the line that has "" to "all".

    In addition depending on your setup, you may want to configure Monitorix itself in the /etc/monitorix.conf file for an eth1 or other devices that aren't "standard".

    Once satisfied with the configuration, execute the following commands:
    [bash]
    service httpd start
    service monitorix start
    [/bash] Now you should be able to access Monitorix at http://localhost/monitorix.

    Installing 3.5.1-1

    Not content to run a 2 year old version of the software, if only on principle, I started to deep dive into getting the latest version up and running. I tried my best to document the steps, though there was some trial and error in doing the upgrade. Going from a fresh install you may need to execute some of the yum commands above, in particular the first 2.

    First off execute the following commands:
    [bash]
    yum --enablerepo=rpmforge install perl-HTTP-Server-Simple
    yum install perl-IO-Socket-SSL perl-XML-Simple perl-Config-General perl-HTTP-Server-Simple
    wget HTTP-Server-Simple-0.440.0-3-mdv2011.0.noarch.rpm
    [/bash] These will download the necessary prerequisites for the newer version of Monitorix. Next you'll download and install the new rpm:
    [bash]
    wget
    rpm -U monitorix-3.5.1-1.noarch.rpm
    [/bash] Then restart httpd and monitorix:
    [bash]
    service httpd restart
    service monitorix restart
    [/bash] After restarting you may notice an error:
    [bash]
    Starting monitorix: Can't locate HTTP/Server/Simple/ in @INC (@INC contains: /usr/bin/lib /usr/lib/monitorix /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/lib/monitorix/ line 27.
    BEGIN failed--compilation aborted at /usr/lib/monitorix/ line 27.
    Compilation failed in require at /usr/bin/monitorix line 30.
    BEGIN failed--compilation aborted at /usr/bin/monitorix line 30.
    [/bash] Doing some research, adjusting Perl's @INC path permanently requires a recompile, so to fix the problem permanently for Monitorix, simply copy /usr/lib/perl5/vendor_perl/5.X.X/HTTP to /usr/lib/monitorix/.

    After copying the folder, you may also need to verify the path changes for the new version in the /etc/monitorix/monitorix.conf to match the following:
    [bash]
    base_dir = /var/lib/monitorix/www/
    base_lib = /var/lib/monitorix/
    base_url = /monitorix
    base_cgi = /monitorix-cgi
    [/bash] Also verify the first few lines match the following in /etc/httpd/conf.d/monitorix.conf:
    [bash]
    Alias /monitorix /usr/share/monitorix
    Alias /monitorix /var/lib/monitorix/www
    [/bash] After restarting httpd and monitorix (same commands as above), I was presented with a "500 Internal Error". Knowing the errors are logged in /var/log/httpd/error_log, I immediately scrolled to the end to find the root cause:
    [bash]
    [Sat May 10 23:12:04 2014] [error] [client] Undefined subroutine &main::param called at /var/lib/monitorix/www/cgi/monitorix.cgi line 268., referer:
    [Sat May 10 23:12:04 2014] [error] [client] Premature end of script headers: monitorix.cgi, referer:
    [/bash] Having not done Perl in nearly 12 years, I simply went to line 268:
    [bash] our $mode = defined(param('mode')) ? param('mode') : ''; [/bash] Looking at the error, it looks to have stemmed from the param calls. Knowing for myself this would always be localhost, I simply updated the line to the following:
    [bash] our $mode = 'localhost'; [/bash] Attempting to restart monitorix again I received the same error on the next line, so for the time being I "hard coded" the values like so:
    [bash]
    our $mode   = 'localhost'; #defined(param('mode')) ? param('mode') : '';
    our $graph  = 'all';       #param('graph');
    our $when   = '1day';      #param('when');
    our $color  = 'black';     #param('color');
    our $val    = '';          #defined(param('val')) ? param('val') : '';
    our $silent = '';          #defined(param('silent')) ? param('silent') : '';
    [/bash] After saving and restarting Monitorix, I was presented with the Monitorix 3.5.1-1 landing page:

    Monitorix 3.5.1-1

    Clicking on Ok I was presented with all of the graphs I was expecting. To give a sample of a few graphs:
    Monitorix Network Graph

    Monitorix System Graph

    In the coming days I will revisit the error and dig up my old Perl books to remove the hard coded values. Hopefully this helps someone out there running ClearOS who wants to get some neat graphs with Monitorix.

    As some may be aware, I recently purchased an ASUS AM1I-A for a new ClearOS machine to run as a firewall. The installation of ClearOS 6 went extremely smoothly, but upon restarting I kept receiving kernel panic errors from eth1 (the onboard Realtek RTL8111-GR). After doing some investigating, it turns out RHEL, and thereby ClearOS, has an issue with loading the r8169 kernel module when it detects the RTL8111 (and its variants).

    Sure enough after doing an lspci -k:
    [bash]
    02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 11)
            Subsystem: ASUSTeK Computer Inc. Device 859e
            Kernel driver in use: r8169
            Kernel modules: r8169
    [/bash] The dreadful r8169 kernel module is the only module installed and in use. Thankfully you can download the r8168 x64 rpm here or wget

    After downloading, simply run:
    [bash]
    rpm -i kmod-r8168-8.037.00-2.clearos.x86_64.rpm
    [/bash] and then:
    [bash]
    modprobe r8168
    [/bash] Then add blacklist r8169 to /etc/modprobe.d/anything.conf and restart your machine.

    Once your machine is backup, you can verify the correct r8168 module is loaded by re-running lspci -k:
    [bash]
    02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 11)
            Subsystem: ASUSTeK Computer Inc. Device 859e
            Kernel driver in use: r8168
            Kernel modules: r8168, r8169
    [/bash] After installing and using the r8168 module, I no longer received kernel panic errors and was able to utilize the onboard RTL8111-GR without issue. Hopefully this helps someone else who ran into the same problem.

    Another day, another updated port of jcBENCH, this time for x86/Linux. There are no dependencies, just execute and run.

    You can download the x86-linux-0.8.755.0507 release here.

    [bash]
    jcBENCH 0.8.755.0507(x86/Linux Edition)
    (C) 2012-2014 Jarred Capellman

    CPU Information
    ---------------------
    Manufacturer: AuthenticAMD
    Model: AMD Phenom(tm) II X2 545 Processor
    Count: 2x2999.977mhz
    Architecture: x86
    ---------------------

    Running Benchmark....
    Integer: 15.8167 seconds
    Floating Point: 22.9469 seconds
    [/bash] To recap, the following ports still need to be updated:
    -x86/MacOS X
    -arm/Windows Phone 8
    -Windows Store

    Hoping to get more of these ports knocked out in the coming days, so as always check back here. In addition, I'm hoping to get an official page up and running with all of the releases conveniently located, instead of the current situation. No ETA on that project, however.
    I'm pleased to announce the first release of the x86/FreeBSD port of jcBENCH. A few notes on this port that I thought would be interesting:

    1. At least with FreeBSD 10.0, you need to use clang++ instead of g++.
    2. With FreeBSD in Hyper-V I needed to switch to utilizing the Legacy Network Adapter.

    You can download the 0.8.755.0504 release here.
    [bash]
jcBENCH 0.8.755.0505(x86/FreeBSD Edition)
(C) 2012-2014 Jarred Capellman

CPU Information
---------------------
Manufacturer: AuthenticAMD
Model: AMD Phenom(tm) II X2 545 Processor
Count: 2x1517mhz
Architecture: amd64
---------------------
Running Benchmark....
Integer: 14.877 seconds
Floating Point: 17.5544 seconds
[/bash]
    I'm hoping to have the x86/Linux release later this week.
    Going back to a post from a few weeks back, I'm proud to say all of the ppc/MacOS X code has been upgraded to the big 0.8 release from a few months back.

    You can download the 0.8.755.0504 release here.

    For those curious, here are the results of the latest code on a Power Mac G5:
    [bash]
jcBENCH 0.8.755.0504(ppc/MacOS X Edition)
(C) 2012-2014 Jarred Capellman

CPU Information
---------------------
Manufacturer: Apple
Model: PowerPC G5 (1.1)
Count: 2x2 GHz
Architecture: ppc
---------------------
Running Benchmark....
Integer: 62.7726 seconds
Floating Point: 142.774 seconds
[/bash]
    In comparison, a somewhat recent AMD E-350 (dual 1.6ghz APU) gets roughly the same numbers. To be fair, for an October 2005 G5 compared against a low-end January 2011 processor, it didn't do as badly as I would have thought. What is even more surprising, comparing the G5 to my HP DV7-7010us, the G5 is actually slightly faster in 2-threaded floating point tasks (probably the cache): 142.774 seconds versus 171.996 seconds. In integer, however, my HP is considerably faster at 17.61328 seconds.

    An updated version for x86/MacOS X should be available later this week as well as a x86/Linux release - stay tuned.

    A little less than 2 months ago I had some crazy ideas for interacting with a SQL database from C#, as opposed to simply using ADO.NET or Microsoft's own Entity Framework. Not sure exactly how I was going to implement some of the features, I shelved it until I came up with a clean way to implement them.

    With this project I had four goals:
    1. Same or similar syntax to Entity Framework - meaning I should be able to simply drop in my framework in place of Entity Framework with little to no changes.
    2. Performance should be equal to or better in both console and WebAPI applications - covering both scenarios of desktop applications and normal for today, WebAPI Services returning results and executing SQL server side and then returning results to a client.
    3. Implement my own caching syntax that puts the effort of caching on the Framework, not the user of the Framework.
    4. Provide an easy way to generate strongly typed classes akin to Microsoft's Entity Framework.

    This weekend I was able to achieve #1 and, to some degree, #2.

    I was able to achieve an identical syntax to Entity Framework like in the snippet below:
    using (var jFactory = new jcEntityFactory())
    {
        jFactory.JCEF_ExecuteTestSP();
    }
    In regards to performance, I wrote 2 tests. One simply called a stored procedure with a single insert statement, and the other returned several thousand rows. To get somewhat real-world results, I directly referenced the framework in a console application, and then wrote a WebAPI service referencing the framework along with a wrapper function to call the WebAPI service from a console application.

    Without further ado, here are the results running it with 10 to 1000 iterations:
    [bash]
Console App Tests
JC EF 10 Iterations with average of 0.00530009
MS EF 10 Iterations with average of 0.05189771
WebAPI Tests
JC EF 10 Iterations with average of 0.18459302
MS EF 10 Iterations with average of 0.12075582

Console App Tests
JC EF 100 Iterations with average of 0.000740188
MS EF 100 Iterations with average of 0.005783375
WebAPI Tests
JC EF 100 Iterations with average of 0.018184102
MS EF 100 Iterations with average of 0.011673686

Console App Tests
JC EF 1000 Iterations with average of 0.0002790646
MS EF 1000 Iterations with average of 0.001455153
WebAPI Tests
JC EF 1000 Iterations with average of 0.0017801566
MS EF 1000 Iterations with average of 0.0011440657
[/bash]
    An interesting note is the WebAPI performance difference compared to the console application. Sadly, with a WebAPI service my framework is nearly twice as slow, but in console applications (and presumably WinForms and WPF as well) my framework was considerably faster.

    So where does that leave the future of the framework? First, I am going to investigate further the performance discrepancies between the two approaches. Second, I am going to add in caching support with the following syntax (assuming one would want to cache a query result for 3 hours):
    using (var jFactory = new jcEntityFactory())
    {
        jFactory.Cache(JCEF_ExecuteTestSP(), HOURS, 3);
    }
    More to come with my framework as it progresses over the next few weeks. As far as a release schedule, once all four of my main project requirements are completed I will release a pre-release version on NuGet. I don't plan on open-sourcing the framework, but that may change further down the road. One thing is for sure, it will be freely available through NuGet.

    In working on the ia64/Win32 port a few weeks ago, I realized I hadn't completed the CPU detection, nor released an x86/Win32 client in over a year. New since the last release is complete CPU detection, and the client now uses the common code base shared across all of the platforms. Result uploading will come in the big 0.9 release (hoping for next month).

    In general though, I hope to add more platforms to my ever-expanding list and offer a quick way to download the client for the platform you wish to test. The existing jcBENCH page was a carry-over from the WordPress page I created a very long time ago now - look for a revamped page to coincide with the 0.9 release.

    In general I have current native ports for the following platforms:

    And older (need to recompile and update) the following:
    -ppc/MacOS X
    -x86/MacOS X
    -arm/Windows Phone 8

    With ia64/Win32 and alpha/Win32 releases coming this month - just waiting on obtaining VC++ 6.0 for Alpha and some quirks in ia64/Win32 CPU Detection.

    I'm looking to add x86/FreeBSD and Windows Store to the list sooner than later.

    For those looking for the x86/Win32 release click here.

    In working on the NetBSD-mips ports of jcBench and jcDBench, I realized I never did a Solaris port of either when working on my Sun Blade 2500 last fall. Without further ado, I am proud to announce the initial releases of both jcBench and jcDBench for sparc/Solaris, compiled with g++ 4.8.0. Both have 0 dependencies, so just extract and run. Man pages and a true installer will come with the big 1.0 release (more on that at a later date).

    You can download jcBench 0.8.753.0318 here.

    You can download jcDBench here.

    No known issues in my testing on Solaris 5.10 with my Sun Blade 2500.

    Any suggestions, issues, comments etc. just leave a comment below and I'll get back to you as soon as possible.
    After a little bit of work getting the platform-specific code working perfectly, I am proud to announce the initial releases of both jcBench and jcDBench for mips/NetBSD. Both have 0 dependencies, so just extract and run. Man pages and a true installer will come with the big 1.0 release (more on that at a later date).

    You can download jcBench 0.8.752.0306 here.

    You can download jcDBench here.

    No known issues in my testing on NetBSD 5.2.2 with my Cobalt Qube 2.

    Any suggestions, issues, comments etc. just leave a comment below and I'll get back to you as soon as possible.
    Continuing my work on my Cobalt Qube 2, I had some time tonight to re-install NetBSD on a Corsair NOVA 30GB SSD.

    A fairly trivial hardware installation with a PATA<->SATA adapter:
    Gateway Qube 2 - PATA<->SATA Adapter

    Gateway Qube 2 - Corsair NOVA SSD

    After installation I ran jcDBench, having already run it with the "stock" Seagate Barracuda ATA IV PATA drive. Expecting a huge boost in performance, I was disappointed to see these results:
    [bash]
$ ./jcDBench
Running with no arguments...
#-----------------------------------------------
# jcDBench mips/NetBSD (
# (C)2013 Jarred Capellman
#
# Test Date     : 3-13-2014 18:48:14
# Starting Size : 4096
# Maximum Size  : 4194304
# Iterations    : 100
# Filename      : testfile
#-----------------------------------------------
#   test size      write       read
#     (bytes)      (MB/s)     (MB/s)
#-----------------------------------------------
         4096    1.83MB/s   6.17MB/s
         8192    3.01MB/s   10.3MB/s
        16384    6.77MB/s   13.2MB/s
        32768     8.5MB/s   11.4MB/s
        65536    3.55MB/s   10.9MB/s
       131072    3.83MB/s   11.6MB/s
       262144    3.78MB/s   12.1MB/s
       524288    3.87MB/s   12.4MB/s
      1048576     3.9MB/s   7.47MB/s
      2097152    3.91MB/s   7.56MB/s
      4194304    3.93MB/s   7.58MB/s
Benchmark Results Uploaded
[/bash]
    The "stock" drive performed reads very similarly, and on writes it performed considerably better than the theoretically much faster SSD. Suspecting something else was wrong, I went through the dmesg. Lo and behold:
    [bash]
wd0 at atabus0 drive 0:
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 28626 MB, 58161 cyl, 16 head, 63 sec, 512 bytes/sect x 58626288 sectors
wd0: 32-bit data port
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd0(viaide0:0:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA)
Kernelized RAIDframe activated
boot device: wd0
root on wd0a dumps on wd0b
root file system type: ffs
viaide0:0:0: lost interrupt
	type: ata tc_bcount: 16384 tc_skip: 0
viaide0:0:0: bus-master DMA error: missing interrupt, status=0x20
wd0: transfer error, downgrading to Ultra-DMA mode 1
wd0(viaide0:0:0): using PIO mode 4, Ultra-DMA mode 1 (using DMA)
wd0a: DMA error reading fsbn 28640352 of 28640352-28640383 (wd0 bn 34144032; cn 33873 tn 0 sn 48), retrying
wd0: soft error (corrected)
viaide0:0:0: lost interrupt
	type: ata tc_bcount: 16384 tc_skip: 0
viaide0:0:0: bus-master DMA error: missing interrupt, status=0x20
wd0: transfer error, downgrading to PIO mode 4
wd0(viaide0:0:0): using PIO mode 4
wd0a: DMA error reading fsbn 31656736 of 31656736-31656767 (wd0 bn 37160416; cn 36865 tn 7 sn 55), retrying
wd0: soft error (corrected)
[/bash]
    For whatever reason the drive got downgraded to PIO mode 4 - or 16.6mb/sec, down from the 33mb/sec that Ultra-DMA mode 2 offers. Doing some research on the issue, two suggestions came up: 1) the drive is failing (possible, but unlikely considering I just purchased it) and 2) the controller or adapter is faulty. Having never used this adapter in another system, I am leaning towards the latter - I will have to pull out the adapter I am using for my DEC Personal Workstation 433a, which I know works perfectly.

    More to come on this issue...

    Gateway rebadged Cobalt Qube2 - Front
    Gateway rebadged Cobalt Qube2 - Rear


    Ever since I saw one of these devices about 10 years ago I had always wanted to own one, but they were always outside the price point I’d pay for a 14-year-old network appliance. Lo and behold, I was recently able to finally acquire the Gateway rebadged version (unknowingly a rare version) for $41 shipped, in near perfect condition.

    For those that may be unaware, Cobalt Networks, Inc. originally released the Qube 2 in 2000; in September 2000 Sun Microsystems purchased Cobalt Networks (finalized in December 2000) and in turn ended the entire line in 2003. For more information on the story behind the development, I highly suggest reading the Startup Story of Cobalt, a very neat story of bringing an idea to fruition (and to a $2.1 billion buyout in 4 years).


    The Qube and Qube 2 are interesting devices in that they run MIPS R5k cpus (150 MHz RM5230 and 250 MHz RM5231-250-Q respectively). Those following my blog know I am a huge fan of RISC cpus especially MIPS so it was neat for me to get to play around with MIPS in a machine outside of a Silicon Graphics workstation or server, on top of a semi-current operating system. I should note for those curious you can run NetBSD on some SGI systems, but if you have a Silicon Graphics machine why wouldn’t you want to run IRIX?

    My particular model has only 64MB of ram, which can be upgraded to 256MB via 2 128MB SIMMs. The Qube 2 requires a special 72pin EDO SIMM running at 3.3V, so before buying any ram second hand via eBay, a flea market etc., be sure it is 3.3V. On eBay as of this writing there is one vendor with a stockpile of 128MB SIMMs for a fairly reasonable price – so if you’re in the market for a Qube, or have one running with the stock amount, I recommend obtaining the ram now before the price spikes due to the obscurity of the ram or, worse, the supply disappears entirely.

    The IO Board itself is interesting in that it connects to the CPU Board via what looks to be a 32bit 33mhz PCI Slot – meaning the 2 10/100 DEC 21143 Ethernet controllers, VT82C586 VIA ATA 33 controller and any PCI card in the additional slot compete for that ~133 MB/sec bandwidth versus other implementations of the time where each of those devices (or at least a few) would have been on their own controllers. Thinking about that further based on the aforementioned Startup Story of Cobalt and their thinking of making the Qube Dual CPU – maybe the idea was to simply drop another CPU Board into the opposing PCI Slot while also allowing the OEM or customer to drop in a modem (like Gateway did) or another PCI card?

    Another note – the temptation might be to throw in a Western Digital Black 7200rpm SATA II drive with a SATA->PATA adapter, but the tiny exhaust fan on the back of the Qube might not be enough to cool the machine, let alone handle the additional power draw on the older power supplies. I highly recommend one of the following: stick with a 5400rpm ATA drive from that era, an IDE -> Compact Flash adapter (slowest option by far), or a SATA I/II SSD with a SATA -> PATA converter.

    Inside the Qube 2

    Gateway rebadged Cobalt Qube2 – CPU Board
    Gateway rebadged Cobalt Qube2 - CPU
    Gateway rebadged Cobalt Qube2 – IO Board
    Gateway rebadged Cobalt Qube2 – Seagate ATA IV 20GB Hard Drive
    Gateway rebadged Cobalt Qube2 - RAM

    Pre-Installation Notes

    Seeing as how things go away (especially when the product line has been EOL for 10+ years) I’m locally hosting the Cobalt Qube 2 User Manual, which was helpful in figuring out how to open the Qube 2 properly.

    In addition, for those curious, the proper serial port settings are 115,200 baud, 8 data bits, 1 stop bit, no parity and no flow control. I found it gave me peace of mind to have the serial port connected to my PC during install, because outside of the Qube 2’s LCD screen you will have no other indication of what is occurring.

    A good SSH/TELNET/SERIAL/RLOGIN client for Windows is the Netsarang’s Xshell, which is free for home/student work (I prefer it over PuTTY for those curious).

    Installing NetBSD

    Gateway rebadged Cobalt Qube2 - Installing
    I don’t want to simply regurgitate the very informative NetBSD/cobalt Restore CD HOWTO, but to put it simply:

    Download the latest ISO from the NetBSD site. As of this writing, the latest is 5.2.2 released on 2/1/2014.

    Obtain a crossover Cat5 cable or use an adapter and connect two Cat5(e) cables from the Primary Ethernet port on the Qube to your device

    Burn the ISO Image to a CD, pop it into a laptop or desktop and boot from the CD (I used my HP DV7-7010US for those curious and during the boot process the CD will not touch your existing file system)

    Once the Restore CD of NetBSD has finished booting up on your device, turn on the Qube and hold the left and right arrows until it says Net booting

    It took a few minutes from this point to the restorecd ready message being displayed on the LCD screen, then hold the select button for two seconds, hit the Enter button twice (once to select the restore option and the second time to confirm the restore)

    From here it actually installs NetBSD, some files took longer than others (due to the size more than likely) and for those curious here are the files installed in 5.2.2:
    1. base.tgz
    2. comp.tgz
    3. etc.tgz
    4. man.tgz
    5. misc.tgz
    6. text.tgz
    This step of the installation only took a few minutes and when it was completed the LCD updated to indicate success and that it was rebooting:

    Gateway rebadged Cobalt Qube2 – NetBSD Installed and Restarting

    After this occurred, the first time I installed NetBSD the LCD appeared stuck on [Starting up]. In going through the serial connection log, it appeared my SSD was throwing write errors during installation, so I swapped out the SSD for a 160gb Western Digital SATA drive I had laying around, performed the installation again and had a successful boot from the Qube itself:

    Gateway rebadged Cobalt Qube2 – NetBSD Running

    The point being – if it hangs, hook up the serial connection upon attempting to reinstall NetBSD.

    Post Installation

    After unplugging my laptop and hooking the Qube 2 to one of my gigabit switches, I was under the impression Telnet was running and available to connect as root without a password. Unfortunately I was given the following error:
    [bash]
Trying
Connected to
Escape character is '^]'.
telnetd: Authorization failed.
Connection closed by foreign host.
[/bash]
    Doing some research, RLOGIN appeared to be the solution to at least get into the device so I could enable SSH. After switching from TELNET to RLOGIN I was in:
    [bash]
Connecting to
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

login: root
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 5.2.2 (GENERIC) #0: Sat Jan 18 18:11:12 UTC 2014

Welcome to NetBSD!

Terminal type is xterm.
We recommend creating a non-root account and using su(1) for root access.
#
[/bash]
    Immediately I went into my /etc/rc.conf and added sshd=YES, ran /etc/rc.d/sshd start, and sshd immediately generated its keys and started itself.

    Also be sure to set a password for root, which you can do simply by running /usr/bin/passwd.

    By default SSH will not allow root to connect (for good reason), so be sure to add another user. You can add a user with su-to-root abilities with useradd -m -G wheel johndoe, where johndoe is the username you wish to add.

    Benchmarking Results

    As anyone who has followed my blog for some time knows, one of the first things I do is port jcBENCH to the platform if a port doesn’t already exist. Luckily, NetBSD came with GCC 4.1.1, and BSD derivatives offer a pretty neat C header, sysctl.h, that provides a lot of the CPU/architecture information very easily. After implementing the necessary changes and recompiling (which wasn’t too slow, I have to say), I ran jcBENCH:
    [bash]
$ ./jcBench 100000 1
jcBENCH 0.8.752.0306(mips/NetBSD Edition)
(C) 2012-2014 Jarred Capellman

CPU Information
---------------------
Manufacturer: cobalt
Model: Cobalt Qube 2
Count: 1x250mhz
Architecture: mipsel
---------------------
Running Benchmark....
Integer: 475.185 seconds
Floating Point: 2298.39 seconds
[/bash]
    For those who recall, I recently upgraded a Silicon Graphics Indy to an R5000 CPU, so I was very curious how the Qube running NetBSD would compare to the older Indy. I should note a fair comparison would require compiling jcBENCH with the exact or a similar version of GCC on IRIX instead of the 3.4.6 version in nekoware – so take these results with a grain of salt. The results were interesting in that floating point performance was hugely impacted on the Qube (similarly to the R5000SC and R5000PC used in Silicon Graphics machines).

    This led me to investigate the exact variants of the RM5XXX CPUs used in the Silicon Graphics O2, Indy and the Cobalt Qube. Lo and behold, the Qube’s variant runs on a “crippled” 32bit system bus and without any L2 cache. This got me thinking of any other R5k-series Silicon Graphics machines I had owned – the Silicon Graphics O2 I received in February 2012 came to mind, but sadly it had the R5000SC CPU with 512kb of Level 2 cache. Also unfortunate, I sold that machine off before I had a chance to do an IRIX port of jcBENCH in April 2012.

    What’s Next?

    The Qube 2 offers a unique opportunity: a MIPS cpu, extremely low power requirements and virtually silent operation – leaving me to come up with a 24/7 use for the device. In addition, the base NetBSD installation even with 64MB of ram leaves quite a bit left over for additional services:
    [bash]
load averages:  0.02,  0.05,  0.03        up 0+00:28:26   16:30:24
18 processes: 17 sleeping, 1 on CPU
CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Memory: 17M Act, 312K Wired, 5516K Exec, 7616K File, 35M Free
Swap: 128M Total, 128M Free

  PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
    0 root     125    0     0K 2040K schedule   0:09  0.00%  0.00% [system]
  628 root      43    0  4572K 1300K CPU        0:00  0.00%  0.00% top
  583 root      85    0    16M 3944K netio      0:00  0.00%  0.00% sshd
  560 jcapellm  85    0    16M 2916K select     0:00  0.00%  0.00% sshd
  462 root      85    0    12M 2304K wait       0:00  0.00%  0.00% login
  485 root      85    0    11M 1820K select     0:00  0.00%  0.00% sshd
  603 jcapellm  85    0  4280K 1336K wait       0:00  0.00%  0.00% sh
  114 root      85    0  3960K 1320K select     0:00  0.00%  0.00% dhclient
  420 root      85    0  4552K 1204K kqueue     0:00  0.00%  0.00% inetd
  630 root      85    0  3636K 1188K pause      0:00  0.00%  0.00% csh
  458 root      85    0  4024K 1168K select     0:00  0.00%  0.00% rlogind
  454 root      85    0  3636K 1164K ttyraw     0:00  0.00%  0.00% csh
  159 root      85    0  4248K 1100K kqueue     0:00  0.00%  0.00% syslogd
    1 root      85    0  4244K 1028K wait       0:00  0.00%  0.00% init
  439 root      85    0  3956K  988K nanoslp    0:00  0.00%  0.00% cron
  444 root      85    0  4216K  956K ttyraw     0:00  0.00%  0.00% getty
  437 root      85    0  4224K  924K nanoslp    0:00  0.00%  0.00% getty
  395 root      85    0  3960K  868K select     0:00  0.00%  0.00% paneld
[/bash]
    Thinking back to the device’s original intent and my current interest in mobile device interoperability with “wired” devices, I’ve decided to go back to an idea I had in November 2012 called jcNETMAP.
    This Windows Phone 8 app’s purpose was to alert you when servers you selected went down, utilizing the Scheduled Task feature of the Windows Phone API without relying on any other software or device such as GFI Alerts, a custom Windows Service etc.

    Taking the idea in a slightly different direction, take the following example of what I imagine is a typical home network:

    Maybe add a few tablets, smart phones, consoles, DVRs etc., but the idea being you’re relying on your router’s internal firewall rules to protect your home network. With the advent of DD-WRT among others, people might be running a more tuned (albeit all-around better) version, but still, how often can you say you’ve gone into your router and updated the firmware or checked the access logs for anything suspicious? Wouldn’t it be nice if traffic funneling from outside to your router could be analyzed like a traditional firewall/packet filter, and if anything didn’t look right, a notification was sent to your smart phone along with a weekly analysis of the “unusual” traffic? I know you could probably set up an open source project to do packet filtering, but would it have the mobile component, and would it run on hardware as low-end as a Qube 2? First and foremost, though, it is simply a learning experience for myself, a dive into an area I had never really gotten into previously.

    Like many of my ideas – there could be some serious hurdles to overcome. Among others the biggest concern I have is whether or not the Qube 2 would be able to process packets fast enough on my home network to even make this possible – let alone connect into the Web Service which in turn would send the notification to your Windows Phone 8.x device quickly. Regardless of how this project ends up I am sure it will be worth the investment of time, if nothing else I will learn a ton.

    In the coming days look for jcBENCH and jcDBENCH MIPS/NetBSD releases. Also I have 2 128MB SIMMS and a 30gb Corsair NOVA SSD coming this week to pop into the Qube 2 - will post any hurdles.

    Any general questions about the Qube 2, feel free to comment below and I’ll answer them as soon as I can.

    In working on my Silicon Graphics Indy (as detailed in these posts: R5000 upgrade and the initial post), I realized I hadn't built a jcBENCH MIPS3/Irix release before and hadn't released a revamped MIPS4/Irix build on the cross-platform code I worked on a few weeks ago.

    Without further ado:

    MIPS3 Release:

    MIPS4 Release:

    More to come in the coming days in regards to jcBENCH.
    Going back to my initial Silicon Graphics Indy post, I was able to procure an R5000SC 180mhz CPU and the 2-board XZ graphics option (along with a spare chassis/node board). With these 2 components, coupled with the 256mb of ram and the Maxtor ATLAS 15k II U320 drive I already owned, I have acquired the "ultimate" Silicon Graphics Indy.

    The R5000SC 180mhz CPU board: Silicon Graphics Indy R5000SC 180mhz CPU board

    XZ Graphics board: Silicon Graphics Indy XZ board

    The problem I was not aware of until I was about to swap out my older R4400 cpu is that you need a specific PROM version in order to run the newer R4600 and R5000 CPUs. In checking my PROM, I came across one of the original versions (version 4), going back to 1993: Silicon Graphics Indy Old Prom

    In order to use a R5000 in an Indy you need version 11, which as it happens I have in the newer node board from 1996: Silicon Graphics Indy New Prom

    In the end I swapped over the ram from my R4400 Indy, put a "new" Maxtor Atlas 15k II 73gb Ultra 320 drive in the R5000 chassis and started a fresh install of Irix. One might ask why I didn't just take the drive out of my old system; the answer is that the R4400 is from the MIPS3 generation of CPUs, while the R5000 and above are MIPS4 generation. As noted in my initial post, by having an R5000 I can make full use of the nekoware archive.

    Performance-wise, how does it compare (jcBENCH times in seconds, so lower is better)?
    Integer: 596.206 (R4400) vs. 225.467 (R5000)
    Floating Point: 797.860 (R4400) vs. 1053.990 (R5000)

    Integer performance is considerably faster - about equivalent to my old AMD Phenom II P920 1.6ghz notebook CPU - but floating point (probably due to having half the L2 cache of the R4400) is roughly 24% slower. With most gaming/graphics operations being floating point based, one could argue that depending on the application an R4400 200mhz model might be faster than the R5000 180mhz cpu.

    Brief Introduction

    The Silicon Graphics Indy workstation was originally released in late 1993, starting at $4,995. For that price you received a diskless 100 MHz R4000 Indy with 32mb of ram, the base 8bit graphics card and a 15” monitor. A more reasonable configuration - 64mb of ram, 1 GB hard drive, floptical drive, 24bit XL graphics card and external CD-ROM - was $23,695 at launch. I should note standard features on the Indy included an ISDN modem, 10baseT Ethernet, a four-channel stereo sound card and composite video & s-video input - pretty advanced for the time, especially compared to the Apple Mac Quadra.

    Silicon Graphics Indy - Front
    Silicon Graphics Indy - Back

    My Story - Initial Hardware Setup

    I actually received my Indy way back in May 2012 for a whopping $37 shipped, with no hard drive and no memory, but with the R4400SC 150 MHz CPU and 8bit graphics. The SC in R4400SC stands for Secondary Cache; commonly you will find the R4000PC and R4600PC on eBay, which lack the L2 cache.

    Silicon Graphics Indy - Boot Menu
    Silicon Graphics Indy - hinv
    Luckily the Indy takes 72 pin FPM memory that was pretty standard back in 1993 when it was first released; this made finding compatible working ram on eBay much easier. The Indy has 8 slots and supports up to 256mb of memory (8x32mb), which I was able to find for < $10.

    Knowing I would be using this at some point for at least some vintage SGI Doom I also picked up the 24bit XL Graphics Option for $20 later in May 2012, hoping I would get the R5000 180 MHz CPU (more on this later).

    Fast forward to January 23rd 2014, I was cleaning up my office after work and noticed the Indy sitting on top of my Prism and decided to invest the time in getting it up and running.

    Little did I know, all of the spare Ultra 160 and Ultra 320 SCSI drives I had lying around were either dead or didn’t have backwards compatibility with SCSI-2, which the Indy utilizes (I didn’t realize some manufacturers dropped SCSI-2 support in the U160/U320 era). Luckily, I had just purchased several Maxtor ATLAS 15k II 73 GB U320 drives (Model #8E073L0) for use in my Fuel, Tezro, Origin 300s and Origin 350. Maxtor ATLAS 15k II Ultra 320 Drive
    Realizing it was a long shot, I put one of those in the Indy (with an SCA->50pin adapter I got off of eBay for $2) and the Indy recognized it without any problems. Granted, the SCSI-2 bus’s 10mb/sec cap limits the drive’s outbound and inbound bandwidth (I had previously benchmarked it around 95mb/sec of actual transfer speed), but the fluid dynamic bearing motor (virtually silent), the 3.5ms access time and the internal transfer rates far outweigh trying to find an “original” SCSI-2 drive - which, I might add, often goes for $40+ for a Seagate 5400rpm 2gb drive. I should note the Seagate Cheetah 15k.3 18 GB U320 (Model #ST318453LC) and the Fujitsu 18 GB U160 (Model #MAJ3182MC) drives did not downgrade to SCSI-2.

    I should note, my Indy randomly refused to boot (no power to even the power supply fan). Apparently this was a common problem with the initial power supplies from Nidec. The later Sony models didn’t have this problem, but instead didn’t run the fan 100% of the time, only when the temperatures hit a high point. Some folks have modified their Sony power supplies to keep the fan on 100% of the time; I did not, as I only really use the Indy in the basement, where the hottest it gets is about 69°F.

    Silicon Graphics Indy - Nidec
    A solution I found was to disconnect the power cable for a good 20-30 minutes and then try again. 9 times out of 10 this worked and the system had 0 issues booting into IRIX and staying up. So before you run out to buy a “new” power supply off eBay, try this solution out. These are 20-21 year old machines after all.

    My Story – Getting IRIX Installed

    Having installed IRIX now on a Fuel, a Tezro and an Origin 300, I am well versed in the process. For those who are not, check out this howto guide; it is very detailed and should be all you need to get going. This assumes you have IRIX 6.5.x; depending on which workstation/server you have, you might need a higher version. 6.5.30 is the latest version of IRIX released, however those discs typically go for $300+ on eBay. I highly suggest simply getting some version of 6.5 and downloading the 6.5.22m tar files off of SGI via their Supportfolio (this is free after registration).

    In my case, the Indy is so old that any version of 6.5 is acceptable, though I wanted to get it to 6.5.22m so I could utilize nekoware. Nekoware is a community project where several contributors compile typical open source software like bash, Apache, MySQL, PHP etc. for MIPS/IRIX. You can download the tardist files here (a tardist is similar to an rpm if you’re coming from a Linux background).

    I should note, if you install from an older IRIX 6.5.x release (prior to 6.5.22m) you need to install Patch 5086 (available via the Supportfolio for free) prior to the upgrade.

    Another question that might arise, especially for those installing to an Indy: I pulled the DVD-ROM drive (SCSI-2 50pin) from my Silicon Graphics Fuel to install IRIX. For newer systems that utilize SCA like an Octane or Origin 3x0, you could use a 50pin -> SCA adapter with the drive, or do as I did with my Origin 300 a while back and set up a VM of DINA. Basically this allows you to install IRIX over the network to your hardware. After installation, before continuing, I highly recommend you clone your drive and keep it locked away in case you ever accidentally mess up your installation or your hard drive dies. Depending on the speed of your system, you could have just invested several hours of time. Cloning a disk is very easy in IRIX; simply follow this guide. I had done it myself previously on my Origin 300 and it only took about 10 minutes. On my Indy, for those curious, it took about 20 minutes.

    My Story – Post IRIX Installation

    After installation my first step was to get nekoware installed, I typically install the following (including the dependencies for them):
    • BASH
    • OpenSSH
    • wget
    • SDL
    • Firefox
    • GCC
    • Nedit
    • rdesktop
    • Samba
    • Subversion
    There are countless others, but those are the essentials that I utilize very frequently. Depending on your system, the installation (especially of GCC) could take some time, so be patient. Something to note: if you have an R4x00 CPU you need to utilize the MIPS3 tardists, however if you have an R5000 Indy you can use the MIPS4 variations. At some point it seems contributions to nekoware for MIPS3 trickled off, so you’ll more than likely be compiling from source for most things. As I compile stuff I’ll start contributing to my local repository as well.

    What’s Next?

    I’ve been on a hunt for an R5000 180 MHz CPU or at the least a 150 MHz variant so I can be close to the highest end Indy available.

    As for its use, I plan to start on the MODEXngine project now that I have a pretty clear multiplatform architecture.

    In addition I want to use it as a test bed for writing efficient C++. In a future blog post I will focus on the laziness of many (though not all) programmers; I feel the commodity PC hardware of today has gotten so fast and cheap that programmers don’t consider efficiency or write with performance in mind.
    Hoping to wrap up the Windows Store, ARM/Windows Phone 8, SPARC/Solaris 10 and ia64/Linux ports this week, but here is another port: MIPS/Irix:


    In case someone missed the other releases:
    x86/Mac OS X
    Power PPC/Mac OS X

    In related news, jcBENCH will have its big official 1.0 release across the same platforms.
    A day later than I wanted, but here are the links to the x86/Linux and x86/Mac OS X binaries of the initial jcDBench release:
    x86/Mac OS X

    And for those that hadn't seen the initial post, here are the ppc/Mac OS X and x86/Win32 binaries:
    Power PPC/Mac OS X

    After a few weeks of development, I'm proud to announce the initial release of jcDBench, a cross-platform disk benchmarking tool with the ability to upload results. These results are anonymous; the only things submitted are the block size, platform and your scores. In the next version I will add the ability to compare your results with others.

    [bash]
    #-----------------------------------------------
    # jcDBench x86/Win32
    # (C)2013 Jarred Capellman
    #
    # Test Date     : 10-28-2013 20:28:2
    # Starting Size : 4096
    # Maximum Size  : 4194304
    # Iterations    : 10
    # Filename      : testfile
    #-----------------------------------------------
    #  test size      write        read
    #    (bytes)      (MB/s)       (MB/s)
    #-----------------------------------------------
          4096      0.625MB/s    0.0391MB/s
          8192      0.0781MB/s   0.0781MB/s
         16384      0.156MB/s    0.156MB/s
         32768      5MB/s        0.313MB/s
         65536      10MB/s       0.625MB/s
        131072      1.25MB/s     1.25MB/s
        262144      40MB/s       40MB/s
        524288      80MB/s       80MB/s
       1048576      80MB/s       160MB/s
       2097152      107MB/s      160MB/s
       4194304      128MB/s      107MB/s
    Benchmark Results Uploaded
    [/bash]
    For this initial release I made native binaries for the following platforms:
    Power PPC/Mac OS X

    More platforms to come tomorrow, please leave feedback below.

    Digital Personal Workstation 433a - Internet Explorer 2

    After installing Windows NT 4 on my Digital Personal Workstation 433a and realizing I was going to be browsing the web on it with Internet Explorer 2.0, I started the arduous process of updating my installation to more recent versions of software.

    First thing I did was install Service Pack 6 for Windows NT 4. Remembering back in the day on my Pentium how unstable NT 4 was pre-service pack 3, I figured this would be the best first step.

    Since I couldn't find Service Pack 6 for Windows NT 4 on Microsoft's website, I figured I would make the file available locally here.

    After downloading SP6, click the Accept checkbox and then Install:

    Digital Personal Workstation 433a - Windows NT 4 SP6

    Digital Personal Workstation 433a - Windows NT 4 SP6 Installing

    Digital Personal Workstation 433a - Windows NT 4 SP6 Finished

    Roughly five minutes later, click the Restart button and upon rebooting, the startup screen reflects the update:

    Digital Personal Workstation 433a - Windows NT 4 SP6

    Next up, I wanted to update Internet Explorer to the latest version available, which happens to be version 5. Like Service Pack 6, it was an effort and a half to find Internet Explorer 5 for NT4 Alpha. To make it easier for folks, you can download it here.

    Digital Personal Workstation 433a - Windows NT 4 Internet Explorer 5 installed

    Digital Personal Workstation 433a - Windows NT 4 Internet Explorer 5 installed

    After extraction and installation, I was pleasantly surprised at how fast the browser was. After attempting to browse the internet with Firefox 3 on my "faster" Quad 700mhz Silicon Graphics Tezro, it was refreshing and a confirmation that web browsing on IRIX definitely needs to be readdressed at some point as I briefly discussed in my Cloud Based Web Rendering? post back in January 2012.

    What's next? Getting Visual Studio 6 installed and diving into FX!32.
    Continuing from my post on Wednesday night where I detailed the insides of my Digital Personal Workstation 433a, in this post I'll detail Windows NT 4 being installed on it.

    For those unaware, you can use a regular Windows NT 4 Workstation or Server CD. On the CD there are also PPC and MIPS versions of the OS, though I do not know the exact models that are supported on those platforms.

    A prerequisite to installing Windows NT 4 on the Personal Workstation is to download and extract the HAL (Hardware Abstraction Layer) for the PWS to a floppy disk. Like many folks, more than likely, I did not have a floppy disk drive handy. Fortunately USB floppy drives are only about $10 as of this writing on Amazon; I picked up this External USB Floppy Drive for $11.25 shipped. In addition you'll need at least one 3.5" floppy disk. Sometime in 1999 or 2000 I cleaned out my stockpile of floppy disks, but thankfully Amazon also had a 10 pack of Verbatim 3.5" floppy disks.

    Once you have a floppy disk in hand, I've locally hosted the HAL you'll need in case Compaq/HP stops hosting it. You can download it here. Simply extract it to the floppy disk after download.

    Diverging a bit, I was able to pull an ELSA GLoria Synergy from my DS20E AlphaServer. This card is a 3DLabs Permedia 2 8mb PCI video card that can do up to 1920x1200 @ 16bpp, a huge upgrade over the Diamond Stealth 64 2001 (1mb) PCI video card that came in my PWS. On an interesting side note, when the PWS 433a debuted in 1996 I was also running a Diamond Stealth 64 2001 in my 75mhz Pentium, and 3 summers later in 1999 I was running a Diamond FireGL 1000 Pro (3DLabs Permedia 2) along with a Diamond Monster 3D II (12mb version). I find it funny that my little desktop, several factors cheaper, had the same graphics card in the same time frame. You would have thought Digital would have put a higher end graphics card in their "workstation" than an S3 Trio64 based card.

    With the Windows NT 4 CD and HAL floppy disk in hand, simply insert the CD and in the AlphaBIOS select Install Windows NT.

    Digital Personal Workstation 433a - Windows NT 4 Initial Setup Screen

    After a few moments you'll be presented with a computer type selection:

    Digital Personal Workstation 433a - Windows NT 4 Computer Type Selection

    Select Other and insert the HAL Floppy disk you made earlier and upon loading select Digital Personal Workstation 433a, 500a, 600:
    Digital Personal Workstation 433a - Windows NT 4 HAL Selection

    After that the setup continued and upon reboot, booted into the GUI portion of the setup:
    Digital Personal Workstation 433a - Windows NT 4 Setup 1

    Digital Personal Workstation 433a - Windows NT 4 Setup 2

    Digital Personal Workstation 433a - Windows NT 4 Setup 3

    I chose to hold off configuring the Ethernet Adapter during the initial setup because I wasn't sure what model was actually built into my PWS 433a. For those curious, my 433a had a DEC PCI Ethernet DC21142. I don't have the MiataGL revised motherboard, so this may not be the correct model for you (if you have USB ports, you have the MiataGL variant).

    Digital Personal Workstation 433a - Windows NT 4 Setup 4

    I proceeded through the rest of the setup essentially just clicking next.

    Digital Personal Workstation 433a - Windows NT 4 Setup 5

    Digital Personal Workstation 433a - Windows NT 4 Setup 6

    Digital Personal Workstation 433a - Windows NT 4 Setup 7

    Digital Personal Workstation 433a - Windows NT 4 Setup 8

    Digital Personal Workstation 433a - Windows NT 4 Setup 9

    Digital Personal Workstation 433a - Windows NT 4 Setup 10

    Digital Personal Workstation 433a - Windows NT 4 SP1 First Boot

    After the reboot, Windows NT 4 converted the boot partition to NTFS and afterwards I was presented with the expected login screen:

    Digital Personal Workstation 433a - Windows NT 4 NTFS Boot

    Digital Personal Workstation 433a - Windows NT 4 Logon

    After logging in you are presented with the usual "Welcome" screen, shown here for those unfamiliar with Windows NT 4:
    Digital Personal Workstation 433a - Windows NT 4 Welcome Screen

    Remembering from the installation that the graphics driver for my ELSA GLoria Synergy was not bundled with NT4 for Alpha, I did some searching on the Internet and fortunately was able to find the driver. I've locally hosted the file here to make it easier to get going. After a reboot I was presented with a 640x480x16bpp resolution instead of the 640x480x8bpp without the driver:
    Digital Personal Workstation 433a - Windows NT 4 with the ELSA GLoria driver installed

    What's next? Getting Service Pack 6, Internet Explorer 5, Visual Studio 6 and FX!32 installed so I can begin porting jcBENCH with native Alpha/NT binaries.
    A while back I received a Digital Equipment Corporation (DEC) Personal Workstation 433a and finally had some time to get it up and running. For those that are unaware, the 433a has a 433mhz Alpha 21164A (EV56) that is capable of running Windows NT 4 and with the right bios can run OpenVMS. The 433au model has the OpenVMS capable bios out of the box. I am not familiar with the process, but from what I gather it isn't terribly difficult.

    Digital Personal Workstation 433a - Front View

    Digital Personal Workstation 433a - Rear View

    Digital Personal Workstation 433a - Inside View

    Digital Personal Workstation 433a - Mainboard

    Digital Personal Workstation 433a - Mainboard (Back)

    Digital Personal Workstation 433a - Mainboard Inside

    Digital Personal Workstation 433a - RAM Slots

    Digital Personal Workstation 433a -Diamond Stealth 64 (S3 Trio64) Video Card

    Digital Personal Workstation 433a - 32mb PC100 ECC SDRAM Stick

    Enamored by the prospect of running Windows NT 4 (my favorite operating system) on a non-x86 system, I jumped at the chance of getting it on the 433a.

    Microsoft Windows NT 4 Workstation

    Unfortunately, the 433a came without a hard drive, and being from 1996 there is no SATA support, only EIDE and, with the right card, Ultra Wide 2 SCSI. Not having any spare Ultra 320 SCSI drives, I used a SATA -> PATA adapter on a 500gb Hitachi SATA II laptop drive I had laying around. Knowing this would technically limit it to 16mb/sec outside of Disk <-> Buffer operations, I was intrigued by the performance versus putting in a "slower" Ultra 320 SCSI drive like a Maxtor Atlas 15k II and attaching it to an Ultra Wide 2 SCSI adapter with 80mb/sec of bandwidth. One day I will pop in a U320 drive and compare the disk results.

    Digital Personal Workstation 433a - SATA -> PATA Adapter

    During shipment the mainboard came loose from the PCI-like connector it attaches to; if you receive a PWS that fails to power on, ensure it is fully connected.

    After reseating the mainboard I was greeted with the startup logo:

    Digital Personal Workstation 433a - BIOS Image

    I entered the "BIOS" and was presented with a very "PC" like screen that I wasn't expecting:

    Digital Personal Workstation 433a - System Information

    Knowing that the Windows 8 NTFS partitions would most likely be unrecognizable by the Alpha, I proceeded to the Disk Setup and reconfigured the drive with the express setup:

    Digital Personal Workstation 433a - Express Setup

    Digital Personal Workstation 433a - Hard Drive Partitions

    Fortunately, the "express" setup was very easy, it configured the drive for use with Windows NT in mind with no manual configuration needed outside of creating a second partition to fill out the remaining space on the hard drive.

    After exiting the "bios": Digital Personal Workstation 433a - Bios Boot

    In the next post, I'll detail the Windows NT 4 Workstation and subsequent updates required.
    A couple months ago I uncovered a floppy disk with unknown content.
    Shoot A Rama 2000 3.5 inch floppy

    Judging by the cover on the disk I knew it had content that was at least 13 years old, if not older. Fortunately, I had just purchased a USB Floppy Drive for work on my Digital Personal Workstation.

    To my surprise:
    Shoot A Rama 2000

    Shoot A Rama 2000

    I had long since thought I had lost the sole floppy disk copy I knew I had made nearly 14 years ago on Christmas Eve 1999. I programmed it in Borland C++ 5.02 on Windows NT 4.0 and it marked the first game that utilized my then brand new 2D VGA engine, GORE. Unfortunately I don't have the source code any more, but those wishing to launch it in DOSBox (or on an older O/S) can download it here.
    As noted in my Asus X55U Notebook write up earlier today, I ported jcBENCH over to x86/Linux. Luckily, there were only minute changes required from the ia64/Linux port. Interestingly, /proc/cpuinfo is handled differently between SLES 9 and OpenSUSE 12.3. I'll have to do further investigation on the reasoning behind that.
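As a sketch of why the CPU detection needed a tweak: the field that carries the model string in /proc/cpuinfo varies by kernel and architecture, so a parser has to try several labels. The field names below are illustrative examples, not jcBENCH's actual code:

```shell
# Pull the first CPU model string from /proc/cpuinfo, trying a few of the
# labels different kernels/architectures use ("model name" on modern x86;
# other kernels use different fields). Falls back to "unknown" if none match.
model=$(awk -F': *' '/^model name|^cpu model|^family/ { print $2; exit }' /proc/cpuinfo)
echo "${model:-unknown}"
```

The same pattern-with-fallback approach keeps one binary's detection code working across distributions that format the file differently.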

    You can grab the x86/Linux 0.6.522 release of jcBENCH here.
    Over the Labor Day weekend, I happened to be in Silicon Valley on a short vacation and Fry's Electronics luckily had an Asus X55U Notebook on sale for $258. I had been looking for a long battery life laptop that wouldn't kill me if it ever got hurt during traveling (versus my much more expensive HP laptop). On top of that, I had wanted a Linux laptop to do cross-platform testing of jcDB, jcPIMP and the Mode Xngine. Planning ahead a bit, I brought an older Corsair Force 3 90gb SSD that I was no longer using and a Philips screw driver (yes folks, screw drivers are allowed by the TSA, it just can't be more than a few inches long).

    Asus X55U Notebook

    Asus X55U Notebook box contents

    Asus X55U OpenSUSE Boot Menu

    Specifications wise the laptop has:
    -15.6" 1366x768 Screen (not the greatest quality, but definitely a lot better than expected at the price point)
    -AMD E2-1800 (Dual 1.7ghz APU that clocks itself down to 2x850mhz when performance isn't needed)
    -4gb DDR3 1066 (1 slot, upgradeable to 8gb)
    -Radeon HD 7340 (DX11, 512mb of system ram is used)
    -500gb Hitachi SATA Drive
    -1 USB 2.0 and 1 USB 3.0 Port
    -HDMI Output
    -Gigabit Ethernet
    -802.11n WiFi
    -VGA Output
    -Mic and Headphone Jack
    -DVD-RW Drive
    -SD/SDXC/SDHC Memory Card Slot

    I should note, this APU does support AMD's Virtualization, so you can run Hyper-V, Xen, VMware Workstation etc. on this notebook. Coupled with the 8gb of ram support, this could be a decent portable VM notebook for the price.

    Fortunately, doing the swap of the hard drive was extremely easy as opposed to some laptops that require taking apart the entire laptop (looking back at the Dell Inspiron I purchased in October 2010). Just 2 screws to pull off the back, which also contains the single DDR3 SO-DIMM slot.

    Asus X55U Notebook (Bottom)

    Corsair Force 3 90gb SSD

    Curious if the system supported 8gb of DDR3 ram (since the manual didn't specify), I bought an 8gb DDR3-1333 Corsair SO-DIMM:

    Corsair 8gb DDR3-1333 SO-DIMM

    Swapped out the Hynix 4gb with the Corsair:

    Asus X55U with the Corsair 8gb DDR3 SO-DIMM installed

    And sure enough, the notebook supports 8gb:

    Asus X55U with 8gb showing in the BIOS

    While in the BIOS I should mention the charge-when-off feature this notebook has, meaning that with the lid closed you can still charge your phone or media player. I wish my HP had that functionality.

    Asus X55U BIOS Options

    OpenSUSE 12.3 installed without a hitch (even the WiFi worked off the bat). After getting the system configured, the first thought I had was take the recent ia64/Linux port of jcBench and port it over to x86/Linux. On the flight back from SFO, I ported it over and thankfully it was only a re-compile with a slight tweak to the CPU detection.

    How does the system perform?
    [bash]
    jcBENCH 0.6.522.0928(x86/Linux Edition)
    (C) 2012-2013 Jarred Capellman

    CPU Information
    ---------------------
    Manufacturer: AuthenticAMD
    Model: AMD E2-1800 APU with Radeon(tm) HD Graphics
    Count: 2x1700.000mhz
    Architecture: x86/x86-64
    ---------------------
    Running Benchmark....

    Integer: 65.4932 seconds
    Floating Point: 35.6109 seconds
    [/bash]
    In comparison to my Silicon Graphics Prism (2x1.5ghz Itanium 2) it performs a little slower in Integer operations, but is nearly 3X faster in Floating Point operations. In comparison to my HP DV7 laptop (AMD A10), it performs about 2X slower in the same dual threaded applications, as expected with the slower clock rate and much smaller cache.

    Overall, the notebook does exactly what I want and more for a $258 device. Build quality exceeds the Acer netbook and Dell Inspiron I had several years ago, coming close to my HP DV7; if only this Asus used the higher grade plastics. For those curious, battery life is about 4 hours with WiFi enabled and middle-of-the-road screen brightness.
    For frequent visitors to this site, you might have noticed the "metro" look starting to creep into the site design. Going back to April, my gameplan was to simply focus on getting the site functioning and duplicate the look and feel of the WordPress theme I had been using. It took a little longer than expected to start the "refit", but I think when it is completed it will be pretty cool. For now I know the site only looks "perfect" in IE 11; there's some tweaking to be done for Chrome and Firefox, but that's on the back burner until I round out the overall update.

    In addition, the My Computers page has been updated to include a breakout of the non-x86 systems I have. As I get time I'm going to flesh out the detail design and start to get content in there for each system. These later generation Silicon Graphics machines seem to have little to no documentation available as far as what upgrades are possible, things to look for, and min/max configurations, not to mention the "loudness" factor. Look to that section in the coming weeks for a big update.

    Lastly, in regards to development, jcDB is progressing well on both ia64/Linux and x86/Win32. I am still on track to wrap up a 0.1 release by the end of the year with a C# Library to assist with interfacing with the database. Mode Xngine is also progressing well, I spent a little bit of time over the last 2 weeks working on the SDL Interface for the platforms that don't support Mono.

    This week overall, I expect to post a few DEC Alpha related posts and with any luck get some time to play with NT4/Windows 2000 on one of them.
    As noted in my last Silicon Graphics Prism post, I had some spare nodeboards from Silicon Graphics Altix 350 systems I had collected over the years. Thankfully they are drop in compatible.

    The nodeboard in question that I am swapping with my original Dual 1.3ghz/3mb Itanium 2's has Dual 1.5ghz/6mb Itanium 2 CPUs:
    Silicon Graphics Prism Dual 1.5ghz Nodeboard

    Prism Chassis without nodeboard:

    Silicon Graphics Prism Empty Nodeboard

    A key thing to note is after swapping the nodeboard, ensure the light is turned on close to where the screws on the left hand side are:
    Silicon Graphics Prism connection light

    If you don't see this light turned on or are getting unusual errors after the swap during post, make sure the 3 screws are secure. I received the following upon booting with a semi-connected nodeboard:

    Silicon Graphics Prism nodeboard connection errors

    Also, after the swap the date seemed to have reset to 1998 which caused SLES9 to hang here during boot on Setting up the CMOS clock:
    Silicon Graphics Prism CMOS Error upon booting up

    A simple fix for this is to use the EFI Shell and execute a date command like so:
    [bash]
    date 09/28/2013
    [/bash]
    Then type exit and SLES9 should boot without issue.

    So after all of that, how does it compare in jcBENCH performance?
    [bash]
    jcBENCH 0.6.522.0826(ia64/Linux Edition)
    (C) 2012-2013 Jarred Capellman

    CPU Information
    ---------------------
    Manufacturer: GenuineIntel
    Model: Itanium 2
    Count: 2x1500.000000mhz
    Architecture: IA-64
    ---------------------
    Running Benchmark....

    Integer: 54.7224 seconds
    Floating Point: 103.501 seconds
    [/bash]
    Looking back at my Dual 1.3ghz benchmarks, Integer and Floating Point performance are both roughly 14% faster. Given the ~14% clock speed boost and twice the cache I expected more, but as it is largely an un-real-world benchmark, take it for what you will.

    What's next for my Silicon Graphics Prism? I've got an LSI Megaraid SATA Controller and a Dual 1.6ghz nodeboard to swap in next.
    Installing a new Windows Service on a server today, I ran into an unexpected error using installutil, found in your Windows Folder\Microsoft.NET\Framework\vX.xxxx folder.

    The error I received:
    [bash]
    An exception occurred during the Install phase.
    System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.
    [/bash]
    Judging by the error message, it appeared to be a permissions issue; luckily, the solution is to simply run the command prompt as an administrator.
    Showing its age again, my SLES 9 SP3 installation didn't have any bundled packages for Subversion, my source control of choice. Wanting to really start on my ia64 development in conjunction with my Win32 and IRIX work, I started my search for a precompiled RPM.

    Sadly the only one I could find was version 1.1.4, and it had several dependencies that I couldn't seem to find matches for anywhere. The only solution I could think of was to download the source rpm (available in the link mentioned above) and compile it myself.

    Luckily one of the dependencies, neon, was also available on that same server here. Simply install it with rpm -i neon-0.24.7-6.1.1.ia64.rpm.

    Next up was actually compiling Subversion - which is where it got more interesting.

    After running ./, I ran ./configure from within my newly installed Subversion source code only to get this warning:
    [shell]
    configure: WARNING: Detected SWIG version 1.1 (Patch 5)
    configure: WARNING: This is not compatible with Subversion
    configure: WARNING: Subversion can use SWIG versions 1.3.19, 1.3.20, 1.3.21
    configure: WARNING: or 1.3.24 or later
    [/shell]
    SLES 9 SP3 apparently only has SWIG 1.1 (Patch 5) bundled with it, but fortunately the SWIG source code is readily available. Keeping with the age of Subversion 1.1.4, I chose to download from SourceForge, extracting it with the following commands:

    [shell]
    gunzip swig-1.3.24.tar.gz
    tar -xvf swig-1.3.24.tar
    [/shell]
    And then:

    [shell]
    ./
    ./configure
    make
    make install
    [/shell]
    After re-running configure for Subversion, I was pleasantly surprised to find everything compiled and installed properly. Unfamiliar with how to package my own rpm (it's on my todo list), the only thing I can provide folks is locally hosted direct links to the files you'll need:

    For those curious I've checked-out, updated and committed to a SVN 1.8.1 server without any issues. So for those running the latest svn server and were wondering how far back of a client it would support - you are good to go, at least for now.
    After getting my Silicon Graphics Prism ready to go on Saturday, and then getting the Intel C++ Compiler up and running on Sunday, my first task: getting jcBENCH ported over.

    Not having done Linux development in a while (IRIX is similar, but different), I had to go through a few hurdles. One of them was remembering the compiler macros for detecting Linux versus Win32 and IRIX. Luckily SourceForge has a great listing of all the macros. After pushing through a few timespec_t issues (Linux's version is timespec), I was able to get it ported over while leaving in my #ifdefs to retain portability.

    A couple notes for me to remember, and in case someone else has the same issues:
    *To fix the "undefined reference to 'clock_gettime' and 'clock_settime'" errors:
    -include "-l rt" in your g++ argument list

    So how does my Dual Intel Itanium 2 1.3ghz/3mb L3 Cache do?

    [shell]
    jcBENCH 0.6.522.0826(ia64/Linux Edition)
    (C) 2012-2013 Jarred Capellman

    CPU Information
    ---------------------
    Manufacturer: GenuineIntel
    Model: Itanium 2
    Count: 2x1300.000000mhz
    Architecture: IA-64
    ---------------------
    Running Benchmark....

    Integer: 63.0576 seconds
    Floating Point: 119.499 seconds
    [/shell]
    In comparison to my other machines, Integer performance with both CPUs is somewhere between a single CPU of an A6-3500 (3x2.1ghz) and an A10-4600M (4x2.3ghz).

    In comparison, Floating Point performance with both CPUs fared much better, in that it was somewhere between my old Phenom II X2 545 (2x3ghz) and an A6-3500 (3x2.1ghz).

    You can download the initial ia64/linux release here.

    What's next? Now that I have some baseline metrics, tomorrow night I will swap in the spare 1.6ghz/9mb L2 Cache Itanium 2 CPUs I purchased a while back for my Silicon Graphics Altix 350 systems. Stay tuned for more details.
    A big thanks goes to jan-jaap on the Nekochan Forums for pointing out you can, if you're only doing non-commercial development. This license applies to x86 and AMD64 as well.

    First, go to Intel's site to register an account (it doesn't work in IE 11 at least in the Windows 8.1 Preview).

    Second, you'll receive an email with your license key, be sure to keep this handy as the installation requires it to activate.

    After clicking the download button in the email, click Additional downloads, updates and versions:
    Intel Cpp Suite
    And then select Intel C++/Fortran Compiler 11.1 Update 9. As of this writing the file is l_cproc_p_11.1.080_ia64.tgz.

    After downloading it, extract it and run the installer:
    [shell]
    tar -zxvf l_cproc_p_11.1.080_ia64.tgz
    ./
    [/shell]
    The install process is pretty trivial - afterwards you'll have access to the Intel C++ compiler, installed to /opt/intel/Compiler/11.1/080/bin/ia64 (assuming you installed it to the default folder).

    To compile with the Intel Compiler just call icc. At this point it would make sense to add the Intel Compiler to your PATH.
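In shell terms that looks like the snippet below; the directory is the default install prefix from above, and everything else is plain POSIX shell mechanics:

```shell
# Default install location per the post; adjust if you chose another prefix.
ICC_DIR=/opt/intel/Compiler/11.1/080/bin/ia64
export PATH="$ICC_DIR:$PATH"
# icc now resolves without the full path, e.g. `icc -O2 -o jcbench jcbench.cpp`
case ":$PATH:" in *":$ICC_DIR:"*) echo "PATH updated";; esac   # → PATH updated
```

Adding the export line to your shell profile (e.g. ~/.profile) makes it persist across logins.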

    Hope it helps someone out - I would highly suggest, if you have any inclination of doing ia64 development, getting it downloaded now before it becomes unavailable at some point.


    Back in May of this year I purchased a Silicon Graphics Prism. I had been looking to obtain one since January, when one went for next to nothing on eBay. While I'm not a big fan of Intel based machines, let alone ia64, the machine itself, similar to the pinnacle of SGI's MIPS line, the Tezro (which I love and detailed in my Dual to Quad upgrade here), intrigued me enough to want to play with it.

    What is it?

    Silicon Graphics Prism (Front)

    Silicon Graphics Prism (Mainboard)

    Silicon Graphics Prism (Nodeboard)

    Silicon Graphics Prism (AMD FireGL X3 Cards)

    For those unaware, the Silicon Graphics Prism supports:
    -Up to 2 1.6ghz Itaniums with 9mb of cache
    -Up to 24gb (12x2gb) of DDR (Registered/ECC)
    -Dual AMD FireGL X3 (X800 256mb) AGP cards (yes – it has 2 AGP 8X slots)
    -4 64bit PCI-X buses spread across 8 64 bit 133mhz PCI-X slots (1064 mb/s)
    -4 USB 2.0 ports
    -Built-in copper gigabit
    -2 3.5” SATA I bays

    Step One - Bypassing the EFI Password

    My Prism unfortunately didn’t come with a hard drive and what made matters worse – the EFI (the newer replacement for the traditional BIOS) had a password preventing any action after bootup.

    After digging through virtually every piece of documentation I could find and resorting to posting on the Nekochan Forums for ideas, one poster suggested clearing the NVRAM or pulling the battery. Unfortunately, as it turns out, the battery is soldered to the node board. That left the highly undocumented POD (Power On Diagnostics) mode over the L1 Controller (the serial connector on the back of the Prism).
    [shell]
    EFI Boot Manager ver 1.10 [14.62]
    Partition 0:  Enabled  Disabled
      CBricks   1  Nodes   3       0
      RBricks   0  CPUs    2       0
      IOBricks  1  Mem(GB) 16      0

    Password not supplied -- keyboard is locked.

    Seg: 1 Bus: 0 Dev: 1 Func: 0 - SGI IOC4 ATA detected: Firmware Rev 83
      Ata(Primary,Master) MATSHITADVD-ROM SR-8178 PZ21
    Seg: 1 Bus: 0 Dev: 3 Func: 0 - Vitesse Serial ATA detected: Firmware Rev 1
      Sata(Pun2,Lun0) Hitachi HDS72105 JP2OA50E JP1532FR32P54K
    Broadcom NetXtreme Gigabit Ethernet (BCM5701) is detected (PCI)

    EFI Boot Manager ver 1.10 [14.62]
    Partition 0:  Enabled  Disabled
      CBricks   1  Nodes   3       0
      RBricks   0  CPUs    2       0
      IOBricks  1  Mem(GB) 16      0

    Loading device drivers

    EFI Boot Manager ver 1.10 [14.62]
    Partition 0:  Enabled  Disabled
      CBricks   1  Nodes   3       0
      RBricks   0  CPUs    2       0
      IOBricks  1  Mem(GB) 16      0

    Please select a boot option

        SUSE Linux Enterprise Server 11
        SLES10
        SLES10_bak
        EFI Shell [Built-in]
        CDROM
        Disk2SLES
        Boot option maintenance menu

        Use ^ and v to change option(s). Use Enter to select an option

    Loading: SUSE Linux Enterprise Server 11
    Load of SUSE Linux Enterprise Server 11 failed: Not Found
    Loading: SLES10
    Load of SLES10 failed: Not Found
    Loading: SLES10_bak
    Load of SLES10_bak failed: Not Found
    Loading: EFI Shell [Built-in]

    EFI Shell version 1.10 [14.62]
    Device mapping table
      blk0 : Acpi(PNP0A03,1)/Pci(1|0)/Ata(Primary,Master)
      blk1 : Acpi(PNP0A03,1)/Pci(3|0)/Sata(Pun2,Lun0)
    Shell>
    [/shell]
    Issuing an nmi command over the L1 Controller (accessible by hitting Control + t) switches you to POD mode. After which, use the initalllogs command, which will clear all of the EFI NVRAM settings (the password, boot menu, date/time, etc.):
    [shell]
    2 000: POD SysCt Cac> initalllogs
    *** This must be run only after NUMAlink discovery is complete.
    *** This will clear all previous log variables such as:
    *** moduleids, nodeids, etc. for all nodes.
    Clear all logs environment variables, and aliases ? [n] y
    Clearing nasid 0...
    Clearing nasid 0 EFI variables...................
    Clearing nasid 0 error log...........
    All PROM logs cleared!
    2 000: POD SysCt Cac> exit
    [/shell]

    Step Two - Getting an OS on the Prism

    Now that I was able to actually "use" it - I needed to install an OS. With the Prism you have a couple choices. If you want to run it "headless" meaning without physically connecting a monitor and using it as a desktop - you can run Debian 5.x to 6.x, Suse Enterprise Linux 9 to 11 SP2, and Red Hat Enterprise Linux 4.x to 6.4.

    However, if you want to run it as a desktop like I did - you'll be stuck using Suse Enterprise Linux 9 SP3 or Red Hat Enterprise Linux (not sure of the version). In addition, you will need the Silicon Graphics ProPack 4 SP4 (be careful not to obtain any later version, as they dropped support for the Prism). The ProPack installs a special kernel that has support for the custom dual AGP setup found in the Prism along with the FireGL cards themselves.

    To review, you will need the following if you want to go down the graphics route:
    -Suse Enterprise Linux (SLES) 9 base ISO Images (6 CDs, freely available on Novell as of this writing)
    -Suse Enterprise Linux (SLES) 9 SP3 ISO Images (3 CDs, available via support contract only or if you happen to get a prism with the RPMS - store them for later installation)
    -Silicon Graphics ProPack 4 SP4 ISO Images (3 CDs, very rare for them to turn up on eBay and expensive otherwise)

    From this point on I'll assume you have the CDs mentioned above.

    Suse Enterprise Linux 9 SP3 Installation
    1. Insert the SLES 9 SP3 CD 1 in the DVD drive of the Prism (the stock SLES 9 CD 1 will not work)
    2. Proceed through the installation (occasionally it'll refer to CD 1 as SLES 9 CD 1 not SLES 9 SP3 CD 1, so if it doesn't like the CD you entered try the other).

    After the installation completes and you find yourself staring at the BASH shell I highly suggest keeping an extracted copy of all 9 of the SLES 9 CDs on your hard drive.

    I simply created an isos directory (with mkdir) and then ran cp -r /media/dvd . to copy the DVD. A faster approach is to extract the ISO images on your PC or another machine, put them on a USB thumb drive, mount the drive (it was /dev/sdb1 for me) and copy its contents to your newly created folder. I did this for the ProPack CDs as well. Not only is it faster, but when you later install a package that has a dependency you won't need to swap CDs.
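The USB shortcut can be sketched as below. The demo uses temporary directories standing in for the real mount point and isos folder, since device names (/dev/sdb1 in my case) and paths will vary on your machine:

```shell
# Sketch of the USB copy; on the real Prism, substitute the actual mount
# point of the thumb drive and your home directory.
SRC=$(mktemp -d)              # stand-in for the mounted thumb drive
DEST=$(mktemp -d)/isos        # stand-in for the isos folder in your home dir
touch "$SRC/SLES-9-SP3-CD1.iso"   # pretend the extracted CDs are on the drive

mkdir -p "$DEST"
cp -r "$SRC"/. "$DEST"/       # copy everything, preserving subdirectories
ls "$DEST"
```

On the Prism itself this boils down to something like mount /dev/sdb1 /mnt followed by cp -r /mnt/. ~/isos/.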

    To get YaST ready for future installs (including the next step), issue a /sbin/yast from your shell, which will present you with the YaST Control Center.

    Tab over to the Change Source of Installation option on the right like so: YaST SLES 9
    Go through all of the CDs by selecting Add and then selecting Local Directory.... When completed your screen should look very similar to this:
    YaST SLES 9

    Step Three - Installing the Propack

    Now that SLES 9 SP3 is installed on your Prism, you will need to install the ProPack. If you followed the previous steps, you have already copied the ProPack's 3-CD set to your hard drive and added its path to the available installation sources for YaST.
    To get started return to the initial screen of the YaST Control Center and select Install and Remove Software.

    Once on the Install and Remove Software screen select Filters from the menu option like so:
    YaST SLES 9
    Then select Selections and scroll all the way down and select all of the SGI ProPack options with the spacebar:
    YaST SLES 9

    You may have a few dependencies that YaST will determine you need - simply accept them and shortly after you should see it installing:
    YaST SLES 9

    After a reboot you should see SLES 9 with full graphics:
    SLES 9 SP3 installed with the ProPack

    Step Four - Next Steps

    Now that SLES 9 is installed, my primary goal for the Prism was to do C++ development on it. The two biggest components for that are a compiler and version control. Both will be detailed in future blog posts so stay tuned. In addition, I will begin exploring PCI-X expansion options like a SATA II RAID card to compare performance with the built in IO10 SATA controller.
    While I haven’t been posting as often as I had been earlier this year I have been hard at work on several projects, with several ending up being dead ends for one reason or another (mostly very low return on investment for the amount of time needed to accomplish them). jcBENCH2 will be released in the next week for Windows 8 and immediately following that I am pleased to announce three interconnected projects that will be taking me through the end of the year – ModeXngine, jcPIMP and Mystic Chronicles.

    Mystic Chronicles

    Mystic Chronicles

    For those that have been following my blog back to 2004, I originally coined Mystic Chronicles that summer and, as seen in my original SourceForge project (the Infinity Project), I had made it fairly far – definitely farther than I had gotten in the years leading up to that point. The big goal back then for Mystic Chronicles was to provide a semi-online experience: if you were online with your friends playing single player, you could join the same universe, akin to Interplay's Starfleet Command Metaverse concept going back to 2000-2001. The revival of Mystic Chronicles will be my first Windows 8 game and the first powered by the ModeXngine (more details below). The game puts you in charge of a colony with the goal of conquering the other 3 AI players also attempting to rule the galaxy. The game is turn based and has you balancing technological advancements with defensive and offensive strategies.

    The longer goal is to turn it into a cross platform HTML5 MVC4 and Windows Phone 8 game supported by ads and offering a premium version with additional features.

    Below is an early alpha screenshot of the progress thus far:

    I would provide a release date, but taking one from id software, it’ll be released when it’s done.


    Going back to April of this year, I debuted the jcGE in my Windows Phone 7/8 game, Cylon Wars. This engine provided a 2D experience with collision detection, animated sprites and trigger-based sound effects. While capable, it was definitely not what I wanted in a long-term engine that I could readily use on other platforms. Building on the knowledge gained, I am pleased to announce ModeXngine, which will initially support both Windows 8 and Windows Phone 8 with an HTML5 component to follow. One of the big elements will be the modularized components, all communicating through a "kernel". Since writing my first game back in 1995, I've wanted a framework that could grow with my skills and the technology that surrounds me – I think the time for investment in creating such a framework is now. For those curious, it will be coded exclusively in C#. As noted above, this engine will debut with the Windows 8 release of Mystic Chronicles.
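The kernel design is still being fleshed out, but as a rough illustration only (every type name here is hypothetical, not ModeXngine's actual API), the modular components could communicate through a simple publish/subscribe hub:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch - the real ModeXngine kernel API is not yet defined.
public interface IMessage { string Channel { get; } }

public class Kernel {
    private readonly Dictionary<string, List<Action<IMessage>>> _subscribers =
        new Dictionary<string, List<Action<IMessage>>>();

    // Modules (renderer, audio, input, etc.) register interest in a channel
    public void Subscribe(string channel, Action<IMessage> handler) {
        if (!_subscribers.ContainsKey(channel)) {
            _subscribers[channel] = new List<Action<IMessage>>();
        }

        _subscribers[channel].Add(handler);
    }

    // Any module can publish without knowing who is listening
    public void Publish(IMessage message) {
        List<Action<IMessage>> handlers;

        if (_subscribers.TryGetValue(message.Channel, out handlers)) {
            foreach (var handler in handlers) {
                handler(message);
            }
        }
    }
}
```

The appeal of this shape is that the renderer, audio and (later) networking modules only ever depend on the kernel, never on each other.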


    Going back to May, I had soft-launched jcPICP and started down the WCF <-> Java path to promote a truly platform-independent platform; however, that approach led to quite a few hurdles, namely IRIX's Java implementation being so old that native SOAP support wasn't there. In addition, I have begun to switch some of my approaches from WCF to WebAPI in the many scenarios where it makes sense. In a messaging environment I feel it especially does, since virtually every platform has an HTTP library no matter the language. That being said, I am renaming the project to jcPIMP, which stands for Jarred Capellman's Platform Independent Messaging Platform. The main processing server will be an ASP.Net WebAPI 4.5 service with a SQL Server 2012 backend (using Entity Framework 6), with the client written in a cross-platform manner using C++. A recent side project, to be discussed later, reminded me of the simplicity of C++ and how much I missed using it on a semi-frequent basis.

    ModeXngine’s core will be utilizing the message handling components of jcPIMP for both internal and later multiplayer components.

    A big question one might ask is, "Why build yet another messaging protocol/platform when there are so many other great solutions out there like RabbitMQ, MSMQ, etc.?" The main reason – to learn. I am a firm believer that in order to truly understand something you have to create it yourself, learn to overcome the mistakes others have made and prevail.

    My current plan is to release jcPIMP around the Mystic Chronicles release to put it in the hands of other developers.
    With the ever-changing software development world I (and every other developer) live in, I've found it increasingly harder to keep up with every toolset and every framework, let alone language. In the last few years I have attempted to buckle down and focus on what I enjoy the most: C#. Over the last 6+ years of C# development I've inherited or developed ASP.Net WebForms 1.1 to 4.5 web applications, MVC4, 3.5 to 4.5 WinForms desktop applications, Windows Workflow, custom SharePoint 2010 parts, WPF, iOS, Android, Windows Phone 7/8, web services in WCF and WebAPI, and most recently dived into Windows 8 development. Suffice it to say – I've been around the block in regards to C#.

    About 6 months ago, I started doing some early research in preparation for a very large project at work that relied more on mathematical/statistical operations than the traditional "make stuff happen" work I am used to. Keeping an open, out-of-the-box mentality, I happened to be in the Book Buyers Inc. bookstore in downtown Mountain View, California on vacation and picked up Professional F# 2.0 used for a few dollars. Knowing they were already on version 3, I figured it would provide a great introduction to the language, and I would then advance my skills through MSDN and future books. I pored over the book on the overly long flight from San Francisco International to Baltimore-Washington International, using my laptop the entire flight back to write quick snippets that I could easily port back and forth between C# and F# to see the benefits and best use cases for F#. When I returned home, I found myself wanting more, and as fate would have it, shortly afterwards SyncFusion offered the F# Succinctly e-book by Robert Pickering for free.

    Eager to read the e-book after my introduction to F#, I ended up finishing it in a short weekend. The e-book, while much shorter than the paperback I purchased, provided a great introduction and solidified many of the concepts I was still cementing in my mind. Like other developers, I am sure, when investing time into a new technology or language you want some guarantee of its success and worthiness of your time (especially if it is coming out of your precious off hours). Be happy to know the author chose to include real-world quotes and links to successes with F# over traditional C# implementations. I should note that while the author does not assume Visual Basic or C# experience, having some definitely helps; that said, the book provides in-depth explanations and easy-to-follow examples, so anyone with some higher-level programming experience should be able to grasp the main concepts and build a solid foundation to grow from.

    Another element of the e-book I personally enjoyed was the intuitive, easy-to-follow progression the author chose. Early in the book he offers an introduction to F# and proceeds to dive into the fundamentals before providing real use cases that a professional software developer would appreciate. Several books provide an introductory chapter only to spend the next half of the book on reference-manual text or snippets that don't jump out at you with real-world applicability, or even a component of one.

    If there was one element I wished for in the e-book, it would be for it to be longer or a part 2 be written. This "sequel" would build on the concepts provided, assuming a solid foundation of F# and dive into more real-world scenarios where F# would be beneficial over C# or other higher level programming languages. Essentially a "best practices" for the C#/F# programmer.

    On a related note, during my own investigations into F# I found the Microsoft Try F# site to be of great assistance.

    In conclusion, definitely checkout the F# Succinctly e-book (and others) in SyncFusion’s ever growing library of free e-books.
    After updating a large ASP.NET 4.5 WebForms application Friday, this afternoon I started to take a look into the new features in the release and discovered the "lightweight" rendering mode of the RadWindow control. Previously - going back to 2011/2012 - one of my biggest complaints with the RadWindow was the hacking involved in order to make the popup appear relatively the same across Internet Explorer 9, Chrome and Firefox. Some back and forth with Telerik's support left much to be desired, so I ended up just padding the bottom and it more or less worked. Thus my excitement for a possible fix to this old problem - it turns out the new lightweight mode does in fact solve the issue: across Internet Explorer 11, Firefox and Chrome there are only minimal differences now for my popups' content (W3C-validated DIVs for the most part).

    This is where the fun ended for me temporarily - in the Q2 2013 (2013.2.717.45) release, the h6 tag for the Title was empty:

    I immediately pulled open the Internet Explorer 11 Developer Tool Inspector and found this curious:
    Not enjoying hacking control suites, but needing to implement a fix ASAP so I could continue development, I simply put this CSS override in my WebForm's CSS theme file:

[css]
.RadWindow_Glow .rwTitleWrapper .rwTitle {
    width: 100% !important;
}
[/css]

Depending on the skin you selected, you will need to update the name of the class. Hope that helps someone out there - according to the forums it is a known issue that will be fixed in the Q2 SP1 2013 release, but in the meantime this corrects the issue.
    After a few days of development, jcBENCH2 is moving along nicely. Features completed:

    1. WebAPI and SQL Server Backend for CRUD Operations of Results
    2. Base UI for the Windows Store App is completed
    3. New Time Based CPU Benchmark inside a PCL
    4. Bing Maps Integration for viewing the top 20 results

    Screenshot of the app as of tonight:
    jcBENCH2 Day 4

    What's left?

    7/17/2013 - Social Networking to share results
    7/18/2013 - Integrate into the other #dev777 projects
    7/19/2013 - Bug fixes, polish and publish

    More details of the development process after the development is complete - I would rather focus on the actual development of the app currently.
    Starting a new, old project this weekend as part of the #dev777 project, jcBENCH 2. The idea being, 7 developers, develop 7 apps and have them all communicate with each other on various platforms.

    Those that have been following my blog for a while might know I have a little program called jcBENCH that I originally wrote in January 2012 as part of my trip down the Task Parallel Library in C#. Originally I created Mac OS X, IRIX (in C++), Win32 and Windows Phone 7 ports. This year I created a Windows Phone 8 app and a revamped WPF port utilizing a completely new backend.

    So why revisit the project? The biggest reason: I'm never 100% satisfied. Because my skill set is constantly expanding, I find myself always wanting to go back and make use of a new technology, even if the end user sees no benefit. It's the principle - never let your code rot.

    So what is Version 2 going to entail? Or better put, what are some of the issues in the jcBENCH 1.x codebase?

    Issues in the 1.x Codebase

    Issue 1

    As it stands today all of the ports have different code bases. In IRIX's case this was a necessity since Mono hasn't been ported to IRIX (yet). With the advent of PCL (Portable Class Libraries) I can now keep one code base for all but the IRIX port, leaving only the UI and other platform specific APIs in the respective ports.

    Issue 2

    On quad core machines or faster, the existing benchmark completes in a fraction of the time. This poses two big problems: it doesn't represent a real test of performance over more than a few seconds (meaning all of the CPUs may not have enough time to be tasked before completion), and on the flip side, on much slower devices (like a cell phone) it could take several minutes. Solution? Implement a 16-second timed benchmark and then calculate performance based on how many objects were processed during that time.
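The timed approach can be sketched as follows; the names here are hypothetical, as the real implementation will live in the PCL:

```csharp
using System;
using System.Diagnostics;

public static class TimedBenchmark {
    // Runs doWork repeatedly for the given duration and returns how many
    // units of work completed - the score is work done, not elapsed time,
    // so fast and slow devices both run for the same 16-second window.
    public static long Run(Action doWork, TimeSpan duration) {
        long completed = 0;
        var timer = Stopwatch.StartNew();

        while (timer.Elapsed < duration) {
            doWork();
            completed++;
        }

        return completed;
    }
}

// Usage (RunOneWorkUnit is a placeholder for the actual CPU workload):
// var score = TimedBenchmark.Run(RunOneWorkUnit, TimeSpan.FromSeconds(16));
```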

    Issue 3

    When testing multi-processor performance, it was cumbersome to test all of the various scenarios. For instance, with an 8-core CPU like my AMD FX-8350, I had to select 1 CPU, run the benchmark, record the result, select 2 CPUs and repeat, so on and so forth. This took a long time when in reality it makes more sense to run the benchmark using all cores by default and then, via an advanced option, allow the end user to select a specific test or have it run the entire suite automatically.

    Issue 4

    No easy way to share the results exists across the board in the current version. In recent versions I added a centralized result database and charting so that no matter the device you could see how your device compared, but there was no easy way to get a screenshot of the benchmark, send the results via email or post them on a social network. Where is the fun in a benchmark if you can't brag about it easily? In Version 2 I plan to focus on this aspect.

    Proposed Features for Version 2

    1. Rewritten from the ground up utilizing the latest approaches to cross-platform development I have learned since jcBENCH's original release 1/2012. This includes the extensive use of MVVMCross and Portable Class Libraries to cut down on the code duplication among ports.

    2. Sharing functionality via Email and Social Networking (Twitter and Facebook) will be provided, in addition a new Bing Map will visually reflect the top performing devices across the globe (if the result is submitted with location access allowed)

    3. Using WebAPI (JSON) instead of the WCF XML backend for result submission and retrieval. For this app, since there is no backend processing between servers, WebAPI makes a lot more sense.

    4. New time-based benchmark, as opposed to measuring the time to process a fixed number of tasks

    5. Offer an "advanced" mode to allow the entire test suite to be performed or individual tests (by default it will now use all of the available cores)

    6. At launch only a Windows Store app will be available, but Windows Phone 7/8 and Mac OS X ports will be released later this month.
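To illustrate point 3, submitting a result becomes a plain HTTP POST of JSON from any platform with an HTTP library. The endpoint URL and payload shape below are made up for the sketch:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class ResultClient {
    // Hypothetical endpoint - the real WebAPI route isn't finalized
    private const string SubmitUrl = "http://example.com/api/results";

    public static async Task<bool> SubmitAsync(string device, long score) {
        using (var client = new HttpClient()) {
            // Hand-built JSON to keep the sketch dependency-free;
            // JSON.NET would be the obvious choice in the real app
            var json = String.Format("{{ \"Device\": \"{0}\", \"Score\": {1} }}", device, score);
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            var response = await client.PostAsync(SubmitUrl, content);
            return response.IsSuccessStatusCode;
        }
    }
}
```

Compare that to generating and maintaining a WCF SOAP proxy per platform - for a fire-and-forget result submission, the HTTP/JSON route is far less ceremony.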

    Future Features

    The ability to benchmark GPUs is something I have been attempting to get working across platforms, and for those that remember, I had a special alpha release last fall using OpenCL. Once the bugs and features for Version 2 are completed, I will shift focus to making this feature a reality.

    Implement all of this functionality in an upgraded IRIX port and finally create a Linux port (using Mono). One of the biggest hurdles I was having with keeping the IRIX version up to date was the SOAP C++ libraries not being anywhere near the ease of use a Visual Studio/C# environment offers. By switching over to HTTP/JSON I'm hoping to be able to parse and submit data much more easily.

    Next Steps

    Given that the project is an app in 7 days, today marks the first day of development. As with any project, the first step was defining a basic feature set, as above; the next is creating a project timeline based on that functional specification.

    As with my WordPress to MVC Project in April, this will entail daily blog posts with my progress.

    Day 1 (7/13/2013) - Create the new SQL Server Database Schema and WebAPI Backend
    Day 2 (7/14/2013) - Create all of the base UI Elements of the Windows Store App
    Day 3 (7/15/2013) - Create the PCL that contains the new Benchmark Algorithms
    Day 4 (7/16/2013) - Integrate Bing Maps for the location based view
    Day 5 (7/17/2013) - Add Social Networking and Email Sharing Options
    Day 6 (7/18/2013) - Integrate with fellow #dev777 projects
    Day 7 (7/19/2013) - Bug fixing, polish and Windows Store Submission

    So stay tuned for an update later today with the successes and implementation of the new SQL Server Database Schema and WebAPI Backend
    Diving into Windows Store (Windows 8) XAML development now that I have a Microsoft Surface Pro and RT to test touch functionality. One thing I was wondering was how to customize the hover, selected and checkbox colors. Digging around the internet, I found a few pieces here and there until I found the ListViewItem styles and templates page on MSDN.

    So without further ado, for my app I stylized the Selected, Pointer Over, Pointer Over Border and Checkbox colors below:
    (The XAML style snippet did not survive migration; only the brush colors remain: #d8d8d9 and #e7e8e8.)
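As a rough sketch of recoloring those same states without re-templating, the built-in theme brushes can be overridden in your App.xaml resources. The resource keys below come from the Windows 8 default theme, but the mapping of color to state here is my assumption from the surviving values:

```xml
<!-- Assumed mapping of the colors above to Windows 8 ListViewItem theme brushes -->
<SolidColorBrush x:Key="ListViewItemSelectedBackgroundThemeBrush" Color="#d8d8d9" />
<SolidColorBrush x:Key="ListViewItemSelectedPointerOverBackgroundThemeBrush" Color="#d8d8d9" />
<SolidColorBrush x:Key="ListViewItemSelectedPointerOverBorderThemeBrush" Color="#d8d8d9" />
<SolidColorBrush x:Key="ListViewItemPointerOverBackgroundThemeBrush" Color="#e7e8e8" />
<SolidColorBrush x:Key="ListViewItemCheckThemeBrush" Color="#d8d8d9" />
```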

    With the styles from above implemented my Selected, Hover and Unselected (respectively):
    Windows 8 ListView Stylized
    Hopefully that helps someone out there with a similar requirement in their Windows Store App.
    Working on a new project at work today, I have started to utilize the free Azure hours bundled with our MSDN account for my development environment. Previously I had set up development environments on production servers to spare the sysadmins from having to create new VMs and all of the DNS entries in our firewall for external access. Over the years this hasn't caused any problems (they were in their own app pools and never crashed the server itself), but with the free Azure hours there is no reason to even take that risk.

    So I began diving into creating my environment on Azure. I had been working off and on over the last couple days with a local SQL Server 2012 instance I have on my desktop, so I had my database schema ready to go to deploy.

    Unfortunately I was met with:
    After some searching around, I uncovered that the issue is with the two ON [PRIMARY] clauses, specifically the ON option in the script SQL Server Management Studio's Generate Scripts option had produced. In looking around I could not find an option in SQL Server Management Studio to export safely to Azure - hopefully that comes sooner rather than later. If I missed the option - please post a comment below.
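For illustration, here is the shape of the failure and the fix; the table and columns are hypothetical, but the two ON [PRIMARY] filegroup clauses are the part the generated script contains and Azure SQL rejects:

```sql
-- As generated by SSMS - the two ON [PRIMARY] clauses fail on Azure SQL:
CREATE TABLE dbo.SomeTable (
    ID INT IDENTITY(1,1) NOT NULL,
    Name NVARCHAR(255) NOT NULL,
    CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED (ID) ON [PRIMARY]
) ON [PRIMARY];

-- Azure-safe version - identical, minus the filegroup references:
CREATE TABLE dbo.SomeTable (
    ID INT IDENTITY(1,1) NOT NULL,
    Name NVARCHAR(255) NOT NULL,
    CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED (ID)
);
```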
    This morning I will be presenting at the Maryland Code Camp on the topic of Developing Once and Deploying to Many, specifically covering practices and patterns I've found helpful for creating rich mobile applications efficiently over the 3 years I've been actively developing for the mobile space. WCF, WPF, PCL, MonoDroid, Azure Mobile Services and Windows Phone 8 are to be discussed.

    For the Powerpoint 2013 presentation, all of the code going to be mentioned during the session, the SQL, PSD files and external libraries used, click here to download the zip file.

    In addition during the session I will be making reference to an app I wrote earlier this year, jcLOG-IT, specifically the Mobile Azure Service and Windows Live Integration elements.

    The code block mentioned for Authentication:
[csharp]
public async Task<bool> AttemptLogin(MobileServiceAuthenticationProvider authType) {
    try {
        if (authType == MobileServiceAuthenticationProvider.MicrosoftAccount) {
            if (!String.IsNullOrEmpty(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken))) {
                // Re-use the cached Live Connect token
                App.CurrentUser = await App.MobileService.LoginAsync(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken));
            } else {
                var liveIdClient = new LiveAuthClient(Common.Constants.APP_AUTHKEY_LIVECONNECT);

                while (_session == null) {
                    var result = await liveIdClient.LoginAsync(new string[] { /* scope list lost in migration */ });

                    if (result.Status != LiveConnectSessionStatus.Connected) {
                        continue;
                    }

                    _session = result.Session;
                    App.CurrentUser = await App.MobileService.LoginAsync(result.Session.AuthenticationToken);
                    Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, result.Session.AuthenticationToken);
                }
            }
        }

        Settings.AddSetting(Settings.SETTINGS_OPTIONS.AuthType, authType.ToString());
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.IsFirstRun, false.ToString());

        return true;
    } catch (Exception ex) {
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, String.Empty);
        return false;
    }
}
[/csharp]
    The Settings class:
[csharp]
public class Settings {
    public enum SETTINGS_OPTIONS {
        IsFirstRun,
        LiveConnectToken,
        AuthType,
        LocalPassword,
        EnableLocation
    }

    public static void CheckSettings() {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(SETTINGS_OPTIONS.IsFirstRun.ToString())) {
            WriteDefaults();
        }
    }

    public static void AddSetting(SETTINGS_OPTIONS optionName, object value) {
        AddSetting(optionName.ToString(), value);
    }

    public static void AddSetting(string name, object value) {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(name)) {
            settings.Add(name, value);
        } else {
            settings[name] = value;
        }

        settings.Save();
    }

    public static T GetSetting<T>(SETTINGS_OPTIONS optionName) {
        return GetSetting<T>(optionName.ToString());
    }

    public static T GetSetting<T>(string name) {
        if (IsolatedStorageSettings.ApplicationSettings.Contains(name)) {
            if (typeof(T) == typeof(MobileServiceAuthenticationProvider)) {
                return (T)Enum.Parse(typeof(MobileServiceAuthenticationProvider), IsolatedStorageSettings.ApplicationSettings[name].ToString());
            }

            return (T)Convert.ChangeType(IsolatedStorageSettings.ApplicationSettings[name], typeof(T));
        }

        return default(T);
    }

    public static void WriteDefaults() {
        AddSetting(SETTINGS_OPTIONS.IsFirstRun, false);
        AddSetting(SETTINGS_OPTIONS.EnableLocation, false);
        AddSetting(SETTINGS_OPTIONS.LocalPassword, String.Empty);
        AddSetting(SETTINGS_OPTIONS.LiveConnectToken, String.Empty);
        AddSetting(SETTINGS_OPTIONS.AuthType, MobileServiceAuthenticationProvider.MicrosoftAccount);
    }
}
[/csharp]
    I had the interesting request at work last week to do deletions on several million rows in the two main SQL Server 2012 databases. For years now, nothing had been deleted, only soft-deleted with an Active flag. In general, anytime I needed to delete rows it meant I was testing a migration, so I would simply TRUNCATE the tables and call it a day - thus never utilizing C# and thereby Entity Framework. So what are your options?

    Traditional Approach

    You could go down the "traditional" approach:
[csharp]
using (var eFactory = new SomeEntities()) {
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL table, etc.

    foreach (var someObject in eFactory.SomeObjects.Where(a => idList.Contains(a.ID)).ToList()) {
        eFactory.DeleteObject(someObject);
        eFactory.SaveChanges();
    }
}
[/csharp]
    This definitely works, but if you have an inordinate number of rows I would highly suggest not doing it this way, as the memory requirements would be astronomical since you're loading all of the SomeObject entities.

    Considerably better Approach

[csharp]
using (var eFactory = new SomeEntities()) {
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL table, etc.
    string idStr = String.Join(",", idList);

    eFactory.Database.ExecuteSqlCommand(String.Format("DELETE FROM dbo.SomeObjects WHERE ID IN ({0})", idStr));
}
[/csharp]
    This approach creates a comma-separated string and then executes the SQL command. This is considerably better than the approach above in that it doesn't load all of those entity objects into memory, nor loop through each element. However, depending on the size of idList, you could get the following error:

    Entity Framework 5 - Rare Event

    An even better Approach

    What I ended up doing to solve the problems of those above was to split the list and then process the elements on multiple threads.
[csharp]
private static List<string> getList(List<int> original, int elementSize = 500) {
    var elementCollection = new List<string>();

    // If there are no elements, don't bother processing
    if (original.Count == 0) {
        return elementCollection;
    }

    // If the collection fits within a single batch, return it as one string
    if (original.Count <= elementSize) {
        elementCollection.Add(String.Join(",", original));
        return elementCollection;
    }

    var elementsToBeProcessed = original.Count;

    while (elementsToBeProcessed != 0) {
        var rangeSize = elementsToBeProcessed < elementSize ? elementsToBeProcessed : elementSize;
        elementCollection.Add(String.Join(",", original.GetRange(original.Count - elementsToBeProcessed, rangeSize)));
        elementsToBeProcessed -= rangeSize;
    }

    return elementCollection;
}
[/csharp]
[csharp]
private static void removeElements(IEnumerable<string> elements, string tableName, string columnName, DbContext objContext, bool debug = false) {
    var startDate = DateTime.Now;

    if (debug) {
        Console.WriteLine("Removing Rows from Table {0} @ {1}", tableName, startDate.ToString(CultureInfo.InvariantCulture));
    }

    try {
        // Issue one DELETE per batch, in parallel
        Parallel.ForEach(elements, elementStr =>
            objContext.Database.ExecuteSqlCommand(String.Format("DELETE FROM dbo.{0} WHERE {1} IN ({2})", tableName, columnName, elementStr)));
    } catch (Exception ex) {
        Console.WriteLine(ex);
    }

    if (!debug) {
        return;
    }

    var endDate = DateTime.Now;
    Console.WriteLine("Removed Rows from Table {0} in {1} seconds", tableName, endDate.Subtract(startDate).TotalSeconds);
}
[/csharp]
    To utilize these methods you can do something like this:
[csharp]
using (var eFactory = new SomeEntities()) {
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL table, etc.
    var idStr = getList(idList);

    removeElements(idStr, "SomeObjects", "ID", eFactory);
}
[/csharp]
    Note you could simplify this down to:
[csharp]
using (var eFactory = new SomeEntities()) {
    removeElements(getList(/* your int collection */), "SomeObjects", "ID", eFactory);
}
[/csharp]
    Hopefully that helps someone else out there who runs into issues with deleting massive numbers of rows. Note I did try to utilize the Entity Framework Extended NuGet library, but ran into errors when trying to delete rows.
    Working on a new project at work today, I realized that with the number of clients potentially involved with a new WCF service, I would have to adjust my tried-and-true process of using a WCF service with Visual Studio's WCF proxy generation. I had often wondered about the option circled below, intuitively thinking it would automatically recognize that a type referenced in an OperationContract already existed in a Class Library and not generate an entirely new type in the proxy.


    Something like this for instance defined in a common Class Library:
[csharp]
[DataContract]
public class SomeObject {
    [DataMember]
    public int ID { get; set; }

    [DataMember]
    public string Name { get; set; }

    public SomeObject() { }
}
[/csharp]

    And then in my OperationContract:

[csharp]
[OperationContract]
List<SomeObject> GetObject(int id);
[/csharp]
    Sadly, this is not how it works. Intuitively, you would think that since the SomeObject type is referenced in the Class Library, the OperationContract and your client(s), the proxy generation with the box checked above would simply generate a proxy referencing the existing SomeObject class. So how can this be achieved cleanly?

    The best solution I could come up with was to do some moving around of code in my Visual Studio 2012 projects. In short, the interface for my WCF Service and any external classes (i.e. classes used for delivering and receiving data between the Service and Client(s)) were moved to a Class Library previously setup and a wrapper for the Interface was created.

    Let's dive in...

    Luckily, I had setup my WCF Service with internally used and externally used classes with a proper folder structure like so:


    So it was simply a matter of relocating and adjusting the namespace references.

    After moving only the Interface for my WCF Service (leaving the actual implementation in the WCF Service), I wrote my wrapper:
public class WCFFactory : IDisposable {
     public IWCFService Client { get; set; }

     public WCFFactory() {
          var myBinding = new BasicHttpBinding();
          var myEndpoint = new EndpointAddress(ConfigurationManager.AppSettings["WEBSERVICE_Address"]);
          var cFactory = new ChannelFactory<IWCFService>(myBinding, myEndpoint);

          Client = cFactory.CreateChannel();
     }

     public void Dispose() {
          ((IClientChannel)Client).Close();
     }
}
    So then in my code I could reference my Operation Contracts like so:
using (var webService = new WCFFactory()) {
     var someObject = webService.Client.GetSomeObject(1);
}
    All of this is done without creating any references via the "Add Service Reference" option in Visual Studio 2012.

Downsides of this approach? None that I've been able to uncover. One huge advantage of going this route versus the Proxy Generation approach is that when your Interface changes, you update it in one spot, simply recompile the Class Library and then update all of the Clients with the updated Class Library. If you've got all of your clients in the same Visual Studio Solution, simply recompiling is all that is necessary.
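One caveat worth noting (my addition, not covered in the writeup above): calling Close() on a channel that has faulted throws a CommunicationObjectFaultedException, so a slightly more defensive Dispose might look like this sketch:

     public void Dispose() {
          var channel = (IClientChannel)Client;

          try {
               // Close performs a graceful shutdown, but only works on a healthy channel
               if (channel.State != CommunicationState.Faulted) {
                    channel.Close();
               } else {
                    channel.Abort();
               }
          } catch (CommunicationException) {
               // fall back to a hard teardown if the graceful close fails
               channel.Abort();
          }
     }

This way a faulted call inside the using block doesn't trigger a second exception on disposal.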

    More to come on coming up with ways to make interoperability between platforms better as I progress on this project, as it involves updating SQL Server Reporting Services, .NET 1.1 WebForms, .NET 3.5 WebForms, .NET 4.5 WebForms, two other WCF Services and the Windows Workflow solution I mentioned earlier this month.
After wrapping up Phase One of my migration from WordPress to MVC4, I began diving into the admin side of the migration, trying to replicate a lot of the ease of use WordPress offered while adding my own touches. To begin, I started with the Add/Edit Post form.

    After adding in my view:
@model bbxp.mvc.Models.PostModel

@{
    Layout = "~/Views/Shared/_AdminLayout.cshtml";
}

@using (Html.BeginForm("SavePost", "bbxpAdmin", FormMethod.Post)) {
    if (Model.PostID.HasValue) {
    }
}





    And then my code behind in the controller:
public ActionResult AddEditPost(int? PostID) {
    if (!checkAuth()) {
        var lModel = new Models.LoginModel();
        lModel.ErrorMessage = "Authentication failed...";

        return View("Index", lModel);
    }

    var pModel = new Models.PostModel();

    if (PostID.HasValue) {
        using (var pFactory = new PostFactory()) {
            var post = pFactory.GetPost(PostID.Value);

            pModel.Body = post.Body;
            pModel.PostID = post.ID;
            pModel.Title = post.Title;
            pModel.Tags = string.Join(", ", post.Tags.Select(a => a.Name).ToList());
            pModel.Categories = String.Empty;
        }
    }

    return View(pModel);
}
    My post back ActionResult in my Controller never got hit. After inspecting the outputted HTML I noticed the form's action was empty:
<form action="" method="post">
    Having a hunch it was a result of a bad route, I checked my Global.asax.cs file and added a specific route to handle the Action/Controller:
    routes.MapRoute(name: "bbxpAddEditPost", url: "bbxpadmin/{action}/{PostID}", defaults: new { controller = "bbxpAdmin", action = "AddEditPost"});
Sure enough, immediately after adding the route the form posted back properly and I was back at work adding additional functionality to the backend. Hopefully that helps someone else out, as I only found one unanswered StackOverflow post on this issue. I should also note a handy feature when utilizing Output Caching as discussed in a previous post: the ability to programmatically reset the cache.

    In my case I added the following in my SavePost ActionResult:
    Response.RemoveOutputCacheItem(Url.Action("Index", "Home"));
    This removes the cached copy of my main Post Listing.
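Relating back to the bbxpAddEditPost route above, a hypothetical refinement (not something the original route included): since the action takes an int? and the Add case has no ID in the URL, marking PostID optional in the route defaults lets the same route serve both the Add and Edit cases:

    routes.MapRoute(
        name: "bbxpAddEditPost",
        url: "bbxpadmin/{action}/{PostID}",
        // UrlParameter.Optional lets /bbxpadmin/AddEditPost match with no PostID segment
        defaults: new { controller = "bbxpAdmin", action = "AddEditPost", PostID = UrlParameter.Optional });

Without the optional default, a URL lacking the PostID segment will not match the route at all.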
    In today's post I will be diving into adding Search Functionality, Custom Error Pages and MVC Optimizations. Links to previous parts: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6 and Part 7.

    Search Functionality

    A few common approaches to adding search functionality to a Web Application:

    Web App Search Approaches

    1. Pull down all of the data and then search on it using a for loop or LINQ - An approach I loathe because to me this is a waste of resources, especially if the content base you're pulling from is of a considerable amount. Just ask yourself, if you were at a library and you knew the topic you were looking for, would you pull out all of the books in the entire library and then filter down or simply find the topic's section and get the handful of books?
    2. Implement a Stored Procedure with a query argument and return the results - An approach I have used over the years, it is easy to implement and for me it leaves the querying where it should be - in the database.
    3. Creating a Search Class with a dynamic interface and customizable properties to search and a Stored Procedure backend like in Approach 2 - An approach I will be going down at a later date for site wide search of a very large/complex WebForms app.
For the scope of this project I am going with Option #2, since the content I am searching only spans the Posts objects. At a later date in Phase 2 I will probably expand this to fit Option #3, since I will want to be able to search on various objects and return them all in a meaningful way, quickly and efficiently. So let's dive into Option #2. Because virtually the same block of SQL was being utilized in many Stored Procedures at this point, I created a SQL View:
[sql]
CREATE VIEW dbo.ActivePosts
AS
SELECT dbo.Posts.ID,
       dbo.Posts.Created,
       dbo.Posts.Title,
       dbo.Posts.Body,
       dbo.Users.Username,
       dbo.Posts.URLSafename,
       dbo.getTagsByPostFUNC(dbo.Posts.ID) AS 'TagList',
       dbo.getSafeTagsByPostFUNC(dbo.Posts.ID) AS 'SafeTagList',
       (SELECT COUNT(*) FROM dbo.PostComments WHERE dbo.PostComments.PostID = dbo.Posts.ID AND dbo.PostComments.Active = 1) AS 'NumComments'
FROM dbo.Posts
INNER JOIN dbo.Users ON dbo.Users.ID = dbo.Posts.PostedByUserID
WHERE dbo.Posts.Active = 1
[/sql]
And then created a new Stored Procedure with the ability to search content, referencing the new SQL View:
[sql]
CREATE PROCEDURE [dbo].[getSearchPostListingSP]
(@searchQueryString VARCHAR(MAX))
AS
SELECT dbo.ActivePosts.*
FROM dbo.ActivePosts
WHERE (dbo.ActivePosts.Title LIKE '%' + @searchQueryString + '%'
       OR dbo.ActivePosts.Body LIKE '%' + @searchQueryString + '%')
ORDER BY dbo.ActivePosts.Created DESC
[/sql]
You may be asking why not simply add the ActivePosts SQL View to your Entity Model and do something like this in your C# code:
public List<ActivePosts> GetSearchPostResults(string searchQueryString) {
     using (var eFactory = new bbxp_jarredcapellmanEntities()) {
          return eFactory.ActivePosts.Where(a => a.Title.Contains(searchQueryString) || a.Body.Contains(searchQueryString)).ToList();
     }
}
That's perfectly valid and I am not against doing it that way, but I feel like code like that should be done at the database level, thus the Stored Procedure. Granted, Stored Procedures do add a level of maintenance over doing it via code: any time you update/add/remove columns you have to update the Complex Type in your Entity Model inside Visual Studio and then update the C# code that references that Stored Procedure. For me it is worth it, but to each their own. I have not made performance comparisons on this particular scenario; however, last summer I did do some aggregate performance comparisons in my LINQ vs PLINQ vs Stored Procedure Row Count Performance in C# post. You can't do a 1 to 1 comparison between varchar column searching and aggregate function performance, but the lesson I want to convey is to definitely keep an open mind and explore all possible routes. You never want to find yourself stagnating in your software development career, doing something simply because you know it works. Things change almost daily it seems; it's near impossible as a polyglot programmer to keep up with every change, but when a new project comes around at work, do your homework even if it means sacrificing your nights and weekends. The benefits will become apparent instantly, and for me the most rewarding aspect is knowing that when you laid down that first character in your code, you did so with the knowledge that what you were doing was the best you could provide to your employer and/or clients. Back to implementing the Search functionality, I added the following function to my PostFactory class:
public List<Objects.Post> GetSearchResults(string searchQueryString) {
     using (var eFactory = new bbxp_jarredcapellmanEntities()) {
          return eFactory.getSearchPostListingSP(searchQueryString).Select(a => new Objects.Post(a.ID, a.Created, a.Title, a.Body, a.TagList, a.SafeTagList, a.NumComments.Value, a.URLSafename)).ToList();
     }
}
You might see the similarity to other functions if you've been following this series. The function is then exposed in an Operation Contract inside the WCF Service:
[OperationContract]
List<lib.Objects.Post> GetPostSearchResults(string searchQueryString);

public List<Post> GetPostSearchResults(string searchQueryString) {
     using (var pFactory = new PostFactory()) {
          return pFactory.GetSearchResults(searchQueryString);
     }
}
    Back in the MVC App I created a new route to handle searching:
routes.MapRoute("Search", "Search/{searchQueryString}", new { controller = "Home", action = "Search" });
So now I can enter values via the URL; for example, /Search/mvc would search all Posts that contain "mvc" in the title or body. Then in my Controller class:
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Search(string searchQueryString) {
     ViewBag.Title = searchQueryString + " << Search Results << " + Common.Constants.SITE_NAME;

     var model = new Models.HomeModel(baseModel);

     using (var ws = new WCFServiceClient()) {
          model.Posts = ws.GetPostSearchResults(searchQueryString);
     }

     ViewBag.Model = model;

     return View("Index", model);
}
    In my partial view:
<div class="Widget">
     <div class="Title">
          <h3>Search Post History</h3>
     </div>
     <div class="Content">
          @using (Html.BeginForm("Search", "Home", new { searchQueryString = "searchQueryString" }, FormMethod.Post)) {
               <input type="text" id="searchQueryString" name="searchQueryString" class="k-textbox" required placeholder="enter query here" />
               <button class="k-button" type="submit">Search >></button>
          }
     </div>
</div>
When all was done: [caption id="attachment_2078" align="aligncenter" width="252"]Search box in the MVC App[/caption] Now you might be asking, what if there are no results? You get an empty view: [caption id="attachment_2079" align="aligncenter" width="300"]Empty Result - the wrong way to handle it[/caption] This leads me to my next topic:

    Custom Error Pages

We have all been on sites where we go some place we either don't have access to, that doesn't exist anymore, or that we misspelled. WordPress had a fairly good handler for this scenario: [caption id="attachment_2081" align="aligncenter" width="300"]WordPress Content Not Found Handler[/caption] As seen above, when no results are found we want to let the user know, but also create a generic handler for other error events. To get started, let's add a Route to the Global.asax.cs:
routes.MapRoute("Error", "Error", new { controller = "Error", action = "Index" });
    This will map to /Error with a tie to an ErrorController and a Views/Error/Index.cshtml. And my ErrorController:
public class ErrorController : BaseController {
     public ActionResult Index() {
          var model = new Models.ErrorModel(baseModel);

          return View(model);
     }
}
    And my View:
@model bbxp.mvc.Models.ErrorModel

<div class="errorPage">
     <h2>Not Found</h2>
     <div class="content">
          Sorry, but you are looking for something that isn't here.
     </div>
</div>
Now you may be asking why the actual error isn't passed into the Controller to be displayed. Personally I feel a generic error message for the end user, while logging/reporting the errors to administrators and maintainers of a site, is the best approach. In addition, a generic message protects you somewhat from exposing sensitive information to a potential hacker, such as "No users match the query" or, worse, database connection information. That being said, I added a wrapper in my BaseController:
public ActionResult ThrowError(string exceptionString) {
     // TODO: Log errors either to the database or email the powers that be

     return RedirectToAction("Index", "Error");
}
Down the road this wrapper will record the error to the database and then email users with alerts turned on. Since I haven't started on the "admin" section, I am leaving it as-is for the time being. The argument is there now so that when that does happen, all of my existing front end code is already good to go as far as logging. Now that I've got my base function implemented, let's revisit the Search function mentioned earlier:
public ActionResult Search(string searchQueryString) {
     ViewBag.Title = searchQueryString + " << Search Results << " + Common.Constants.SITE_NAME;

     var model = new Models.HomeModel(baseModel);

     using (var ws = new WCFServiceClient()) {
          model.Posts = ws.GetPostSearchResults(searchQueryString);
     }

     if (model.Posts.Count == 0) {
          return ThrowError(searchQueryString + " returned 0 results");
     }

     ViewBag.Model = model;

     return View("Index", model);
}
Note the if conditional and the call to ThrowError; no other work is necessary. As implemented: [caption id="attachment_2083" align="aligncenter" width="300"]Not Found Error Handler Page in the MVC App[/caption] Where does this leave us? The final phase in development: Optimization.


You might be wondering why I left optimization for last. I feel as though premature optimization leads not only to a longer debugging period when nailing down initial functionality, but also, if you do things right as you go, your optimizations are really just tweaking. I've done both approaches in my career and have definitely had more success with doing it last. If you've had the opposite experience, please comment below; I would very much like to hear your story. So where do I want to begin?

    YSlow and MVC Bundling

For me it makes sense to do the more trivial checks that provide the most bang for the buck. A key tool to assist in this manner is YSlow. I personally use the Firefox Add-on version available here. As with any optimization, you need to do a baseline check to give yourself a basis from which to improve. In this case I am going from a fully featured PHP-based CMS (WordPress) to a custom MVC4 Web App, so I was very intrigued by the initial results below. [caption id="attachment_2088" align="aligncenter" width="300"]WordPress YSlow Ratings[/caption] [caption id="attachment_2089" align="aligncenter" width="300"]Custom MVC 4 App YSlow Ratings[/caption] Scoring only 1 point less than the battle-tested WordPress version with no optimizations is, I feel, pretty neat. Let's now look into what YSlow marked the MVC 4 App down on. In the first line item, it found that the site is using 13 JavaScript files and 8 CSS files. One of the neat MVC features is the ability to bundle multiple CSS and JavaScript files into one. This not only cuts down on the number of HTTP Requests, but also speeds up the initial page load, after which most of your content is cached for future page requests. If you recall from an earlier post, our _Layout.cshtml included quite a few CSS and JavaScript files:
<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.common.min.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.default.min.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/kendo/2013.1.319/jquery.min.js")"></script>
<script src="@Url.Content("~/Scripts/kendo/2013.1.319/kendo.all.min.js")"></script>
<script src="@Url.Content("~/Scripts/kendo/2013.1.319/kendo.aspnetmvc.min.js")"></script>
<script src="@Url.Content("~/Scripts/kendo.modernizr.custom.js")"></script>
<script src="@Url.Content("~/Scripts/syntaxhighlighter/shCore.js")" type="text/javascript"></script>
<link href="@Url.Content("~/Content/syntaxhighlighter/shCore.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/syntaxhighlighter/shThemeRDark.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushCSharp.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushPhp.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushXml.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushCpp.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushBash.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushSql.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/lightbox/jquery-1.7.2.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/lightbox/lightbox.js")" type="text/javascript"></script>
<link href="@Url.Content("~/Content/lightbox/lightbox.css")" rel="stylesheet" type="text/css" />
Let's dive into Bundling all of our JavaScript files. First off, create a new class (I called it BundleConfig) and inside it add the following static function:
public static void RegisterBundles(BundleCollection bundles) {
     // JavaScript Files
     bundles.Add(new ScriptBundle("~/Bundles/kendoBundle")
          .Include("~/Scripts/kendo/2013.1.319/jquery.min.js")
          .Include("~/Scripts/kendo/2013.1.319/kendo.all.min.js")
          .Include("~/Scripts/kendo/2013.1.319/kendo.aspnetmvc.min.js")
          .Include("~/Scripts/kendo.modernizr.custom.js"));

     bundles.Add(new ScriptBundle("~/Bundles/syntaxBundle")
          .Include("~/Scripts/syntaxhighlighter/shCore.js")
          .Include("~/Scripts/syntaxhighlighter/shBrushCSharp.js")
          .Include("~/Scripts/syntaxhighlighter/shBrushPhp.js")
          .Include("~/Scripts/syntaxhighlighter/shBrushXml.js")
          .Include("~/Scripts/syntaxhighlighter/shBrushCpp.js")
          .Include("~/Scripts/syntaxhighlighter/shBrushBash.js")
          .Include("~/Scripts/syntaxhighlighter/shBrushSql.js"));

     bundles.Add(new ScriptBundle("~/Bundles/lightboxBundle")
          .Include("~/Scripts/lightbox/jquery-1.7.2.min.js")
          .Include("~/Scripts/lightbox/lightbox.js"));
}
Then in your _Layout.cshtml replace all of the original JavaScript tags with the following three lines:
@Scripts.Render("~/Bundles/kendoBundle")
@Scripts.Render("~/Bundles/syntaxBundle")
@Scripts.Render("~/Bundles/lightboxBundle")
    So afterwards that block of code should look like:
<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.common.min.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.default.min.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/syntaxhighlighter/shCore.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/syntaxhighlighter/shThemeRDark.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/lightbox/lightbox.css")" rel="stylesheet" type="text/css" />
@Scripts.Render("~/Bundles/kendoBundle")
@Scripts.Render("~/Bundles/syntaxBundle")
@Scripts.Render("~/Bundles/lightboxBundle")
    Finally go to your Global.asax.cs file and inside your Application_Start function add the following line:
BundleConfig.RegisterBundles(BundleTable.Bundles);
    So in the end your Application_Start function should look like:
protected void Application_Start() {
     AreaRegistration.RegisterAllAreas();
     RegisterGlobalFilters(GlobalFilters.Filters);
     RegisterRoutes(RouteTable.Routes);
     BundleConfig.RegisterBundles(BundleTable.Bundles);
}
Now after re-running the YSlow test: [caption id="attachment_2092" align="aligncenter" width="300"]YSlow Ratings after Bundling of JavaScript Files in the MVC App[/caption] Much improved; now we're rated better than WordPress itself. Now onto the bundling of the CSS styles. Add the following below the previously added ScriptBundles in your BundleConfig class:
// CSS Stylesheets
bundles.Add(new StyleBundle("~/Bundles/stylesheetBundle")
     .Include("~/Content/Site.css")
     .Include("~/Content/lightbox/lightbox.css")
     .Include("~/Content/syntaxhighlighter/shCore.css")
     .Include("~/Content/syntaxhighlighter/shThemeRDark.css")
     .Include("~/Content/kendo/2013.1.319/kendo.common.min.css")
     .Include("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")
     .Include("~/Content/kendo/2013.1.319/kendo.default.min.css")
     .Include("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css"));
    And then in your _Layout.cshtml add the following in place of all of your CSS includes:
@Styles.Render("~/Bundles/stylesheetBundle")
    So when you're done, that whole block should look like the following:
@Styles.Render("~/Bundles/stylesheetBundle")
@Scripts.Render("~/Bundles/kendoBundle")
@Scripts.Render("~/Bundles/syntaxBundle")
@Scripts.Render("~/Bundles/lightboxBundle")
One thing that I should note: if your Bundling isn't working, check your Routes. Because of my Routes, after deployment (and making sure debug was set to false), I was getting 404 errors on my JavaScript and CSS Bundles. My solution was to use the IgnoreRoute method in my Global.asax.cs file:
routes.IgnoreRoute("Bundles/*");
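As an aside (my addition, not from the original walkthrough): bundling and minification can also be forced on regardless of the compilation debug flag by setting BundleTable.EnableOptimizations, which is handy for verifying your bundles locally before deployment:

     protected void Application_Start() {
          // ... existing registration calls ...

          // Force bundling/minification even in debug builds; without this,
          // ASP.NET emits the individual unminified files when debug="true"
          BundleTable.EnableOptimizations = true;

          BundleConfig.RegisterBundles(BundleTable.Bundles);
     }
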
    For completeness here is my complete RegisterRoutes:
// NOTE: the route parameter names below were garbled in the original formatting;
// the ones shown are reconstructed from the corresponding action signatures
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

routes.MapHttpRoute(name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional });

routes.IgnoreRoute("Bundles/*");

routes.MapRoute("Error", "Error/", new { controller = "Error", action = "Index" });

routes.MapRoute("Search", "Search/{searchQueryString}", new { controller = "Home", action = "Search" });

routes.MapRoute("Feed", "Feed", new { controller = "Home", action = "Feed" });

routes.MapRoute("Tags", "tag/{tagName}", new { controller = "Home", action = "Tags" });

routes.MapRoute("PostsRoute", "{year}", new { controller = "Home", action = "Posts" }, new { year = @"\d+" });

routes.MapRoute("ContentPageRoute", "{pagename}", new { controller = "Home", action = "ContentPage" });

routes.MapRoute("PostRoute", "{year}/{month}/{day}/{postname}", new { controller = "Home", action = "SinglePost" }, new { year = @"\d+", month = @"\d+", day = @"\d+" });

routes.MapRoute("Default", "{controller}/{action}", new { controller = "Home", action = "Index" });
    Afterwards everything was set properly and if you check your source code you'll notice how MVC generates the HTML:
<link href="/Bundles/stylesheetBundle?v=l3WYXmrN_hnNspLLaGDUm95yFLXPFiLx613TTF4zSKY1" rel="stylesheet"/>
<script src="/Bundles/kendoBundle?v=-KrP5sDXLpezNwcL3Evn9ASyJPShvE5al3knHAy2MOs1"></script>
<script src="/Bundles/syntaxBundle?v=NQ1oIC63jgzh75C-QCK5d0B22diL-20L4v96HctNaPo1"></script>
<script src="/Bundles/lightboxBundle?v=lOBITxhp8sGs5ExYzV1hgOS1oN3p1VUnMKCjnAbhO6Y1"></script>
After re-running YSlow: [caption id="attachment_2095" align="aligncenter" width="300"]YSlow after all bundling in MVC[/caption] Now we received a score of 96. What's next? Caching.

    MVC Caching

Now that we've reduced the amount of data being pushed out to the client and optimized the number of HTTP requests, let's switch gears to reducing the load on the server and enhancing the performance of the site. Without diving into all of the intricacies of caching, I am going to turn on server side caching, specifically Output Caching. At a later date I will dive into other approaches, including the new HTML5 client side caching that I recently dove into. That being said, turning on Output Caching in your MVC application is really easy; simply put the OutputCache attribute above your ActionResults like so:
[OutputCache(Duration = 3600, VaryByParam = "*")]
public ActionResult SinglePost(int year, int month, int day, string postname) {
     // ...
}
In this example, the ActionResult will be cached for one hour (3600 seconds) and, by setting VaryByParam to *, each combination of arguments passed into the function is cached, versus caching one argument combination and displaying that one result for every request. I've seen developers simply turn on caching without thinking about dynamic content; suffice it to say, think about what can be cached and what can't. Common items that don't change often, like your header or sidebar, can be cached without much thought, but think about User/Role specific content and how bad it would be for a "Guest" user to see content as an Admin because an Admin had accessed the page within the cache window before the Guest user had.
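One way to guard against that Guest/Admin scenario (my own sketch, not part of the original migration) is OutputCache's VaryByCustom hook, which lets you fold the user's role into the cache key by overriding GetVaryByCustomString in Global.asax:

     // On the ActionResult, vary the cache by our custom "role" key in addition to parameters
     [OutputCache(Duration = 3600, VaryByParam = "*", VaryByCustom = "role")]
     public ActionResult SinglePost(int year, int month, int day, string postname) {
          // ...
     }

     // In Global.asax.cs - return a distinct string per cache variation
     public override string GetVaryByCustomString(HttpContext context, string custom) {
          if (custom == "role") {
               // hypothetical role check - substitute however your app tracks roles
               return context.User.IsInRole("Admin") ? "admin" : "guest";
          }

          return base.GetVaryByCustomString(context, custom);
     }

With this in place, Admins and Guests each get their own cached copy of the page rather than sharing one.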


    In this post I went through the last big three items left in my migration from WordPress to MVC: Search Handling, Custom Error Pages and Caching. That being said I have a few "polish" items to accomplish before switching over the site to all of the new code, namely additional testing and adding a basic admin section. After those items I will consider Phase 1 completed and go back to my Windows Phone projects. Stay tuned for Post 9 tomorrow night with the polish items.
Can't believe it's been a week to the day since I began this project, but I am glad at the amount of progress I have made thus far. Tonight I will dive into adding a WCF Service to act as a layer in between the logic and data layers (done in previous posts Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6) and adding RSS support to the site.

    Integrating a WCF Service

First off, for those that aren't familiar, WCF (Windows Communication Foundation) is an extremely powerful Web Service technology created by Microsoft. I first dove into WCF in April 2010 when diving into Windows Phone development, as there was no support for the "classic" ASMX Web Services. Since then I have used WCF Services as the layer for all ASP.NET WebForms, ASP.NET MVC, native mobile apps and other WCF Services at work. I should note, WCF to WCF communication is done at the binary level, meaning it doesn't send XML between the services, something I found extremely enlightening that Microsoft implemented. At its most basic level a WCF Service is comprised of two components: the Service Interface Definition file and the actual implementation. In the case of the migration, I created my Interface as follows:
[ServiceContract]
public interface IWCFService {
     [OperationContract]
     lib.Objects.Post GetSinglePost(int year, int month, int day, string postname);

     [OperationContract]
     List<lib.Objects.Comment> GetCommentsFromPost(int postID);

     [OperationContract(IsOneWay = true)]
     void AddComment(string PersonName, string EmailAddress, string Body, int PostID);

     [OperationContract]
     lib.Objects.Content GetContent(string pageName);

     [OperationContract]
     List<lib.Objects.Post> GetPosts(DateTime startDate, DateTime endDate);

     [OperationContract]
     List<lib.Objects.Post> GetPostsByTags(string tagName);

     [OperationContract]
     List<lib.Objects.ArchiveItem> GetArchiveList();

     [OperationContract]
     List<lib.Objects.LinkItem> GetLinkList();

     [OperationContract]
     List<lib.Objects.TagCloudItem> GetTagCloud();

     [OperationContract]
     List<lib.Objects.MenuItem> GetMenuItems();
}
The one thing to note: the IsOneWay atop the AddComment function indicates that the client doesn't expect a return value. As noted in last night's post, the end user is not going to want to wait for all the emails to be sent; they simply want their comment to be posted and the Comment Listing refreshed with their comment. By setting IsOneWay to true, you ensure the client's experience is fast no matter the server side work being done. And the actual implementation:
public class WCFService : IWCFService {
     public Post GetSinglePost(int year, int month, int day, string postname) {
          using (var pFactory = new PostFactory()) {
               var post = pFactory.GetPost(postname)[0];
               post.Comments = pFactory.GetCommentsFromPost(post.ID);

               return post;
          }
     }

     public List<Comment> GetCommentsFromPost(int postID) {
          using (var pFactory = new PostFactory()) {
               return pFactory.GetCommentsFromPost(postID);
          }
     }

     public void AddComment(string PersonName, string EmailAddress, string Body, int PostID) {
          using (var pFactory = new PostFactory()) {
               pFactory.addComment(PostID, PersonName, EmailAddress, Body);
          }
     }

     public Content GetContent(string pageName) {
          using (var cFactory = new ContentFactory()) {
               return cFactory.GetContent(pageName);
          }
     }

     public List<Post> GetPosts(DateTime startDate, DateTime endDate) {
          using (var pFactory = new PostFactory()) {
               return pFactory.GetPosts(startDate, endDate);
          }
     }

     public List<Post> GetPostsByTags(string tagName) {
          using (var pFactory = new PostFactory()) {
               return pFactory.GetPostsByTags(tagName);
          }
     }

     public List<ArchiveItem> GetArchiveList() {
          using (var pFactory = new PostFactory()) {
               return pFactory.GetArchiveList();
          }
     }

     public List<LinkItem> GetLinkList() {
          using (var pFactory = new PostFactory()) {
               return pFactory.GetLinkList();
          }
     }

     public List<TagCloudItem> GetTagCloud() {
          using (var pFactory = new PostFactory()) {
               return pFactory.GetTagCloud();
          }
     }

     public List<MenuItem> GetMenuItems() {
          using (var bFactory = new BaseFactory()) {
               return bFactory.GetMenuItems();
          }
     }
}
One thing you might be asking: isn't this a security risk? If you're not, you should be. Think about it: anyone who has access to your WCF Service could add comments and pull down your data at will. In its current state this isn't a huge deal, since the service is only returning data and the AddComment Operation Contract requires a comment to be approved before it is displayed, but what about when the administrator functionality is implemented? You definitely don't want to expose those contracts to the outside world as-is. So what can you do?
1. Keep your WCF Service not exposed to the internet - this is problematic in today's world, where a mobile presence is almost a necessity. Granted, if one were only to create an MVC 4 Mobile Web Application you could keep it behind a firewall. My current thought process is to design and do it right the first time, and not corner yourself into a position where you have to go back and do additional work.
    2. Add username, password or some token to the each Operation Contract and then verify the user - this approach works and I've done it that way for public WCF Services. The problem becomes more of a lot of extra work on both the client and server side. Client Side you can create a base class with the token, username/password and simply pass it into each contract and then server side do a similar implementation
    3. Implement a message level or Forms Membership - This approach requires the most upfront work, but reaps the most benefits as it keeps your Operation Contracts clean and offers an easy path to update at a later date.
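    The second option above can be sketched in a few lines. This is a minimal, hedged illustration, not bbXP code: the type and member names (AuthenticatedRequest, RequestValidator, tokenLookup) are all hypothetical stand-ins for whatever credential store the real service would use.

```csharp
using System;

// Hypothetical sketch of option 2: every request DTO inherits a base class
// carrying the caller's credentials, and the service validates them before
// doing any work.
public class AuthenticatedRequest
{
    public string Username { get; set; }
    public string Token { get; set; }
}

public class AddCommentRequest : AuthenticatedRequest
{
    public int PostId { get; set; }
    public string Body { get; set; }
}

public static class RequestValidator
{
    // tokenLookup stands in for a real lookup against a user/token store
    public static bool IsValid(AuthenticatedRequest request, Func<string, string> tokenLookup)
    {
        if (request == null || string.IsNullOrEmpty(request.Username))
            return false;

        return tokenLookup(request.Username) == request.Token;
    }
}
```

    The obvious drawback, as noted above, is that every single Operation Contract now has to take and validate the credential-bearing request, which is exactly the repetition option 3 avoids.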
    Going forward I will be implementing the 3rd option and of course I will document the process. Hopefully this help get developers thinking about security and better approaches to problems. Moving onto the second half of the post, creating an RSS Feed.

    Creating an RSS Feed

    After getting my class into my WCF Service, I created a new Stored Procedure in preparation:

    [sql]
    CREATE PROCEDURE dbo.getRSSFeedListSP
    AS
    SELECT TOP 25
        dbo.Posts.Created,
        dbo.Posts.Title,
        LEFT(CAST(dbo.Posts.Body AS VARCHAR(MAX)), 200) + '...' AS 'Summary',
        dbo.Posts.URLSafename
    FROM dbo.Posts
    INNER JOIN dbo.Users ON dbo.Users.ID = dbo.Posts.PostedByUserID
    WHERE dbo.Posts.Active = 1
    ORDER BY dbo.Posts.Created DESC
    [/sql]

    Basically this returns the 25 most recent posts along with up to the first 200 characters of each post. Afterwards I created a class to translate the Entity Framework Complex Type:
    [DataContract]
    public class PostFeedItem {
         [DataMember]
         public DateTime Published { get; set; }

         [DataMember]
         public string Title { get; set; }

         [DataMember]
         public string Description { get; set; }

         [DataMember]
         public string URL { get; set; }

         public PostFeedItem(DateTime published, string title, string description, string url) {
              Published = published;
              Title = title;
              Description = description;
              URL = url;
         }
    }
    And then I added a new Operation Contract in my WCF Service:
    public List<lib.Objects.PostFeedItem> GetFeedList() {
         using (var pFactory = new PostFactory()) {
              return pFactory.GetFeedList();
         }
    }
    Now I am going to leave it up to you which path to implement. At this point you've got all of the backend work done to return the data you need to write your RSS XML file. There are many ways to proceed, and it really depends on how you want to serve your RSS Feed. Do you want to regenerate it on the fly for each request? Or do you want to write an XML file only when a new Post is published and simply serve the static XML file? From what my research turned up, there are multiple ways to do each. I am in favor of doing the work once and writing it out to a file, rather than repeating all of that work on each request; the latter seems like a waste of server resources.

    Generate Once
    1. Use the Typed DataSet approach from Part 1 - this requires very little work and, if you're like me, you like a strongly typed approach.
    2. Use the built-in SyndicationFeed class to create your RSS Feed's XML - an approach I hadn't researched before for generating a feed.
    3. Use the lower-level XmlWriter functionality in .NET to build your RSS Feed's XML - I strongly urge you not to do this, given that the two approaches above are strongly typed. Weakly typed code leads to spaghetti and a debugging disaster when something goes wrong.
    Generate On-The-Fly
    1. Use the previously completed WCF OperationContract to return the data, then use something like MVC Contrib to return an XmlResult from your MVC Controller.
    2. Set your MVC View to return XML and simply iterate through all of the Post items.
    Those are just some ways to accomplish the goal of creating an RSS Feed for your MVC site. Which is right? I think it is up to you to find what works best for you. That being said, I am going to walk through the first two Generate Once options. For both approaches I am going to use IIS's UrlRewrite functionality to route the friendly /feed URL to the static rss.xml file. For those interested, all it took was the following block in my web.config in the system.webServer section:

    [xml]
    <rewrite>
      <rules>
        <rule name="RewriteUserFriendlyURL1" stopProcessing="true">
          <match url="^feed$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="rss.xml" />
        </rule>
      </rules>
    </rewrite>
    [/xml]

    To learn more about URL Rewrite, go to the official site here.

    Option 1 - XSD Approach

    Utilizing a similar approach to how I got started with the XSD tool in Part 1, I generated a Typed DataSet based on the format of an RSS XML file:

    [xml]
    <?xml version="1.0"?>
    <rss version="2.0">
      <channel>
        <title>Jarred Capellman</title>
        <link></link>
        <description>Putting 1s and 0s to work since 1995</description>
        <language>en-us</language>
        <item>
          <title>Version 2.0 Up!</title>
          <link></link>
          <description>Yeah in all its glory too, it's far from complete, the forum will be up tonight most likely...</description>
          <pubDate>5/4/2012 12:00:00 AM</pubDate>
        </item>
      </channel>
    </rss>
    [/xml]

    [caption id="attachment_2056" align="aligncenter" width="300"]Generated Typed Data Set for RSS[/caption]

    Then in my HomeController I wrote a function, to be called when a new Post is entered into the system, to handle writing the XML:
    private void writeRSSXML() {
         var dt = new rss();

         using (var ws = new WCFServiceClient()) {
              var feedItems = ws.GetFeedList();

              var channelRow = dt.channel.NewchannelRow();
              channelRow.title = Common.Constants.SITE_NAME;
              channelRow.description = Common.Constants.SITE_DESCRIPTION;
              channelRow.language = Common.Constants.SITE_LANGUAGE;
              channelRow.link = Common.Constants.URL;

              dt.channel.AddchannelRow(channelRow);
              dt.channel.AcceptChanges();

              foreach (var item in feedItems) {
                   var itemRow = dt.item.NewitemRow();
                   itemRow.SetParentRow(channelRow);
                   itemRow.description = item.Description;
                   itemRow.link = buildPostURL(item.URL, item.Published);
                   itemRow.pubDate = item.Published.ToString(CultureInfo.InvariantCulture);
                   itemRow.title = item.Title;

                   dt.item.AdditemRow(itemRow);
                   dt.item.AcceptChanges();
              }
         }

         var xmlString = dt.GetXml();
         xmlString = xmlString.Replace("<rss>", "<?xml version=\"1.0\" encoding=\"utf-8\"?><rss version=\"2.0\">");

         using (var sw = new StreamWriter(HttpContext.Server.MapPath("~/rss.xml"))) {
              sw.Write(xmlString);
         }
    }
    Pretty intuitive code, with one exception - I could not find a way to add the version attribute to the rss element, thus having to use the GetXml() method and a String.Replace instead of simply calling dt.WriteXml(HttpContext.Server.MapPath("~/rss.xml")). Overall, though, I find this approach very acceptable, but not perfect.

    Option 2 - Syndication Approach

    Not 100% satisfied with the XSD approach mentioned above, I dove into the SyndicationFeed class (be sure to include using System.ServiceModel.Syndication; at the top of your MVC Controller). I created the same function as above, but this time utilizing the SyndicationFeed class built into .NET:
    private void writeRSSXML() {
         using (var ws = new WCFServiceClient()) {
              var feed = new SyndicationFeed();
              feed.Title = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_NAME);
              feed.Description = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_DESCRIPTION);
              feed.Language = Common.Constants.SITE_LANGUAGE;
              feed.Links.Add(new SyndicationLink(new Uri(Common.Constants.URL)));

              var feedItems = new List<SyndicationItem>();

              foreach (var item in ws.GetFeedList()) {
                   var sItem = new SyndicationItem();
                   sItem.Title = SyndicationContent.CreatePlaintextContent(item.Title);
                   sItem.PublishDate = item.Published;
                   sItem.Summary = SyndicationContent.CreatePlaintextContent(item.Description);
                   sItem.Links.Add(new SyndicationLink(new Uri(buildPostURL(item.URL, item.Published))));
                   feedItems.Add(sItem);
              }

              feed.Items = feedItems;

              var rssWriter = XmlWriter.Create(HttpContext.Server.MapPath("~/rss.xml"));
              var rssFeedFormatter = new Rss20FeedFormatter(feed);
              rssFeedFormatter.WriteTo(rssWriter);
              rssWriter.Close();
         }
    }
    On first glance you might notice very similar code between the two approaches, with one major exception - there are no hacks to make it work as intended. Between the two I am going to go live with the latter approach; not having to worry about the String.Replace ever failing, and not having any "magic" strings, is worth it. But I will leave the decision to you as to which to implement - or maybe another approach I didn't mention; please comment if you have one, as I am always open to using "better" or alternate approaches. Now that the WCF Service is fully integrated and RSS Feeds have been added, only a few end-user-facing features remain: Caching, Searching Content and Error Pages. Stay tuned for Part 8 tomorrow.
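    For anyone who wants to experiment with the SyndicationFeed API outside of an MVC site, here is a self-contained sketch that writes the RSS 2.0 XML to a string instead of a file so it can be inspected directly. RssBuilder and its tuple-based post list are illustrative stand-ins, not bbXP code, and this assumes the System.ServiceModel.Syndication assembly is referenced:

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Text;
using System.Xml;

// Self-contained sketch of the SyndicationFeed approach: build a feed from
// (title, summary, published date, link) tuples and serialize it as RSS 2.0.
public static class RssBuilder
{
    public static string BuildFeed(string title, string description, Uri siteUrl,
        IEnumerable<Tuple<string, string, DateTimeOffset, Uri>> posts)
    {
        var feed = new SyndicationFeed
        {
            Title = SyndicationContent.CreatePlaintextContent(title),
            Description = SyndicationContent.CreatePlaintextContent(description),
            Language = "en-us"
        };
        feed.Links.Add(new SyndicationLink(siteUrl));

        var items = new List<SyndicationItem>();
        foreach (var post in posts)
        {
            var item = new SyndicationItem
            {
                Title = SyndicationContent.CreatePlaintextContent(post.Item1),
                Summary = SyndicationContent.CreatePlaintextContent(post.Item2),
                PublishDate = post.Item3
            };
            item.Links.Add(new SyndicationLink(post.Item4));
            items.Add(item);
        }
        feed.Items = items;

        // Write the RSS 2.0 XML into a StringBuilder instead of a file
        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb))
        {
            new Rss20FeedFormatter(feed).WriteTo(writer);
        }
        return sb.ToString();
    }
}
```

    Swapping the StringBuilder for the MapPath-based XmlWriter in the post above gets you back to the file-on-disk behavior.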
    Nearing the end of my initial migration now in Part 6, I dove into listing Comments, adding new Comments and then emailing users when a new Comment is entered (other posts: Part 1, Part 2, Part 3, Part 4 and Part 5). In Part 5 I imported the Comments, but wasn't doing anything with them beyond showing the Comment Count in the Post Title. In this post I will begin with what I did to display the comments. First off, I added a new column to my PostComments table to handle Comments that were not approved or are pending approval (thinking about spam bots in particular). After adding that new column, I created a Stored Procedure to return the Comments for a given Post. Some (maybe most) might find creating a Stored Procedure simply to return one table unnecessary, but I find it helps keep my C# much cleaner by adding that layer between my SQL database and my C# code.

    [sql]
    CREATE PROCEDURE [dbo].[getPostCommentsSP] (@PostID INT)
    AS
    SELECT
        dbo.PostComments.Modified,
        dbo.PostComments.Body,
        dbo.PostComments.Name
    FROM dbo.PostComments
    WHERE dbo.PostComments.PostID = @PostID
        AND dbo.PostComments.IsApproved = 1
        AND dbo.PostComments.Active = 1
    [/sql]

    I proceeded to add a new function in my PostFactory to return the converted List collection:
    public List<Objects.Comment> GetCommentsFromPost(int PostID) {
         using (var eFactory = new bbxp_jarredcapellmanEntities()) {
              return eFactory.getPostCommentsSP(PostID).Select(a => new Objects.Comment(a.Name, a.Body, a.Modified)).ToList();
         }
    }
    Because I had already created a SinglePost ActionResult, I simply added one line to populate my newly created Comments List collection:
    model.Post.Comments = pFactory.GetCommentsFromPost(model.Posts[0].ID);
    Since the main listings do not display the Comments, just the count, it was necessary to have a unique ActionResult. That being said, I did reuse my PartialView I created the other night, only adding to it:
    @if (Model.Comments != null) {
         <div id="PostListing">
              <div class="Title">
                   <h2>Comments</h2>
              </div>
              @foreach (var comment in Model.Comments) {
                   <div class="Comment">
                        <div class="Title">
                             <h3>@comment.Name - @comment.PostTime</h3>
                        </div>
                        <div class="Body">
                             @comment.Body
                        </div>
                   </div>
              }
         </div>
    }
    Because the Comments are otherwise null, I can reuse the PartialView. After adding in all of the CSS styles:

    [caption id="attachment_2039" align="aligncenter" width="300"]Comments Listing in MVC4 Site[/caption]

    Next on my list was adding a Comment form below the Comments Listing. Adding a basic form is pretty trivial for web developers; here is the code I am using:
    @if (ViewBag.SinglePost != null) {
         <div class="CommentForm">
              <input type="hidden" name="PostID" value="@Model.ID" />
              <div class="Title">
                   <h2>Add a Comment</h2>
              </div>
              <div class="Fields">
                   <input type="text" id="PersonName" name="PersonName" class="k-textbox" required placeholder="Name" /><span class="requiredField">*</span><br/><br/>
                   <input type="text" id="EmailAddress" name="EmailAddress" class="k-textbox" required placeholder="Email Address" /><span class="requiredField">*</span><br/><br/>
              </div>
              <div class="Body">
                   <textarea class="k-textbox" id="Body" name="Body" cols="500" maxlength="9999" wrap="soft" rows="5" placeholder="Enter comment here"></textarea><span class="requiredField">*</span><br/>
              </div>
              <div class="Submit">
                   <button class="k-button" type="submit">Submit Comment >></button>
              </div>
         </div>
    }
    And the following line right above the CommentListing posted above:
    @using (Ajax.BeginForm("AddComment", "Home", new AjaxOptions { UpdateTargetId = "PostListing" })) {
    In my PostFactory I added the following code - note the line that auto-approves a comment if the name/email combination had previously been approved, just like WordPress does:
    public void addComment(int PostID, string name, string email, string body) {
         using (var eFactory = new bbxp_jarredcapellmanEntities()) {
              var comment = eFactory.PostComments.Create();
              comment.Active = true;
              comment.Body = body;
              comment.Created = DateTime.Now;
              comment.Email = email;
              comment.Modified = DateTime.Now;
              comment.Name = name;
              comment.PostID = PostID;
              comment.IsApproved = eFactory.PostComments.Any(a => a.Name == name && a.Email == email && a.Active && a.IsApproved);
              eFactory.PostComments.Add(comment);
              eFactory.SaveChanges();
         }
    }
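    The auto-approval rule can be exercised in isolation. Below is a minimal sketch with an in-memory list standing in for the PostComments table; CommentRecord and CommentRules are illustrative names, not bbXP types, but the LINQ predicate is the same one used above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for the PostComments entity
public class CommentRecord
{
    public string Name { get; set; }
    public string Email { get; set; }
    public bool Active { get; set; }
    public bool IsApproved { get; set; }
}

public static class CommentRules
{
    // A new comment is auto-approved only when the same name/email pair
    // already has an active, approved comment on record
    public static bool ShouldAutoApprove(IEnumerable<CommentRecord> existing, string name, string email)
    {
        return existing.Any(a => a.Name == name && a.Email == email && a.Active && a.IsApproved);
    }
}
```

    A first-time commenter therefore always lands in the moderation queue, while a returning, previously approved commenter posts straight through.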
    A feature of WordPress I realized I enjoyed was that it emailed me when a new comment was entered in the system, so I figured I would add the same functionality to my MVC app. One thing I should note: this is far from ideal code. Think of a larger site with hundreds or thousands of comments from users - a commenter would have to wait until all of the notification emails were sent before being returned to the post they commented on. A better approach would be to offload this potentially long-running task to a Windows Service, a feature I will be adding shortly.
    // If the comment wasn't approved don't bother continuing to process
    if (!comment.IsApproved) {
         return;
    }

    // Grab the existing approved comments and exclude the newly added comment
    var existingComments = eFactory.getPostCommentsSP(PostID).Where(a => a.ID != comment.ID).ToList();

    // Make sure there is at least 1 other comment in the system
    if (existingComments.Count == 0) {
         return;
    }

    // Grab the Post to get the Post Title
    var post = eFactory.Posts.FirstOrDefault(a => a.ID == PostID);

    // Populate the Title and Body sections
    var Title = "Comment: \"" + post.Title + "\"";
    var Body = String.Format("The following comment by {0} was added:" + System.Environment.NewLine + "{1}", comment.Name, comment.Body);

    // Iterate through the comments individually so as to not reveal the commenters' email addresses to each other
    using (var smtpClient = new SmtpClient()) {
         foreach (var existingComment in existingComments) {
              smtpClient.Send(ConfigurationManager.AppSettings["EMAILADDRESS_CommentNotification"], existingComment.Email, Title, Body);
         }
    }
    So what is next? Implementing the WCF Service previously mentioned and the Windows Service mentioned above. This will allow me to easily create Windows Phone 8, Windows 8 Store apps or heck even a command line version if there was demand. More to come...
    As mentioned yesterday, I began diving back into a large project at work that involves Windows Workflow. At this point it had been almost six months to the day since I last touched the initial code, so today involved a lot of getting back into the mindset I had then and what I was trying to accomplish. This digging unfortunately revealed I had left the code in a non-functional state, to the point that the Workflow Service was not connecting to the AppFabric databases properly. Long story short, three hours later I was able to get everything back to where I thought I had left it. Lesson learned: before jumping projects, always make sure you leave the code in a usable state, or at least document what isn't working properly. In my defense, the original sidetracking project was only supposed to be three weeks. Back to the Windows Workflow development - one issue I had today with my xamlx/CodeActivity was that I could not retrieve or set the InArgument variables defined at a global level in my xamlx file using the more proper method of:
    InArgument<decimal> SomeID { get; set; }

    protected override Guid Execute(CodeActivityContext context) {
         SomeID.Get(context); // To get the value
         SomeID.Set(context, 1234); // Set it to 1234
    }
    No matter what, the value was always 0 when getting the variable, even though it had been set at the entry point of the Workflow. After trying virtually everything I could, I came up with a workaround that does work. Do note I highly doubt this is the way Microsoft intended for it to be accomplished, but for the time being this is the only way I could get my xamlx-defined variables updated/set in the custom CodeActivity. What I did was create set/get functions that are as generic as possible:
    private T getValue<T>(CodeActivityContext context, string name) {
         var properties = context.DataContext.GetProperties()[name];

         if (properties == null) {
              return default(T);
         }

         return (T)properties.GetValue(context.DataContext);
    }

    private void setValue(CodeActivityContext context, string name, object value) {
         context.DataContext.GetProperties()[name].SetValue(context.DataContext, value);
    }
    And then to use the functionality in your CodeActivity:
    protected override Guid Execute(CodeActivityContext context) {
         var SomeID = getValue<decimal>(context, "SomeID"); // Get the SomeID variable
         setValue(context, "SomeID", 1234); // Set SomeID to 1234
    }
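    The PropertyDescriptor mechanism the workaround relies on can also be exercised outside of a Workflow host against any plain object via TypeDescriptor, which makes the pattern easy to test. A hedged sketch, with WorkflowState and PropertyBag as illustrative names rather than Workflow types:

```csharp
using System;
using System.ComponentModel;

// Illustrative object standing in for the workflow's variable scope
public class WorkflowState
{
    public decimal SomeID { get; set; }
}

public static class PropertyBag
{
    // Look a property up by name and read it, mirroring getValue<T> above
    public static T GetValue<T>(object source, string name)
    {
        var property = TypeDescriptor.GetProperties(source)[name];
        return property == null ? default(T) : (T)property.GetValue(source);
    }

    // Look a property up by name and write it, mirroring setValue above
    public static void SetValue(object source, string name, object value)
    {
        TypeDescriptor.GetProperties(source)[name].SetValue(source, value);
    }
}
```

    The only difference from the workflow version is the source object: CodeActivityContext.DataContext exposes the same PropertyDescriptorCollection shape that TypeDescriptor.GetProperties returns here.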
    Hopefully that gets someone on the right track and if you do eventually find the "correct" way please let me know. Otherwise I will definitely be asking the Windows Workflow experts at TechED North America in June.
    Continuing onto Part 5 of my migration from WordPress to MVC 4, I dove into Content, Comments and Routing tonight. (Other Posts: Part 1, Part 2, Part 3 and Part 4). First thing I did tonight was add a new route to handle pages in the same way WordPress does (YYYY/MM/DD/) for several reasons, though my primary reason is to retain all of the links from the existing WordPress site - something I'd highly suggest you consider doing as well. As noted the other night, your MVC Routing is contained in your Global.asax.cs file. Below is the route I added to accept the same format as WordPress:
    routes.MapRoute("ContentPageRoute", "{pagename}", new { controller = "Home", action = "ContentPage" });
    Be sure to put it before the Default Route otherwise the route above will not work. After I got the Route setup, I went back into my _Layout.cshtml and updated the header links to pull from a SQL Table and then return the results to the layout:
    <div class="HeaderMenu">
         <nav>
              <ul id="menu">
                   <li>@Html.ActionLink("home", "Index", "Home")</li>
                   @foreach (bbxp.lib.Objects.MenuItem menuItem in Model.Base.MenuItems) {
                        <li>@Html.ActionLink(@menuItem.Title, "ContentPage", "Home", new { pagename = @menuItem.URLName }, null)</li>
                   }
              </ul>
         </nav>
    </div>
    Further down the road I plan to add a UI interface to adjust the menu items, thus the need to make it programmatic from the start. Next on the list was actually importing the content from the export functionality in WordPress. Thankfully the structure is similar to the actual posts so it only took the following code to get them all imported:
    if (item.post_type == "page") {
         var content = eFactory.Contents.Create();
         content.Active = true;
         content.Body = item.encoded;
         content.Created = DateTime.Parse(item.post_date);
         content.Modified = DateTime.Parse(item.post_date);
         content.PostedByUserID = creator.ID;
         content.Title = item.title;
         content.URLSafename = item.post_name;
         eFactory.Contents.Add(content);
         eFactory.SaveChanges();
         continue;
    }
    With some time to spare, I started work on the Comments piece of the migration. Immediately after the Post creation in the Importer, I added the following to import all of the comments:
    foreach (var comment in item.GetcommentRows()) {
         var nComment = eFactory.PostComments.Create();
         nComment.Active = true;
         nComment.Body = comment.comment_content;
         nComment.Created = DateTime.Parse(comment.comment_date);
         nComment.Modified = DateTime.Parse(comment.comment_date);
         nComment.PostID = post.ID;
         nComment.Email = comment.comment_author_email;
         nComment.Name = comment.comment_author;
         eFactory.PostComments.Add(nComment);
         eFactory.SaveChanges();
    }
    And now that there were actual comments in the system, I went back into my partial view for the Posts and added the code to display the Comments Link and Total properly:
    <div class="CommentLink">
         @{
              object commentLink = @bbxp.mvc.Common.Constants.URL + @Model.PostDate.Year + "/" + @Model.PostDate.Month + "/" + @Model.PostDate.Day + "/" + @Model.URLSafename;
              <h4><a href="@commentLink">@Model.NumComments @(Model.NumComments == 1 ? "Comment" : "Comments")</a></h4>
         }
    </div>
    After getting the Comments Count displayed, I wanted to do some refactoring of the code up to this point. Now that I've got a pretty good understanding of the MVC architecture, I started to create base objects: the commonly pulled-in data (Tag Cloud, Menu Items, Archive List, etc.) now lives in a BaseModel, populated in a BaseController from which all Controllers inherit. I cut down on a good chunk of code, and feel pretty confident that as time goes on I will be able to expand upon this baseline architecture very easily.

    [caption id="attachment_2032" align="aligncenter" width="300"]Migration Project as of Part 5[/caption]

    So what is on the plate next? Getting the Comments displayed, the ability to post new comments, and, on the back end, emailing people when a new comment is entered for a particular post.
    Today I began a several-month project that includes an extensive Windows Workflow implementation, Mail Merging based around a Word Template (dotx) and extensive integrations with WCF Services, WebForms and a WinForms application. Without going into a ton of detail, this project will most likely be my focus for the next 8-9 months at least. That being said, today I dove back into the OpenXML Mail Merging I began last October. Realizing the scope of the Mail Merging was evolving, I looked into possibly using an external library. Aspose's Word library looked like it fit the bill for what I was planning to achieve, while also letting me retire some stop gaps I had put in place years ago at this point. Luckily, given the way I had implemented OpenXML myself, it was as easy as replacing 20-30 lines in my main document generation class with the following:
    var docGenerated = new Document(finalDocumentFileName);

    // Callback to handle HTML Tables
    docGenerated.MailMerge.FieldMergingCallback = new MailMergeFieldHandler();

    // Get all of the Mail Merge Fields in the Document
    var fieldNames = docGenerated.MailMerge.GetFieldNames();

    // Check to make sure there are Mail Merge Fields
    if (fieldNames != null && fieldNames.Length > 0) {
         foreach (string fieldName in fieldNames) {
              var fieldValue = fm.Merge(fieldName);

              // Replace System.Environment.NewLine with LineBreak
              if (!String.IsNullOrEmpty(fieldValue)) {
                   fieldValue = fieldValue.Replace(System.Environment.NewLine, Aspose.Words.ControlChar.LineBreak);
              }

              // Perform the Mail Merge
              docGenerated.MailMerge.Execute(new string[] { fieldName }, new object[] { fieldValue });
         }
    }

    // Save the document to a PDF
    docGenerated.Save(finalDocumentFileName.Replace(".dotx", ".pdf"));
    And the Callback Class referenced above:
    public class MailMergeFieldHandler : IFieldMergingCallback {
         void IFieldMergingCallback.FieldMerging(FieldMergingArgs e) {
              if (e.FieldValue == null) {
                   return;
              }

              // Only do the more extensive merging for Field Values that start with a <table> tag
              if (!e.FieldValue.ToString().StartsWith("<table")) {
                   e.Text = e.FieldValue.ToString();
                   return;
              }

              // Merge the HTML Tables
              var builder = new DocumentBuilder(e.Document);
              builder.MoveToMergeField(e.DocumentFieldName);
              builder.InsertHtml((string)e.FieldValue);
              e.Text = "";
         }

         void IFieldMergingCallback.ImageFieldMerging(ImageFieldMergingArgs args) {
         }
    }
    More to come with the Aspose library as I explore more features, but so far I am very pleased with the ease of use and the performance of the library.
    Continuing onto Day 4 of my Migration to MVC 4 (Part 1, Part 2 and Part 3), contrary to what I suggested I would focus on last night, I dove into getting all of the MVC Routing in place so the site could start functioning like my existing WordPress site does. Having only been doing MVC for a month and a half, in between MonoDroid, MonoTouch and Windows Phone work, I hadn't had time to really dive into Routing - something I had been excited about implementing in a future project. One of the first hurdles I ran into was leaving the default routing in place; frustratingly, this caused all of my new routes to not be processed correctly, so keep that in mind when you first dive into MVC. For those also early on in their MVC path, your routing is defined in the Global.asax.cs file. Your default route is defined in the RegisterRoutes function:
    routes.MapRoute(
        name: "Default",
        url: "{controller}/{action}/{id}",
        defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );
    If you were going to have a WordPress-style URL with the year and month in it, you would define a route like this:

    routes.MapRoute("PostsRoute", "{year}/{month}", new { controller = "Home", action = "Posts" }, new { year = @"\d+" });
    I took it one step further by forcing the year parameter to be a number. To make use of this route, just make sure your Home controller has a Posts ActionResult with both year and month parameters, like so:
    public class HomeController : Controller {
         public ActionResult Posts(int year, string month) {
              ....
         }
    }
    So now that the routing is working like I wanted, tomorrow night's focus is getting the content pages imported, routed and displayed properly. Being the planner I am, I figured I would map out the next few evenings:
    1. Tuesday - Content Pages (Import, Routing, Display)
    2. Wednesday - Comments (Import, Display, Adding)
    3. Thursday - Create the WCF Service Layer
    4. Friday - Add Login and Basic Add Post Form
    5. Sunday - Edit Post/Content Support
    Continuing my series on Migrating from WordPress to MVC4 (Part 1 and Part 2), today I worked on the right-hand sidebar and on importing Tags from the export mentioned in Part 1. Where did I begin? Based on the original import I made last Friday night, I created a Stored Procedure to build the right-hand side "Archives List"; for those curious, here is the SQL:

    [sql]
    SELECT
        DATEPART(YEAR, dbo.Posts.Created) AS 'PostYear',
        DATENAME(MONTH, dbo.Posts.Created) AS 'PostMonth',
        (SELECT COUNT(*)
         FROM dbo.Posts postsCount
         WHERE DATENAME(MONTH, postsCount.Created) = DATENAME(MONTH, dbo.Posts.Created)
             AND DATEPART(YEAR, postsCount.Created) = DATEPART(YEAR, dbo.Posts.Created)
             AND postsCount.Active = 1) AS 'NumPosts'
    FROM dbo.Posts
    WHERE dbo.Posts.Active = 1
    GROUP BY DATENAME(MONTH, dbo.Posts.Created), DATEPART(YEAR, dbo.Posts.Created)
    ORDER BY DATEPART(YEAR, dbo.Posts.Created) DESC, DATENAME(MONTH, dbo.Posts.Created)
    [/sql]

    And then in the UI:
    <div class="Widget">
         <div class="Title">
              <h3>Archives</h3>
         </div>
         <div class="Content">
              @foreach (var item in Model.ArchiveItems) {
                   var baseURL = "" + @item.Year + "/" + @item.Month + "/";
                   <div class="ArchiveItem">
                        <a href="@baseURL">@item.Month @item.Year (@item.PostCount)</a>
                   </div>
              }
         </div>
    </div>
    After all is said and done (also included are the SQL-stored Links):

    [caption id="attachment_2004" align="aligncenter" width="100"]Archived List and Links List[/caption]

    At this point I needed to re-import the data, as I had only imported the Posts, and I added support for importing Post Categories while I was at it. If you're referencing the code in this series, I added the following block immediately after the Post row is added. This block parses and imports the Tags and Categories:
    foreach (var tag in item.GetcategoryRows()) {
         if (tag.domain == "post_tag") {
              var existingTag = eFactory.Tags.FirstOrDefault(a => a.Description == tag.category_Text);

              if (existingTag == null) {
                   existingTag = eFactory.Tags.Create();
                   existingTag.Active = true;
                   existingTag.Created = DateTime.Now;
                   existingTag.Description = tag.category_Text;
                   existingTag.Modified = DateTime.Now;
                   eFactory.Tags.Add(existingTag);
                   eFactory.SaveChanges();
              }

              var relationalRow = eFactory.Posts2Tags.Create();
              relationalRow.Active = true;
              relationalRow.Created = post.Created;
              relationalRow.Modified = post.Created;
              relationalRow.PostID = post.ID;
              relationalRow.TagID = existingTag.ID;
              eFactory.Posts2Tags.Add(relationalRow);
              eFactory.SaveChanges();
         } else if (tag.domain == "category") {
              var existingCategory = eFactory.PostCategories.FirstOrDefault(a => a.Description == tag.category_Text);

              if (existingCategory == null) {
                   existingCategory = eFactory.PostCategories.Create();
                   existingCategory.Active = true;
                   existingCategory.Created = DateTime.Now;
                   existingCategory.Description = tag.category_Text;
                   existingCategory.Modified = DateTime.Now;
                   eFactory.PostCategories.Add(existingCategory);
                   eFactory.SaveChanges();
              }

              var relationalRow = eFactory.Posts2Categories.Create();
              relationalRow.Active = true;
              relationalRow.Created = post.Created;
              relationalRow.Modified = post.Created;
              relationalRow.PostID = post.ID;
              relationalRow.PostCategoryID = existingCategory.ID;
              eFactory.Posts2Categories.Add(relationalRow);
              eFactory.SaveChanges();
         }
    }
    Now that the Tags and Categories are imported into SQL Server, I wanted to recreate the Tag Cloud feature that WordPress offers and that Telerik offers in their ASP.NET WebForms suite. At a base level, all a Tag Cloud really does is render a larger link for the most frequently used Tags, getting decreasingly smaller as the occurrences decrease. So for those only here to check out how I accomplished it, let's dive in. First I created a Stored Procedure to get the Top 50 used Tags:

    [sql]
    SELECT TOP 50
        (SELECT COUNT(*) FROM dbo.Posts2Tags WHERE dbo.Posts2Tags.TagID = dbo.Tags.ID) AS 'NumTags',
        dbo.Tags.Description
    FROM dbo.Tags
    WHERE dbo.Tags.Active = 1
    ORDER BY (SELECT COUNT(*) FROM dbo.Posts2Tags WHERE dbo.Posts2Tags.TagID = dbo.Tags.ID) DESC, dbo.Tags.Description ASC
    [/sql]

    And then in my Controller in the MVC 4 App:
    private List<lib.Objects.TagCloudItem> processTagCloud(List<lib.Objects.TagCloudItem> tagItems) {
         var startingLevel = 10;

         for (var x = 0; x < tagItems.Count; x++) {
              tagItems[x].CSSClassName = "TagItem" + startingLevel;

              if (startingLevel > 1) {
                   startingLevel--;
              }
         }

         return tagItems.OrderBy(a => a.Name).ToList();
    }
    Basically I created 10 "Levels" of different Tag Cloud sizes, and based on the top 9 I use the larger sizes. And the associated CSS:

    [css]
    /* Tag Cloud Items */
    .sideBar .Widget .Content .TagItem1 { font-size: 8pt; }
    .sideBar .Widget .Content .TagItem2 { font-size: 9pt; }
    .sideBar .Widget .Content .TagItem3 { font-size: 10pt; }
    .sideBar .Widget .Content .TagItem4 { font-size: 11pt; }
    .sideBar .Widget .Content .TagItem5 { font-size: 12pt; }
    .sideBar .Widget .Content .TagItem6 { font-size: 16pt; }
    .sideBar .Widget .Content .TagItem7 { font-size: 20pt; }
    .sideBar .Widget .Content .TagItem8 { font-size: 24pt; }
    .sideBar .Widget .Content .TagItem9 { font-size: 28pt; }
    .sideBar .Widget .Content .TagItem10 { font-size: 32pt; }
    [/css]

    And the Razor View:
<div class="Widget">
    <div class="Title">
        <h3>Tags</h3>
    </div>
    <div class="Content">
        @foreach (bbxp.lib.Objects.TagCloudItem tag in Model.TagCloudItems) {
            var url = "" + tag.Name + "/";
            <a href="@url" class="@tag.CSSClassName">@tag.Name</a>
        }
    </div>
</div>
After implementing the SQL, CSS and HTML I got it working just as I wanted: [caption id="attachment_2008" align="aligncenter" width="116"]bbxp TagCloud[/caption] Next up on the list is to import the comments, display them and create a comments form.
Continuing from last night's post, Part 1 of Migrating WordPress to MVC4, I spent some time today working on the visual display of the migrated posts. Like most programmers using WordPress, I utilize the excellent SyntaxHighlighter JavaScript/CSS library for all of my code blocks. The caveat with this is that those tags now exist throughout my posts, going back a year or more at this point. Luckily, my Regular Expression skills have gone up considerably with projects like my Windows Phone 8 app, jcCMAP, which utilizes XPath and Regular Expressions extensively. So where do you begin? Like many migrations you have a choice: do you migrate the data as-is into the new structure, or do you manipulate it, in this case preprocessing the tags into something else? Being a firm believer in storing data in as bare a form as possible and then worrying about the UI in my business and presentation layers, I am choosing to leave the tags as they exist. Luckily, the tags follow a very easy-to-parse syntax with brackets and the name of the language. The first step from last night was to do some refactoring of the Data Layer and split it into a true 3-tier architecture. I first created a PostFactory class to interface with the Entity Framework in my Windows Class Library:
public class PostFactory : IDisposable {
    public List<Objects.Post> GetPosts(DateTime startDate, DateTime endDate) {
        using (var eFactory = new bbxp_jarredcapellmanEntities()) {
            return eFactory.getPostListingSP(startDate, endDate).Select(a => new Objects.Post(a.ID, a.Created, a.Title, a.Body)).ToList();
        }
    }
    ....
    This block grabs all of the posts for a given date range from the getPostListingSP Stored Procedure and then using LINQ does a translation to a Post object that resides in my PCL library. The Post object exists in the Portable Class Library (PCL) to be utilized by the MVC4 app and the eventual Windows Phone 8 app. Planning ahead and doing things right from the get go will save you time - don't rush your initial architecture, you'll pay for it later. Next I create my Post Object that encapsulates the properties I want exposed to the clients (MVC4 App and Windows Phone). Some might find it silly to not simply reuse the EntityFramework Complex Type object that the stored procedure mentioned above returns. I find that approach to be a lack of separation of concerns and crossing tiers between the data and UI layers. For a simple site I might overlook it, but for 99% of the things I do, I always have an object that acts as a middle man between the data and UI layers. Now onto the code:
public class Post {
    // Properties
    public int ID { get; set; }

    public DateTime PostDate { get; set; }

    public string Title { get; set; }

    public string Body { get; set; }

    public string PostBy { get; set; }

    public Post(int id, DateTime postDate, string title, string body) {
        ID = id;
        PostDate = postDate;
        Title = title;
        Body = parsePost(body);
    }

    // Parse the SyntaxHighlighter tags and replace them with the SyntaxHighlighter <pre> tags
    private static string parsePost(string content) {
        var matches = Regex.Matches(content, @"\[(.*[a-z])\]");

        foreach (Match match in matches) {
            var syntaxTag = new SyntaxTag(match.Value);

            if (!syntaxTag.IsParseable) {
                continue;
            }

            if (syntaxTag.IsClosingTag) {
                content = content.Replace(syntaxTag.FullTagName, "</pre>");
            } else {
                content = content.Replace(syntaxTag.FullTagName, "<pre class=\"brush: " + syntaxTag.NameOnly + ";\">");
            }
        }

        return content;
    }
}
Pretty stock code; the only "interesting" part is the regular expression to grab all of the SyntaxHighlighter tags. For those writing Regular Expressions, I find it incredibly useful to use a tool like Regex Hero to build them, since you can test input on the fly without having to constantly rebuild your code and test. Next on the "to code" list was the SyntaxTag object.
public class SyntaxTag {
    public SyntaxTag(string value) {
        FullTagName = value;
    }

    public string NameOnly {
        get { return FullTagName.Replace("[/", "").Replace("[", "").Replace("]", ""); }
    }

    public bool IsClosingTag {
        get { return FullTagName.StartsWith("[/"); }
    }

    public string FullTagName { get; private set; }

    // Acceptable syntax tags (there are more, but this is all I used previously)
    private enum SYNTAXTAGS {
        csharp, xml, sql, php, c, bash, shell, cpp, js, java, ps, plain
    }

    public bool IsParseable {
        get {
            SYNTAXTAGS tag;
            return Enum.TryParse(NameOnly, out tag);
        }
    }
}
Again, a pretty basic class. Based on the full tag, it provides a clean interface to the Post class (or others down the road) without mucking up other areas of code. One thing I did that many might find strange is to use an enumeration to eliminate false positives. I am a huge fan of strongly typed code (which is why I shy away from languages that aren't), so it made perfect sense to utilize that approach here. As I utilize new tags for whatever reason, the logic is contained only here, so I won't be hunting around for where to update it. Another less "clean" approach would be to put these in the web.config or in your SQL database, though I find both of those more performance intensive and not necessary in this case. Now that the business and data layers are good to go for the time being, let's go back to our MVC4 App. Inside my controller the code is still pretty simple for my Index:
public ActionResult Index() {
    var model = new Models.HomeModel();

    using (var pFactory = new PostFactory()) {
        model.Posts = pFactory.GetPosts(new DateTime(2001, 1, 1), new DateTime(2013, 4, 13));
    }

    ViewBag.Model = model;

    return View(model);
}
    At the moment I don't have my WCF Service written yet so for the time being I am simply referencing the Windows Class Library mentioned above, thus why I am referencing the PostFactory class directly in the Controller. Then in my View:
@model bbxp.mvc.Models.HomeModel
@{
    ViewBag.Title = "Jarred Capellman";
}

@foreach (var post in Model.Posts) {
    @Html.Partial("PartialPost", post)
}

<script type="text/javascript">SyntaxHighlighter.all()</script>
    As I am looping through each post I am calling out to my Partial View, PartialPost. And for my Partial View:
@model bbxp.lib.Objects.Post

<div class="post">
    <div class="Date">
        <h3>@Model.PostDate.ToLongDateString()</h3>
    </div>
    <div class="Content">
        <div class="Title">
            <h2>@Model.Title</h2>
        </div>
        <div class="Body">@(new MvcHtmlString(Model.Body))</div>
    </div>
</div>
The @(new MvcHtmlString(@Model.Body)) line is very important, otherwise your HTML tags will be escaped rather than rendered as you would expect. When all is said and done I went from this last night: [caption id="attachment_1982" align="aligncenter" width="300"]End Result of an Initial Conversion[/caption] To this tonight: [caption id="attachment_1992" align="aligncenter" width="300"]After applying regular expressions to the Post Content[/caption] Next up is creating the WordPress Sidebar History and extending the functionality of the "engine" to support single Post views.
For those that are unfamiliar, from July 2003 until March 2011 this site ran under a custom PHP Content Management System I named bbXP, an acronym for Bulletin Board eXPerience. This project was one of the most influential projects I ever undertook in my free time and really defined the next ten years of my programming career. So why the sudden desire to go back to something custom in lieu of WordPress? A simple answer: I love to do things from scratch, and it is practice for getting my ASP.NET MVC skills up to my ASP.NET WebForms skills. That answer leads to this blog series, in which I'll be documenting my transition from this WordPress site to an ASP.NET MVC 4 Web Application and eventually a native Windows Phone 8 application when the MVC app is completed. In this blog post I will be reviewing the initial transition from a MySQL/WordPress installation to a baseline MVC application. A lot of people might disagree with my approach here, especially if you're coming at this from a DBA background. I prefer to do my data translation in C# instead of SQL via DTS or some other SQL-to-SQL approach. If you're looking for that approach, I am sure there are other blog posts detailing that process. For those still reading, Step 1 in my mind in migrating to a new platform is getting the data out of WordPress. Luckily, WordPress gives you an easy XML export from the Admin Menu: [caption id="attachment_1974" align="aligncenter" width="300"]Step 1 - Export Posts in WordPress[/caption] Depending on the number of posts you have, this could be anywhere from a few KB to a couple MB. Step 2: Getting a strongly typed interface to the newly exported XML. Luckily there is a very awesome tool that has come with Visual Studio since at least 2008: xsd.
The xsd tool can be run from its location in the Visual Studio folder, or it can be used anywhere via the Developer Command Prompt: [caption id="attachment_1976" align="aligncenter" width="300"]Step 2: Visual Studio 2012 Developer Prompt[/caption] Navigate to where you downloaded the WordPress XML export and run the following two lines, assuming the name of your file was export.xml:

[bash]
xsd export.xml /d
xsd export.xsd /d
[/bash]

Like so: [caption id="attachment_1977" align="aligncenter" width="300"]Step 3 - Generate Strongly Typed DataSet[/caption] After issuing those two commands you'll have a C# class among a few other files (as seen in the screenshot below) to include in your C# Importer Application - a much better situation to be in when dealing with XML files or data migration, I've found. [caption id="attachment_1978" align="aligncenter" width="300"]xsd tool generated Files[/caption] Step 3 - Creating your new database schema. Now that you've got a clean interface for your XML file, you need to create a new database schema for your .NET application. You could simply recreate it based on the structure of the XML file, but I took the more traditional approach of creating a normalized database schema in SQL Server: [caption id="attachment_1980" align="aligncenter" width="193"]Step 3 - Create your new database[/caption] I'm not going to go over Database Schema Design; I feel it is very subjective and can vary from project to project. For those curious, I do follow a pattern of keeping as little in each table as possible and instead creating tables to reuse among other main tables (normalization). For instance with Tags, rather than tying a Tag to a specific Post, I created a relational table so many Posts can reference the same Tag.
I did the same with Categories. Step 4 - Create your C# Importer App. Now that you've got your database schema it is time to create your Entity Model and write your C# code to actually import the XML file and populate your new SQL Tables. Pretty standard code for those that have used Typed DataSets and the EntityFramework - if you have questions please comment below and I'll be happy to help.
NewDataSet ds = new NewDataSet();
ds.ReadXml("export.xml");

using (var eFactory = new bbxp_jarredcapellmanEntities()) {
    foreach (NewDataSet.itemRow item in ds.item.Rows) {
        var creator = eFactory.Users.FirstOrDefault(a => a.Username == item.creator);

        if (creator == null) {
            creator = eFactory.Users.Create();
            creator.Active = true;
            creator.Created = DateTime.Now;
            creator.Modified = DateTime.Now;
            creator.Username = item.creator;

            eFactory.Users.Add(creator);
            eFactory.SaveChanges();
        }

        var post = eFactory.Posts.Create();
        post.Active = true;
        post.Created = DateTime.Parse(item.post_date);
        post.Modified = post.Created;
        post.Body = item.encoded;
        post.Title = item.title;
        post.PostedByUserID = creator.ID;

        eFactory.Posts.Add(post);
        eFactory.SaveChanges();
    }
}
After running the importer and some MVC work later: [caption id="attachment_1982" align="aligncenter" width="300"]End Result of an Initial Conversion[/caption] More to come in the coming days, but hopefully that can help get someone pointed in the right direction on moving to their new custom .NET solution. I should note that in this initial migration I am not importing the tags, categories or comments - that will come in the next post.
    In working on a new MVC4 app in my free time I had renamed the original assembly early on in development, did a clean solution, yet kept getting:
    Multiple types were found that match the controller named XYZ. This can happen if the route that services this request ('{controller}/{action}/{id}') does not specify namespaces to search for a controller that matches the request.
    Turns out, since the project name changed, doing a clean solution had zero effect. So the quick and easy solution: go to your bin folder and delete the original dll.
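If you need both assemblies to stay in place, a more durable fix is to tell MVC exactly which namespace owns your controllers when registering routes. A sketch, assuming the usual RouteConfig pattern and a hypothetical `bbxp.mvc.Controllers` namespace (adjust to your project):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig {
    public static void RegisterRoutes(RouteCollection routes) {
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional },
            // Only controllers in this namespace are considered, so a stale
            // copy of the renamed assembly can no longer cause ambiguity
            namespaces: new[] { "bbxp.mvc.Controllers" }
        );
    }
}
```

With the namespace constrained, the router ignores same-named controllers in any leftover DLLs, which is the scenario the exception message hints at.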
Something I am used to checking in my Windows Phone apps that utilize a WiFi or 3G/4G/LTE signal is an active connection. Nothing is worse as an end user than an app that immediately crashes because you don't have a connection. The sad part, at least for me: when I encounter a crash in an app that isn't my own, my first inclination is to check that I have a signal. Why have we gotten "programmed" to do this? I think the main reason is that when testing your own app, especially if you're a one person team, you never check the worst-case scenarios. Most likely not intentionally; you're just focused on delivering your app on time (and budget, if applicable). Luckily, in MonoDroid (and Windows Phone 8) it is very easy to check for an active internet connection. In MonoDroid:
public bool HasInternetConnection {
    get {
        var connectivityManager = (ConnectivityManager)GetSystemService(Context.ConnectivityService);

        return connectivityManager.ActiveNetworkInfo != null && connectivityManager.ActiveNetworkInfo.IsConnectedOrConnecting;
    }
}
    And in Windows Phone 8:
public bool HasInternetConnection {
    get { return NetworkInterface.GetIsNetworkAvailable(); }
}
One thing to note on MonoDroid: if you do not have the ACCESS_NETWORK_STATE permission checked under your Project's Properties -> Android Manifest -> Required Permissions: [caption id="attachment_1964" align="aligncenter" width="300"]Android Manifest - ACCESS_NETWORK_STATE required for checking Internet Connectivity[/caption] You will get the following exception: [caption id="attachment_1966" align="aligncenter" width="300"]Android Exception when there is no permission to check Network Connectivity[/caption] Simply check ACCESS_NETWORK_STATE, rebuild, deploy, and you will have detection working properly.
A couple weeks back I needed to integrate a WordPress site with a C# WCF Service. Having only interfaced with the older "classic" ASP.NET Web Services (aka asmx) nearly 6 years ago, I was curious if there had been improvements to the SoapClient interface inside of PHP.

Largely it was exactly as I remembered going back to 2007. The one thing I really wanted to do with this integration was have a serialized object passed up to the WCF Service from the PHP code - unfortunately, due to the time constraints, this was not achievable; if anyone knows how and can post a snippet, I'd be curious. That being said, the following sample code passes simple data types as parameters in a WCF Operation Contract from PHP.

    Given the following WCF Operation Contract definition:
[OperationContract]
string CreateNewUser(string firstName, string lastName, string emailAddress, string password);
Those who have done some PHP in the past should be able to understand the code below. I should note when doing your own WCF integration, the $params->variableName needs to match the casing of the actual WCF Service parameters. In addition, the returned object has the name of the Operation Contract plus a "Result" suffix.

[php]
<?php
class WCFIntegration {
    const WCFService_URL = "";

    public function addUser($emailAddress, $firstName, $lastName, $password) {
        try {
            // Initialize the "standard" SOAP Options
            $options = array('cache_wsdl' => WSDL_CACHE_NONE, 'encoding' => 'utf-8',
                'soap_version' => SOAP_1_1, 'exceptions' => true, 'trace' => true);

            // Create a connection to the WCF Service
            $client = new SoapClient(self::WCFService_URL, $options);

            if ($client == null) {
                throw new Exception('Could not connect to WCF Service');
            }

            // Set the WCF Service parameters based on the argument values
            $params = new stdClass();
            $params->emailAddress = $emailAddress;
            $params->firstName = $firstName;
            $params->lastName = $lastName;
            $params->password = $password;

            // Submit the $params object
            $result = $client->CreateNewUser($params);

            // Check the return value; in this case the WCF Operation Contract returns "Success" upon a successful insertion
            if ($result->CreateNewUserResult === "Success") {
                return true;
            }

            throw new Exception($result->CreateNewUserResult);
        } catch (Exception $ex) {
            echo 'Error in WCF Service: '.$ex->getMessage().'<br/>';
            return false;
        }
    }
}
?>
[/php]

Then to actually utilize this class in your existing code:

[php]
$wcfClient = new WCFIntegration();

$result = $wcfClient->addUser('', 'John', 'Doe', 'password');

if (!$result) {
    echo $result;
}
[/php]

Hopefully that helps someone out who might not be as proficient in PHP as they are in C#, or vice-versa.
Wrapping up the large MonoDroid application for work, one thing I had been putting off for one reason or another was handling non-image files. I was semi-fearful the experience was going to be like on MonoTouch, but surprisingly it was very similar to what you find on Windows Phone 8, where you simply tell the OS you need to open a specific file type and let the OS handle which program to open it in for you. There are drawbacks to this approach: you don't have an "in-your-app" experience, but it does alleviate the need to write a universal display control in your app - in my case this would have involved supporting iOS, Windows Phone 7.1/8 and Android, something I did not want to develop or maintain. So how do you accomplish this feat? Pretty simple, assuming you have the following namespaces in your class:
using System.IO;
using Android.App;
using Android.Content;
using Android.Graphics;
using Android.OS;
using Android.Webkit;
using Android.Widget;
using File = Java.IO.File;
using Path = System.IO.Path;
I wrote a simple function to get the Android MimeType for a file extension; if none exists, it returns a wildcard:
public string getMimeType(string extension) {
    if (extension.Length > 0) {
        if (MimeTypeMap.Singleton != null) {
            // Look up the MimeType registered for this extension
            var webkitMimeType = MimeTypeMap.Singleton.GetMimeTypeFromExtension(extension);

            if (webkitMimeType != null) {
                return webkitMimeType;
            }
        }
    }

    return "*/*";
}
    And then the actual function that accepts your filename, file extension and the byte array of the actual file:
private void loadFile(string filename, string extension, byte[] data) {
    var path = Path.Combine(Android.OS.Environment.ExternalStorageDirectory.Path, filename);

    using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.ReadWrite)) {
        fs.Write(data, 0, data.Length);
        fs.Close();
    }

    var targetUri = Android.Net.Uri.FromFile(new File(path));

    var intent = new Intent(Intent.ActionView);
    intent.SetDataAndType(targetUri, getMimeType(extension));

    StartActivity(intent);
}
    Granted if you've already got your file on your device you can modify the code appropriately, but in my case I store everything on a NAS and then pull the file into a WCF DataContract and then back down to the phone. This approach has numerous advantages with a little overhead in both complexity and time to deliver the content - however if you have another approach I'd love to hear it. I hope this helps someone out there, I know I couldn't find a simple answer to this problem when I was implementing this functionality today.
Back at MonoDroid development at work this week, I ran into a serious issue with a DataContract and an InvalidDataContractException. Upon logging into my app, instead of receiving the DataContract class object I received this: [caption id="attachment_1943" align="aligncenter" width="300"]MonoDroid InvalidDataContractException[/caption] The obvious suspect would be to verify the class had a getter and setter - sure enough, both were public. Digging a bit further, MonoTouch apparently had an issue at one point with not preserving all of the DataMembers in a DataContract, so I added the following attribute to my class object:
[DataContract, Android.Runtime.Preserve(AllMembers=true)]
Unfortunately it still threw the same exception. Diving into the forums on Xamarin's site and Bing, I could only find one other developer who had run into the same issue, but with 0 answers. Exhausting all avenues, I turned to checking different combinations of the Mono Android Options form in Visual Studio 2012, since I had a hunch it was related to the linker. After some more time I found the culprit - in my Debug configuration the Linking dropdown was set to Sdk and User Assemblies. [caption id="attachment_1944" align="aligncenter" width="300"]MonoDroid Linking Option in Visual Studio 2012[/caption] As soon as I switched it back over to Sdk Assemblies Only I was back to receiving my DataContract. Though I should also note - DataContract objects are not treated in MonoDroid like they are in Windows Phone, MVC or any other .NET platform I've found. Null really doesn't mean null, so what I ended up doing was changing my logic to instead return an empty object instead of a null object, and both the Windows Phone and MonoDroid apps work perfectly off the same WCF Proxy Class.
If you've been following my blog posts over the last couple of years, you'll know I have a profound love of using XML files for reading and writing for various purposes. The files are small, and because of things like Typed DataSets in C# you can have clean interfaces to read and write XML files. In Windows Phone, however, you do not have Typed DataSets, so you're stuck utilizing the XmlSerializer to read and write. To make it a little easier, going back to last Thanksgiving I wrote some helper classes in my NuGet library, jcWPLIBRARY. The end result: within a few lines you can read and write List collections of class objects of your choosing. So why continue down this path? Simple answer: I wanted it better. Tonight I embarked on a "Version 2" of this functionality that really makes it easy to keep with your existing Entity Framework knowledge, but provides the functionality of a database on a Windows Phone 8 device, which currently doesn't exist in the same vein it can in an MVC, WinForm, WebForm or Console app. To make this even more of a learning experience, I plan to blog the entire process. The first part of the project: reading all of the objects from an existing file. To begin, I am going to utilize the existing XmlHandler class in my existing library. This code has been battle tested and I feel no need to write something from scratch, especially since I am going to leave the existing classes in the library so as to not break anyone's apps or my own. First thoughts: what does an XmlSerializer file actually look like when written to? Let's assume you have the following class, a pretty basic class:
public class Test : jcDB.jObject {
    public int ID { get; set; }

    public bool Active { get; set; }

    public string Name { get; set; }

    public DateTime Created { get; set; }
}
The output of the file is like so:

[xml]
<?xml version="1.0" encoding="utf-8"?>
<ArrayOfTest xmlns:xsi="" xmlns:xsd="">
  <Test>
    <ID>1</ID>
    <Active>true</Active>
    <Name>Testing Name</Name>
    <Created>2013-04-03T20:47:09.8491958-04:00</Created>
  </Test>
</ArrayOfTest>
[/xml]

I often forget the XmlSerializer uses the "ArrayOf" prefix on the name of the root object, so when testing with sample data while writing a new Windows Phone 8 app I have to refer back - hopefully that helps someone out. Going back to the task at hand - reading data from an XML file and providing an "Entity Framework"-like experience - that requires a custom LINQ Provider and another day of programming. Stay tuned for Part 2, where I go over creating a custom LINQ Provider bound to an XML file.
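Before Part 2, a quick standalone sketch (my own test harness, not part of jcWPLIBRARY, with the jcDB.jObject base omitted so it compiles on its own) showing that XmlSerializer produces and reads back exactly that "ArrayOfTest" format:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Test {
    public int ID { get; set; }
    public bool Active { get; set; }
    public string Name { get; set; }
    public DateTime Created { get; set; }
}

public static class XmlRoundTrip {
    // Serialize a List<Test> - the root element comes out as <ArrayOfTest>
    public static string Save(List<Test> items) {
        var serializer = new XmlSerializer(typeof(List<Test>));

        using (var writer = new StringWriter()) {
            serializer.Serialize(writer, items);
            return writer.ToString();
        }
    }

    // Read the same document back into a List<Test>
    public static List<Test> Load(string xml) {
        var serializer = new XmlSerializer(typeof(List<Test>));

        using (var reader = new StringReader(xml)) {
            return (List<Test>)serializer.Deserialize(reader);
        }
    }
}
```

Serializing a single Test versus a List<Test> produces different root elements (Test versus ArrayOfTest), which is exactly why the prefix trips me up when hand-writing sample data.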
Had an unusual experience this morning when attempting to deploy an iOS MonoTouch app to a local iMac instead of the one I had previously used over the VPN from home to my office. The Build Server icons on the Visual Studio 2012 command bar were all greyed out, and for the life of me I could not find a context menu to have it refresh the available build hosts. After some digging into the documentation, the actual option is under TOOLS -> Options... -> Xamarin -> iOS Settings like so: [caption id="attachment_1930" align="aligncenter" width="300"]Xamarin iOS Settings in Visual Studio 2012[/caption] Once on that dialog, click "Configure..." and then you'll get a dialog like so; click on the Mac you wish to deploy to: [caption id="attachment_1931" align="aligncenter" width="300"]Xamarin iOS Build Host[/caption]
Working on a new project in my free time, I remembered using the System.ServiceModel.Syndication namespace a few years back to pull down an RSS feed at work. To my surprise, in a new MVC 4 application the assembly is no longer listed in the available assemblies in Visual Studio 2012: [caption id="attachment_1923" align="aligncenter" width="300"].NET 4.5 Framework Assembly List in Visual Studio 2012[/caption] Digging into the base System.ServiceModel assembly, I opened it in the Object Browser; sure enough, the System.ServiceModel.Syndication namespace exists there now: [caption id="attachment_1924" align="aligncenter" width="278"]System.ServiceModel opened in Visual Studio 2012's Object Browser[/caption] So for those coming to this post for a solution, simply reference System.ServiceModel like so: [caption id="attachment_1925" align="aligncenter" width="300"]System.ServiceModel checked off in the Reference Manager of Visual Studio 2012[/caption] Then in your code you could do something like this to pull down RSS items from my site and in turn display them in your MVC View, Windows Phone 8 page etc.
var model = new Models.FeedModel();

var sFeed = SyndicationFeed.Load(XmlReader.Create(""));

model.FeedItems = sFeed.Items.Select(a => new FeedItem {
    Content = a.Summary.Text,
    PublicationDate = a.PublishDate.DateTime,
    Title = a.Title.Text,
    URL = a.Links.FirstOrDefault().Uri.ToString()
}).ToList();
This morning I was adding a document handling page to an ASP.NET WebForms project that uses ASP.NET's Theming functionality. Part of the document handling functionality is to pull in the file and return the bytes to the end user, all without exposing the actual file path (a huge security hole if you do). Since you're writing via Response.Write in these cases, you'd want your ASPX markup to be empty, otherwise you'll end up with a "Server cannot set content type after HTTP headers have been sent" exception - if you've done WebForms development you know full well what the cause is. For those that don't, the important thing to remember is that the response needs to contain only the file you are returning. That means no HTML markup in your ASPX file. Upon deploying the code to my development server I received this exception: ASP.NET Theme Exception. Easy solution? Update your Page directive, setting EnableTheming="false", StylesheetTheme="" and Theme="" on the page you want to have an empty markup.
<%@ Page Language="C#" AutoEventWireup="true" EnableTheming="false" StylesheetTheme="" Theme="" CodeBehind="FileExport.aspx.cs" Inherits="SomeWebApp.Common.FileExport" %>
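For completeness, a hedged sketch of what the code-behind on such a page might look like - the names and the document lookup here are illustrative, not from the original project: clear the response, set the headers, write the bytes and end the response so no theme markup follows.

```csharp
using System;
using System.Web;
using System.Web.UI;

namespace SomeWebApp.Common {
    public partial class FileExport : Page {
        protected void Page_Load(object sender, EventArgs e) {
            // Hypothetical helper - the real page resolves the document server-side
            // so the physical path is never exposed to the client
            byte[] fileBytes = GetDocumentBytes();

            Response.Clear();
            Response.ContentType = "application/octet-stream";
            Response.AddHeader("Content-Disposition", "attachment; filename=export.pdf");
            Response.BinaryWrite(fileBytes);
            Response.End(); // nothing else (including theme markup) is sent after this
        }

        private byte[] GetDocumentBytes() {
            // placeholder for the actual lookup (database, NAS, etc.)
            return new byte[0];
        }
    }
}
```

With the Page directive above and a code-behind shaped like this, the response contains only the file's bytes, which is exactly what the exception was complaining about.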
    Not something I run into very often as I roll my own Theming Support, but for those in a legacy situation or inherited code as I did in this case, I hope this helps.
Last Fall I wrote my first Windows Service in C# to assist with a Queue Module addon for a time-intensive server-side task. 6-7 months have gone by and I had forgotten a few details involved, so I'm writing them up here for myself and others who might run into the same issue. First off, I'll assume you've created your Windows Service and are ready to deploy it to your Staging or Production environments. The first thing you'll need to do is place the contents of your release or debug folder on your server or workstation. Secondly, you'll need to open an elevated command prompt and go to the Framework folder for the version of .NET your Windows Service targets to run the installutil application. In my case I am running a .NET 4.5 Windows Service, so my path is:

[powershell]
C:\Windows\Microsoft.NET\Framework\v4.0.30319
[/powershell]

NOTE: If you do not elevate the command prompt you'll see this exception:

[powershell]
An exception occurred during the Install phase.
System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.
[/powershell]

Once in the framework folder, simply type the following, assuming your service is located in c:\windows_services\newservice\wservice.exe:

[powershell]
installutil "c:\windows_services\newservice\wservice.exe"
[/powershell]

After running the above command with the path of your service you should receive the following:

[powershell]
The Install phase completed successfully, and the Commit phase is beginning.
See the contents of the log file for the c:\windows_services\newservice\wservice.exe assembly's progress.
The file is located at c:\windows_services\newservice\wservice.InstallLog.
Committing assembly 'c:\windows_services\newservice\wservice.exe'.
Affected parameters are:
   logtoconsole =
   logfile = c:\windows_services\newservice\wservice.InstallLog
   assemblypath = c:\windows_services\newservice\wservice.exe
The Commit phase completed successfully.
The transacted install has completed.

C:\Windows\Microsoft.NET\Framework\v4.0.30319>
[/powershell]

At this point, going to your services.msc via Windows Key + R, you should now see your service listed with the option to start it.
    Digging through some posts that had been in draft for months, I forgot I wrote up some notes on the new Immutable Collections, still in Beta. Since diving into the Task Parallel Library (TPL) in September 2011, I've become very attached to writing all of my code, especially in my WCF Services, WPF and WinForms apps, to utilize parallel programming as much as possible. So when I found out there were brand new immutable collections in the System.Collections.Immutable namespace I was very excited. To get started, go to NuGet and download the package. From there using the collections is extremely easy:
    var testList = System.Collections.Immutable.ImmutableList<string>.Empty;
    testList = testList.Add("Cydonia");
    testList = testList.Add("Mercury");
    The key thing to remember, especially for those accustomed to using the System.Collections.Generic.List collection type, is to take the return value of the action (in this case Add) and reassign it to the collection. One of the first questions you might ask is: why should I use these collections instead of the existing mutable ones? The better question, I think, is to ask yourself: what problem are you trying to solve, and which collection makes sense for it? I'm definitely going to be keeping an eye on the project, especially when it comes out of Beta.
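    For bulk additions, reassigning after every Add gets tedious. A minimal sketch of the builder pattern, assuming the Beta exposes the same `CreateBuilder`/`ToImmutable` API as later releases of System.Collections.Immutable:

```csharp
using System.Collections.Immutable;

// Assumption: ImmutableList.CreateBuilder<T>() is available in this Beta.
var builder = ImmutableList.CreateBuilder<string>();
builder.Add("Cydonia");
builder.Add("Mercury");

// ToImmutable() produces the final immutable snapshot in one step,
// instead of reassigning the collection after every Add.
ImmutableList<string> albums = builder.ToImmutable();
```

The builder is mutable while you fill it, then frozen once, which is cheaper than creating a new immutable list per element.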
    After attending a Windows Phone 8 Jumpstart at Chevy Chase, MD earlier today I got asked about tips for developing cross-platform with as much code re-use as possible. Having been doing a Version 2 of a large cross-platform application since October, I've had some new thoughts since my August 2012 post, Cross-Platform Mobile Development and WCF Architecture Notes. Back then I was focused on using a TPL-enabled WCF Service to be hit by the various platforms (ASP.NET, Windows Phone, Android, iOS etc.). This approach had a couple of problems for a platform that needs to support an ever-growing concurrent client base. The main problem is that there is a single point of failure: if the WCF Service goes down, the entire platform goes down with it. In addition, it does not allow more than one WCF server to be involved for the application. The other problem is that while the business logic is hosted in the cloud/on a dedicated server with my August 2012 thought process, it doesn't share the actual WCF Service proxies or other common code.

    What is an easy solution for this problem of scalability?

    Take an existing WCF Service and implement a queuing system where possible. This way the client gets an instantaneous response, leaving the main WCF Service's resources free to process the non-queueable Operation Contracts.

    How would you go about doing this?

    You could start out by writing a Windows Service to constantly monitor a set of SQL tables, XML files etc., depending on your situation. To visualize this: [caption id="attachment_1895" align="aligncenter" width="300"]Queue Based Architecture (3/7/2013) Queue Based Architecture (3/7/2013)[/caption] In a recent project at work, in addition to a Windows Service, I added another database and another WCF Service to help distribute the work. The main idea is that each big, typically resource-intensive operation gets offloaded to another service, with the option to move it to an entirely different server. A good point to make here is that the connection between WCF Services is done via binary, not JSON or XML.
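    The monitoring loop described above can be sketched as follows. This is a minimal outline, not the actual service from the project; the timer interval, class name and task-processing method are all hypothetical placeholders:

```csharp
using System.Threading;

// A minimal sketch of a queue-monitoring Windows Service; the polling
// interval and ProcessPendingTasks contents are hypothetical.
public partial class QueueMonitorService : System.ServiceProcess.ServiceBase {
    private Timer _timer;

    protected override void OnStart(string[] args) {
        // Poll the queue (SQL tables, XML files, etc.) every 30 seconds.
        _timer = new Timer(_ => ProcessPendingTasks(), null, 0, 30000);
    }

    private void ProcessPendingTasks() {
        // Read pending entries from the queue, run each resource-intensive
        // operation, and mark the entry complete so it is not picked up twice.
    }

    protected override void OnStop() {
        _timer.Dispose();
    }
}
```

The client-facing WCF Service only inserts a row into the queue and returns immediately; the heavy lifting happens out-of-band in this service.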

    Increase your code sharing between platforms

    Something that has become more and more important for me as I add more platforms to my employer's main application is code reuse. This has several advantages:
    1. Updates to one platform affect all, less work and less problems by having to remember to update every platform when an addition, change or fix occurs
    2. For a single developer team like myself, it is a huge time saving principle especially from a maintenance perspective

    What can you do?

    In the last couple of months there have been great new approaches to code re-use. A great way to start is to create a Portable Class Library or PCL. PCLs can be used to create libraries to be compiled by Windows Phone 7/8, ASP.NET, MVC, WinForms, WPF, WCF, MonoDroid and many other platforms. All but MonoDroid are built in; however, I recently went through how to Create a Portable Class Library in MonoDroid. The best thing about PCLs is that your code is entirely reusable, so you can create your WCF Service proxy(ies), common code such as constants etc. The one thing to keep in mind is to follow the practice of not embedding your business, presentation and data layers in your applications.
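    To make the idea concrete, here is a minimal sketch of the kind of code that might live in such a shared PCL. The namespace, class and interface names are hypothetical, not from an actual project:

```csharp
// Hypothetical contents of a shared Portable Class Library referenced
// by every platform head (Windows Phone, ASP.NET, MonoDroid, ...).
namespace MyApp.Portable {
    // Constants defined once instead of per-platform.
    public static class AppConstants {
        public const string ServiceUrl = "http://example.com/Service.svc";
    }

    // A platform-agnostic abstraction over the WCF proxy, so the UI
    // projects depend on this library rather than on the raw proxy.
    public interface IUserService {
        void GetUserProfile(int userId, System.Action<string> onCompleted);
    }
}
```

Each platform project then references the PCL and supplies only its UI layer, so a fix to the shared code propagates everywhere at once.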
    Diving into MVC 4 this week and going through the default Kendo UI MVC 4 project type in Visual Studio 2012 I noticed quite a few assemblies I knew I wouldn't need for my current project, namely the DotNetOpenAuth assemblies. I removed the 6 DotNetOpenAuth.* assemblies: [caption id="attachment_1890" align="aligncenter" width="304"]MVC 4 Assemblies MVC 4 Assemblies[/caption] In addition you'll need to remove the Microsoft.Web.WebPages.OAuth reference as well. To my surprise, upon building and debugging the new project I received the following exception: [caption id="attachment_1891" align="aligncenter" width="550"]MVC 4 Exception - DotNetOpenAuth Not Found MVC 4 Exception - DotNetOpenAuth Not Found[/caption] I double checked my packages.config and web.config config files for any reference, to no avail. As a last resort I deleted my bin and obj folders, rebuilt the solution and sure enough it started without any issues. Hopefully that helps someone out.
    This morning, as I was continuing to port V2 of the product whose ASP.NET and Windows Phone versions I recently wrapped up, I got the following exception when attempting to populate a few EditText objects after a WCF request completed: [caption id="attachment_1884" align="aligncenter" width="559"]Lovely exception when trying to update EditText Fields on another thread Lovely exception when trying to update EditText Fields on another thread[/caption] The solution, assuming you're in another thread, is to use RunOnUiThread on the function you wish to update the UI with, like so:
    private void setUserProfileFields() {
        editTextEmailAddress = FindViewById<EditText>(Resource.Id.txtBxProfileEmailAddress);
        editTextEmailAddress.Text = App.ViewModel.CurrentUserProfile.EmailAddress;
    }

    void MainModel_PropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) {
        switch (e.PropertyName) {
            case "UP_LOADED":
                RunOnUiThread(setUserProfileFields);
                break;
        }
    }
    Ran into a fun MonoTouch error inside of Visual Studio 2012 today: [caption id="attachment_1877" align="aligncenter" width="550"]Could not load file or assembly 'moscorlib' Could not load file or assembly 'moscorlib'[/caption] Oddly enough, it only started doing that prior to a successful remote deployment. The solution is to go into your Visual Studio 2012 project properties, Build Tab and switch Generate serialization assembly to off. [caption id="attachment_1878" align="aligncenter" width="661"]Visual Studio 2012 Generate Serialization Assembly Dropdown Visual Studio 2012 Generate Serialization Assembly Dropdown[/caption]
    Jumping back into MonoDroid development the last couple days at work after having not touched it in almost a year, I knew I was going to be rusty. Interestingly enough I'm finding it much closer to Windows Phone development than I remembered. Having had no luck in finding MonoDroid for Windows Phone Developers I figured I'd start an ongoing post.

    Open a Browser

    In Windows Phone you would open a Web Browser with the following function:
    private void openBrowser(string url) {
        var task = new WebBrowserTask();
        task.Uri = new Uri(url);
        task.Show();
    }
    However in MonoDroid you have to do this:
    private void openBrowser(string url) {
        var browserIntent = new Intent(Intent.ActionView, Android.Net.Uri.Parse(url));
        StartActivity(browserIntent);
    }

    New Line in TextView

    In Windows Phone in your XAML you'd do something like this to insert a new line into your TextBlock:
    <TextBlock>This line is awesome<LineBreak/>but this one is better</TextBlock>
    However in MonoDroid you need to be sure to set the singleLine property to false like so:
    <TextView android:layout_width="fill_parent" android:singleLine="false" android:text="This line is awesome\r\nbut this one is better" />

    Login form (aka tapping enter goes to next field, with the last field hiding the keyboard)

    In your XAML on Windows Phone you might have something like the following:
    <StackPanel Orientation="Vertical">
        <TextBox x:Name="TextBoxUsername" />
        <TextBox x:Name="TextBoxPassword" />
    </StackPanel>
    And then in your code behind:
    public LoginPage() {
        InitializeComponent();
        TextBoxUsername.KeyDown += TextBoxUsername_KeyDown;
        TextBoxPassword.KeyDown += TextBoxPassword_KeyDown;
    }

    void TextBoxUsername_KeyDown(object sender, KeyEventArgs e) {
        if (e.Key == Key.Enter) {
            TextBoxPassword.Focus();
        }
    }

    void TextBoxPassword_KeyDown(object sender, KeyEventArgs e) {
        if (e.Key == Key.Enter) {
            Focus();
        }
    }
    Basically upon hitting the enter key while in the TextBoxUsername field it will set the focus to the TextBoxPassword field. Upon hitting enter in the TextBoxPassword field, it will set the focus to the main page and close the keyboard. For MonoDroid, it is a little different. In your axml:
    <EditText android:id="@+id/TextBoxUsername" android:imeOptions="actionNext" android:singleLine="true" android:layout_width="fill_parent" android:layout_height="wrap_content" />
    <EditText android:id="@+id/TextBoxPassword" android:imeOptions="actionDone" android:singleLine="true" android:layout_width="fill_parent" android:layout_height="wrap_content" />
    The key part is the android:imeOptions values, actionNext will move the focus to the next EditText field and the actionDone will send the done command back to your keylisteners etc. In your Activity code behind, you need to add this override. Update the Resource.Id.xxxxxxx with the name of the field you want the keyboard to hide upon hitting enter:
    public override bool DispatchKeyEvent(KeyEvent e) {
        if (CurrentFocus.Id == Resource.Id.TextBoxPasswordKey && (e.KeyCode == Keycode.NumpadEnter || e.KeyCode == Keycode.Enter)) {
            var imm = GetSystemService(Context.InputMethodService) as InputMethodManager;
            if (imm != null) {
                imm.HideSoftInputFromWindow(this.CurrentFocus.WindowToken, 0);
            }
            return true;
        }
        return base.DispatchKeyEvent(e);
    }
    I should also note you'll need the using Android.Views.InputMethods; line added to your code behind as well.

    Capturing Images via the Camera

    In Windows Phone capturing images from your Library or Taking a new picture is pretty trivial, assuming you have a Button to choose/take the picture:
    PhotoChooserTask _pcTask = null;
    private byte[] _pictureBytes;

    public PictureUpload() {
        InitializeComponent();
        _pcTask = new PhotoChooserTask();
        _pcTask.Completed += new EventHandler<PhotoResult>(_pcTask_Completed);
    }

    void _pcTask_Completed(object sender, PhotoResult e) {
        if (e.TaskResult == TaskResult.OK) {
            MemoryStream ms = new MemoryStream();
            e.ChosenPhoto.CopyTo(ms);
            _pictureBytes = ms.ToArray();
            ms.Dispose();
        }
    }

    private void btnChooseImage_Click(object sender, RoutedEventArgs e) {
        _pcTask.ShowCamera = true;
        _pcTask.Show();
    }
    From there just upload the _pictureBytes to your WCF Service or wherever. MonoDroid, as expected, is a little different; assuming you have a button click event to take the picture and an ImageView to display the image:
    private string _imageUri; // Global variable to access the image's Uri later

    void btnChooseImage_Click(object sender, EventArgs e) {
        // isMounted (defined elsewhere) checks whether external storage is available
        var uri = ContentResolver.Insert(isMounted ? Android.Provider.MediaStore.Images.Media.ExternalContentUri : Android.Provider.MediaStore.Images.Media.InternalContentUri, new ContentValues());
        _imageUri = uri.ToString();
        var i = new Intent(Android.Provider.MediaStore.ActionImageCapture);
        i.PutExtra(Android.Provider.MediaStore.ExtraOutput, uri);
        StartActivityForResult(i, 0);
    }

    protected override void OnActivityResult(int requestCode, Result resultCode, Intent data) {
        if (resultCode == Result.Ok && requestCode == 0) {
            imageView = FindViewById<ImageView>(Resource.Id.ivThumbnail);
            imageView.DrawingCacheEnabled = true;
            imageView.SetImageURI(Android.Net.Uri.Parse(_imageUri));
        }
    }
    At this point you have the picture taken and the Uri of the image. In your Layout:
    <Button android:text="Choose Image" android:id="@+id/btnChooseImage" android:layout_width="fill_parent" android:layout_height="wrap_content" />
    <ImageView android:id="@+id/ivThumbnail" android:layout_width="300dp" android:layout_height="150dp" />

    Loading a picture from local storage and avoiding the dreaded java.lang.outofmemory exception

    A fairly common scenario: maybe you're pulling an image from the code described above and now you want to upload it somewhere. In the Windows Phone code above, in addition to taking/capturing the picture, we ended up with a byte[] of the data; on MonoDroid it is a little different. A situation I ran into on my HTC Vivid was a java.lang.outofmemory exception. Upon further investigation, Android apparently has a 24MB VM heap limit per app (on some devices it is set to 16MB). Doing some research, I came across Twig's post. As expected it was in Java, so I converted it over to MonoDroid and added some additional features to fit my needs. This function will return a scaled Bitmap object for you to turn around and convert to a byte[]. The function:
    private Android.Graphics.Bitmap loadBitmapFromURI(Android.Net.Uri uri, int maxDimension) {
        var inputStream = ContentResolver.OpenInputStream(uri);
        var bfOptions = new Android.Graphics.BitmapFactory.Options();
        bfOptions.InJustDecodeBounds = true;
        var bitmap = Android.Graphics.BitmapFactory.DecodeStream(inputStream, null, bfOptions);
        inputStream.Close();

        var resizeScale = 1;
        if (bfOptions.OutHeight > maxDimension || bfOptions.OutWidth > maxDimension) {
            resizeScale = (int)Math.Pow(2, (int)Math.Round(Math.Log(maxDimension / (double)Math.Max(bfOptions.OutHeight, bfOptions.OutWidth)) / Math.Log(0.5)));
        }

        bfOptions = new Android.Graphics.BitmapFactory.Options();
        bfOptions.InSampleSize = resizeScale;
        inputStream = ContentResolver.OpenInputStream(uri);
        bitmap = Android.Graphics.BitmapFactory.DecodeStream(inputStream, null, bfOptions);
        inputStream.Close();

        return bitmap;
    }
    For a practical use, loading the image, scaling if necessary and then getting a Byte[]:
    var bitmap = loadBitmapFromURI(Android.Net.Uri.Parse(_imageUri), 800);
    var ms = new MemoryStream();
    bitmap.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms);
    At this point doing a ms.ToArray() will get you to the same point the Windows Phone code above did, so if you had a WCF Service, you could at this point upload the byte array just as you could with a Windows Phone above.

    Transparent Background on a ListView, LayoutView etc?

    In Windows Phone you can set the Background or Foreground properties to simply Transparent like so:
    <StackPanel Background="Transparent" Orientation="Vertical">
        <TextBlock>Transparency is awesome</TextBlock>
    </StackPanel>
    In MonoDroid it's simply "@null" like so for the ListView
    <ListView android:background="@null" android:layout_width="match_parent" android:layout_height="match_parent">
        <TextView android:text="Transparency is awesome" android:gravity="center" />
    </ListView>

    Locking Orientation

    In Windows Phone you can set your page's orientation to be forced into Landscape or Portrait in your xaml like so:
    <phone:PhoneApplicationPage SupportedOrientations="Portrait" Orientation="Portrait">
    In MonoDroid I couldn't figure out a better way to do it than the following line inside your Activity's OnCreate function like so:
    protected override void OnCreate(Bundle bundle) {
        base.OnCreate(bundle);
        RequestedOrientation = ScreenOrientation.Portrait;
    }
    On a side note, you'll need to add this line to the top of your Activity if it isn't already there: using Android.Content.PM;
    For the last 2.5 years I've been using the default MonoDevelop Web Reference in my MonoTouch applications for work, but it's come to the point where I really need and want to make use of the WCF features that I do in my Windows Phone applications. Even with the newly released iOS Integration in Visual Studio 2012, you still have to generate the proxy class with the slsvcutil included with the Silverlight 3.0 SDK. If you're like me, you probably don't have Version 3.0 of the Silverlight SDK; you can get it from Microsoft here. When running the tool you might get the following error: Error: An error occurred in the tool. Error: Could not load file or assembly 'C:\Program Files (x86)\Microsoft Silverlight\5.1.10411.0\System.Runtime.Serialization.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Basically the tool is incorrectly trying to pull the newer 4.0 or 5.0 Silverlight assemblies. To make it easy, I created a config file to simply drop into your c:\Program Files (x86)\Microsoft SDKs\Silverlight\v3.0\Tools folder; you can download it here. From a command line (remember the shortcut of holding Shift down and right-clicking in the folder to open a command prompt): [caption id="attachment_1853" align="aligncenter" width="593"]Silverlight 3 WCF Proxy Generation Silverlight 3 WCF Proxy Generation[/caption] Enter the following, assuming you want to create a proxy for a localhost WCF Service to your c:\tmp folder: SlSvcUtil.exe http://localhost/Service.svc?wsdl /noconfig /d:c:\tmp Though I should note, this will generate Array collections and not List or ObservableCollection collections. If you want to generate your Operation Contracts with return types of those collections, simply add for List collections: /collectionType:System.Collections.Generic.List`1 or for ObservableCollection: /collectionType:System.Collections.ObjectModel.ObservableCollection`1
    A few months ago I purchased a Quad R16k 700MHz Silicon Graphics Origin 350; while a nice boost up from my Silicon Graphics Origin 300s, especially for the integrated gigabit, I often found myself wanting a desktop of similar power. Unfortunately, the Silicon Graphics Fuel I received about a year ago only supports up to a 900MHz R16k MIPS CPU, which for single-threaded tasks would be faster than my Origin 300 or Origin 350, but most of my IRIX tasks these days are multi-threaded.

    So my quest for the ultimate MIPS-based Silicon Graphics desktop machine, the Silicon Graphics Tezro, began. The Tezro supports up to four 1GHz R16k CPUs and 8GB of RAM, and has 7 PCI-X slots. The side effect: these machines are still used today and are thereby very expensive even on the second-hand market. Fortunately for me, a Dual 700MHz R16k went for next to nothing a few weeks back that I guess no one else saw.

    One neat element of the last generation of Silicon Graphics MIPS-based servers/desktops is that they are all based on the Origin 3000 architecture, meaning that a lot of the components are shared between the Origin 300, Origin 350, Fuel, Tezro and Onyx 350. For instance, the same DDR memory is shared among all of them, handy for me since the Tezro I got only had 4GB of RAM. In addition, the Origin 350, Tezro and Onyx 350 share the same node board (IP53), making swaps between the different systems pretty much drag and drop.

    This led me to pursue swapping my infrequently used Quad 700MHz Origin 350 nodeboard for the one in my newly acquired Tezro. Thankfully the steps were pretty easy and intuitive, as noted by the pictures below. [caption id="attachment_1839" align="aligncenter" width="550"]Silicon Graphics Origin 350 (Inside) Silicon Graphics Origin 350 (Inside)[/caption] Thankfully, the node board is only held in by 2 different-sized Torx screws; make sure to have drill bits or a Torx screwdriver set handy before you do this swap. [caption id="attachment_1840" align="aligncenter" width="550"]Silicon Graphics Origin 350 Chassis without Nodeboard (IP53) Silicon Graphics Origin 350 Chassis without Nodeboard (IP53)[/caption] Luckily, the Tezro nodeboard, as you might expect, has the same Torx screws, so it's just a matter of removing the same set of screws, although the Tezro's screws are shorter. [caption id="attachment_1841" align="aligncenter" width="550"]Silicon Graphics Tezro Nodeboard (IP53) Removed Silicon Graphics Tezro Nodeboard (IP53) Removed[/caption] After removing the Tezro nodeboard, I found the connectors to be interesting. For as much data as goes between the nodeboard, the V12 and the PCI-X slots, I expected there to be more pins. [caption id="attachment_1842" align="aligncenter" width="169"]Silicon Graphics Tezro Nodeboard Connector Silicon Graphics Tezro Nodeboard Connector[/caption] [caption id="attachment_1844" align="aligncenter" width="550"]Silicon Graphics Tezro Nodeboard in place Silicon Graphics Tezro Nodeboard in place[/caption] After putting the Origin 350 nodeboard into the Tezro, all that was left was to see if IRIX would see the new nodeboard with zero issues; sure enough: [caption id="attachment_1845" align="aligncenter" width="550"]IRIX System Manager showing the Quad 700mhz IP53 Nodeboard properly IRIX System Manager showing the Quad 700mhz IP53 Nodeboard properly[/caption]
    Came into a frustrating issue this morning with an app using MVVM in Windows Phone 7. I had a couple of textboxes, 2 with multi-line support, and an Application Bar with a Save icon. The idea being, if you wanted to save your changes in the textboxes, tap the Save icon and everything would save. Little did I know, the textboxes only trigger an update to your binding when losing focus. So if the end user left the textbox focused, the trigger wouldn't occur and the save functionality in your View Model would not have the newly entered text. A clean workaround for this problem is to create a generic OnTextChanged event handler function to trigger the update and then map that function to each of your textboxes. Here's some sample code in my XAML file:
    <telerikPrimitives:RadTextBox TextChanged="Rtb_OnTextChanged" Text="{Binding Option.Title, Mode=TwoWay}" x:Name="rtbTitle" Watermark="enter your title here" Header="cover page title" HideWatermarkOnFocus="True" />
    <telerikPrimitives:RadTextBox TextChanged="Rtb_OnTextChanged" TextWrapping="Wrap" Text="{Binding Option.CoverPageText, Mode=TwoWay}" x:Name="rtbCoverPage" Height="200" Watermark="enter your cover page text here" Header="cover page text" HideWatermarkOnFocus="True" />
    <telerikPrimitives:RadTextBox TextChanged="Rtb_OnTextChanged" TextWrapping="Wrap" Text="{Binding Option.SummaryPageText, Mode=TwoWay}" x:Name="rtbSummaryPage" Height="200" Watermark="enter your summary page text here" Header="summary page text" HideWatermarkOnFocus="True" />
    And then in your code behind:
    private void Rtb_OnTextChanged(object sender, TextChangedEventArgs e) {
        var bindingExpression = ((TextBox) sender).GetBindingExpression(TextBox.TextProperty);
        if (bindingExpression != null) {
            bindingExpression.UpdateSource();
        }
    }
    Had an interesting situation with a pretty extensive Ajax/Telerik ASP.NET 4.5 page where one of the RadPageViews could grow to a pretty sizeable height and the requirement was to have the Submit Button at the very bottom (not sure why in retrospect). Part of the problem was the RadAjaxLoadingPanel wasn't expanding to the infinite height of the RadPageView. So what you got was only the original height of the RadPageView, and since you were at the bottom of the page, you had no indication that the post-back was actually occurring. I am sure you could probably update the height of the RadAjaxLoadingPanel through some JavaScript, but I chose the easier approach: simply scroll to the top and then do the post back in my Submit Button. If you had a ton of buttons on a page, you would probably want to rework the JavaScript function to accept the Button's ID and simply call the JavaScript in your Button's OnClientClick event. Somewhere in your ASPX file: [javascript] <telerik:RadCodeBlock runat="server"> <script type="text/javascript"> function focusTopAndSubmit() {
         window.scrollTo(0, 0); __doPostBack("<%=btnSubmit.UniqueID %>", ""); }
    </script> </telerik:RadCodeBlock> [/javascript] And then in your actual ASP:Button or Telerik:RadButton definition:
    <asp:Button ID="btnSubmit" OnClientClick="focusTopAndSubmit(); return false;" OnClick="btnSubmit_Click" runat="server" Text="Submit" />
    Update the JavaScript postback line with the name of your Submit Button, but other than that, it's drop-in ready.
    Working on a XNA/XAML game tonight I wanted to have a consistent UI experience between the XAML and XNA views. A key part of that was matching the Photoshop graphics and the in-game font. Having never used a TrueType Font (TTF) in Windows Phone 7.x I was curious how hard it would be to do in Windows Phone 8 where I have found many tasks to be streamlined. Unsurprisingly the process is pretty straight forward:
    1. Copy your TrueType Font to your solution folder
    2. Make sure the Build Action is set to Content and Copy to Output Directory to Copy if newer
    Then, in your XAML, reference the TrueType Font with the file path, a #, and the font name like so:
    <TextBlock HorizontalAlignment="Center" FontFamily=".\Data\OCRAExt.ttf#OCR A Extended" FontSize="80" Text="TTFS ROCK" />
    Note you need to keep any spaces found in the font name; if you aren't sure, double click on the font and make sure your XAML matches what I've circled in red: [caption id="attachment_1810" align="aligncenter" width="572"]TrueType Font Name in Windows 8[/caption] Something weird I did find with a couple of TrueType Fonts I tried was that some font creators put a space at the end. If you're like me that'll drive you nuts, especially when referencing the custom font multiple times in your XAML. If you find a case like that, download TTFEdit from SourceForge, trim the space off the end and save it. If you follow those steps properly, you'll have your TTF in your Windows Phone 8 app: [caption id="attachment_1805" align="aligncenter" width="480"]TTF in Windows Phone 8[/caption]
    I had an unusual issue come to my attention this week in regards to using Telerik's RadGrid for displaying a long list of items, in this case 83. When the pagination size was set to 100, as expected the page height grew well beyond the initial 425px height of this particular RadPageView. On this RadGrid my far right-hand column had an edit column in which a RadWindow would open a one-field, two-button page. Redirecting to an entirely new page made no sense in this situation, thus why I went with the RadWindow control in the first place. Before I dive into the issue and the solution I came up with, you can download the full source code for this post here. For this example I am using a pretty common situation, listing all of the users with their email address and "IDs": [caption id="attachment_1791" align="aligncenter" width="300"]Telerik RadGrid with a PageSize of 10 Telerik RadGrid with a PageSize of 10[/caption] With the Edit User LinkButton opening a RadWindow indicating you can edit the user's ID: [caption id="attachment_1794" align="aligncenter" width="300"]Telerik RadGrid with RadWindow Telerik RadGrid with RadWindow[/caption] So where does the problem lie? When you have a high PageSize or any content that expands far beyond what is visible initially and you open your RadWindow: [caption id="attachment_1795" align="aligncenter" width="300"]Telerik RadGrid with PageSize set to 50[/caption] The RadWindow appears where you would expect it to if you were still at the top of the page. So can you fix this so the RadWindow appears in the center of the visible area of your browser no matter how far down you are? In your OnItemDataBound code behind:
    protected void rgMain_OnItemDataBound(object sender, GridItemEventArgs e) {
        if (!(e.Item is GridDataItem)) {
            return;
        }

        var linkButton = (LinkButton) e.Item.FindControl("lbEdit");
        var user = ((e.Item as GridDataItem).DataItem is USERS ? (USERS) (e.Item as GridDataItem).DataItem : new USERS());
        linkButton.Attributes["href"] = "javascript:void(0);";
        linkButton.Attributes["onclick"] = String.Format("return openEditUserWindow('{0}');", user.ID);
    }
    The important line here is the linkButton.Attributes["href"] = "javascript:void(0);";. Something else I choose to do in these scenarios where I have a popup is to offer the user a cancel button and a save button, but only refreshing the main window object that needs to be updated. In this case a RadGrid. To achieve this, you need to pass an argument back to the RadWindow from your Popup ASPX Page to indicate when a refresh is necessary. The ASPX for your popup:
    <div style="width: 300px;">
        <telerik:RadAjaxPanel runat="server">
            <telerik:RadTextBox runat="server" Width="250px" Label="UserID" ID="rTxtBxUserID" />
            <div style="padding-top: 10px;">
                <div style="float:left;">
                    <asp:Button ID="btnCancel" runat="server" Text="Cancel" OnClientClick="Close();return false;" />
                </div>
                <div style="float:right">
                    <asp:Button ID="btnSave" runat="server" OnClientClick="CloseAndSave(); return false;" OnClick="btnSave_Click" Font-Size="14px" Text="Save User" />
                </div>
            </div>
        </telerik:RadAjaxPanel>
    </div>
    The JavaScript in your ASPX Popup Page: [jscript] <telerik:RadScriptBlock ID="RadScriptBlock1" runat="server"> <script type="text/javascript">
    function GetRadWindow() {
        var oWindow = null;
        if (window.radWindow) {
            oWindow = window.radWindow;
        } else if (window.frameElement.radWindow) {
            oWindow = window.frameElement.radWindow;
        }
        return oWindow;
    }

    function Close() {
        GetRadWindow().close();
    }

    function CloseAndSave() {
        __doPostBack("<%=btnSave.UniqueID %>", "");
        GetRadWindow().close(1);
    }
    </script> </telerik:RadScriptBlock> [/jscript] Then in your main page's ASPX: [jscript] <telerik:RadScriptBlock ID="RadScriptBlock1" runat="server"> <script type="text/javascript">
    function openEditUserWindow(UserID) {
        var oWnd = radopen('/edituser_popup.aspx?UserID=' + UserID, "rwEditUser");
    }

    function OnClientClose(oWnd, args) {
        var arg = args.get_argument();
        if (arg) {
            var masterTable = $find('<%= rgMain.ClientID %>').get_masterTableView();
            masterTable.rebind();
        }
    }
    </script> </telerik:RadScriptBlock> [/jscript] With this your RadGrid will only refresh if the user hits Save on your popup, as opposed to doing a costly full post back even if the user didn't make any changes. I hope that helps someone out there who struggled to achieve everything I mentioned in one fell swoop. Telerik has some great examples on their site, but occasionally it can take some time getting them all working properly. As mentioned above, you can download the full source code for this solution here.
Finally had some time to revisit jcBENCH tonight and found a few issues in the new WPF and Windows Phone 8 releases that I didn't find over the weekend. Unfortunately for the Windows Phone 8 platform, there is a delay between publishing and actually hitting the store. So in the next 24-48 hours, please check out the Windows Phone 8 App Store for the release. Alternatively, you can just download the current release and wait for the Store to indicate there is an update. But for those on Windows, please download the updated release here.
Recently I upgraded a fairly large Windows Forms .NET 4 app to the latest version of the Windows Forms Control Suite (2012.3.1211.40) and got a few bug reports from end users saying that when they performed an action that updated the Tree View control, it threw an exception. At first I thought maybe the Clear() function no longer worked as intended, so I tried removing the nodes manually (iterating in reverse so removals don't shift the indices still to be visited):
if (treeViewQuestions != null && treeViewQuestions.Nodes != null && treeViewQuestions.Nodes.Count > 0) {
    for (int x = treeViewQuestions.Nodes.Count - 1; x >= 0; x--) {
        treeViewQuestions.Nodes[x].Remove();
    }
}
No dice. Digging into the error a bit further, I noticed the "UpdateLine" function was the root cause of the issue:
Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLine(TreeNodeLineElement lineElement, RadTreeNode node, RadTreeNode nextNode, TreeNodeElement lastNode)
at Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLines()
at Telerik.WinControls.UI.TreeNodeLinesContainer.Synchronize()
at Telerik.WinControls.UI.TreeNodeElement.Synchronize()
at Telerik.WinControls.UI.RadTreeViewElement.SynchronizeNodeElements()
at Telerik.WinControls.UI.RadTreeViewElement.Update(UpdateActions updateAction)
at Telerik.WinControls.UI.RadTreeViewElement.ProcessCurrentNode(RadTreeNode node, Boolean clearSelection)
at Telerik.WinControls.UI.RadTreeNode.OnNotifyPropertyChanged(PropertyChangedEventArgs args)
at Telerik.WinControls.UI.RadTreeNode.SetBooleanProperty(String propertyName, Int32 propertyKey, Boolean value)
at Telerik.WinControls.UI.RadTreeNode.set_Current(Boolean value)
at Telerik.WinControls.UI.RadTreeNode.ClearChildrenState()
at Telerik.WinControls.UI.RadTreeNode.set_Parent(RadTreeNode value)
at Telerik.WinControls.UI.RadTreeNodeCollection.RemoveItem(Int32 index)
at System.Collections.ObjectModel.Collection`1.Remove(T item)
at Telerik.WinControls.UI.RadTreeNode.Remove()
Remembering I had turned on the ShowLines property, I humored the idea of turning the lines off for the clearing/removing of the nodes and then turning them back on, like so:
treeViewQuestions.ShowLines = false;
treeViewQuestions.Nodes.Clear();
treeViewQuestions.ShowLines = true;
Sure enough, that cured the problem. The last word I got back from Telerik was that this is the approved workaround, but there is no ETA on a true fix. Hopefully that helps someone else out there.
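If you clear tree views in more than one place, the workaround can be wrapped in a small extension method. This is just a sketch under a few assumptions: RadTreeView is the Telerik WinForms control discussed above, and ClearNodesSafely is a name I made up. The try/finally ensures ShowLines is restored even if clearing throws:

```csharp
using Telerik.WinControls.UI;

public static class RadTreeViewExtensions
{
    // Clears all nodes with ShowLines temporarily disabled, working
    // around the UpdateLine exception described above.
    public static void ClearNodesSafely(this RadTreeView treeView)
    {
        bool originalShowLines = treeView.ShowLines;
        treeView.ShowLines = false;
        try
        {
            treeView.Nodes.Clear();
        }
        finally
        {
            // Restore whatever the original setting was.
            treeView.ShowLines = originalShowLines;
        }
    }
}
```

With that in place, the workaround collapses to `treeViewQuestions.ClearNodesSafely();` at each call site.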
For quite some time now I've been trying to hunt down a newer release of either the full set of installation CDs or the 6.5.30 Overlay CD set (the last version of IRIX). Hunting on eBay as usual, I came across an IRIX 6.5 full set, but there was no information as to what revision beyond "SC4-AWE-6.5 REV ZC". Last year I bought a full set that was labeled SC4-AWE-6.5 REV S. This turned out to be 6.5.14 and, as it so happened, looks to have been sold with an SGI O2 machine, as there were O2 demo CDs inside it. So which version is "SC4-AWE-6.5 REV ZC"? Turns out it's 6.5.21, which for many will be very important as it added support for the Origin 350 and Onyx 4 systems, the last ones in Silicon Graphics' lineup based on MIPS CPUs. [caption id="attachment_1771" align="aligncenter" width="550"]Silicon Graphics SC4-AWE-6.5 REV ZC Full CD Set[/caption] [caption id="attachment_1772" align="aligncenter" width="550"]Silicon Graphics SC4-AWE-6.5 REV ZC Full CD Set - Back[/caption]
I know I've been more quiet on here than in 2012, but I've been working away on several new Windows Phone 8 apps. One of them was the initial port of jcBENCH to Windows Phone 8. Keeping with the original idea of having the same benchmark on every platform, this continues with the Version 0.7 release. I can now get some sense of the approximate performance difference between my Nokia Lumia 920 Windows Phone and my AMD FX-8350 CPU; suffice it to say I am surprised at the speed of the dual-core 1.5GHz CPU in my phone. You can get the latest release of jcBENCH for Windows Phone 8 in the app store. If you like or hate it, please let me know. I tried to keep a consistent experience with the Win32 GUI version as far as the color scheme goes. [caption id="attachment_1760" align="aligncenter" width="300"]jcBENCH Version 0.7 Release[/caption] As for the Win32 GUI edition, it has been updated to match the new results capturing; you can download it here.
This question was posed by a person I follow on Twitter. No one seemed to have an answer, so I started thinking: when does the phone expose its name?
1. When you connect it to your computer, Windows seems to know
2. Bluetooth
In checking the Bluetooth documentation, sure enough you can retrieve the current device name with this line of code:
Windows.Networking.Proximity.PeerFinder.DisplayName;
Hopefully that helps someone out there.
Diving into Windows Phone 8 development for the last two months now, I found something in my development process I never really realized was holding me back: utilizing a true MVVM architecture rather than what I can best describe as MVM (Model View Model). For instance, you might have a static ViewModel in your App class and then in your page or pages:
public MainPage() {
    InitializeComponent();
    textBoxUsername.Text = App.ViewModel.Username;
    textBoxPassword.Text = App.ViewModel.Password;
}

public void btnSubmit_Click(object sender, EventArgs e) {
    App.ViewModel.SubmitLogin(textBoxUsername.Text, textBoxPassword.Text);
}
For a small number of fields or a small app this might work out, but as I have found in the last 2 months, it can be severely limiting in medium to larger apps. Your code gets bloated, more complex than it needs to be and overall less clean. Applying MVVM to the mix, in your page.cs:
public MainPage() {
    InitializeComponent();
    DataContext = App.ViewModel;
}
    In your page.xaml:
    <TextBox Text="{
        Binding Username, Mode=TwoWay}
    " /> <TextBox Text="{
        Binding Password, Mode=TwoWay}
    " /> ]]>
    In your viewmodel:
public string Username { get; set; }
public string Password { get; set; }
The idea behind MVVM, at least in my mind, is the ability to clearly separate your UI, business and data layers. When you execute your "submit button" click event, you no longer have to manually pass the TextBox values to the function; it simply works. While I don't advocate that one pattern fits all, I do think MVVM should be utilized in nearly every application, especially when starting out. I think it is better to learn early the way your later apps will be designed than to have to break bad habits down the road. If there is demand, I can throw together an example, just let me know.
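One caveat worth adding: with plain auto-properties like the snippet above, a TwoWay binding will push edits from the UI into the viewmodel, but the UI won't refresh if you change a property in code. For that, the viewmodel needs to implement INotifyPropertyChanged. A minimal sketch, assuming a LoginViewModel class name matching the example (the name and structure are mine, not from a shipped app):

```csharp
using System.ComponentModel;

public class LoginViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _username;
    public string Username
    {
        get { return _username; }
        set
        {
            if (_username != value)
            {
                _username = value;
                // Notifies any TwoWay bindings that the value changed
                OnPropertyChanged("Username");
            }
        }
    }

    private string _password;
    public string Password
    {
        get { return _password; }
        set
        {
            if (_password != value)
            {
                _password = value;
                OnPropertyChanged("Password");
            }
        }
    }

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```

Setting `App.ViewModel.Username` from anywhere in code will then update the bound TextBox automatically.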
Curious if an external graphics card would work in the PCIe x16 slot (x4 mechanical) on the ASUS C60M1-I I picked up a few weeks ago, I tried out a 1GB Gigabyte Radeon 6450. Gigabyte Radeon 6450 Card Sadly, the card would not POST with the ASUS C60M1-I motherboard. I tried reseating it, but to no avail. The card does in fact work, as I put it back in an ASUS F1MA55 system and it worked fine again. Not to be discouraged, I picked up an XFX Radeon 6670 HD to see if maybe there was some incompatibility between the 6450 and the motherboard. XFX Radeon 6670 Card XFX Radeon 6670 installed into a Lian Li PC-Q06 Case Sure enough the card booted up just fine. Note that the Radeon 6290 embedded in the C-60 is disabled when an external graphics card is used, in case you were considering CrossFire or EyeFinity usage. With the several projects I am working on, I've only had time to do one benchmark, using 3D Mark 2011; here are the applicable scores for both the XFX 6670 and the 6290 in the C-60 itself:
3D Mark 2011 scores for the AMD C-60's embedded Radeon 6290
3D Mark 2011 scores for the XFX Radeon 6670 with the AMD C-60
As you would expect, the Radeon gave a pretty sizable boost in performance. In real world testing, I tried out StarCraft II and was able to play at 720p with little lag unless there was a considerable amount of units on the screen at once (more than likely CPU-bound at that point). Going forward I will be using this box for OpenCL performance testing, so more to come for sure, but it's safe to say that if you don't have another need for the PCIe slot on the ASUS C60M1-I, your best bet, if your intention is gaming, is to get a decent Radeon 66XX and enjoy the benefits of offloading as much as you can to the GPU. As more and more applications rely on the GPU, I imagine a low-watt CPU paired with a higher-powered GPU will become much more valued, but I could be way off in that prediction.
The big update to jcWPLIBRARY is available on NuGet. Included in this update:
    1. Fully commented library for all XML functions
    2. Added support for single results for both writing and reading, so you don't need to create a single item List collection anymore
3. Brand new IO.* namespace to support files being accessed in IsolatedStorage, on web servers, via web services, etc.
    The brand new classes in the IO.* namespace are considered Beta at best. I have to do some additional testing and add more features before I will consider it Production ready. When completed I will post an update sample showing off all of the features. However, the new features in the XML.* namespace I have been actively using in my application the last 2 weeks. As always, comments or suggestions send me a message on twitter or leave a comment here.
    Last week I implemented the storing of images to a SQL Server database for the purpose of having the images readily available to my WCF Service to then deploy to all of the clients (Android, iPhone, Windows Phone and Windows Forms and ASP.NET). The idea being, a user could upload an image from one location and sync it to all of the other devices. You could use a more traditional file based system, storing the images on a NAS or SAN, have your ASP.NET server and or WCF Services read from them and hand them off as needed, but I felt like that wasn't as clean as simply doing a SQL Query to pull down the byte[]. Doing some searches I came across a lot of people saying it wasn't a good idea to store your images in a SQL Server database for various reasons:
    1. It makes your SQL Server database larger
    2. It makes queries on the Table with the image data slower to read from
    3. It was slower overall
These points, made without any sample code or test results, really made me wonder if someone at one point said it was a bad idea and it just trickled down over time. So in typical fashion, I wrote a sample application and did my own testing. To be as comprehensive as I could, I used a base of 10 images of various sizes ranging from 6KB to 1065KB to give a more real world scenario. From there I tracked how long it took to clear the SQL tables, populate each scenario and then retrieve the images to a different location to simulate the server side handling of the files. I did this from 10 images up to 1000 images. The SQL Server table schema: [caption id="attachment_1696" align="aligncenter" width="265"] SQL Server Tables used in testing[/caption] I kept the usual columns I would expect to find for tables in a real world example. The more traditional approach simply stores the file location, while the other approach stores the byte[] along with the filename. In a real world scenario I'd probably have a foreign key relationship to each of those tables depending on the project. For instance, if I had users uploading these images, I'd have a Users2Images relational table, but you may have your own process/design when doing SQL, so I won't get into my thoughts on SQL database design. So what were the results? [caption id="attachment_1695" align="aligncenter" width="300"] SQL Server Image Storing vs NTFS Image Storing[/caption] The biggest thing that struck me was the linearity of retrieval, in addition to the virtually identical results between the 2 approaches. But as is always the case, that's only part of a transaction in today's applications. You also have to consider the initial storing of the data. This is where simply writing the bytes to the hard drive versus writing the image's bytes to SQL Server differs quite a bit in performance.
If you have a UI thread blocking action on your Windows Phone app, ASP.NET app or even WCF call when performing the upload this could be a big deal. I hope everyone is using Async at this point or at least not writing UI Thread Blocking code, but there could be some valid reasons for not using Async, for instance being stuck in .NET 1.1 where there isn't an easy upgrade path to 4.5. Even at 10 images (1.3mb), SQL Server took over 5X longer than the traditional approach for storing the images. Granted this was fractions of a second, but when thinking about scalability you can't disregard this unless you can guarantee a specific # of concurrent users at a time. Other factors to consider that you may have not thought about:
    1. Is your SQL Server much more powerful than your NAS, SAN or Web Server in regards to File I/O? More specifically, do you use SSDs in your SQL Server and Mechanical for your Web Server or NAS?
    2. How often are images being uploaded to your sites?
    3. Are you considering going to a queue implementation where the user gets an instant kick back or doing the processing Asynchronously?
    4. How big is your SQL Server database now? Do you pay extra in your cloud environment for a SQL Server versus a simple storage or webserver?
5. Does the added security offered by storing the image in a database outweigh the costs, both financial and performance-wise, of storing it initially?
    6. Are you going to be overwriting the images a lot?
Before I close, I should note I performed these tests on a RAID 0 stripe of 2 Corsair Force 3 GT 90GB SSDs. I used SQL Server 2012 without SP1 (updating to that tomorrow), and the SQL Server database was also on this stripe, for those curious. I purposely did not use TPL (i.e. Parallel.ForEach) for these tests because I wanted to simulate a worst-case scenario server side. I imagine though that you will be severely I/O limited and not CPU limited, especially on a mechanical drive. At a later date I may do these tests again in a TPL environment and on a mechanical drive to see how those factors change things. As LeVar Burton said when I was growing up watching Reading Rainbow, don't take my word for it: download the code with the SQL schema here. I'd love to hear some other points on this subject (both for and against) as I think there are some very good real world scenarios that could benefit from a SQL Server storage approach versus a traditional approach. But like everything, there is but one answer to every problem in every scenario: do your homework and find the perfect one for that situation. So please write a comment below or tweet to me on Twitter!
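To make the two storage paths concrete, here is a minimal hedged sketch of each. The table and column names (dbo.Images with Filename and Data columns) are assumptions for illustration only; the actual schema ships with the downloadable sample.

```csharp
using System.Data.SqlClient;
using System.IO;

class ImageStore
{
    // SQL Server approach: store the raw bytes in a varbinary(max) column.
    public static void SaveToSql(string connectionString, string path)
    {
        byte[] bytes = File.ReadAllBytes(path);
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO dbo.Images (Filename, Data) VALUES (@Filename, @Data)", connection))
        {
            command.Parameters.AddWithValue("@Filename", Path.GetFileName(path));
            command.Parameters.AddWithValue("@Data", bytes);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }

    // Traditional approach: copy the file to storage and record only its location.
    public static void SaveToDisk(string connectionString, string path, string storageRoot)
    {
        string destination = Path.Combine(storageRoot, Path.GetFileName(path));
        File.Copy(path, destination, true);
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO dbo.Images (Filename) VALUES (@Filename)", connection))
        {
            command.Parameters.AddWithValue("@Filename", destination);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```

Retrieval in the SQL case is just the reverse: SELECT the Data column and hand the byte[] straight to the WCF client, no file share required.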
If you simply want to download the library, click here, or to download the sample + library, click here. Like many folks who have been doing Windows Phone development for a while now, you might have been as upset as I was to find DataSets were not part of the supported runtime. For those that don't know, there is a powerful command line tool that has come with Visual Studio for quite some time now called xsd, which can take an XML file and create a typed dataset class for you to include in your C# applications. I have used this feature extensively for a large offline Windows Forms application I originally wrote in the Spring of 2009 and am still using it today to read and write XML files. To use the xsd tool I typically launch the Visual Studio Developer Console as shown below, or you can execute it directly from C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools (if I remember correctly it was under Visual Studio\SDK\bin prior to VS2012). [caption id="attachment_1658" align="aligncenter" width="264"] Visual Studio 2012 Developer Command Prompt[/caption] The premise of using the xsd tool is to take an XML file like so:
[xml]
<?xml version="1.0" standalone="yes"?>
<CPUSockets>
  <CPUSocket>
    <Name>Socket F</Name>
    <NumberOfPins>1307</NumberOfPins>
    <IsLGA>True</IsLGA>
  </CPUSocket>
  <CPUSocket>
    <Name>Socket A</Name>
    <NumberOfPins>462</NumberOfPins>
    <IsLGA>False</IsLGA>
  </CPUSocket>
</CPUSockets>
[/xml]
And run the following command line:
[powershell]
xsd cpusockets.xml /d
[/powershell]
This will generate an xsd file. From there you can open the xsd file in Visual Studio to set the column variable types, nullable values etc. and then run the command below (or simply run it right away, in which case every column in your typed dataset class will be generated as a string):
[powershell]
xsd cpusockets.xsd /d
[/powershell]
The benefit of using typed datasets when working with XML files is that you don't have any hard coded values; everything is extremely clean. For instance, to read in the XML shown above, all it requires is:
CPUSockets cpuSockets = new CPUSockets();
cpuSockets.ReadXml("cpusockets.xml");
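As a quick illustration of the generated class in use, iteration is strongly typed. This sketch assumes the xsd-generated CPUSockets dataset exposes a typed CPUSocket table and row class, which is the naming xsd infers from the sample XML above:

```csharp
using System;

// Read the XML and walk the strongly typed rows -- no column-name strings.
CPUSockets cpuSockets = new CPUSockets();
cpuSockets.ReadXml("cpusockets.xml");

foreach (CPUSockets.CPUSocketRow row in cpuSockets.CPUSocket)
{
    // Columns default to string unless you set their types in the xsd first
    Console.WriteLine("{0}: {1} pins (LGA: {2})", row.Name, row.NumberOfPins, row.IsLGA);
}
```

Renaming a column in the xsd and regenerating gives a compile error at every usage, which is exactly the safety net hard coded column names lack.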
Very clean, and at that point you can iterate over the rows like you would any DataSet (i.e. with TPL, a foreach loop, LINQ etc). One advantage I found in using straight XML files with the xsd tool is that a non-programmer can assist with the creation of the XML files' schema, and the files can be updated with just Notepad later on. Being the sole programmer at an 80+ employee company leads me to make things very easy to maintain. Bringing me back to my dilemma (and most likely that of a lot of other Windows Phone 7.x/8 developers): server side I have been able to generate XML and read it in my WCF services using the above method, but I wanted a universal code base between Windows Phone and WCF, or as close to it as possible. Previously I had written an abstract class with some abstract functions and 2 implementations: one via simply using XmlReader for Windows Phone and the other via typed datasets. Definitely not an ideal situation. Thinking about this problem yesterday I came up with a clean solution and am sharing it with anyone who wishes to use it; figuring I am going to be having a lot of these epiphanies, I am simply bundling the classes into a library called jcWPLIBRARY, keeping with my other project naming conventions. So how does this solution work? First off I am going to assume you have a class you're already using, or something similar, to keep the structure of your XML file. With my solution you're going to inherit the jcWPLIBRARY.XML.XMLReaderItem class with your own properties like so:
public class CPUSockets : jcWPLIBRARY.XML.XMLReaderItem {
    public string Name { get; set; }
    public int NumberOfPins { get; set; }
    public bool IsLGA { get; set; }
}
How does reading and writing XML files look? I created a small sample to show the steps. It is a pretty simple Windows Phone 8 application with one button to write XML with some hard coded test data and another button to read that file back in from IsolatedStorage. [caption id="attachment_1660" align="alignleft" width="180"] Sample jcWPLIBRARY XML application[/caption] [caption id="attachment_1662" align="aligncenter" width="180"] Sample jcWPLIBRARY XML application[/caption] Going along with everything I have done in the last year or two, I've tried to make it universal and easy to use. The source code behind the Write XML button:
List<Objects.CPUSockets> cpuSockets = new List<Objects.CPUSockets>();
cpuSockets.Add(new Objects.CPUSockets { IsLGA = true, Name = "Socket F", NumberOfPins = 1207 });
cpuSockets.Add(new Objects.CPUSockets { IsLGA = false, Name = "Socket AM3+", NumberOfPins = 942 });
cpuSockets.Add(new Objects.CPUSockets { IsLGA = false, Name = "Socket AM3", NumberOfPins = 941 });
cpuSockets.Add(new Objects.CPUSockets { IsLGA = false, Name = "Socket A", NumberOfPins = 462 });

jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets> xmlHandler =
    new jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets>(fileName: "cpusockets.xml");

var result = xmlHandler.WriteFile(data: cpuSockets);

if (result.HasError) {
    MessageBox.Show(result.ErrorString);
    return;
}

MessageBox.Show("File written successfully");
Pretty simple: create your XMLHandler object using the class inherited previously and optionally pass in the filename you're going to want to use later. Note: you can pass a different filename to the WriteFile function; by default it uses the filename set in the constructor. The WriteFile function returns an XMLResult object. In this object you have a HasError property; if it is set to true, the ErrorString will contain exception text for you to read. This also applies to the LoadFile function mentioned below. The source code behind the Read XML button:
jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets> xmlHandler =
    new jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets>(fileName: "cpusockets.xml");

var xmlresult = xmlHandler.LoadFile();

if (xmlresult.HasError) {
    MessageBox.Show(xmlresult.ErrorString);
    return;
}

StringBuilder stringBuilder = new StringBuilder(xmlresult.Result.Count());

foreach (var cpuSocket in xmlresult.Result) {
    stringBuilder.Append(cpuSocket.Name + " | " + cpuSocket.IsLGA + " | " + cpuSocket.NumberOfPins);
}

MessageBox.Show(stringBuilder.ToString());
Pretty simple, hopefully. My next task (no pun intended) will be to add support for the Task-based Asynchronous Pattern (TAP), as everything I want to be doing going forward will be non-blocking. You can download the initial release of jcWPLIBRARY here. Or if you wish to also download the sample mentioned above, you can get it here. Note, the sample includes the library in its References folder, so you don't need to download both to get started on your project with the library. Comments and suggestions, whether positive or negative, I would like to hear them.
This morning I finally retired my AMD Phenom II X6 1090T CPU from my primary desktop. I had been using it since April 30th 2010, right when it first came out. Looking back, it's interesting to think of the power that $309 bought back then versus what the $185 FX-8350 brings today. Just from a numerical standpoint, 6x3.2GHz (19.2GHz) versus 8x4GHz (32GHz) is mind blowing in my opinion. 12 years ago, nearly to the day, I was about to buy my first 1GHz AMD Athlon "Thunderbird" Socket A CPU. What is also interesting is that 2.5 years later AMD is still using AM3/AM3+, which for a consumer is great. Knowing that with a simple BIOS update I can run the latest CPUs is great to know. In my case, that meant doing a BIOS update on my ASUS M5A99X EVO to get support for the just recently released Vishera series of FX CPUs from AMD. [caption id="attachment_1639" align="aligncenter" width="300"] AMD FX-8350 Tin[/caption] [caption id="attachment_1641" align="aligncenter" width="300"] AMD FX-8350 installed into my ASUS M5A99X[/caption] [caption id="attachment_1642" align="aligncenter" width="169"] AMD FX-8350 installed into my ASUS M5A99X[/caption] After installation, to no surprise, the FX-8350 showed up properly and automatically increased my memory speed to 1866MHz (previously with my Phenom II the max available was 1600MHz). [caption id="attachment_1643" align="aligncenter" width="300"] AMD FX-8350 showing in the UEFI BIOS of my ASUS M5A99X[/caption] [caption id="attachment_1644" align="aligncenter" width="300"] AMD FX-8350 Detailed Info showing in the UEFI BIOS of my ASUS M5A99X[/caption] CPU-Z: [caption id="attachment_1645" align="aligncenter" width="300"] AMD FX-8350 in CPU-Z[/caption] And now the most interesting aspect of any upgrade: can I justify the cost, especially when applications hadn't seemed sluggish?
Integer Benchmark Results: [caption id="attachment_1647" align="aligncenter" width="300"] jcBENCH Integer Benchmarks[/caption] Floating Point Benchmark Results: [caption id="attachment_1648" align="aligncenter" width="300"] jcBENCH Floating Point Benchmark[/caption] I included a few extra recently benchmarked CPUs for comparison. First thoughts: integer performance over the Phenom II X6 is over 200% across the board from single threaded to 8 core applications/games, meaning the FX-8350 can do what the Phenom II X6 did with half the cores, leaving the other half for other tasks, or theoretically making multi-threaded tasks 200% faster. This is also shown in the A10-4655M CPU; at 4 threads, my laptop was actually faster than my desktop as far as integer-only work is concerned. Kudos to AMD for making such a dramatic difference in integer performance. Floating point results were a bit more interesting. Having seen quite a bit of drop-off in comparison to the integer results, I was curious if the FX-8350 would hit the same hurdles. Sure enough, because the 1:1 ratio between integer cores and floating point units in the Phenom II architecture gave way to a 2:1 ratio in the latest generations of AMD's CPUs, the Phenom II actually beat out the much higher clocked FX-8350, albeit the more threads, the less of an impact it made. Definitely more benchmarks will ensue, with real world tests of Visual Studio 2012 compiling and After Effects CS6 rendering. Stay tuned.
My Lian Li PC-Q03 arrived today, and like the ASUS C60M1-I I recently reviewed, I could only find 1 review of this case on the internet. So here's a mini-review with some notes. [caption id="attachment_1624" align="aligncenter" width="300"] Lian Li PC-Q03 Box[/caption] [caption id="attachment_1625" align="aligncenter" width="300"] Lian Li PC-Q03 Box Opened[/caption] [caption id="attachment_1626" align="aligncenter" width="255"] Lian Li PC-Q03 Opened[/caption] The packaging held the case in place and without any dents, definitely a plus given the light/thin aluminum the case is made of. I was happy to see the extensive amount of screws included for such a small case. [caption id="attachment_1628" align="aligncenter" width="300"] Lian Li PC-Q03 Screws[/caption] The ASUS C60M1-I installed without a hitch; the tray is also removable, which made it a lot easier to screw down the motherboard. [caption id="attachment_1627" align="aligncenter" width="300"] Lian Li PC-Q03 with ASUS C60M1-I Installed[/caption] Installing the OCZ Revodrive was interesting to say the least. I highly suggest you install your DDR3 stick(s) before putting a PCIe card of any sizeable length in. The amount of space between the OCZ Revodrive and the case was smaller than I thought it would be, so definitely keep that in mind if you want to put a graphics card or SAS controller in this case. [caption id="attachment_1630" align="aligncenter" width="300"] Notice the space left between the card and the wall of the case. Also note the space between the DDR3 slots and the card.[/caption] After installing the RAM, that left the power supply. I chose an Antec VP-450 because it had a quiet 120mm fan and was black to match the case. Getting a modular power supply would probably be a good idea, though the price would go up considerably. It was a tight fit, but the power supply and its cables just fit.
[caption id="attachment_1631" align="aligncenter" width="250"] Lian Li PC-Q03 with an Antec VP-450 installed[/caption] [caption id="attachment_1632" align="aligncenter" width="278"] Lian Lic PC-Q03 Back[/caption] [caption id="attachment_1633" align="aligncenter" width="261"] Lian Li PC-Q03 powered on[/caption] Overall I am really happy with the case. The internal cables were perfect length, the case feels sturdy, the inclusion of 2 USB 3.0 ports on the front are a great touch and size of the case itself makes it perfect for just about any Mini-ITX solution. The one negative was the USB 3.0 -> USB 2.0 adapter wasn't included (it should have been). Rather than shipping it back to Newegg and then redoing all of the work of putting everything in the case, I simply ordered the adapter on Amazon for $4.
I just received my ASUS C60M1-I today and figured, with the lack of information on this board on the internet, I'd post my findings. [caption id="attachment_1602" align="aligncenter" width="300"] ASUS C60M1-I Box Contents[/caption] In the box you'll get the manual, a DVD-ROM with drivers, a Powered By ASUS sticker, 2 18" SATA III cables and the I/O shield. Note the I/O shield is not like the padded ones on higher end ASUS boards. [caption id="attachment_1596" align="aligncenter" width="300"] ASUS C60M1-I Motherboard[/caption] [caption id="attachment_1595" align="aligncenter" width="300"] ASUS C60M1-I Ports[/caption] [caption id="attachment_1597" align="aligncenter" width="134"] ASUS C60M1-I DDR3 RAM Slots[/caption] [caption id="attachment_1598" align="aligncenter" width="300"] ASUS C60M1-I 6 SATA III 6gb ports[/caption] First off I was curious if the 8GB maximum was really true. Having 4 8GB Corsair Vengeance DDR3-1866 sticks waiting to be put into my primary desktop, I popped a pair into the board. Sure enough the motherboard read them without a hitch: [caption id="attachment_1603" align="aligncenter" width="269"] Corsair Vengeance 16GB DDR3-1866[/caption] [caption id="attachment_1604" align="aligncenter" width="300"] Corsair Vengeance 16GB DDR3-1866 Stick[/caption] [caption id="attachment_1605" align="aligncenter" width="300"] ASUS C60M1-I BIOS showing 16GB DDR3[/caption] Also curious is the option of running the RAM at 1333MHz. Using 1600MHz or 1866MHz DDR3, you could in theory run it at 1333MHz and keep the timings really tight. [caption id="attachment_1606" align="aligncenter" width="300"] ASUS C60M1-I BIOS showing 1333MHz Option[/caption] Next up was seeing if the board supported RAID of any type; this turned out not to be the case. A little more investigation on the motherboard itself shows the South Bridge is the FCH A50M. It also does not support USB 3.0, nor does it have a native Gigabit controller; in this case ASUS went with a Realtek 8111F Gigabit controller.
I personally have had the 8111E, which ran fine, and assume this is just a revision of that controller. If someone has more info on it, please let me know. The big points for me were Gigabit and jumbo frame support up to 9K, both of which are features of the 8111F on this board. Another thing to note is the lack of an HDMI port. Not a huge deal with readily available DVI->HDMI adapters, but something to consider if that is make or break. The more expensive/faster ASUS E45M1-I DELUXE has an HDMI port (in addition to USB 3.0), though it does draw more power than the C60M1-I. Another thing I was curious about was the total power draw of the system. Based on what I read, the CPU is 9W and the A50M Southbridge uses 4.7W, so coupled with the 2 sticks of RAM I expected maybe 20W. The total idle usage (sitting in the BIOS) is 23W. The total usage under 100% load is 39W. I should note this was done with an Antec VP-450; a higher efficiency power supply might bring that number down a bit. [caption id="attachment_1613" align="aligncenter" width="300"] Antec VP-450 450W Power Supply[/caption] So there you have it, pretty low wattage. For comparison, my Acer Aspire One AO522-BZ465 that I got in June 2011 uses 24W idle in the BIOS and 40W under load. The last question I had, something I imagine a lot of people would be wondering about for NAS purposes, is if the PCIe x16 slot (x4 mechanical) would support non-graphics cards. I had a 240GB OCZ Revodrive x4 card that I was hoping to use in this board, so I gave it a shot; sure enough it worked without any hassle. [caption id="attachment_1607" align="aligncenter" width="300"] 240GB OCZ Revodrive PCIe SSD[/caption] [caption id="attachment_1608" align="aligncenter" width="300"] 240GB OCZ Revodrive PCIe SSD card[/caption] [caption id="attachment_1609" align="aligncenter" width="300"] OCZ Revodrive PCIe SSD BIOS[/caption] I don't have any other PCIe controllers laying around to test, but I imagine you would be OK with them.
Options I would consider: maybe InfiniBand for a MOSIX SSI, or a SAS PCIe x4 controller. On to the more fun stuff, the benchmarks. [caption id="attachment_1616" align="aligncenter" width="300"] jcBENCH Integer Comparison[/caption] [caption id="attachment_1617" align="aligncenter" width="300"] jcBENCH Floating Point Comparison[/caption] [caption id="attachment_1618" align="aligncenter" width="300"] Windows Experience Index Comparison[/caption] The numbers can speak for themselves, but I should point out the vastly better CPU performance in jcBENCH over the C-50 CPU. The Turbo clock speed of the C-60 really does make a huge difference. So to summarize:
    1. ~23W of usage at idle (BIOS)
    2. ~40W of usage at full load
    3. 16GB of DDR3 is the max this board will take, not 8GB as mentioned on the ASUS website
    4. RAM can be set to run at 1333MHz, not just 1066MHz as stated on the website
    5. No HDMI port
    6. No USB 3.0 controller
    7. No RAID controller like that found in the 7xx/8xx/9xx AMD desktop chipsets
    8. The PCIe x16 slot (x4 electrical) can be used for non-graphics cards
    9. Turbo mode of the C-60 does give a considerable boost in CPU performance
    10. OpenCL 1.1 support
    If you have any comments, suggestions, or want more information, please let me know.
    You might be getting this error:
    web.ui.webresource.axd The status code returned from the server was: 500
    in your Telerik ASP.NET web application. It took some searching and trial and error, but one of these steps may solve your problem:
    1. Try making sure the <%@ Page %> directive has ValidateRequest="false"
    2. Try switching your IIS ASP.NET 4 to run in Classic Mode
    3. Remove all RadAjaxManager, RadAjaxLoadingPanels and RadAjaxPanels from your ASPX page
    For my solution, all I needed to do was remove the RadAjax controls to get a particular page with a couple of RadDatePickers, RadButtons and a RadGrid working. I still don't know why it occurred, but I'm glad it is working now. Hopefully this helps someone.
    As months and years go by, with devices coming and going, I've seen (as most have) an increasing demand to provide a universal experience no matter what device you are on, i.e. mobile, desktop, laptop, tablet, website etc. This has driven a lot of companies to pursue ways to deliver that functionality efficiently, both from a monetary standpoint and a performance perspective. A common practice is to provide a Web Service, SOAP or WCF for instance, and then consume the functionality on the device/website. This provides a good layer between your NAS and database server(s) and your clients. However, you don't want to provide the exact same view on every device. For instance, you're not going to want to edit 500 text fields on a 3.5" mobile screen, nor do you have the ability to upload non-isolated storage documents on mobile devices (at least currently). This brings up a possible problem: do you keep the same OperationContract with a DataContract class object and then, based on the device that sent it, know server side what to expect? Or do you handle the translation on the most likely slower client side CPU? So for me, there are 2 possible solutions:
    1. Create another layer between the OperationContract and the server side classes to handle device translations
    2. Come up with something outside the box
    Option #1 has pros and cons. It leaves the client side programming relatively the same across all platforms and leaves the work to the server side, so pushing out fixes would be relatively easy and would most likely affect all clients if written to use as much common code as possible. However, it does leave room for unintended consequences: forgetting to update all of the device specific code and then having certain clients not get the functionality expected. Furthermore, devices evolve; for instance, the iPhone 1-4S had a 3.5" screen while the iPhone 5 has a much larger 4" screen. Would this open the door to having a closer to iPad/tablet experience? This of course depends on the application and customer base, but it is something to consider. And if it makes sense to have differing functionality passed to iPhone 5 users versus iPhone 4 users, there is more complexity in coding to specific platforms. A good route to solve those complexities, in my opinion, would be to create a Device Profile like class based on the global functionality; then, when a request comes in to push or get data, the Factory classes in your Web Service would know what to do without having tons of if (Device == "IPHONE") conditionals. As more devices arrive, create a new profile server side and you'd be ready to go. Depending on your application this could be a very manageable path to take. Option #2, thinking outside the box, is always interesting to me. I feel like many developers (I am guilty of this too) approach things based on previous experience and go through an iterative approach with each project. While this is a safer approach and I agree with it in most cases, I don't think developers can afford to think this way much longer. Software today, being interconnected with external APIs, web services, integrations (Facebook, Twitter etc.) and countless devices, is vastly different from the 90s class library solution.
    Building a robust, future-proof system is, to me, much more important than the client itself. That being said, what could you do? In working with Windows Workflow Foundation last month and really breaking apart what software does at the most basic level, it really is simply:
    1. Client makes a request
    2. Server processes request (possibly doing multiple requests of its own to databases or file storage)
    3. Return the data the Client expects (hopefully incorporating error handling in the return)
    So how does this affect my thinking on architecting Web Services and client applications? I am leaning towards creating a generic interface for certain requests to get/set data between the server and client. This creates a single funnel to process and return data, thus eliminating duplicate code and making everything much more manageable. However, you're probably thinking about the overhead in translating a generic request like "GetObject" to what the client is actually expecting. I definitely agree, and I don't think it should be taken literally, especially when considering the performance of server side resources and the amount of data transferring back and forth. What I am implying is doing something like this with your OperationContract definition:
[csharp]
[OperationContract]
Factory.Objects.Ticket GetTicket(ClientToken clientToken, int TokenID);
[/csharp]
    Your implementation:
[csharp]
public Factory.Objects.Ticket GetTicket(ClientToken clientToken, int TokenID) {
    return new Factory.TicketFactory(Token: clientToken).GetObject<Factory.Objects.Ticket>(ID: TokenID);
}
[/csharp]
    Then in your Factory class:
[csharp]
public abstract class Factory<T> where T : FactoryObject {
    protected ClientToken Token;

    protected Factory(ClientToken token) {
        Token = token;
    }

    public abstract T GetObject(int ID);
    public abstract FactoryResult AddObject(T item);
}
[/csharp]
    Then implement that Factory pattern for each object class. I should note implementing a Device Profile layer could be done at the Factory Constructor level. Simply pass in the Device Type inside the ClientToken object. Then a simple check to the Token class for instance:
[csharp]
public override FactoryResult AddObject(Ticket item) {
    if (Token.Device.HasTouchScreen) {
        // do touch screen specific stuff
    }

    return new FactoryResult();
}
[/csharp]
    You could also simply store the Device Profile data in a database, text file or XML file and then cache it server side. Obviously this is not a solution for all applications, but it has been successful in my implementations. Comments, suggestions, improvements, please let me know below.
    Those wishing to get C++ in their new Windows Phone 8 application might find it difficult to get going as I did, therefore I am writing this how-to guide to help get you started. If you wish to skip ahead and simply download the complete Visual Studio 2012 solution/projects/source files, click here. You will need Visual Studio 2012 and the Windows Phone 8 SDK that got released on Tuesday of this week. For this how-to, I chose to do a pretty simple implementation: take a C# String, pass it to a C++ WinRT library and have the C++ library return the number of characters in the string. Pretty simple, but definitely the foundation that will lead to many more interesting ways to utilize C++ going forward for both your own projects and my own. The end result: [caption id="attachment_1560" align="aligncenter" width="180"] End Result of C++ WinRT/Windows Phone 8 Application[/caption] I won't go over the basic XAML in this how-to as I'll assume you've had some XAML experience. To get started, create your Windows Phone project; I chose a basic Windows Phone App like so: [caption id="attachment_1561" align="aligncenter" width="300"] Creating a Windows Phone App project in Visual Studio 2012[/caption] Then we'll be adding our C++ WinRT Library. Return to the Add Project screen (Control + Shift + N), scroll down to C++, Windows Phone and select Windows Phone Runtime Component like so: [caption id="attachment_1573" align="aligncenter" width="300"] Creating a C++ Windows Phone WinRT project in Visual Studio 2012[/caption] Now that we have both projects, let's dive into the C++ aspect of the project. Like a traditional C++ project, you still have a header and source file. Not having kept up with the new C++ syntax, it looked a bit unusual at first, though surprisingly similar to C#.
A pretty standard class definition, with the exception of the ref attribute in the class declaration. From what I've read, this is critical in allowing the class to be accessed via C#.
[cpp]
#pragma once

#include <string>

namespace CppWINRT {
	using namespace Windows::Foundation;
	using Platform::String;

	public ref class StringCharacterCounter sealed {
	public:
		unsigned int GetLength(String^ strToParse);
	};
}
[/cpp]
And our source file, where I am converting the WinRT String to an STL wstring and getting the length. Note you don't need to do the conversion; I was simply seeing how the STL interacted with WinRT.
[cpp]
// CppWINRT.cpp
#include "pch.h"
#include "CppWINRT.h"

using namespace CppWINRT;
using namespace Platform;

unsigned int StringCharacterCounter::GetLength(String^ strToParse) {
	std::wstring stlString = strToParse->Data();

	return stlString.length();
}
[/cpp]
Now that our C++ code is done, let's add our reference in our Windows Phone 8 project. Luckily, we no longer have to do interops as in the past when having a C# application call out to C++ code, as described in my blog article earlier this month, PInvoke fun with C++ Library and WPF C# Application. The references are handled just as if you had a C# project/library. [caption id="attachment_1565" align="aligncenter" width="300"] Add Reference to our C++ WinRT Library in our Windows Phone 8 Project[/caption] Then notice the reference shows up like a normal library reference: [caption id="attachment_1566" align="aligncenter" width="300"] Reference Added in our Windows Phone 8 Project[/caption] With the reference in place, now we can begin to use our new C++ WinRT Library. Simply type out the reference name like you would in a C# library and call the GetLength function we created earlier:
[csharp]
private void btnSubmit_Click(object sender, RoutedEventArgs e) {
    CppWINRT.StringCharacterCounter sccMain = new CppWINRT.StringCharacterCounter();

    txtBlockAnswer.Text = sccMain.GetLength(txtBxString.Text).ToString() + " characters were found in the string above";
}
[/csharp]
    Pretty simple and painless, no? Again, if you wish to download the complete solution, you can do so here. Please leave comments and suggestions; I will be posting more C++/WP8 articles as I progress through my own porting effort of jcBENCH.
    While not completely done with the IRIX 0.3 to 0.6 upgrade, I figured it would be good to get the updated client out. New in this version over the previous is the ability to parse the available hardware and the big news...all C++ code in the IRIX, Linux and Windows clients now shares common code. [bash] jcBENCH 0.6.522.1025(IRIX Edition) (C) 2012 Jarred Capellman CPU Information --------------------- Manufacturer: MIPS Model: R14000 Count: 4x600mhz Architecture: MIPS --------------------- Running Benchmark.... Integer: 1.61358 seconds Floating Point: 1.57094 seconds [/bash] While yes, there are a few: [cpp] #if defined(WIN32) [/cpp] it is nice now to have a common code base. Going forward, I still have to create the Qt frontend and figure out the finer points of using SOAP/WCF services from a C++ client in a portable manner. You can download this new release here.
    Something I had been meaning to do for quite some time now is move as much of IRIX as possible off of my Maxtor ATLAS II 15k 73GB Ultra 320 SCSI drive in hopes of keeping the small stockpile I have of them alive and well for years to come. The ideal solution would be to buy one of those SCA -> SATA II adapters for 2.5" SATA II/III drives, but those are still $200+, which for me is not worth it. Looking around for alternative solutions, I came across an LSI SAS3041X-R 4 port SATA PCI-X card that is supported in IRIX 6.5. Granted I wish it had SATA II ports, but alas, for $25 shipped I really can't complain. I should note this card is a 133MHz PCI-X card, which means the maximum bandwidth is 1.06GB/sec. Silicon Graphics Origin 300s only have 64-bit/66MHz PCI-X slots, so you'll be looking at a maximum bandwidth of 533.3MB/sec, while the Origin 350 has two 64-bit/100MHz PCI-X slots providing up to 800MB/sec. Keep that in mind if you need more than 533.3MB/sec of bandwidth. [caption id="attachment_1532" align="aligncenter" width="300"] What you'll need for adding SATA support and an SSD to your SGI Origin 300[/caption] [caption id="attachment_1533" align="aligncenter" width="225"] Y 4 pin Molex adapter[/caption] If you can't tell from the pictures, you'll need:
    1. The LSI SAS3041X-R card
    2. SATA cable, longer the better, I had used the standard length and had just enough length
    3. Y Cable that accepts the standard 4 pin Molex connector and provides 2 SATA Power connections
    4. Y Cable that accepts the standard 4 pin Molex connectors and provides 2 additional 4 pin Molex connectors
    5. SSD or other SATA drive; because of the space constraints I'd highly suggest at most a 2.5" mechanical drive.
    LSI SAS3041X-R installed in the middle PCI-X slot: [caption id="attachment_1534" align="aligncenter" width="300"] LSI SAS3041X-R in the middle PCI-X slot of the SGI Origin 300[/caption] Back view: [caption id="attachment_1535" align="aligncenter" width="300"] Back view LSI SAS3041X-R in the middle PCI-X slot of the SGI Origin 300[/caption] Top view: [caption id="attachment_1536" align="aligncenter" width="300"] SGI Origin 300 with LSI SAS3041X-R installed[/caption] Unfortunately, Silicon Graphics didn't think about those wishing to add internal drives that wouldn't be driven off the built-in Ultra 160 controller. Getting creative, I found a spot in the far left fan grille that the actual fan blades don't hit when running. You'll have to remove the fan to get the SATA connector through; while you're in there, I should mention again that you can swap the fans for much, much quieter ones. I detailed that process in a post here back in February of this year. I took a picture of the SATA cable going through the grille: [caption id="attachment_1537" align="aligncenter" width="225"] Running the SATA cable through the unobstructed 80mm fan grille[/caption] Once the cable is through, you can pop the 80mm fan back into place and then connect both the Y power cables and your SATA drive as shown in the picture below: [caption id="attachment_1538" align="aligncenter" width="225"] 240gb Sandisk Extreme SSD installed into an SGI Origin 300[/caption] After putting the Origin 300 back into my rack and starting IRIX, IRIX found the LSI card without needing a driver installation: [bash] IRIX Release 6.5 IP35 Version 07202013 System V - 64 Bit Copyright 1987-2006 Silicon Graphics, Inc. All Rights Reserved.
NOTICE: Initialising Guaranteed Rate I/O v2 (Jul 20 2006 18:47:01) NOTICE: pcibr_attach: /hw/module/001c01/Ibrick/xtalk/15/pci Bus holds a usb part - settingbridge PCI_RETRY_HLD to 4 Setting hub ixtt.rrsp_ps field to 0x4e20 NOTICE: /hw/module/001c01/Ibrick/xtalk/14/pci/1a/scsi_ctlr/0: 949X fibre channel firmware version NOTICE: /hw/module/001c01/Ibrick/xtalk/14/pci/1b/scsi_ctlr/0: 949X fibre channel firmware version NOTICE: /hw/module/001c01/Ibrick/xtalk/14/pci/2/scsi_ctlr/0: 1064 SAS/SATA firmware version [/bash] Having been removed from using IRIX for a few months, I preferred to do the disk initialization with the GUI over a TightVNC connection. You can initialize a disk through the System Manager, which can be accessed from the Toolchest System option like so: [caption id="attachment_1544" align="aligncenter" width="195"] IRIX System Manager[/caption] Then click on "Hardware and Devices" and then "Disk Manager" like so: [caption id="attachment_1547" align="aligncenter" width="300"] IRIX System Manager[/caption] [caption id="attachment_1539" align="aligncenter" width="300"] IRIX Disk Manager[/caption] With the SSD/SATA drive selected, click on Initialize and then click Next through the series of questions. Afterwards you can close that window, bringing you back to the System Manager window. Click on Files and Data and then Filesystem Manager like so: [caption id="attachment_1549" align="aligncenter" width="300"] Getting to the IRIX File System Manager[/caption] Afterwards you'll be presented with the Filesystem Manager: [caption id="attachment_1540" align="aligncenter" width="300"] IRIX Filesystem Manager[/caption] Click on Mount Local... in the bottom far left of the window, choose the newly initialized disk, the place where you want the drive mounted and click through the wizard. After a few minutes, you'll be presented with a refreshed Filesystem Manager window showing your SSD/SATA drive like in the picture above. 
    Intrigued by how much of a difference the SSD would make in comparison to the Maxtor Atlas II 15k Ultra 320 drive, I ran diskperf and stacked the images side by side. The SSD is on the left and the Maxtor Ultra 320 drive is on the right. [caption id="attachment_1541" align="aligncenter" width="300"] 240GB SanDisk Extreme SSD vs 73GB Maxtor Atlas II 15k Ultra 320 SCSI Drive[/caption] I actually expected the Maxtor to perform a bit better, but am not surprised overall by the results. About 30 minutes of work, the majority of that simply pulling the server out of the rack, and I think it is fair to say that if you're running a later generation Silicon Graphics machine with PCI-X slots (i.e. Fuel, Tezro, Origin 300, Origin 350 etc.) you will see a huge performance boost. Next on my todo list is to move everything possible off of the Maxtor, leaving only the minimum boot data (as you cannot boot from an add-on card in IRIX). I will post the details of that process at a later time.
    After diving into OpenXML on Friday in order to replace a Word 2007 .NET 1.1 Interop cluster of awfulness, I realized there wasn't a complete example of starting from a dotx or docx Word file with merge fields, parsing the merge fields and then writing the merged file to a docx. So I'm writing this blog article to demonstrate a working example. I don't claim this to be a perfect example nor the "right" way to do it, but it works. If you have a "right" way to do it, please let me know by posting a comment below. To make life easier, or if you simply want the code in front of you, I've zipped up the complete source code, sample Word 2013 docx and dotx file here. Use it as a basis for your app or whatever you feel like; I'd like to hear any success stories with it or any suggestions, though. I am going to assume you have already downloaded the OpenXML SDK 2.0 and installed it. I should note I tested this on Visual Studio 2012 running on Windows 8 and the Office 2013 Preview. Word template with 2 Mail Merge fields: [caption id="attachment_1524" align="aligncenter" width="300"] Word Template before Mail Merge[/caption] Word document after Mail Merge: [caption id="attachment_1525" align="aligncenter" width="300"] Word Document after Mail Merge[/caption] So without further ado, let's dive into the code. You'll need to add a reference to DocumentFormat.OpenXml and WindowsBase. In your code you'll need to add these 3 lines to include the appropriate namespaces:
[csharp]
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;
[/csharp]
    The bulk of the document generation code is in this function; I tried to document as much as possible of the "unique" OpenXML code:
[csharp]
public RETURN_VAL GenerateDocument() {
    try {
        // Don't continue if the template file name is not found
        if (!File.Exists(_templateFileName)) {
            throw new Exception(message: "TemplateFileName (" + _templateFileName + ") does not exist");
        }

        // If the file is a DOTX file convert it to docx
        if (_templateFileName.ToUpper().EndsWith("DOTX")) {
            RETURN_VAL resultValue = ConvertTemplate();

            if (!resultValue.Value) {
                return resultValue;
            }
        } else {
            // Otherwise make a copy of the Word Document to the targetFileName
            File.Copy(_templateFileName, _targetFileName);
        }

        using (WordprocessingDocument docGenerated = WordprocessingDocument.Open(_targetFileName, true)) {
            docGenerated.ChangeDocumentType(WordprocessingDocumentType.Document);

            foreach (FieldCode field in docGenerated.MainDocumentPart.RootElement.Descendants<FieldCode>()) {
                var fieldNameStart = field.Text.LastIndexOf(FieldDelimeter, System.StringComparison.Ordinal);
                var fieldname = field.Text.Substring(fieldNameStart + FieldDelimeter.Length).Trim();
                var fieldValue = GetMergeValue(FieldName: fieldname);

                // Go through all of the Run elements and replace the Text Elements Text Property
                foreach (Run run in docGenerated.MainDocumentPart.Document.Descendants<Run>()) {
                    foreach (Text txtFromRun in run.Descendants<Text>().Where(a => a.Text == "«" + fieldname + "»")) {
                        txtFromRun.Text = fieldValue;
                    }
                }
            }

            // If the Document has settings remove them so the end user doesn't get prompted to use the data source
            DocumentSettingsPart settingsPart = docGenerated.MainDocumentPart.GetPartsOfType<DocumentSettingsPart>().First();

            var oxeSettings = settingsPart.Settings.Where(a => a.LocalName == "mailMerge").FirstOrDefault();

            if (oxeSettings != null) {
                settingsPart.Settings.RemoveChild(oxeSettings);
                settingsPart.Settings.Save();
            }

            docGenerated.MainDocumentPart.Document.Save();
        }

        return new RETURN_VAL { Value = true };
    } catch (Exception ex) {
        return new RETURN_VAL { Value = false, Exception = "DocumentGeneration::generateDocument() - " + ex.ToString() };
    }
}
[/csharp]
    In my scenario I had tons of dotx files and thus needed to convert them properly (renaming won't do for OpenXML merging)
[csharp]
private RETURN_VAL ConvertTemplate() {
    try {
        MemoryStream msFile = null;

        using (Stream sTemplate = File.Open(_templateFileName, FileMode.Open, FileAccess.Read)) {
            msFile = new MemoryStream((int)sTemplate.Length);
            sTemplate.CopyTo(msFile);
            msFile.Position = 0L;
        }

        using (WordprocessingDocument wpdFile = WordprocessingDocument.Open(msFile, true)) {
            wpdFile.ChangeDocumentType(DocumentFormat.OpenXml.WordprocessingDocumentType.Document);

            MainDocumentPart docPart = wpdFile.MainDocumentPart;
            docPart.AddExternalRelationship("", new Uri(_templateFileName, UriKind.RelativeOrAbsolute));
            docPart.Document.Save();
        }

        // Flush the MemoryStream to the file
        File.WriteAllBytes(_targetFileName, msFile.ToArray());
        msFile.Close();

        return new RETURN_VAL { Value = true };
    } catch (Exception ex) {
        return new RETURN_VAL { Value = false, Exception = "DocumentGeneration::convertTemplate() - " + ex.ToString() };
    }
}
[/csharp]
    In my actual application I have a pretty elaborate Mail Merge process pulling in from various sources (SQL Server and WCF Services), but to demonstrate a working application I wrote out a simple switch/case function.
[csharp]
private string GetMergeValue(string FieldName) {
    switch (FieldName) {
        case "CurrentDate":
            return DateTime.Now.ToShortDateString();
        case "CPU_Count":
            return Environment.ProcessorCount.ToString();
        default:
            throw new Exception(message: "FieldName (" + FieldName + ") was not found");
    }
}
[/csharp]
    I am proud to announce jcBENCH 0.6.522.1013 for Win32 x86. New in this version is the ability to compare results visually against similar CPUs: [caption id="attachment_1516" align="aligncenter" width="300"] jcBENCH 0.6 Compare Results[/caption] In addition, I refactored the code to be a lot cleaner, though I still have some left to do. I also made some UI changes to clean it up a bit. The big features left are a more detailed compare feature, OpenCL support and of course additional ports to Windows Phone 8 and IRIX. You can get the latest release here.
    After a considerably longer than expected development time, the big new 0.5 release of jcBENCH is finally available. You can download it from here. [caption id="attachment_1510" align="aligncenter" width="300"] jcBENCH WPF 0.5 Release[/caption] With this new release is an all new WPF GUI and a new C++ Library to help facilitate cross-platform development. The idea being, use the same C++ code across all platforms and to just develop a frontend for each. Also new is a built in viewer of the Top 10 Results and the return of the ability to submit results. Some future enhancements down the road:
    1. OpenCL Benchmark, with my restructuring I have to re-code my C# OpenCL code -> C++ OpenCL code and then add support to the app to determine if OpenCL drivers exist
    2. More comprehensive comparing of results, filtering down to similar spec machines, comparing CPUs used, Manufacturer etc
    3. Ability to run the whole test suite at once (i.e. if you have a 6 CPU machine, benchmark it with each CPU count used)
    4. IRIX 6.5 and Windows Phone 8 clients
    If you have suggestions/requests, please let me know, I'm definitely interested in hearing what people have to say.
    In working on the new version of jcBench, I made a decision to continue having the actual benchmarking code in C++ (i.e. unmanaged code) to promote cross-platform deployments. With Windows Phone 8 getting native code support and my obsession with Silicon Graphics IRIX machines, I think this is the best route. That being said, the frontends for jcBench will still definitely be done in C# whenever possible. This brings me to my next topic: getting your C++ library code to be available to your C# application, whether that is a console app, WPF, Windows 8 etc. Surprisingly there is a lot of information out there on this, but none of the examples worked for me. With some trial and error I got it working and figured it might help someone out there. So in your C++ (or C) source file:
[cpp]
extern "C" {
	__declspec( dllexport ) float runIntegerBenchmark(long numObjects, int numThreads);

	float runIntegerBenchmark(long numObjects, int numThreads) {
		CPUBenchmark cpuBenchmark = CPUBenchmark(numObjects, numThreads);

		return cpuBenchmark.runIntegerBenchmark();
	}

	__declspec( dllexport ) float runFloatingPointBenchmark(long numObjects, int numThreads);

	float runFloatingPointBenchmark(long numObjects, int numThreads) {
		CPUBenchmark cpuBenchmark = CPUBenchmark(numObjects, numThreads);

		return cpuBenchmark.runFloatingPointBenchmark();
	}
}
[/cpp]
Notice the __declspec( dllexport ) function declaration; this is key to telling your C# (or any other language) that this function is exposed externally in the DLL. Something else to keep in mind is the difference in types between variables in C++ and C#. A long in C++ on Windows, for instance, is an Int32 in the CLR. Keep that in mind if you get something like this thrown in your C# application:
    This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature
    Then in your C# code:
[csharp]
[DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern float runIntegerBenchmark(Int32 numObjects, int numThreads);

[DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern float runFloatingPointBenchmark(Int32 numObjects, int numThreads);
[/csharp]
    To execute the function, call it like you would a normal function:
[csharp]
lblInteger.Content = runIntegerBenchmark(100000, 6) + " seconds";
[/csharp]
    About a year and a half ago I started going down the path of Windows Phone 7, WCF and WPF development. My hope back then was to get a few Windows Phone 7 games out the door using WCF as the backend and then do WCF and WPF development at work. Unfortunately, WPF simply never took off at work; I did a simple fax reader application that connected to an Exchange 2010 mailbox, parsed a Code 39 barcode and then uploaded it to a NAS. Luckily I did do one professional Content Management System on Windows Phone 7 back in January (along with iOS and Android native apps), and if you have been following my blog posts since January, WCF has really become the mainstay at work thanks to my persistence in using it as a replacement for the traditional class library approach. That brings me back to WPF and my plans to finally make a game in my free time outside of my WCF/ASP.NET/WinForms development at work. In May 2011 I got a fairly simple tile based, top down arcade flight game up and running. You could choose from several airplanes and it would randomly throw enemy fighters at you. You could launch Sidewinder missiles and the enemy fighters would disappear, but there was no collision detection between you and the fighters. For various reasons this project was sidelined; the biggest was the lack of WPF development at work. It seemed like a waste of time to focus on development for something that wasn't benefiting both personal and work projects. Something I had wanted since I was 9 years old (and what got me into programming in the first place) is to actually program a game I could be proud of. All of those years doing QBasic programming were so rewarding, and then when I got into 2D VGA DOS programming in 1999 I was finally able to display 320x200 (Mode X) 8bit bitmaps. At that point though, the industry was at Quake III's curved surfaces, multi-texturing and on the cusp of vertex and pixel shaders. Not exactly keeping up with the latest development shops back then.
Fast forward to 2003-2005: I spent a lot of time doing C++/OpenGL/SDL work on my Infinity Engine (I believe the SourceForge page is still up), but I was trying to achieve too much. Then in May this year I got back into OpenGL and C++ to work on a Windows/IRIX Wolfenstein 3D like game. This started out as a realistic goal, but I quickly learned that balancing two different languages, C# at work on various versions of .NET and platforms (WCF, ASP.NET, WinForms, Console etc.) and C++ at night, was becoming hard to manage while keeping up with my C# development. Fast forward another couple of months and death marches at work, and I'm now at a point where I need to focus on something enjoyable. Using WPF 4.5 to make a Real-Time Strategy game, something I had tried before, seems like the best route to go: I advance my C# skills and get some extensive WPF experience. Last night and Friday night I started work on an improved jcgeEDITOR along with starting from scratch on a WPF 4.5 game engine using the multi-threaded techniques I have been applying and advancing at work. For a Real-Time Strategy game, from an engine perspective you need the following:
    1. Map support with various tiles that allow all Unit Types, some Unit Types or no Unit Types
    2. Drag and drop support for Unit Buildings
    3. Collision detection of Units to Units, Units to Tiles, Buildings to Buildings and Units to Buildings
    Of course this excludes AI, Sound and Networking. However, I am going to design it from the ground up with network play in mind. It's one thing to play against a computer player, but an entirely different game when played with one or multiple human players. A very early test of the Tile Engine and early map format: [caption id="attachment_1496" align="aligncenter" width="300"] Early jcGE WPF screenshot on 9.16.2012[/caption] More to come this afternoon when I get the left-hand sidebar included.
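The tile/unit-type requirement in item 1 could be sketched with a flags enum; note these names (UnitTypes, Tile, CanEnter) are illustrative only, not the actual jcGE code:

```csharp
using System;

[Flags]
public enum UnitTypes {
    None     = 0,
    Infantry = 1,
    Vehicle  = 2,
    Aircraft = 4,
    All      = Infantry | Vehicle | Aircraft
}

public struct Tile {
    // Which unit types may occupy this tile (water might allow Aircraft only)
    public UnitTypes Allowed;

    public bool CanEnter(UnitTypes unit) {
        return (Allowed & unit) == unit;
    }
}

public static class Demo {
    public static void Main() {
        var grass = new Tile { Allowed = UnitTypes.All };
        var water = new Tile { Allowed = UnitTypes.Aircraft };
        Console.WriteLine(grass.CanEnter(UnitTypes.Vehicle)); // True
        Console.WriteLine(water.CanEnter(UnitTypes.Vehicle)); // False
    }
}
```

Per-tile flags like this cover the "all/some/no Unit Types" cases with a single bitwise check per movement query.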
    With the very common practice of taking some class or object collection and moving it into a new collection, I've been wondering if there were any clear-cut performance differences between the commonly used methods: for loop, foreach, LINQ expression, PLINQ etc. So getting right into the code, I took my tried and true Users object:
    public class Users {
        public int ID { get; set; }
        public string Name { get; set; }
        public string Password { get; set; }

        public Users(int id, string name, string password) {
            ID = id;
            Name = name;
            Password = password;
        }
    }
    I also had a struct equivalent to copy the data into:
    public struct Users_Struct {
        public int ID;
        public string Name;
        public string Password;

        public Users_Struct(int id, string name, string password) {
            ID = id;
            Name = name;
            Password = password;
        }
    }
    And then the various methods to get the task done:
    private static void LinQMethod(List<Users> mainList) {
        var startTime = DateTime.Now;
        var linqList = mainList.Select(item => new Users_Struct(item.ID, item.Name, item.Password)).ToList();
        Console.WriteLine("LinQ Method: " + DateTime.Now.Subtract(startTime).TotalSeconds);
    }

    private static void PLinQMethod(List<Users> mainList) {
        var startTime = DateTime.Now;
        var plinqList = mainList.AsParallel().Select(item => new Users_Struct(item.ID, item.Name, item.Password)).ToList();
        Console.WriteLine("PLinQ Method: " + DateTime.Now.Subtract(startTime).TotalSeconds);
    }

    private static void ForMethod(List<Users> mainList) {
        var startTime = DateTime.Now;
        var forList = new List<Users_Struct>();
        for (var x = 0; x < mainList.Count; x++) {
            forList.Add(new Users_Struct(mainList[x].ID, mainList[x].Name, mainList[x].Password));
        }
        Console.WriteLine("For Method: " + DateTime.Now.Subtract(startTime).TotalSeconds);
    }

    private static void ForEachMethod(List<Users> mainList) {
        var startTime = DateTime.Now;
        var foreachList = new List<Users_Struct>();
        foreach (var user in mainList) {
            foreachList.Add(new Users_Struct(user.ID, user.Name, user.Password));
        }
        Console.WriteLine("Foreach Method: " + DateTime.Now.Subtract(startTime).TotalSeconds);
    }

    private static void PForEachMethod(List<Users> mainList) {
        var startTime = DateTime.Now;
        var pforeachQueue = new ConcurrentQueue<Users_Struct>();
        Parallel.ForEach(mainList, user => {
            pforeachQueue.Enqueue(new Users_Struct(user.ID, user.Name, user.Password));
        });
        Console.WriteLine("Parallel Foreach Method: " + DateTime.Now.Subtract(startTime).TotalSeconds);
    }

    private static void PForEachExpression(List<Users> mainList) {
        var startTime = DateTime.Now;
        var plinqQueue = new ConcurrentQueue<Users_Struct>();
        Parallel.ForEach(mainList, user => plinqQueue.Enqueue(new Users_Struct(user.ID, user.Name, user.Password)));
        Console.WriteLine("Parallel LinQ Expression Method: " + DateTime.Now.Subtract(startTime).TotalSeconds);
    }
    And onto the results: [caption id="attachment_1491" align="aligncenter" width="300"] Moving Data From One Collection to Another in various methods[/caption] Sadly, to some extent, no method is particularly faster than another. It isn't until you hit 1,000,000 objects that there is a sizable difference between the various methods. For the clean code factor I've switched over to using an expression. Further investigation is needed, I believe; looking at the IL code will probably be the next task, to see exactly how the C# code is being translated into IL.
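One methodology note worth adding: DateTime.Now has fairly coarse resolution (roughly 15 ms on Windows), so for timing runs like these System.Diagnostics.Stopwatch is usually the safer tool. A minimal sketch of the same harness idea (illustrative data, not the Users code above):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public static class Bench {
    public static void Main() {
        // Illustrative source data standing in for the Users list
        var mainList = Enumerable.Range(0, 100000)
            .Select(i => new { ID = i, Name = "user" + i, Password = "pw" + i })
            .ToList();

        // Stopwatch uses the high-resolution performance counter when available
        var sw = Stopwatch.StartNew();
        var copy = mainList.Select(u => Tuple.Create(u.ID, u.Name, u.Password)).ToList();
        sw.Stop();

        Console.WriteLine("LINQ copy: " + sw.Elapsed.TotalSeconds + "s for " + copy.Count + " items");
    }
}
```

Swapping the DateTime.Now pairs for a Stopwatch is a mechanical change and makes the sub-millisecond differences between the smaller collection sizes actually measurable.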
    Continuing my pursuit of the "perfect" WCF Data Layer, I thought I would check out alternatives to returning List collections of structs and consider switching over to a DataContract returning a class object instead. Using my tried and true Users test case, I defined my struct and DataContract class:
    [Serializable]
    public struct USERSTRUCT {
        public int ID;
        public string Username;
        public string Password;
        public bool IsAdmin;
    }

    [DataContract]
    public class UserClass {
        private int _ID;
        private bool _IsAdmin;
        private string _Username;
        private string _Password;

        [DataMember]
        public string Password {
            get { return _Password; }
            set { _Password = value; }
        }

        [DataMember]
        public int ID {
            get { return _ID; }
            set { _ID = value; }
        }

        [DataMember]
        public bool IsAdmin {
            get { return _IsAdmin; }
            set { _IsAdmin = value; }
        }

        [DataMember]
        public string Username {
            get { return _Username; }
            set { _Username = value; }
        }
    }
    I populated my Users table with 5,000 test rows and then ran it for 1 to 5,000 results. The DataContract payload was 230 bytes versus 250 bytes for the struct, but the return time differences were marginal at best, and which was faster actually went back and forth. It was also interesting that the allocation time in populating the collection of each was virtually identical. I was somewhat disappointed that the DataContract wasn't faster or that much smaller given the additional code needed, so for the time being I'll continue to use structs. I should note the struct is a value type versus the DataContract class, which is a reference type, so the normal rules apply in that regard.
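As an aside, the explicit backing fields above aren't required; the DataContractSerializer also handles auto-implemented properties, which trims the class down considerably. A minimal serialization sketch (my own simplified example, not the WCF service code):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class UserClass {
    [DataMember] public int ID { get; set; }
    [DataMember] public string Username { get; set; }
    [DataMember] public string Password { get; set; }
    [DataMember] public bool IsAdmin { get; set; }
}

public static class Demo {
    public static void Main() {
        var serializer = new DataContractSerializer(typeof(UserClass));
        using (var ms = new MemoryStream()) {
            // Serialize a sample instance to see the on-the-wire size
            serializer.WriteObject(ms, new UserClass { ID = 1, Username = "jc", IsAdmin = true });
            Console.WriteLine(ms.Length + " bytes"); // exact size varies with content
        }
    }
}
```

The behavior on the wire is the same as with explicit backing fields; only the amount of code you maintain changes.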
    In doing my refactoring/cleanup of jcBench, I really wanted to make the new platform very dynamic. Something that has bugged me about my own development at times, and about a lot of other companies' code, is the close dependency between the application and its class libraries. To me an application should have a set interface for executing code, with the actual implementation or implementations of the interface in class libraries. I hate to keep going back to Quake II, but to me that was really good, logical architecture: you had the Quake II engine in the executable, the actual game code in its own DLL, and then the OpenGL and software rendering code in their own DLLs. Very clean in my opinion. In C#, since the introduction of the dynamic keyword in .NET 4.0, I've wanted to implement some sort of dynamic class loading where I could follow a path similar to Quake II's. This wasn't at the top of my weekend research projects, so what better time than when the next revision of .NET came out :) So let's dive into the code. I'll be using some of the new jcBench code to demonstrate the technique. First off you'll need an interface to define the required properties, methods etc. For this post I'll provide a simplified IBenchmarkBase interface from jcBench:
    public interface IBenchmarkBase {
        Common.BENCHMARK_TYPE BenchmarkType { get; }
        string BenchmarkShortName { get; }
        string BenchmarkName { get; }
        int BenchmarkRevisionNumber { get; }
        int MaxCPUs { get; set; }
        long NumberObjects { get; set; }

        jcBenchLib.jcBenchReference.jcBenchSubmittedResult runBenchmark();
    }
    A pretty standard, run of the mill interface. For future projects I highly suggest you consider using an interface rather than creating an abstract class and inheriting/implementing from that. There are quite a few articles going over the pros and cons of both, so at least give it a chance and do some research for yourself. And then one implementation of that interface:
    public class TPLBenchmark : IBenchmarkBase {
        public string BenchmarkName {
            get { return "Task Parallel Library CPU Benchmark"; }
        }

        public string BenchmarkShortName {
            get { return "TPL"; }
        }

        public int BenchmarkRevisionNumber {
            get { return 1; }
        }

        public Common.BENCHMARK_TYPE BenchmarkType {
            get { return Common.BENCHMARK_TYPE.CPU; }
        }

        private long _numObjects;
        public long NumberObjects {
            get { return _numObjects; }
            set { _numObjects = value; }
        }

        private int _maxCPUs;
        public int MaxCPUs {
            get { return _maxCPUs; }
            set { _maxCPUs = value; }
        }

        public TPLBenchmark() {
        }

        public jcBenchReference.jcBenchSubmittedResult runBenchmark() {
            jcBenchReference.jcBenchSubmittedResult result = new jcBenchReference.jcBenchSubmittedResult();

            DateTime startTime = DateTime.Now;
            // do the benchmark work here
            result.TimeTaken = DateTime.Now.Subtract(startTime).TotalSeconds;

            // fill in the rest of the jcBenchSubmittedResult object here
            // and return the result to the executable to send over the WCF Service
            return result;
        }
    }
    Again, a pretty simple implementation (I removed a lot of the error handling and implementation code since it wasn't relevant to this post). So now onto the fun part. Let's say you had a couple more implementations; in my case I have an OpenCL benchmark as well. Since they follow the same interface, my application (Win32 command line, WinForms, WPF, Windows 8, Windows Phone 8 etc.) won't need to care about specific benchmarks; it will only be coded to look for implementations of the IBenchmarkBase interface. Pretty nice and clean thought process, right? So onto the "new and fun stuff". To make this example easier for a blog post, I am using a standard C# command line Visual Studio 2012 project type. Inside my Program.cs:
    public static List<jcBenchLib.Benchmarks.IBenchmarkBase> getModules() {
        List<jcBenchLib.Benchmarks.IBenchmarkBase> modules = new List<jcBenchLib.Benchmarks.IBenchmarkBase>();

        Assembly moduleAssembly = Assembly.LoadFile(Directory.GetCurrentDirectory() + "\\jcBenchLib.dll");
        Type[] assemblyTypes = moduleAssembly.GetTypes();

        foreach (Type type in assemblyTypes) {
            if (type.GetInterface("IBenchmarkBase") != null && !type.IsAbstract) {
                var module = (jcBenchLib.Benchmarks.IBenchmarkBase)Activator.CreateInstance(type);
                modules.Add(module);
                Console.WriteLine("Loaded " + module.BenchmarkName + " Module...");
            }
        }

        return modules;
    }
    What this does is load my WinRT Portable Library off the disk, get all of the Types defined in that Assembly, and check each one for a match on my IBenchmarkBase interface. When it finds a match, it adds an instance to a List collection and writes a quick message to the console; once done, it returns the List collection. So now that the List collection of "benchmarks" is loaded, what's next? That largely depends on the UI of the application: you might populate a ComboBox in a WinForms application or a ListPicker in a Windows Phone 7/8 application. In this case, since it is a command line application, the user will specify the benchmark as an argument. And here's the Main function:
    static void Main(string[] args) {
        var bModule = getModules().Where(a => a.BenchmarkShortName == args[0]).FirstOrDefault();

        if (bModule == null) {
            Console.WriteLine("Module (" + args[0] + ") was not found in the WinRT Portable Library");
            Console.ReadKey(true);
            return;
        }

        bModule.NumberObjects = Convert.ToInt64(args[1]);
        bModule.MaxCPUs = Convert.ToInt32(args[2]);

        Console.WriteLine("Using " + bModule.BenchmarkName + " (Version " + bModule.BenchmarkRevisionNumber + ")...");

        bModule.runBenchmark();

        Console.ReadKey(true);
    }
    So there you have it, minus any sort of error handling or helpful instructions for missing or invalid arguments: a pretty simple lookup against the List collection function above using an argument passed into the application, then initializing the class object with the other parameters before executing the "benchmark" function. Depending on your app you could simply have a getModule function instead that takes the proposed BenchmarkShortName; however, in jcBench I display all of the available modules, so it makes sense in my case to store them in a List collection for later use. If I have the memory available, I'm not a fan of constant lookups, whether a SQL query or data loading like in this case, but to each his or her own. Hopefully someone finds this useful in their applications. Coupled with my post about dynamic/self-learning code on Saturday, I think this area of C# (and other languages) is going to be a really good mindset to have in the years ahead, but I could be wrong. With the ever increasing demand for rapid responsiveness in both desktop and web applications, coupled with the also ever increasing demand for new features, a clear separation between application and class interface and a dynamic approach is what I am going to focus on going forward.
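One hardening I would suggest for this technique (my own variant, not the jcBench code): Activator.CreateInstance throws on abstract types, interfaces and types without a parameterless constructor, so filtering with IsAssignableFrom plus IsAbstract/IsInterface checks is a bit safer than GetInterface alone. A generic sketch, with illustrative names (ModuleLoader, IGreeter):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class ModuleLoader {
    // Generic loader: finds every concrete, default-constructible implementation of T
    public static List<T> GetModules<T>(Assembly assembly) where T : class {
        return assembly.GetTypes()
            .Where(t => typeof(T).IsAssignableFrom(t)    // implements the interface
                     && !t.IsAbstract && !t.IsInterface  // skip non-instantiable types
                     && t.GetConstructor(Type.EmptyTypes) != null)
            .Select(t => (T)Activator.CreateInstance(t))
            .ToList();
    }
}

// Illustrative stand-in for IBenchmarkBase and an implementation
public interface IGreeter { string Name { get; } }
public class Hello : IGreeter { public string Name { get { return "Hello"; } } }

public static class Demo {
    public static void Main() {
        foreach (var m in ModuleLoader.GetModules<IGreeter>(Assembly.GetExecutingAssembly()))
            Console.WriteLine("Loaded " + m.Name + " Module...");
    }
}
```

Using the generic type rather than a string name also means a rename of the interface becomes a compile error instead of a silently empty module list.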


    I've been with the same company now for nearly 5.5 years (if you add the time as a consultant to the last 2.75 years of being a direct employee). Over that time period I've watched the original ASP.NET 1.1 enterprise web application that was put in place in September 2006 deteriorate quite rapidly, especially in the last 2-3 years. There are a lot of contributing factors, but the main 4 I think are:
    1. New features tacked on rather than really thought out, leading to spaghetti code and a complete break down in Data and UI layers
    2. Other applications going online that replaced areas of the web application, leading to not keeping the older .NET 1.1 application up to date. This is especially true of the external .NET 3.5 web application, which displays much of the same data, just differently, but doesn't follow the same business logic for adding/updating/removing data
    3. Architecture itself doesn't lend itself to a multi-threaded environment, throwing more cores at it does little to speed it up
    4. Lack of time and resources to make a proper upgrade in between requested functionality and maintenance
    There are quite a few other factors, but to sum it up, it should have been refitted at the very least in 2008, instead of tacking on additional functionality that didn't make the first or second releases. The time wasn't all a waste, however: as things wavered in speed and functionality, I would make notes of areas that I wanted to address in Version 3, kept locked away until the time was given to do it. Last August (2011), a brand new architecture and platform was approved for the largest part of the older system. The other parts would remain in the ASP.NET 1.1 web application and be moved in at a later date. Without going into too much detail, there was some disagreement over how this should be implemented, especially with the 6 other .NET projects that would need to communicate with this new platform: either a traditional class library method or a web service. This went on for a few months, with a pretty hard deadline of the end of the year. In the end, in February 2012, I took a look at what the outside consulting firm's team had done and started from nearly ground zero (I did keep their Master Page template as it wasn't too bad). Development went on until this week (with a couple of other projects getting mixed in, plus day to day maintenance), when I hit what I call a true BETA. So why the long introduction/semi-rant? In the IT industry we're lucky to get the time to do something right once, let alone maintain and upgrade the code to keep it from falling into code dystrophy. So why not, as part of the initial development, add support for the code to optimize itself based on current and future work loads? As time goes on (at least in my experience), you find new ways of doing things, whether it is parsing text, aggregating objects etc. In the end there are patterns in both your older code and your newer methodology.
Wouldn't it be neat if, as you were programming your latest and greatest code, your new programming patterns could be inherited by your older code? In addition, from experience, your first couple of weeks or months of usage with a new project won't give you a real feel for how the product is going to be used. There's the initial week or so of just getting the employees or customers to even use the product, since they have already invested the time to learn the old system. It is only after using it for a few weeks that they begin to see the enhancements that went into making their daily tool better, and that you find out how exactly they are using it. It is fascinating to me to hear "I do this, this and this to get to X" from an end user, especially internal employees, when the intention was for them to follow a different path. This is where getting User Experience experts involved is key, I think, especially for larger developments where the difference between the old and new systems is vast. So where to begin?

    Breakdown of Steps

    Something I think I am fairly good at is breaking down complex problems into much more manageable tasks. It has earned me the label at work of being able to accomplish anything thrown at me while not feeling (or showing) being overwhelmed (like when I single-handedly took over, and started from scratch on, the project mentioned above that should have had a team of 3-4 programmers). So I broke this rather large problem into 5 less large problems. If you wanted to have your code in place to monitor the current paths taken, adjust the code and then start over again, what would be necessary?

    Step 1 - Get the "profiler" data

    In order to self-refactor code you will need to gather as much data as you can, and the type of data depends on your application. If you find a lot of people never using a performance-impacting ComboBox, then maybe load it on demand; or vice versa, if it is constantly being used, load it on Page Load or figure out a better way to present the data to the end user. If you have a clean interface to your business layer, this wouldn't be too hard to accomplish: record the path to a SQL Server database, for instance, or dump it to another service. This part is pretty specific to the application scenario. If you have a largely offline WinForms application, you're possibly going to want to record to an XML file and send it along with the regular data; whereas if you have a WCF service you want to profile, you can simply record the path inside each Operation Contract. The particular case I am thinking of doing this for is an ASP.NET 4.5 web application that acts as a front end to a WCF service. I am very satisfied with the WCF service's performance at this point, although I want to do some refactoring of some of the older February/March 2012 code, written before the project expanded to encompass many more features than originally intended (the definition of scope creep if there ever was one), to make it cleaner going forward. The big thing to be conscious of is that a regular performance profiler takes a good bit of performance out of your application. While great at pinpointing otherwise hidden performance hits from a misplaced for loop or something else (especially easy with Red Gate's ANTS Profiler if you haven't used it), it does slow your application down. Finding the balance between getting enough data logged and keeping your application running as close to 100% speed as possible should be a prioritized task, I believe, in order to successfully "profile" your code.
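A minimal sketch of the kind of low-overhead recorder described above; the names here (PathProfiler, Record) are hypothetical, and a real implementation would flush the counters to SQL Server or a service asynchronously rather than dumping to the console:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;

public static class PathProfiler {
    // Per-operation stats: [0] = call count, [1] = total elapsed ticks; lock-free updates
    private static readonly ConcurrentDictionary<string, long[]> _stats =
        new ConcurrentDictionary<string, long[]>();

    public static T Record<T>(string operation, Func<T> work) {
        var sw = Stopwatch.StartNew();
        try {
            return work();
        } finally {
            sw.Stop();
            var entry = _stats.GetOrAdd(operation, _ => new long[2]);
            Interlocked.Increment(ref entry[0]);
            Interlocked.Add(ref entry[1], sw.ElapsedTicks);
        }
    }

    public static void Dump() {
        foreach (var kv in _stats)
            Console.WriteLine(kv.Key + ": " + kv.Value[0] + " calls");
    }
}

public static class Demo {
    public static void Main() {
        for (int i = 0; i < 3; i++)
            PathProfiler.Record("GetUsers", () => i * 2);
        PathProfiler.Dump(); // GetUsers: 3 calls
    }
}
```

Because it only increments two counters per call, the overhead stays far below that of a full instrumenting profiler, which is the balance the step above is after.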

    Step 2 - Create Patterns to replace existing code with

    So now that we have our profile data, we need to be able to take it and apply changes to our code without human interaction. This in particular is the most interesting aspect of this topic for me: programming the system to find trends in the usage via the profile data and then try several different programming models to address any possible performance problems. If no benefit could be found from a pattern, maybe then escalate it to the actual human programmers to look into. The advantage of this route is that the programmers can focus on new features while the system does the performance maintenance in the background, going back to my ideal environment of a concurrent development process that makes the best of the qualities of both a machine and a human programmer. How exactly would these patterns be done? Regular expressions against the existing code base and templated replacement code, perhaps? If it found a code block like this:
    private bool PersonExists(string Name) {
        for (int x = 0; x < _objects.Count(); x++) {
            if (_objects[x].Name == Name) {
                return true;
            }
        }

        return false;
    }
    It would see the code could be optimized against a templated "find a result in a List collection" pattern to become:
    private bool PersonExists(string Name) {
        return _objects.AsParallel().Where(a => a.Name == Name).Count() > 0;
    }
    As a starting point I would probably start with easier patterns like this until I got more comfortable with the pattern matching.
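One pattern worth adding to that starting set (my suggestion, not from the original idea above): for existence checks, Enumerable.Any short-circuits on the first match, so it is often a better replacement target than counting every match, even in parallel:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Person { public string Name { get; set; } }

public class People {
    private readonly List<Person> _objects = new List<Person> {
        new Person { Name = "Alice" },
        new Person { Name = "Bob" }
    };

    // Any() stops scanning as soon as one match is found
    public bool PersonExists(string name) {
        return _objects.Any(a => a.Name == name);
    }
}

public static class Demo {
    public static void Main() {
        var p = new People();
        System.Console.WriteLine(p.PersonExists("Bob"));   // True
        System.Console.WriteLine(p.PersonExists("Carol")); // False
    }
}
```

For small lists the parallel version actually pays task-scheduling overhead, so a template that maps "loop with early return true" onto Any() is likely the safer first rewrite.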

    Step 3 - Automated Unit Testing of changes prior to deployment

    Keeping with the automated process mentality: after a particular pattern is found and a new method is found to be faster, the new code should be put through unit testing prior to deployment. Even with great tools out there to assist in unit testing, from what I've seen they are little used, whether from lack of time to invest in learning a unit testing tool and/or to develop tests in conjunction with the main development. In a perfect world you would already have unit tests for the particular functions and functionality, so those unit tests would be run without changes to verify the results are the same as with the existing code.

    Step 4 - Scheduling the self-compiling and deployment

    Assuming the code got this far, the next logical step would be to recompile and deploy the new code to your server(s). You probably wouldn't want this done during a peak time, so off-hours during a weekend might be the best route, or better yet in conjunction with your system administrator's monthly or weekly maintenance window for applying service packs, hotfixes etc. In addition, checking the code into your source control (Subversion or Team Foundation Server) would probably be a good idea at this point too, to ensure the human developers get the changes from the automated code edits.

    Step 5 - Record the changes being made to the code and track the increase

    After the code has been deployed, it might be a good idea to alert the human developers that a change was made and what it was, to ensure everyone is on the same page, especially if a developer had that code checked out for a fix. In addition, like everything, I enjoy seeing before and after results. Having it profile the new code for a few days could give performance numbers like "Time spent in FunctionX now runs 4X faster."


    If you've gotten this far, I hope this makes you think about the potential for something along these lines, or something completely different. It seems like a good idea to me, but at the very least an interesting weekend project. I'll work on this as I get time; it definitely isn't a one weekend project, and I really want to get the next release of jcBench out the door :)


    Since I was 11 years old I've had a desire to make changes to the games I loved to play. The first modification I can remember making is a Doom level, which spawned into levels and modifications for the first 3 Quake games. Something I feel is severely missing in the mobile and console markets is the ability to make modifications. Part of the appeal of games to me, especially those that are mod friendly, is seeing what other creative developers, artists and designers can come up with. It's kind of an added bonus in between DLC, game add-ons and sequels from the main developer. Having had an iPhone 3G from June 2009 to November 2010 and a Windows Phone 7 device ever since, I feel this is one of the contributing factors to mobile gaming not being as engaging as it is on the PC or even the consoles. When you make your $1-$10 purchase on your phone, you are essentially buying a locked copy of the game, with the only updates being for bugs (from what I've seen). You have to wait months or longer for a sequel to come out with more levels/characters/etc.


    A week or two ago, while playing around with dynamic code compiling in C#, I had an epiphany: why not create an architecture for your games that really takes advantage of big data centers and Cloud-driven mobile gaming, thereby allowing mods to be created on the mobile scene? For me this also puts more emphasis on clear lines between the lower level device code (Isolated Storage checks, version updates, authentication etc.), the presentation layer and the game logic. Chances are you aren't going to want to give out the full source code to your latest groundbreaking game, so why not follow the id Software model from back in the Quake II days? Provide the source code to the game side and let the modders change the game into something you could never have thought of (Quake Rally comes to mind). Or if you aren't comfortable with that, provide an abstract class for the modders to implement as they see fit, until trust can be built up. [caption id="attachment_1385" align="aligncenter" width="300"] Proposed Architecture for Mobile Modding[/caption] Something this does bring up is security. Making sure no one maliciously removes other mods or affects anything outside of what is allowed is something I thought about this morning as well. When a mod that contains code gets submitted, have it run through a parser to find malicious-looking code, flag it and send a note back to the mod author or team. One pro of being on a mobile device is that no matter which platform, you can't access anything outside of your application's storage area (referred to as IsolatedStorage on Windows Phone). This was a big con when writing a "dropbox"-like application on iOS, Android and WP earlier this year, but that is a whole other post. I think this is going to be a big jump in the mobile arena now that smartphones are pretty ubiquitous, especially with the amount of CPU and GPU power in the next generation of phones. I'm definitely not done discussing/pursuing this...
    Update on 8/18/2012 8:41 AM EST - Expanded my result commentary, added data labels in the graph and made the graph image larger. An interesting question came up on Twitter this morning about how the overhead of calling Parallel.ForEach vs the more traditional foreach would impact performance. I had done some testing in .NET 4 earlier this year and found that for small collections (fewer than 10 objects) Parallel.ForEach wasn't worth the performance hit. So I wrote a new test in .NET 4.5 to answer this question for a fairly standard task: taking the results of an EF stored procedure and populating a collection of struct objects, in addition to populating a List collection for a 1-to-many relationship (to make it more real world). First thing I did was write some code to populate my 2 SQL Server 2012 tables:
    using (TempEntities eFactory = new TempEntities()) {
        Random rnd = new Random(1985);

        for (int x = 0; x < int.MaxValue; x++) {
            Car car = eFactory.Cars.CreateObject();
            car.MakerName = x.ToString() + x.ToString();
            car.MilesDriven = x * 10;
            car.ModelName = x.ToString();
            car.ModelYear = (short)rnd.Next(1950, 2012);
            car.NumDoors = (short)(x % 2 == 0 ? 2 : 4);
            eFactory.AddToCars(car);
            eFactory.SaveChanges();

            for (int y = 0; y < rnd.Next(1, 5); y++) {
                Owner owner = eFactory.Owners.CreateObject();
                owner.Age = y;
                owner.Name = y.ToString();
                owner.StartOfOwnership = DateTime.Now.AddMonths(-y);
                owner.EndOfOwnership = DateTime.Now;
                owner.CarID = car.ID;
                eFactory.AddToOwners(owner);
                eFactory.SaveChanges();
            }
        }
    }
    I ran it long enough to produce 121,501 Car rows and 190,173 Owner rows. My structs:
    public struct OWNERS {
        public string Name;
        public int Age;
        public DateTime StartOfOwnership;
        public DateTime? EndOfOwnership;
    }

    public struct CAR {
        public string MakerName;
        public string ModelName;
        public short ModelYear;
        public short NumDoors;
        public decimal MilesDriven;
        public List<OWNERS> owners;
    }
    Then my Parallel.ForEach code:
    static List<OWNERS> ownersTPL(int CarID) {
        using (TempEntities eFactory = new TempEntities()) {
            ConcurrentQueue<OWNERS> owners = new ConcurrentQueue<OWNERS>();

            Parallel.ForEach(eFactory.getOwnersFromCarSP(CarID), row => {
                owners.Enqueue(new OWNERS() {
                    Age = row.Age,
                    EndOfOwnership = row.EndOfOwnership,
                    Name = row.Name,
                    StartOfOwnership = row.StartOfOwnership
                });
            });

            return owners.ToList();
        }
    }

    static void runTPL(int numObjects) {
        using (TempEntities eFactory = new TempEntities()) {
            ConcurrentQueue<CAR> cars = new ConcurrentQueue<CAR>();

            Parallel.ForEach(eFactory.getCarsSP(numObjects), row => {
                cars.Enqueue(new CAR() {
                    MakerName = row.MakerName,
                    MilesDriven = row.MilesDriven,
                    ModelName = row.ModelName,
                    ModelYear = row.ModelYear,
                    NumDoors = row.NumDoors,
                    owners = ownersTPL(CarID: row.ID)
                });
            });
        }
    }
    My foreach code:
    static void runREG(int numObjects) {
        using (TempEntities eFactory = new TempEntities()) {
            List<CAR> cars = new List<CAR>();

            foreach (getCarsSP_Result row in eFactory.getCarsSP(numObjects)) {
                List<OWNERS> owners = new List<OWNERS>();

                foreach (getOwnersFromCarSP_Result oRow in eFactory.getOwnersFromCarSP(row.ID)) {
                    owners.Add(new OWNERS() {
                        Age = oRow.Age,
                        EndOfOwnership = oRow.EndOfOwnership,
                        Name = oRow.Name,
                        StartOfOwnership = oRow.StartOfOwnership
                    });
                }

                cars.Add(new CAR() {
                    MakerName = row.MakerName,
                    MilesDriven = row.MilesDriven,
                    ModelName = row.ModelName,
                    ModelYear = row.ModelYear,
                    NumDoors = row.NumDoors,
                    owners = owners
                });
            }
        }
    }
    Onto the results, I ran this on an AMD Phenom II X6 1090T (6x3.2ghz) CPU, 16gb of DDR3-1600 running Windows 8 RTM to give this a semi-real world feel having a decent amount of ram and 6 cores, although a better test would be on an Opteron or Xeon (testing slower, but more cores vs less, but faster cores of my desktop CPU). [caption id="attachment_1444" align="aligncenter" width="300"] Parallel.ForEach vs foreach (Y-Axis is seconds taken to process, X-Axis is the number of objects processed)[/caption] Surprisingly, for a fairly real world scenario .NET 4.5's Parallel.ForEach actually beat out the more traditional foreach loop in every test. Even more interesting is that until around 100 objects Parallel.ForEach wasn't visibly faster (the difference only being .05 seconds for 10 objects, but on a large scale/highly active data service where you're paying for CPU/time that could add up). Which does bring up an interesting point, I haven't looked into Cloud CPU Usage/hr costs, I wonder where the line between between performance of using n number of CPUs/cores in your Cloud environment and cost comes into play. Is 0.5 seconds for an average CPU usage OK in your mind to your customers? Or will you deliver the best possible experience to your customers and either ignore the costs incurred or offload the costs to them? This would be a good investigation I think and relatively simple with the ParallelOptions.MaxDegreeOfParallelism property. I didn't show it, but I also ran the same code using a regular for loop as I had read an article several years ago (probably 5 at this point) that showed foreach loop being much slower than a for loop. Surprisingly, the results were virtually identical to the foreach loop. Take that for what it is worth. 
Feel free to do your own testing for your own scenarios to see if a Parallel.ForEach loop is ever slower than a foreach loop, but I am pretty comfortable saying that it seems like the Parallel.ForEach loop has been optimized to the point where it should be safe to use it in place of a foreach loop for most scenarios.
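The ParallelOptions.MaxDegreeOfParallelism idea mentioned above can be sketched as follows. This is a minimal, self-contained illustration (the data and per-item work are hypothetical placeholders, not the benchmark's actual code):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelCapDemo {
    static void Main() {
        var items = Enumerable.Range(0, 1000).ToArray();

        // Cap the loop at 2 cores, e.g. to bound per-hour CPU cost in a cloud environment
        var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };

        Parallel.ForEach(items, options, item => {
            // Simulated per-item work
            double result = Math.Sqrt(item);
        });

        Console.WriteLine("Done");
    }
}
```

Capping the degree of parallelism is a straightforward way to experiment with the cost-versus-latency question raised above without changing the loop body.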
After reading C# Named Parameters based on .NET 4.0, I was curious whether the same performance hit would exist in .NET 4.5. I wrote a quick test against 2 sets of functions, one with 2 parameters and another with 4 parameters, the 2 different sets being there to see whether 2X as many parameters would hinder performance or not. Iterating int.MaxValue (2147483647) times, here are the results: [caption id="attachment_1434" align="aligncenter" width="300"] .NET 4.5 Named vs Unnamed Parameters[/caption] Pretty interesting results, I think. I re-ran the test 3 times to make sure there were no anomalies. So, bottom line, it looks as if the performance hit in .NET 4.5 isn't as large as it was in .NET 4.0. That being said, I'll be switching to named parameters, especially in my WCF-based architectures where parameters could change order or take on completely different meanings over time.
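For anyone unfamiliar with the feature being benchmarked, here is a minimal sketch of named versus positional arguments (CalculateCost and its parameters are hypothetical, not from the benchmark code):

```csharp
using System;

class NamedParameterDemo {
    // Hypothetical service method whose parameter order could change over time
    static decimal CalculateCost(int quantity, decimal unitPrice, decimal taxRate) {
        return quantity * unitPrice * (1 + taxRate);
    }

    static void Main() {
        // Positional call: breaks silently if two same-typed parameters swap order
        decimal a = CalculateCost(10, 2.50m, 0.06m);

        // Named call: survives reordering and documents intent at the call site
        decimal b = CalculateCost(quantity: 10, unitPrice: 2.50m, taxRate: 0.06m);

        Console.WriteLine(a == b); // both calls pass the same values
    }
}
```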
After attending the "What's New in WCF 4.5" session at Microsoft's TechEd North America 2012, I became fascinated by the opportunities of using WebSockets for my next projects both at work and for my personal projects. The major prerequisite for WebSockets is to be running on Windows 8 or Server 2012, specifically IIS 8.0. In Windows 8, go to Programs and Features (Windows Key + X to bring up the shortcut menu, click Control Panel) and then click on "Turn Windows features on or off" as shown in the screenshot below: [caption id="attachment_1416" align="aligncenter" width="300"] Turn Windows features on or off[/caption] Then expand Internet Information Services and make sure WebSocket Protocol is checked as shown in the screenshot below: [caption id="attachment_1415" align="aligncenter" width="300"] Windows 8 Windows Feature to enable WebSocket Development[/caption] After getting my system up and running, I figured I would convert something I had done previously with a WPF application a few years back: monitoring an Exchange 2010 Mailbox and then parsing the EmailMessage object. The difference for this test is that I am simply going to kick the email back to a console application. Jumping right into the code, my ITestService.cs source file:
[ServiceContract]
public interface ITestCallBackService {
    [OperationContract(IsOneWay = true)]
    Task getEmail(TestService.EMAIL_MESSAGE email);
}

[ServiceContract(CallbackContract = typeof(ITestCallBackService))]
public interface ITestService {
    [OperationContract(IsOneWay = true)]
    Task MonitorEmailBox();
}
    My TestService.svc.cs source file:
[Serializable]
public struct EMAIL_MESSAGE {
    public string Body;
    public string Subject;
    public string Sender;
    public bool HasAttachment;
}

public async Task MonitorEmailBox() {
    var callback = OperationContext.Current.GetCallbackChannel<ITestCallBackService>();

    Microsoft.Exchange.WebServices.Data.ExchangeService service = new Microsoft.Exchange.WebServices.Data.ExchangeService(Microsoft.Exchange.WebServices.Data.ExchangeVersion.Exchange2010_SP1);
    service.Credentials = new NetworkCredential(ConfigurationManager.AppSettings["ExchangeUsername"].ToString(), ConfigurationManager.AppSettings["ExchangePassword"].ToString(), ConfigurationManager.AppSettings["ExchangeDomain"].ToString());
    service.Url = new Uri(ConfigurationManager.AppSettings["ExchangeWSAddress"].ToString());

    Microsoft.Exchange.WebServices.Data.ItemView view = new Microsoft.Exchange.WebServices.Data.ItemView(100);
    Microsoft.Exchange.WebServices.Data.SearchFilter sf = new Microsoft.Exchange.WebServices.Data.SearchFilter.IsEqualTo(Microsoft.Exchange.WebServices.Data.EmailMessageSchema.IsRead, false);

    while (((IChannel)callback).State == CommunicationState.Opened) {
        Microsoft.Exchange.WebServices.Data.FindItemsResults<Microsoft.Exchange.WebServices.Data.Item> fiItems = service.FindItems(Microsoft.Exchange.WebServices.Data.WellKnownFolderName.Inbox, sf, view);

        if (fiItems.Items.Count > 0) {
            service.LoadPropertiesForItems(fiItems, new Microsoft.Exchange.WebServices.Data.PropertySet(Microsoft.Exchange.WebServices.Data.ItemSchema.HasAttachments, Microsoft.Exchange.WebServices.Data.ItemSchema.Attachments));

            foreach (Microsoft.Exchange.WebServices.Data.Item item in fiItems) {
                if (item is Microsoft.Exchange.WebServices.Data.EmailMessage) {
                    Microsoft.Exchange.WebServices.Data.EmailMessage eMessage = item as Microsoft.Exchange.WebServices.Data.EmailMessage;
                    eMessage.IsRead = true;
                    eMessage.Update(Microsoft.Exchange.WebServices.Data.ConflictResolutionMode.AlwaysOverwrite);

                    EMAIL_MESSAGE emailMessage = new EMAIL_MESSAGE();
                    emailMessage.HasAttachment = eMessage.HasAttachments;
                    emailMessage.Body = eMessage.Body.Text;
                    emailMessage.Sender = eMessage.Sender.Address;
                    emailMessage.Subject = eMessage.Subject;

                    await callback.getEmail(emailMessage);
                }
            }
        }
    }
}
    Only additions to the stock Web.config are the protocolMapping properties, but here is my full web.config:
<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
  </system.web>
  <system.serviceModel>
    <protocolMapping>
      <add scheme="http" binding="netHttpBinding"/>
      <add scheme="https" binding="netHttpsBinding"/>
    </protocolMapping>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
          <serviceDebug includeExceptionDetailInFaults="false"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
  </system.serviceModel>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
    <directoryBrowse enabled="true"/>
  </system.webServer>
</configuration>
    I then created my test Windows Console application, added a reference to the WCF Service created above, here's my program.cs:
static void Main(string[] args) {
    InstanceContext context = new InstanceContext(new CallbackHandler());

    using (WSReference.TestServiceClient tsClient = new WSReference.TestServiceClient(context)) {
        tsClient.MonitorEmailBox();
        Console.ReadLine();
    }
}

private class CallbackHandler : WSReference.ITestServiceCallback {
    public async void getEmail(WSReference.TestServiceEMAIL_MESSAGE email) {
        Console.WriteLine("From: " + email.Sender + "\nSubject: " + email.Subject + "\nBody: " + email.Body + "\nAttachments: " + (email.HasAttachment ? "Yes" : "No"));
    }
}
Nothing fancy, but I think it provides a good starting point for using WebSockets in combination with the new async keyword. For me it is like taking a WinForms, ASP.NET or WP7 event handler across the Internet or intranet. This brings me one step further towards truly moving from the idea of keeping code inside a DLL or some other project to keeping it in Azure or hosted on an IIS server somewhere for all of my projects to consume: one source of logic to maintain, bugs generally living in the WCF Service itself (at least in my experience using a WCF Service like a Web DLL over the last 8 months), and no need to remember to recompile and redeploy each project that uses your common code/framework.
I downloaded Entity Framework 5 two days ago after reading the MSDN entry Performance Considerations for Entity Framework 5. Excited by the Object Caching support, I dove right in. To get Entity Framework 5, run this command from your NuGet Console (Tools->Library Package Manager->Package Manager Console): Install-Package EntityFramework -Pre. After that, you should see the EF 5.x DbContext Generator for C# as shown below: [caption id="attachment_1406" align="aligncenter" width="300"] EF 5.x Visual Studio 2012 Item Type[/caption] Type in the name of your DbContext and it will automatically create your new DbContext-driven EF5 classes from a database or Entity Model. A small sample:
using (var db = new Models.TempContext()) {
    db.Configuration.AutoDetectChangesEnabled = false;
    return db.Users.Find(ID);
}
The syntax is very similar to EF4, but be careful about using Object Caching without reading further on the consequences of leaving the AutoDetectChangesEnabled property on. The above example uses the new Find() function, which returns the User object with a primary key value of ID. As stated in the MSDN article, using this for random queries will often hurt performance. Which brings me to my findings. Continuing my pursuit of performance from my LINQ vs PLINQ vs Stored Procedure Row Count Results blog post, I wanted to check how EF5 compared to EF4. In the planning phases for my next Data Layer architecture, I want to be fully versed in all of the latest techniques. So without further ado: this test involved returning the same User Entity Object I used in my previous post, but this time grabbing 2500 of them in various scenarios. A screenshot of the output: [caption id="attachment_1407" align="aligncenter" width="300"] EF4 vs EF5 vs SP Initial Findings[/caption] It is interesting to see EF4 LINQ actually besting EF5 LINQ. Not sure if it is a maturity problem or some setting I am unaware of (if so, please let me know).
Something that drives me crazy with software companies is that, from what I've seen, they do not properly handle cross-platform development. While at Microsoft's TechEd North America Conference back in June this year, I spoke with several developers who were basically starting from scratch for each platform, or doing some convoluted thing with putting some code in a DLL and then referencing it for each platform. Done right, you can create iOS (iPad/iPhone), Android and Windows Phone 7.x applications with the only platform-specific code being taking the data and binding it to each platform's UI.

    My approach is to leave the data and business logic where it should be (in your data center/cloud) and leave the presentation and interaction to the device (iPhone, iPad, Droid, Windows Phone etc). To me, every developer should be applying this ideal. Platforms are coming and going so fast, wouldn't it suck if you just spent all of this time programming in a platform specific language (like Objective-C on iOS) only to find out from your CEO that he or she promised a port of your application or game to Platform XYZ in a month.

A true 3-tier architecture, just as in any other software development, should be embraced even if there is added startup time to get that first pixel displayed on your client device.

    My Spring 2012 developed Mobile architecture consists of:
    -SQL Server 2008 R2 Database
    -SQL Stored Procedures for nearly all I/O to the WCF Service
    -Serialized Struct Containers for database/business objects
    -Task Parallel Library usage for all conversions to and from the Serialized Structs to the SQL Stored Procedures
    -Operation Contracts for all I/O (ie Authentication, Dropdown Box objects etc)

    For example, assume you had a Help Ticket System in your Mobile Application. A Help Ticket typically has several 1 to many relationship Tables associated with it. For instance you could have a history with comments, status changes, multiple files attached etc. Pulling all of this information across a web service in multiple calls is costly especially with the latency involved with 3G and 4G connections. It is much more efficient to do one bigger call, thus doing something like this is the best route I found:

[Serializable]
public struct HT_BASE_ITEM {
    public string Description;
    public string BodyContent;
    public int CreatedByUserID;
    public int TicketID;
    public List<HT_COMMENT_ITEM> Comments;
    public RETURN_STATUS returnStatus;
}

public HT_BASE_ITEM getHelpTicket(int HelpTicketID) {
    using (SomeModel eFactory = new SomeModel()) {
        HT_BASE_ITEM htBaseItem = new HT_BASE_ITEM();
        getHelpTicketSP_Result dbResult = eFactory.getHelpTicketSP(HelpTicketID).FirstOrDefault();

        if (dbResult == null) {
            htBaseItem.returnStatus = RETURN_STATUS.NullResult;
            return htBaseItem;
        }

        htBaseItem.Description = dbResult.Description;
        // Setting the rest of the Struct's properties here
        return htBaseItem;
    }
}

public RETURN_STATUS addHelpTicket(HT_BASE_ITEM newHelpTicket) {
    using (SomeModel eFactory = new SomeModel()) {
        HelpTicket helpTicket = eFactory.HelpTicket.CreateObject();
        helpTicket.Description = newHelpTicket.Description;
        // Setting the rest of the HelpTicket Table's columns here
        eFactory.HelpTicket.AddObject(helpTicket);
        eFactory.SaveChanges();
        // Error handling here, otherwise return Success back to the client
        return RETURN_STATUS.SUCCESS;
    }
}
As you can see, the input and output are very clean. If more functionality is desired, e.g. a new field to capture: update the Struct and the input/output functions in the WCF Service, update the WCF reference in your device(s), and add the field to the UI on each device. Very quick and easy in my opinion.

I've gotten in the habit of adding an Enum property to my returned objects to handle the possibility of returning null or hitting some other problem during the data get or set operations in my WCF Services. It makes tracking down bugs a lot easier; when an error occurs I simply record it to a SQL Table and expose a front end inside the main ASP.NET Web Application showing the logged-in user, the version of the client app and the version of the WCF service (captured via AssemblyInfo). End users aren't the most reliable at reporting issues, so being proactive, especially in the mobile realm, is key from what I've found.
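A minimal sketch of that status-enum habit, assuming a hypothetical generic response wrapper (the wrapper and the GetTicketTitle method are illustrative; only RETURN_STATUS.SUCCESS and NullResult appear in the post's own code):

```csharp
using System;

public enum RETURN_STATUS {
    SUCCESS,
    NullResult
}

// Hypothetical response wrapper: every service call reports a status the client can branch on
public struct ServiceResponse<T> {
    public RETURN_STATUS Status;
    public T Result;
}

class StatusDemo {
    static ServiceResponse<string> GetTicketTitle(bool found) {
        if (!found) {
            return new ServiceResponse<string> { Status = RETURN_STATUS.NullResult };
        }
        return new ServiceResponse<string> { Status = RETURN_STATUS.SUCCESS, Result = "Printer on fire" };
    }

    static void Main() {
        var response = GetTicketTitle(false);
        if (response.Status != RETURN_STATUS.SUCCESS) {
            // This is where the error would be logged to SQL with user/client version info
            Console.WriteLine("Error status: " + response.Status);
        }
    }
}
```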

    This approach does take some additional time upfront, but I was able to create a 100% feature to feature port to Windows Phone from iOS in 3 days for a fairly complex application because I only had to worry about creating a good looking UI in XAML and hooking up my new UI to those previously existing Operation Contracts in my WCF Service.

    This architecture probably won't work for everyone, but I took it a step further for a recent enterprise application where there were several ASP.NET, WinForm and Classic SOAP Services on various .NET versions. An easy solution would have been to simply create a common DLL with the commonly used functionality and reference that in the other platforms. The problem with that being, a fix would need to be deployed to all of the clients and I haven't yet tried a .NET 4.5 Class Library in a 1.1 ASP.NET solution though I can't imagine that would work too well if at all. Creating all of the functionality in a WCF Service and having the clients consume this service has been a breeze. One fix in the WCF Service generally fixes all of the platforms, which is great. Those looking to centralize business logic and database access should really look to this approach.

A pretty common task I run across is counting the number of occurrences of a specific string, primary key ID, etc. A few examples: checking a valid username/password combination, or checking for an existing value in the database to prevent duplicate/redundant data. If there were no joins involved I would typically just do something like the following:
public bool DoesExist(string someValue) {
    using (SomeEntity eFactory = new SomeEntity()) {
        return eFactory.SomeTable.Where(a => a.Value == someValue).Count() > 0;
    }
}
Or use the parallel PLINQ version if there were a considerable number of rows, since for smaller tables the overhead involved in PLINQ would negate any performance advantage:
public bool DoesExist(string someValue) {
    using (SomeEntity eFactory = new SomeEntity()) {
        return eFactory.SomeTable.AsParallel().Where(a => a.Value == someValue).Count() > 0;
    }
}
    However if there were multiple tables involved I would create a Stored Procedure and return the Count in a Complex Type like so:
public bool DoesExist(string someValue) {
    using (SomeEntity eFactory = new SomeEntity()) {
        return eFactory.SomeTableSP(someValue).FirstOrDefault().Value > 0;
    }
}
Intrigued by what the real performance impact was across the board, and to figure out what made sense depending on the situation, I created a common scenario, a Users Table like so: [caption id="attachment_1377" align="aligncenter" width="267"] Users SQL Server Table Schema[/caption] I populated this table with random data, from 100 to 4000 rows, and ran the above coding scenarios against it, averaging 3 separate runs to rule out any fluke scores. In addition I tested looking for the same value 3X and a random value 3X to see if the row's position in the table would affect performance (whether it was near the end of the table or closer to the beginning). I should note this was tested on my HP DV7 laptop with an A10-4600M (4x2.3GHz) CPU running Windows 8 x64, 16GB of RAM and a SanDisk Extreme 240GB SSD. [caption id="attachment_1378" align="aligncenter" width="300"] LINQ vs PLINQ vs Stored Procedure Count Performance Graph[/caption] The most interesting aspect for me was the consistent performance of the Stored Procedure across the board, no matter how many rows there were. I imagine the results would be the same for 10,000, 20,000 etc.; I'll have to do those tests later. In addition, I imagine that as soon as table joins come into the picture the difference between a Stored Procedure and a LINQ query would be even greater. So, bottom line: use a Stored Procedure for counts. The extra time to create a Stored Procedure and import it into Visual Studio (especially in Visual Studio 2012, where it automatically creates the Complex Type for you) is well worth it.
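One related note: when only existence matters rather than the actual count, LINQ's Any() lets the query short-circuit at the first match instead of counting every row. A small self-contained LINQ-to-Objects sketch of the difference (not part of the original benchmark):

```csharp
using System;
using System.Linq;

class AnyVsCountDemo {
    static void Main() {
        var values = new[] { "alpha", "beta", "gamma" };

        // Count() > 0 counts every matching element; Any() stops at the first match
        bool existsViaCount = values.Count(v => v == "beta") > 0;
        bool existsViaAny = values.Any(v => v == "beta");

        Console.WriteLine(existsViaCount == existsViaAny);
    }
}
```

Against Entity Framework, Any() typically translates to an SQL EXISTS rather than a COUNT, which may narrow the gap with the Stored Procedure for simple existence checks; the benchmark above only measured the Count() form.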
Been banging my head against a recent Azure WCF Service I've been working on to connect to my new Windows Phone 7 project. To my surprise, it worked flawlessly, or so it seemed. When I went to use the WCF Service, I noticed the proxy hadn't been generated. Sure enough, the Reference.cs was empty:
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.17626
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------
    Vaguely remembering this was fixed with a simple checkbox, I went to the Configure Service section and changed the options circled in red: [caption id="attachment_1370" align="aligncenter" width="300"] WCF Configure Reference[/caption] Note: You only need to uncheck the Reuse types in referenced assemblies, I just prefer generic List objects versus an Observable Collection.
Something I never understood is why the ability to override the GotFocus colors with an easy-to-use property, like FocusForeground or FocusBackground, wasn't part of the base TextBox control. You could do something like this:
private void txtBxPlayerName_GotFocus(object sender, RoutedEventArgs e) {
    Foreground = new SolidColorBrush(Colors.Black);
}
By default, the Focus event of a Windows Phone TextBox control sets the Background to White, so if you had white Text previously, upon entering Text your Text would be invisible. The workaround above isn't pretty, nor does it handle the LostFocus event (i.e. reverting back to your normal colors after entering Text). If you have 2, 3, 4 or more TextBox controls, this could get tedious and clutter up your code. My "fix": extending the TextBox control:
using System;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

namespace WPGame {
    public class jcTextBox : TextBox {
        #region New Background and Foreground Properties
        public static readonly DependencyProperty FocusForegroundProperty =
            DependencyProperty.Register("FocusForeground", typeof(Brush), typeof(jcTextBox), null);

        public Brush FocusForeground {
            get { return base.GetValue(FocusForegroundProperty) as Brush; }
            set { base.SetValue(FocusForegroundProperty, value); }
        }

        public static readonly DependencyProperty FocusBackgroundProperty =
            DependencyProperty.Register("FocusBackground", typeof(Brush), typeof(jcTextBox), null);

        public Brush FocusBackground {
            get { return base.GetValue(FocusBackgroundProperty) as Brush; }
            set { base.SetValue(FocusBackgroundProperty, value); }
        }

        public static readonly DependencyProperty BaseForegroundProperty =
            DependencyProperty.Register("BaseForeground", typeof(Brush), typeof(jcTextBox), null);

        public Brush BaseForeground {
            get { return base.GetValue(BaseForegroundProperty) as Brush; }
            set { base.SetValue(BaseForegroundProperty, value); }
        }

        public static readonly DependencyProperty BaseBackgroundProperty =
            DependencyProperty.Register("BaseBackground", typeof(Brush), typeof(jcTextBox), null);

        public Brush BaseBackground {
            get { return base.GetValue(BaseBackgroundProperty) as Brush; }
            set { base.SetValue(BaseBackgroundProperty, value); }
        }
        #endregion

        public jcTextBox() {
            BaseForeground = Foreground;
            BaseBackground = Background;
        }

        #region Focus Event Overrides
        protected override void OnGotFocus(RoutedEventArgs e) {
            Foreground = FocusForeground;
            Background = FocusBackground;
            base.OnGotFocus(e);
        }

        protected override void OnLostFocus(RoutedEventArgs e) {
            Foreground = BaseForeground;
            Background = BaseBackground;
            base.OnLostFocus(e);
        }
        #endregion
    }
}
    To use it, add it to your project and then you can just drag and drop the control from the Toolbox and use it like so:
<my:jcTextBox x:Name="txtBxPlayerName" Background="#1e1e1e" Foreground="White" BaseBackground="#1e1e1e" BaseForeground="White" FocusForeground="Black" FocusBackground="#2e2e2e" AcceptsReturn="False" />
    The output: [caption id="attachment_1354" align="aligncenter" width="180"] Unfocused jcTextBox Controls[/caption] [caption id="attachment_1355" align="aligncenter" width="180"] Visible Text with jcTextBox on Focus[/caption] [caption id="attachment_1356" align="aligncenter" width="180"] LostFocus jcTextBox Text with original colors[/caption] So there you have it, a "fix" for a common problem (at least in my experience). Feel free to use, rewrite, whatever with the jcTextBox class.
Continuing my work on making the new version of jcBench as close to perfect as I can, I wanted to handle the lack of an internet connection when pulling the top results down. I had done this in a .NET 4 WinForms app by simply creating a Socket and then attempting to hit my company's server to connect to the SOAP Web Service. With Metro Style applications you don't get access to the System.Net.Sockets namespace, thus the reason for this post, in hopes of getting someone else on the right track. For this example I didn't want to create a full response class object; figuring some people might be satisfied with a simple bool return type (true meaning the server can be reached, false meaning it cannot), I met halfway with a simple enumeration:
enum SOCKET_RESPONSES {
    No_Internet_Connection,
    No_Response_From_Server,
    Successful_Connection
}
In jcBench, all responses are returned via a class object of type jcBenchResponse. These responses get kicked into a queue like any other action or event in the program. A bit more setup time is involved in doing it this way, but it allows much greater flexibility as your applications get larger. If anyone is curious, I got the idea from spending more time than I'd like to admit digging through the Quake II and Quake III source code, seeing how certain things were designed on such a large scale. A book digging through the Quake III architecture would be an amazing read. All of the game architecture books I've read use a simple example, with a couple of classes inheriting from others, but nothing showing the bigger picture or even comparing architectures and their pros and cons. In the meantime, the tried and true trial-and-error method with different application architectures for each project seems to be advancing my knowledge base at a reasonable rate.


Here's the actual asynchronous function using the .NET 4.5 keywords async and await. I kept the full namespace references to avoid the head-scratching question of which namespaces the new Metro types are contained in.
private async System.Threading.Tasks.Task<SOCKET_RESPONSES> checkServer(string hostAddress, int portNumber) {
    // If no internet connection is found, no reason to continue
    if (Windows.Networking.Connectivity.NetworkInformation.GetInternetConnectionProfile().GetNetworkConnectivityLevel() != NetworkConnectivityLevel.InternetAccess) {
        return SOCKET_RESPONSES.No_Internet_Connection;
    }

    // Open StreamSocket
    Windows.Networking.Sockets.StreamSocket streamSocket = new Windows.Networking.Sockets.StreamSocket();

    try {
        await streamSocket.ConnectAsync(new Windows.Networking.HostName(hostAddress), portNumber.ToString(), Windows.Networking.Sockets.SocketProtectionLevel.PlainSocket);

        if (streamSocket != null) {
            streamSocket.Dispose();
            return SOCKET_RESPONSES.Successful_Connection;
        }

        return SOCKET_RESPONSES.No_Response_From_Server;
    }
    catch (Exception) {
        // Make sure the socket is disposed of
        if (streamSocket != null) {
            streamSocket.Dispose();
        }

        // Assume the socket timed out, thus the exception
        return SOCKET_RESPONSES.No_Response_From_Server;
    }
}

    Network Adapter Type Info

    I should note, you can get the exact active Internet Network Device Type (Internet Assigned Names Authority Type ID) from this line:
Windows.Networking.Connectivity.NetworkInformation.GetInternetConnectionProfile().NetworkAdapter.IanaInterfaceType
    So you could do something like this:
enum DEVICE_TYPE {
    OTHER = 1,
    ETHERNET = 6,
    WIRELESS = 71,
    FIREWIRE = 144
}

// Pre-Condition: There is an active Internet Connection Profile
private DEVICE_TYPE getNetworkType() {
    return (DEVICE_TYPE)Windows.Networking.Connectivity.NetworkInformation.GetInternetConnectionProfile().NetworkAdapter.IanaInterfaceType;
}
    The full list of IDs and their associated types is here on MSDN.


    To use the function you could do something like this:
var response = checkServer("", 80);
await response;

if (response.Result != SOCKET_RESPONSES.Successful_Connection) {
    // Throw some error to the user or simply hide the element that depends on an internet connection
}
    So there you have it, a C# .NET 4.5 Metro way to check a server & port combination that runs asynchronously. Feel free to post feedback as I am sure this is far from the best approach, but it does work.
Luckily, there was an HP Pavilion dm1z E-350 (2x1.6GHz) at work that a co-worker was able to run jcBench on, so if you were curious about Ontario vs. Zacate vs. Champlain vs. Trinity pure CPU performance, now is your chance to see. Integer Performance: [caption id="attachment_1285" align="aligncenter" width="300"] AMD Ontario vs Zacate vs Trinity vs Champlain CPU Integer Performance[/caption] Floating Point Performance: [caption id="attachment_1286" align="aligncenter" width="300"] AMD Ontario vs Zacate vs Trinity vs Champlain CPU Floating Point Performance[/caption] As expected, the E-350 came in between the 2010 Champlain CPU, but what was more interesting was the effect of the lack of a dedicated FPU for each integer core in the Trinity CPU in the floating point benchmark. While the A10 was still faster than the P920, the gap wasn't as dramatic as the integer performance difference. It should be interesting to see how AMD tries to combat the floating point deficit it most likely faces against Intel's CPUs. If I had to guess, I'd say more emphasis on using the integrated Radeon 7xxx series is key to the success of Trinity. Regardless of the platform, using OpenCL or C++ AMP is a better route to go nowadays than strictly programming for a CPU as traditional multi-threaded development would go. I will definitely revisit this in the coming weeks when the new version of jcBench that supports TPL, C++ AMP and OpenCL is released.
Not that I give much credence to the Windows Experience Index, but here is my score with the addition of the 16GB of Corsair DDR3-1600 and SanDisk Extreme 240GB SSD: [caption id="attachment_1273" align="aligncenter" width="300"] Windows Experience Index for my HP DV7-7010US[/caption] What is interesting in those numbers is that my older Phenom II P920 (4x1.6GHz) was rated just under that in the CPU results. After using the laptop for a week now, it definitely feels and responds orders of magnitude faster than my older Dell. So I turned to jcBench for a much clearer picture (I included my C-50 netbook for comparison). Integer results (note lower is better; the legend indicates the number of threads tested): [caption id="attachment_1274" align="aligncenter" width="300"] AMD C-50 vs AMD Phenom II P920 vs AMD A10-4600M in Integer Performance[/caption] Looking at these numbers, the results make a lot more sense. Neither the Phenom II nor the C-50 have a "Turbo" mode in which the unused cores get tuned down and the used cores get ramped up. For instance, the A10 in my DV7 will ramp up to 3.2GHz from 2.3GHz. Thus the interesting result of 2 threads on the Phenom II versus 1 thread on the A10: nearly equal time (2x1.6GHz vs 1x3.2GHz effectively). I will run Floating Point on my P920 tonight and update the post, but I expect it will be fairly similar to the integer results. However, the shared FPU on the Trinity CPUs should make it more interesting, since the Phenom II had 1 FPU per core.
I want to love Solid State Drives, but they continually have problems that I've never had with normal mechanical drives. At first I thought that since I started back in 2009 with Solid State Drives (SSDs), it was just an early-adopter problem; fast forward to 2012 and I am still running into the same problems. So I figured I'd document the drives I've had and any issues over the years:
    1. 2 Imation M-Class 64GB SATA II MLC - December 29th, 2009 - 0 Issues for a while other than the subpar performance over time, last year I started to notice issues
    2. 2 OCZ Agility 2 60GB SATA II MLC - September 25th, 2010 - Started to disappear from my Desktop's SATA II Controller randomly, sold off
3. 2 Crucial M225 64GB SATA II - September 2010 - 0 Issues at first; 6-7 months later random disappearing and losing information started to occur on both
    4. OCZ Vertex 30GB SATA II MLC - May 29th, 2011 - 0 Issues, used for caching in my ZFS Pool before selling it off
    5. OCZ Vertex 2 60GB SATA II - June 26th, 2011 - 0 Issues, used in my Netbook since purchase
    6. 2 Corsair Force 3 90GB SATA III - October 2011 - 0 Issues, used in RAID 0 24/7 since installing them
    7. OCZ Vertex 3 120GB SATA III - May 2012 - Installed Windows 8, rebooted started to disappear randomly
    8. SanDisk Extreme 240GB SATA III - June 2012 - 0 Issues thus far
    So to put in a nice scoreboard (ordered by % good):
    1. 2 of 2 - Corsair
    2. 1 of 1 - SanDisk
    3. 1 of 2 - Imation
    4. 2 of 5 - OCZ
    5. 0 of 2 - Crucial
    Long story short, I will not be buying another OCZ SSD drive no matter the price, especially after putting in a Help Ticket stating the exact problem only to be responded with:
    Are you receiving any error messages? If so, what error messages are you receiving? Is the drive detected by the computer's BIOS/EFI at all? Does it show up in disk management or disk utility? Are you able to create or delete partitions on the drive? Can you access the drive as a secondary, non-booting, non-raid drive? Can you copy files to or from the drive after creating a new partition on the drive using disk management or disk utility and then full formatting the partition? Normally it is not advisable to full format an SSD, but as a test its a very reliable way to test if the drive is working properly. Can the drive be accessed when attached through a different SATA cable to a different SATA port? There are status indicator lights on the drive. These status indicator lights will provide more information about what the drive is doing. What color lights are lit up when the problem occurs?
    A bit more digging around, I found a forum post on Guru3D (not sure why I didn't remember to check there first), and there are Catalyst drivers dated June 12th posted there for Windows 7 and 8 that work on the 7660G and probably the other Trinity based APUs. For those interested go here. Using the latest build of jcBENCH, it pulled it right up: [caption id="attachment_1263" align="aligncenter" width="300" caption="jcBench showing 7660G (Devastator) as an OpenCL device"][/caption] Gotta love AMD's codenames: Devastator. Now onto benchmarking this thing...
    I started playing around with the compiler statistics of the KernelAnalyzer (mentioned in this morning's post) some more. Without having done more research, it looks as though using the built-in pow function versus simply multiplying the number by itself is much, much slower. Take this line for instance:
    thirdSide[index] = sqrt((double)(x * x) + (double)(y * y));
    The KernelAnalyzer estimates the throughput to be about 10 Million Threads\Sec. Adjusting the line to this:
    thirdSide[index] = sqrt(pow((double)x, 2) + pow((double)y, 2));
    The estimate drops to 2 Million Threads\Sec. Is the math library function call overhead really enough to lose 5X the performance? Adjusting the line to use pow for only one of the two terms brings it back up to just 4 Million Threads\Sec. So perhaps there is an initial performance hit when using a library function; maybe the library needs to be loaded for every thread? Or is the library shared across threads and thus causing a locking condition during the compute? I think this requires more investigation, wish I was going to the AMD Developer Conference next week...
    I've been working on an OpenCL WPF version of jcBench for the last 2 or 3 weeks in between life, thus the lack of posting. However this morning at a nice time of 6:30 AM, I found a new tool to assist in OpenCL development, the AMD APP Kernel Analyzer. In my code for testing I've just been setting my OpenCL kernel program to a string and then sending it to the ComputeProgram function like so:
    string clProgramSource = @"__kernel void computePythag(global long * num, global double * thirdSide) {
         size_t index = get_global_id(0);
         for (long x = 2; x < num[index] + 2; x++) {
              for (long y = x; y < num[index] + 2; y++) {
                   double length = sqrt((double)(x * x) + (double)(y * y));
                   length *= length;
                   thirdSide[index] = length;
              }
         }
         return;
    }";
    ComputeProgram program = new ComputeProgram(_cContext, clProgramSource);
    program.Build(null, null, null, IntPtr.Zero);
    This is problematic as any error isn't captured until runtime in the best case, or you just get a BSOD (as I did last night). This is where AMD's KernelAnalyzer comes into play. [caption id="attachment_1224" align="aligncenter" width="300" caption="AMD APP Kernel Analyzer"][/caption] Note the syntax highlighting and, if there are any errors during compilation, the Compiler Output window, just like in any other development tool. Another feature I only just realized is really useful: the ability to target different generations/platforms of AMD GPUs. I knew the R7XX (4000 series Radeon HD) only had OpenCL 1.0 support, but I didn't realize (naively) that the same program wouldn't compile cleanly across the board. Luckily I still have 2 R7XX series GPUs in use (one in my laptop and another in my secondary desktop), but interesting nonetheless. Definitely more to come on the OpenCL front tonight...
    Got some more results added to the comparison: [caption id="attachment_1218" align="aligncenter" width="300" caption="jcBench Expanded Integer Results - 5/15/2012"][/caption] [caption id="attachment_1219" align="aligncenter" width="300" caption="jcBench Expanded Floating Point Results - 5/15/2012"][/caption] Tonight I am hoping to add a quick comparison page so anyone could ask, say, how does a Dual 600mhz Octane perform against a 600mhz O2?
    About a year ago I started work on a new 3D game engine. While it was originally just a foray into WPF, it turned into using OpenGL and C# via OpenTK. Eventually this turned into a dead end as my ambitions were too high. A year later (really more like 15 years later), I have finally come up with a more reasonable, iterative game plan: follow id Software's technology jumps, starting at Wolfenstein 3D and hopefully one day hitting Quake III levels of tech. So the initial features will be:
    1. 90° Walls, Floors and Ceilings
    2. Clipping
    3. Texture Mapping with MIP-Mapping Support
    4. OpenGL Rendering with SDL for Window Management
    5. IRIX and Windows Ports
    I started work on the base level editor tonight: [caption id="attachment_1213" align="aligncenter" width="300" caption="jcGEDitor WIP"][/caption]
    Below are some interesting results showing the floating point performance differences between MIPS and AMD cpus. [caption id="attachment_1105" align="aligncenter" width="300" caption="jcBench Floating Point Performance"][/caption] The biggest thing to note is the effect Level 2 cache has on floating point performance. The 4mb Level 2 cache in the R16000 clearly helps to compensate for the massive difference in clock speed: nearly a 1 to 1 relationship between the 6x3.2ghz Phenom II and the 4x800mhz MIPS R16k. So bottom line, Level 2 cache makes up for megahertz almost by a factor of 4 in these cases. It's a shame the fastest MIPS R16000 only ran at 1ghz and is extremely rare. More benchmarking later this week...
    I was able to add several more machines to the comparison with the help of a friend over at Nekochan. [caption id="attachment_1102" align="aligncenter" width="300" caption="jcBench Integer Comparison 2"][/caption] Very interesting how MIPS scales and how much of an effect 100mhz and double the Level 2 cache have on speed.
    Just finished getting the features for jcBench 0.2 completed. The big addition is separate tests for integer and floating point performance. The reason for this addition is that I heard years ago that the size of Level 2 cache directly affected the performance of floating point operations. You would always hear of the RISC cpus having several megabytes of cache, while my first 1ghz Athlon (Thunderbird) in December 2000 only had 256kb. As I get older, I get more and more skeptical of things I hear now or had heard in the past, thus the need to prove one or the other to myself. I'm still working on re-running the floating point tests so those will come later today, but here are the integer performance results. Note the y-axis is the number of seconds taken to complete the test, so lower is better. [caption id="attachment_1097" align="aligncenter" width="300" caption="jcBench 0.2 integer comparison"][/caption] Kind of a wide range of CPUs, ranging from a netbook cpu in the C-50, to a mobile cpu in the P920, to desktop cpus. Based on my current findings, the differences vary much more greatly with floating point operations. A few key things I got from this data:
    1. Single threaded performance across the board was ridiculously slow, even with AMD's Turbo Core technology that ramps up a core or two and slows down the unused cores. Another unsettling fact for developers who continue not to write parallel programs.
    2. The biggest jump was from 1 thread to 2 threads across the board
    3. The MIPS R14000A 600mhz CPU is slightly faster than a C-50 in both single and 2 threaded tests. Finally found a very nearly equal comparison; I'm wondering whether Turbo Core on the C-60 brings it in line.
    4. numalink really does scale; even over the no-longer-"fast" numalink 3 connection, scaling across 2 Origin 300s using all 8 cpus really did increase performance (44 seconds down to 12 seconds).
    More to come later today with floating point results...
    Just got the initial C port of jcBench completed. Right now there are IRIX 6.5 MIPS IV and Win32 x86 binaries working; I'm hoping to add additional functionality and then merge the changes I made back into the original 4 platforms. I should note the performance numbers between the two will not be comparable: I rewrote the actual benchmarking algorithm to be solely integer based. That's not to say I won't add a floating point test, but this made sense after porting the C# code to C. That being said, after finding out a while back how the Task Parallel Library (TPL) really works, my implementation of multi-threading using POSIX does things a little differently.

    Where the TPL starts off with one thread and dynamically increases the thread count as processing continues, my implementation simply takes the number of threads specified via the command line, divides the work (in my case the number of objects) by the number of threads, and kicks off all the threads from the start. While the TPL's implementation is great for work where you don't know if it will even hit the maximum number of cpus/cores efficiently, in my case it actually hinders performance. I'm now wondering if you can specify from the start how many threads to kick off? If not, Microsoft, maybe add support for that? I've got a couple scenarios I know would benefit from at least 4-8 threads initially, especially for data migration that I prefer to do in C# versus SSIS (call me a control freak).

    Back to jcBench: at least with the current algorithm, it appears that a MIPS 600mhz R14000A with 4MB of L2 cache is roughly equivalent to a 1200mhz Phenom II with 512kb L2 cache and 6mb of L3 cache, at least in integer performance. This is based on a couple runs of the new version of jcBench. It'll be interesting to see with numalink if this 1 to 2 ratio continues. I'm hoping to see how different generations of AMD cpus compare to the now 10 year old MIPS cpu.
    After waiting about a month and a half, I finally got a numalink cable so I could numalink together two Silicon Graphics Origin 300s. The idea behind numalink is that you can take multiple machines and link them together in a cluster. In my case, I now have an 8 way R14000A 600mhz Origin 300 with 8GB of ram. Like many things I'm finding with Silicon Graphics machines, it was pretty easy to set up. I tried to document it all below. First off I needed to update the "rack" position of my Slave (2nd) Origin 300: [bash] 001c01-L1>brick slot 02 brick slot set to 02 (takes effect on next L1 reboot/power cycle) 001c01-L1>reboot_l1 SGI SN1 L1 Controller Firmware Image B: Rev. 1.44.0, Built 07/17/2006 18:20:38 001c02-L1> [/bash] Next I needed to clear the serial based on the error I got: [bash] 001c02-L1> Not able to determine correct System Serial Number 001c02 == M2002931 Please use the command 'serial clear' on the brick which has the serial number you do not wish to keep Not able to determine correct System Serial Number 001c16 == M2100250 Please use the command 'serial clear' on the brick which has the serial number you do not wish to keep 001c02-L1>serial clear [/bash] Unplugged the power to both and then hooked them up with the numalink cable: [caption id="attachment_1089" align="aligncenter" width="225" caption="Silicon Graphics Origin 300s connected via numalink"][/caption] Then plugged both power cables back in and hit the power button on both. To my surprise, upon starting up IRIX everything worked.
[bash] [SPEEDO2]:~ $ hinv -vm Location: /hw/module/001c01/node IP45_4CPU Board: barcode MNS886 part 030-1797-001 rev -B Location: /hw/module/001c01/Ibrick/xtalk/14 IO8 Board: barcode MJX813 part 030-1673-003 rev -F Location: /hw/module/001c01/Ibrick/xtalk/15 IO8 Board: barcode MJX813 part 030-1673-003 rev -F Location: /hw/module/001c02/node IP45_4CPU Board: barcode MNM964 part 030-1797-001 rev -B Location: /hw/module/001c02/Ibrick/xtalk/14 IO8 Board: barcode MHE546 part 030-1673-003 rev -E Location: /hw/module/001c02/Ibrick/xtalk/15 IO8 Board: barcode MHE546 part 030-1673-003 rev -E 8 600 MHZ IP35 Processors CPU: MIPS R14000 Processor Chip Revision: 2.4 FPU: MIPS R14010 Floating Point Chip Revision: 2.4 CPU 0 at Module 001c01/Slot 0/Slice A: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. Scache: Size 4 MB Speed 300 Mhz Tap 0x1a CPU 1 at Module 001c01/Slot 0/Slice B: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. Scache: Size 4 MB Speed 300 Mhz Tap 0x1a CPU 2 at Module 001c01/Slot 0/Slice C: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. Scache: Size 4 MB Speed 300 Mhz Tap 0x1a CPU 3 at Module 001c01/Slot 0/Slice D: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. Scache: Size 4 MB Speed 300 Mhz Tap 0x1a CPU 4 at Module 001c02/Slot 0/Slice A: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. Scache: Size 4 MB Speed 300 Mhz Tap 0xa CPU 5 at Module 001c02/Slot 0/Slice B: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. Scache: Size 4 MB Speed 300 Mhz Tap 0xa CPU 6 at Module 001c02/Slot 0/Slice C: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. Scache: Size 4 MB Speed 300 Mhz Tap 0xa CPU 7 at Module 001c02/Slot 0/Slice D: 600 Mhz MIPS R14000 Processor Chip (enabled) Processor revision: 2.4. 
Scache: Size 4 MB Speed 300 Mhz Tap 0xa Main memory size: 8192 Mbytes Instruction cache size: 32 Kbytes Data cache size: 32 Kbytes Secondary unified instruction/data cache size: 4 Mbytes Memory at Module 001c01/Slot 0: 4096 MB (enabled) Bank 0 contains 1024 MB (Premium) DIMMS (enabled) Bank 1 contains 1024 MB (Premium) DIMMS (enabled) Bank 2 contains 1024 MB (Premium) DIMMS (enabled) Bank 3 contains 1024 MB (Premium) DIMMS (enabled) Memory at Module 001c02/Slot 0: 4096 MB (enabled) Bank 0 contains 1024 MB (Premium) DIMMS (enabled) Bank 1 contains 1024 MB (Premium) DIMMS (enabled) Bank 2 contains 1024 MB (Premium) DIMMS (enabled) Bank 3 contains 1024 MB (Premium) DIMMS (enabled) Integral SCSI controller 8: Version Fibre Channel LS949X Port 0 Integral SCSI controller 9: Version Fibre Channel LS949X Port 1 Integral SCSI controller 10: Version QL12160, low voltage differential Integral SCSI controller 11: Version QL12160, low voltage differential Integral SCSI controller 0: Version QL12160, low voltage differential Disk drive: unit 1 on SCSI controller 0 (unit 1) Integral SCSI controller 1: Version QL12160, low voltage differential Integral SCSI controller 12: Version QL12160, low voltage differential Integral SCSI controller 13: Version QL12160, low voltage differential IOC3/IOC4 serial port: tty5 IOC3/IOC4 serial port: tty6 IOC3/IOC4 serial port: tty7 IOC3/IOC4 serial port: tty8 Gigabit Ethernet: eg1, module 001c01, pci_bus 2, pci_slot 2, firmware version 0.0.0 Gigabit Ethernet: eg2, module 001c02, pci_bus 2, pci_slot 1, firmware version 0.0.0 Integral Fast Ethernet: ef0, version 1, module 001c01, pci 4 Fast Ethernet: ef1, version 1, module 001c02, pci 4 PCI Adapter ID (vendor 0x1000, device 0x0640) PCI slot 1 PCI Adapter ID (vendor 0x1000, device 0x0640) PCI slot 1 PCI Adapter ID (vendor 0x10a9, device 0x0009) PCI slot 2 PCI Adapter ID (vendor 0x10a9, device 0x0009) PCI slot 1 PCI Adapter ID (vendor 0x1077, device 0x1216) PCI slot 2 PCI Adapter ID (vendor 0x1077, 
device 0x1216) PCI slot 1 PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 4 PCI Adapter ID (vendor 0x11c1, device 0x5802) PCI slot 5 PCI Adapter ID (vendor 0x1077, device 0x1216) PCI slot 1 PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 4 PCI Adapter ID (vendor 0x11c1, device 0x5802) PCI slot 5 IOC3/IOC4 external interrupts: 1 IOC3/IOC4 external interrupts: 2 HUB in Module 001c01/Slot 0: Revision 2 Speed 200.00 Mhz (enabled) HUB in Module 001c02/Slot 0: Revision 2 Speed 200.00 Mhz (enabled) IP35prom in Module 001c01/Slot n0: Revision 6.124 IP35prom in Module 001c02/Slot n0: Revision 6.210 USB controller: type OHCI USB controller: type OHCI [/bash]
    Even in 2012, I find myself exporting large quantities of data for reports or other needs requiring additional aggregation or manipulation in C# that doesn't make sense to do in SQL. You've probably done something like this in your code since your C or C++ days:
    string tmpStr = String.Empty;
    foreach (contact c in contacts) {
         tmpStr += c.FirstName + " " + c.LastName + ",";
    }
    return tmpStr;
    And most likely you're doing some manipulation afterwards; otherwise it would probably make more sense to simply concatenate the string in your SQL query itself. After thinking about it some more, I considered the following code instead:
    return string.Join(",", contacts.Select(a => a.FirstName + " " + a.LastName));
    Simple, clean and faster? On 10,000 Contact Entity Objects (averaged across 3 test runs): Traditional Method - 1.4050804 seconds, Newer Method - 0.0270016 seconds. About 50 times faster with the newer method. What about an even larger dataset of 100,000? Traditional Method - 151.0996424 seconds, Newer Method - 0.09200503 seconds. Nearly 1700 times faster. But now what about a smaller set of 1,000? Traditional Method - 0.0410024 seconds, Newer Method - 0.0160009 seconds. About 2 times faster. In visual terms: [caption id="attachment_1081" align="aligncenter" width="300" caption="Traditional vs Newer Method Test Results"][/caption] This is far from a conclusive, in-depth test, but for larger data sets or high traffic/high demand scenarios (like a WCF call that returns a delimited string, for instance), string.Join should be used instead. That being said, the data should be formatted properly ahead of time, and handling any possible errors (null values etc) should be considered a precondition to using string.Join. For me, it really got my mind thinking about other small blocks of code I had been stagnantly using over the years that could speed up intensive tasks, especially given the size of a lot of the results I parse through at work.
    Picked up another Silicon Graphics Origin 300 (Dual 600/4gb ram), swapped in my Quad 500 board, replaced the fans and began my fun filled adventure into L1 Land. [caption id="attachment_1070" align="aligncenter" width="300" caption="My 2nd Silicon Graphics Origin 300"][/caption] [caption id="attachment_1068" align="aligncenter" width="225" caption="Original Silicon Graphics Origin 300 Fans out..."][/caption] [caption id="attachment_1071" align="aligncenter" width="225" caption="Quad R14k 500 Swap completed"][/caption] [caption id="attachment_1072" align="aligncenter" width="300" caption="Silicon Graphics Origin 300 1GB DDR Stick"][/caption] Off the bat I was presented with: [caption id="attachment_1069" align="aligncenter" width="300" caption="Silicon Graphics Origin 300 L1 - Unknown Brick Type!"][/caption] Upon hooking up my USB->Null Modem cable I checked the L1 log: 001?01-L1>log 04/14/12 10:54:46 L1 booting 1.44.0 04/14/12 10:54:49 ** fixing invalid SSN value 04/14/12 10:54:49 ** fixing BSN mismatch 04/14/12 11:13:53 L1 booting 1.44.0 So, good: it auto-fixed the invalid SSN and BSN mismatch. 001?01-L1>brick rack: 001, slot: 01, partition: none, type: Unknown [2MB flash], serial:MRH006, source: NVRAM Good again, it sees the brick but doesn't know what it is. Then I tried: 001?01-L1>brick type C brick type changed (nvram) (takes effect on next L1 reboot/power cycle) 001?01-L1>reboot_l1 Upon rebooting the L1, still not available. Going to have to get creative with this problem...
    Kind of scratching my head as to why Silicon Graphics didn't include gigabit on their IO8 PCI-X card that comes with an Origin 300. I guess maybe back in 2000-2001, the demand for gigabit Ethernet wasn't enough? Personally, I had just upgraded to Fast Ethernet (100mbit) if only half duplex on a Hub. Scored an official Silicon Graphics Gigabit card off eBay for next to nothing, installed it with no problems and upon rebooting IRIX recognized it and am now only using the gigabit connection to the rest of my network. [caption id="attachment_1063" align="aligncenter" width="300" caption="Silicon Graphics Gigabit PCI-X Card"][/caption] Next up was another great find on eBay for ~$30 I got a Dual Channel 4gb LSI Logic PCI-X card that has built in IRIX support. Just waiting on a PCI Express 4gb card to put into my SAN. [caption id="attachment_1064" align="aligncenter" width="225" caption="LSI Dual Channel 4gb Fibre Channel"][/caption] [caption id="attachment_1065" align="aligncenter" width="225" caption="SGI Gigabit and LSI Logic Dual Channel 4gb Fibre PCI-X cards installed"][/caption]
    I had been wondering what the effect of syntax would have on performance. Thinking the interpreter might handle things differently depending on the usage, I wanted to test my theory.

    Using .NET 4.5 with a Win32 Console Application project type, I wrote a little application doing a couple trigonometric manipulations on 1 Billion Double variables.

    For those that are not aware using the Task Parallel Library you have 3 syntaxes to loop through objects:

    Option #1 - Code within the loop's body
    Parallel.ForEach(generateList(numberObjects), item => {
         double tmp = (Math.Tan(item) * Math.Cos(item) * Math.Sin(item)) * Math.Exp(item);
         tmp *= Math.Log(item);
    });
    Option #2 - Calling a function within a loop's body
    Parallel.ForEach(generateList(numberObjects), item => {
         compute(item);
    });
    Option #3 - Calling a function inline
    Parallel.ForEach(generateList(numberObjects), item => compute(item));
    That being said, here are the benchmarks for the 3 syntaxes run 3 times:
    Option #1 4.0716071 seconds 3.9156058 seconds 4.009207 seconds
    Option #2 4.0376657 seconds 4.0716071 seconds 3.9936069 seconds
    Option #3 4.040407 seconds 4.3836076 seconds 4.3056075 seconds
    Unfortunately nothing conclusive, so I figured I'd make the operation more complex.

    That being said, here are the benchmarks for the 3 syntaxes run 2 times:
    Option #1 5.4444095 seconds 5.7313278 seconds
    Option #2 5.5848097 seconds 5.5633182 seconds
    Option #3 5.8793363 seconds 5.6793248 seconds
    Still nothing obvious, maybe there really isn't a difference?
    Found this blog post from 3/14/2012 by Stephen Toub on MSDN, which answers a lot of questions I had; it was also nice to have an approach I was considering earlier validated:
    Parallel.For doesn’t just queue MaxDegreeOfParallelism tasks and block waiting for them all to complete; that would be a viable implementation if we could assume that the parallel loop is the only thing doing work on the box, but we can’t assume that, in part because of the question that spawned this blog post. Instead, Parallel.For begins by creating just one task. When that task is executed, it’ll first queue a replica of itself, and will then enlist in the processing of the loop; at this point, it’s the only task processing the loop. The loop will be processed serially until the underlying scheduler decides to spare a thread to process the queued replica. At that point, the replica task will be executed: it’ll first queue a replica of itself, and will then enlist in the processing of the loop.
    So based on that response, at least in the current implementation of the Task Parallel Library in .NET 4.x, the approach is to slowly create parallel threads as resources allow, forking off new threads as soon as, and as many as, possible.
    Diving into multi-threading the last couple nights, but not in C# like I had previously; instead, in C. Long ago, I had played with SDL's built-in threading when I was working on the Infinity Project. Back then, I had just gotten a Dual Athlon-XP Mobile (Barton) motherboard, so it was my first chance to play with multi-cpu programming. Fast forward 7 years: my primary desktop has 6 cores and most cell phones have at least 2 CPUs. Everything I've written this year has been with multi-threading in mind, whether it is an ASP.NET Web Application, Windows Communication Foundation Web Service or Windows Forms Application. Continuing my quest into "going back to the basics" from last weekend, I chose as my next quest to dive back into C and attempt to port jcBench to Silicon Graphics' IRIX 64bit MIPS IV platform (it was on the original list of platforms). The first major hurdle was programming C like C#: not having classes or the keyword "new", syntax for certain things being completely different (structs for instance), and having to initialize arrays with malloc, only to remember after getting segmentation faults that doing so carelessly will trash the heap (the list goes on). I've gotten "lazy" with my almost exclusive use of C# it seems, declaring an "array" like:
    ConcurrentQueue<SomeObject> cqObjects = new ConcurrentQueue<SomeObject>();
    After the "reintroduction" to C, I started to map out what would be necessary to make an equivalent approach to the Task Parallel Library, not necessarily the syntax, but how it handled nearly all of the work for you. Doing something like (note you don't need to assign the return value from the Entity Model, it could be simply put in the first argument of Parallel.ForEach, I just kept it there for the example):
    List<SomeEntityObject> lObjects = someEntity.getObjectsSP().ToList(); // To ensure there would be no lazy-loading, use the ToList method
    ConcurrentQueue<SomeGenericObject> cqGenericObjects = new ConcurrentQueue<SomeGenericObject>();
    Parallel.ForEach(lObjects, result => {
         if (result.SomeProperty > 1) {
              cqGenericObjects.Enqueue(new SomeGenericObject(result));
         }
    });
    A few things off the bat you'd have to "port":
    1. Concurrent Object Collections to support modification of collections in a thread safe manner
    2. Iteratively knowing how many cores/cpus are available and constantly allocating new threads as threads complete (ie 6 cores, 1200 tasks: kick off at least 6 threads, handle when those threads complete, and "always" maintain a 6 thread count)
    The latter, I can imagine, is going to be a decent sized task in itself as it will involve platform specific system calls to determine the CPU count, breaking the task down dynamically and then managing all of the threads. At first thought the easiest solution might simply be:
    1. Get number of CPUs/Cores, n
    2. Divide number of "tasks" by the number cores and allocate those tasks for each core, thus only kicking off n threads
    3. When all tasks complete resume normal application flow
    The problem with that (or at least one of them) is that if the actual data for certain objects is considerably more complex than for others, you could have 1 or more CPUs finish before the others, which would be wasteful. You could, I guess, infer from a sampling of the data: maybe kick off 1 thread to "analyze" the data at various indexes of the passed in array and calculate the average time taken to complete, then anticipate the variation in task completion time to more evenly space out tasks. Current cpu utilization is also worth taking into account, as many operating systems run Operating System tasks with affinity for 1 CPU, so giving CPU 1 (or whichever CPU has Operating System usage) fewer tasks to begin with might make more sense to truly optimize the threading "manager". Hopefully I can dig up some additional information on how the TPL allocates its threads to possibly give a 3rd alternative, since I've noticed it handles larger tasks very well across multiple threads. Definitely will post back with my findings....
    [caption id="attachment_1022" align="aligncenter" width="225" caption="SGI Origin 300 Quad 500mhz R14000"][/caption] [caption id="attachment_1023" align="aligncenter" width="225" caption="SGI Origin 300 Quad 500mhz R14000"][/caption] [caption id="attachment_1024" align="aligncenter" width="300" caption="SGI Origin 300 Quad 500mhz R14000"][/caption]
    Might have noticed the lack of postings in the last 2 weeks; I can sum it up in three words: Mass Effect 3. Having cut my gaming time considerably over the last year and a half, when a game like Mass Effect 3 comes out I fully indulge in it. I'm at around 33 hours at the moment, on the second to last mission. While I haven't finished it, I can safely say this is the best game I have ever played. While I miss the inventory system and scouting for objectives in the first game and some of the character driven recruitment aspects of the second, I feel like it took the best of the series and rolled it into an amazing game. On so many levels, I've never felt the way I do about Mass Effect 3. Knowing I've only got a few hours left in a 2.5 year journey with my Shepard and crew saddens me. That's something that has never happened to me previously. Maybe it's because I've never been attached to characters like I have in Mass Effect, or the fact that I've created my "own" story in the Mass Effect universe versus the linearity of Final Fantasy or "rails" shooters like Call of Duty. Or maybe it's because it combines my favorite genre, Science Fiction, with an incredible story blended with action and drama. Hopefully DLC will come out to continue the story like Mass Effect 2's did, and even more so an expansion pack like what Bioware did for Dragon Age. [caption id="attachment_998" align="aligncenter" width="246" caption="Mass Effect 3 Collector's Edition"][/caption]
    Having been years since I messed with MySQL on a non-Windows platform, I had forgotten 2 simple commands after setup on my Origin 300: [sql] CREATE USER 'dbuser'@'%' IDENTIFIED BY 'sqlisawesome'; [/sql] [sql] GRANT ALL PRIVILEGES ON *.* TO 'dbuser'@'%' WITH GRANT OPTION; [/sql] The wildcard (%) after the username is key if you want access from other machines (in my case, from MySQL Workbench on my Windows 7 machine).
    Last Sunday I was curious about IRIX root disk cloning; I had done it previously on an SGI Octane with a non-root drive, but never on the root drive itself. Since I have 2 SGI O2s and 2 identical Maxtor Atlas II 15k 73gb Ultra 320 drives, it made sense to clone rather than reinstalling everything. Sure enough, about 20 minutes later I had a duplicate of my SGI O2 IRIX 6.5.30 install with all of my Nekoware packages (BASH, PHP, MySQL etc). Pulled it out and put it into my other SGI O2; worked like a charm. After doing some work on my SGI Origin 300 this morning, I figured I'd replace the failing Maxtor ATLAS 15k 36gb Ultra 320 drive with a brand new Fujitsu MAU3147 15k 36gb Ultra 320 drive. Not to mention the Maxtor had been making a terrible high pitch noise for some time before it ended up in the Origin :) Fujitsu in the Origin 300 sled: [caption id="attachment_975" align="aligncenter" width="225" caption="Fujitsu MAU ready to go for my Origin 300"]