
A couple of updates since my last "what I am doing" post back in July; I have still been extremely busy with the Baltimore GiveCamp MVC 5 Web Application and work itself. In between everything, however, I have a few announcements.

First off, jcBENCH has a new compare-results feature on the web site, where you can compare up to 3 results with a bar graph. This feature will make it to the Windows Store and Windows Phone 8.1 apps in the coming weeks. In addition, a new OpenCL-enabled version is nearing beta; I just have to add error handling and do more testing. The OpenCL client will be released for Linux/x86 and Win32/x86 initially, with a MacOSX/x86 release to come later.

A project my wife requested a few weeks ago is now in the Windows Phone Store: jcTRENDNET. This application allows you to remotely view and manage Trendnet IP 751 WIC/WC cameras. More features are to come, but it is a pretty feature-filled 1.0 release.

The larger project I mentioned in my July update is still being worked on. I ran into some huge technical hurdles that, while not insurmountable, will require some extensive work before I can get cranking on something of substance. My latest work has been leading me toward designing my own language to be interfaced with C++, but I'm still working out the short- and long-term goals.

Lastly, for the last week or two I've been attempting to get back into game development. After a good start last August, I lost momentum after two weeks or so for two reasons: a lack of mapping out what the game would be, and other projects. This new idea is modular in the sense that the base game isn't terribly complex, but future features can be added on to enhance the game. At least initially it will only be a Windows Phone 8.1 game, but more than likely a Windows Store 8.1 game will follow. I have yet to find a game that can captivate my time on my Lumia 1520, so this should solve that. More details to come, with a release "when it's done," to quote id Software.


For the longest time I’ve had these great ideas, only to keep them in my head and then watch someone else or some company turn around and develop the idea (not to say anyone stole it; given that there are billions of people on this planet, it is only natural to assume one of those billions would come up with the same idea). Having watched this happen, as I am sure other developers have since the 70s, I’ve decided to put my outlook on things here, once a year, every July.

As anyone who reads or has read my blog for a decent amount of time knows, I am very much a polyglot of software and enjoy the system building/configuration/maintenance aspect of hardware. For me, they go hand in hand. The more I know about the platform itself (single-threaded versus multi-threaded performance, disk IOPS, etc.), the better I can program the software I develop. Likewise, the more I know about a specific programming model, the better I will know the hardware it is specialized for. To take it a step further, this makes implementation decisions at work and in my own projects better.

As mentioned in the About Me section, I started out in QBasic, and a little later, when I was 12, I really started getting into custom PC building (which wasn’t anywhere near as big as it is today): digging through the massive Computer Shopper magazines, drooling over the prospect of the highest-end Pentium MMX CPUs, massive (at the time) 8 GB hard drives and 19” monitors. Along with that came the less glamorous 90s PC issues of IRQ conflicts, pass-through 3Dfx Voodoo cards that required a 2D video card (and yet another PCI slot), SCSI PCI controllers and dedicated DVD decoders. Suffice it to say I am glad I experienced all of that, because it created a huge appreciation for USB, PCI Express, SATA and, if nothing else, the stability of running a machine 24/7 under a heavy workload (yes, part of that is also software).

To return to the blog’s title…

Thoughts on the Internet of Things?

Universally, I do follow the Internet of Things (IoT) mindset. Everything will be interconnected, which raises the question of privacy and what that means for the developers of the hardware and software, and for the consumer. As we all know, your data is money. If the lights in your house, for instance, were WiFi-enabled and connected to a centralized server in your house with an exposed client on a tablet or phone, I would be willing to bet the hardware and software developers would love to know the total energy usage, which lights in which rooms were on, what type of bulbs were in use and when the bulbs were dying. Marketing data could then be sold to let you know of bundle deals, new “more efficient” bulbs, or offers based on how much time is spent in which rooms (if you are in the home theater room a lot, sell the consumer on Blu-rays and snacks, for instance). With each component of your home becoming this way, more and more data will be captured, and in some cases it will be possible to predict what you want before you realize it, simply based on your tendencies.

While I don’t like the lack of privacy in that model (hopefully some laws can be enacted to resolve those issues), and as a software developer I would hate to ever be associated with the backlash of capturing that data, this idea of everything being connected will create a whole new programming model. With the recent trend toward REST web services returning Gzip-compressed JSON via WebAPI, for instance, the problem of submitting and retrieving data has never been easier or more portable across so many platforms. With C# in particular, in conjunction with the HttpClient library available on NuGet, a lot of the grunt work is already done for you in an asynchronous manner. Where I do see a change is in the standardization of an API for your lights, TV, garage door, toaster, etc., allowing 3rd-party plugins and universal clients to be created. The alternative is having a different app to control each element, or one company providing a proprietary API that only works on their devices, forcing the consumer into the difficult decision of either staying with that provider to be consistent or mixing the two, requiring 2 apps/devices.
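The wire format itself is trivial to work with on any platform; here is a minimal sketch in Python (the light-fixture payload is made up for illustration) of round-tripping the kind of Gzip-compressed JSON such a service would return:

```python
import gzip
import json

# Hypothetical IoT reading a light fixture might report to a home server.
reading = {"room": "theater", "bulb": "LED-A19", "watts": 9.5, "on": True}

# Serialize to JSON, then gzip -- the same wire format a REST service
# returning Gzip-compressed JSON would put on the network.
wire_bytes = gzip.compress(json.dumps(reading).encode("utf-8"))

# The client side: decompress and parse back into an object.
decoded = json.loads(gzip.decompress(wire_bytes).decode("utf-8"))

print(decoded["room"], decoded["watts"])
```

The compression and serialization are a handful of lines in any modern stack, which is exactly why the interesting problem is the API standardization, not the plumbing.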

Where do I see mobile technology going?

Much like where mobile devices have been heading (as I predicted 2 years ago), apps are becoming ever more integrated into your device (for better or for worse). I don’t see this trend changing, but I do hope, from a privacy standpoint, that apps have to become more explicit about what they are accessing. I know there is a fine line for the big three (Apple, Google and Microsoft) before becoming overly explicit about every action (remember Vista?), but I think if an app gets more than your current location, the capabilities should be presented in a bolder or larger font to better convey the app’s true access to your device. I don’t see this situation getting better from a privacy standpoint, but I do see more and more customer demand for the “native” experience to be like that of Cortana on Windows Phone 8.1. She has access to the data you provide her and will help make your experience better. As the phones provide more and more APIs, this trend will only continue, until apps are more like plugins to your base operating system’s experience, integrating into services like Yelp, Facebook, Twitter, etc.

Where do I see web technology going?

I have enjoyed diving into MVC over the last year and a half. The model definitely feels much more in line with an MVVM XAML project, but it still has an overwhelmingly strong tie to the client side, between the heavy use of jQuery and the level of effort required to keep up with the ever-changing browser space (i.e., browser updates coming out at an alarming rate). While I think we all appreciate it when we go to a site on our phones or desktops and it scales nicely, providing a rich experience no matter the device, I feel the ultimate goal of trying to achieve a native experience in the browser is a waste of effort. I know just about every web developer might stop reading and be outraged – but what was the goal of the last web site you developed that was also designed for mobile? Was it to convey information to the masses? Or was it simply a stopgap until you had a native team to develop for the big three mobile platforms?

In certain circumstances I agree with the stance of making HTML 5 web apps instead of native apps, especially when a native project would be cost-prohibitive. But at a certain point, especially as of late with Xamarin’s first-class-citizen status with Microsoft, you have to ask yourself: could I deliver a richer experience natively, and possibly faster (especially given the vast range of mobile browsers to contend with in the HTML 5 route)?

If you’re a C# developer who wants to deliver a native experience, definitely give the combination of MVVM Cross, Xamarin’s framework and Portable Libraries a try. I wish all of those tools had existed when I first dove into iOS development 4 years ago.

Where do I see desktop apps going?

In regard to desktop applications, I don’t see them going away even in the “app store” world we live in now. I do, however, see customers expecting a richer experience after having had a rich native experience on their phone or after using a XAML Windows 8.x Store App. The point being, I don’t think it will be acceptable for an app to look and feel like the default WinForms grey-and-black color scheme that we’ve all used at one point in our careers, and with which more than likely we began our programming (thinking back to classic Visual Basic).

Touch will also play a big factor in desktop applications (even in the enterprise). Recently at work I built a Windows 8.1 Store App for an executive dashboard. I designed the app with touch in mind, and it was interesting how that changes your perspective on interacting with data. The app in question utilized multi-layered graphs and a Bing Map with several layers (heat maps and pushpins). Gone was the unnatural mouse scrolling; in its place were pinching, zooming and rotating, as if one were in a science fiction movie from just 10 years ago.

I see this trend continuing, especially as practical general-purpose devices like laptops gain touch screens at every price point, instead of at the premium they previously commanded. All that needs to come about is a killer application for the Windows Store – could your next app be that app?

Where is programming heading in general?

Getting programmers out of the single-threaded, top-to-bottom programming mindset. I am hoping that next July, when I do a prediction post, this won’t even be a discussion point, but sadly I don’t see it changing anytime soon. Taking a step back and looking at what this means, generally speaking: programmers aren’t utilizing the hardware available to them to its full potential.

Over 5 years ago at this point, I found myself at odds with a consultant who kept asking for more and more CPUs to be added to a particular VM. When he first asked, it seemed reasonable, as there was considerably more traffic coming to a particular ASP.NET 3.5 Web Application as a result of a lot of eagerly awaited functionality he and his team had just deployed. Yet even after the additional CPUs were added, his solution was still extremely slow under no load. This triggered me to review his Subversion check-ins, and I realized the crux of the matter wasn’t the server – it was his single-threaded, resource-intensive, time-consuming code. In this case, the code was poorly written on top of trying to perform a lot of work on a single page. For those who remember back to .NET 3.5’s implementation of LINQ, it wasn’t exactly a strong performer in performance-intensive applications, let alone when looped through multiple times as opposed to one larger LINQ query. The moral of the story: with single-threaded code, the extra CPUs only helped with handling the increased load, not the performance of a user’s experience in a 0%-load session.
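That re-scanning anti-pattern isn’t specific to LINQ; here is a minimal sketch in Python (with made-up order data) of the difference between looping over a collection once per statistic versus accumulating everything in a single pass:

```python
# Fabricated order data: totals 1..1000, even totals marked shipped.
orders = [{"total": t, "shipped": t % 2 == 0} for t in range(1, 1001)]

# The slow pattern from the story: one full pass per statistic,
# like re-running a LINQ query over the same data again and again.
shipped_count = sum(1 for o in orders if o["shipped"])
shipped_revenue = sum(o["total"] for o in orders if o["shipped"])
pending_revenue = sum(o["total"] for o in orders if not o["shipped"])

# The fix: a single pass that accumulates every statistic at once.
count = rev_shipped = rev_pending = 0
for o in orders:
    if o["shipped"]:
        count += 1
        rev_shipped += o["total"]
    else:
        rev_pending += o["total"]

print(count, rev_shipped, rev_pending)
```

Three passes versus one is a constant factor, but on a hot page with a large collection that constant factor is exactly the kind of 0%-load slowness described above.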

A few months later, when .NET 4 came out of beta, and further still when the Task Parallel Library was released, my view on performance changed (after all, jcBENCH stemmed from my passion for diving into parallel programming on different architectures and operating systems back in January 2012). No longer was I relying on CPUs with high single-threaded performance; instead I was writing my code to take advantage of the ever-increasing number of cores available to me at this particular client (for those curious: 2U, 24-core Opteron HP G5 rackmount servers).

With .NET 4.5’s async/await, I was hopeful that more of the developers I worked with would take advantage of this easy model and no longer lock the UI thread, but I was largely disappointed. And if developers couldn’t grasp async/await, let alone the TPL, how could they proceed to what I feel is an even bigger breakthrough becoming available to developers: heterogeneous programming, or more specifically OpenCL?
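The async/await concept carries across languages; a minimal asyncio sketch in Python of awaiting two simulated I/O calls concurrently instead of blocking the calling thread (the report names and delays are invented):

```python
import asyncio

async def fetch_report(name, delay):
    # Stand-in for an awaited I/O call (an HTTP request, a DB query);
    # while it "runs", the event loop stays free -- the UI-thread analogy.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main():
    # Launch both awaitables concurrently instead of serially blocking.
    return await asyncio.gather(
        fetch_report("sales", 0.1),
        fetch_report("inventory", 0.1),
    )

results = asyncio.run(main())
print(results)
```

Both calls complete in roughly one delay rather than two, and nothing ever blocked the thread that kicked them off, which is the whole point of the model.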

Parallel programming requires breaking your problem down into independent sub-problems, all coming together at a later time (like breaking image processing down to operate on ranges of pixels rather than the entire image, for instance). This is where heterogeneous programming can make an even bigger impact, in particular with GPUs (Graphics Processing Units), which have upwards of hundreds of cores available to process tasks.
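A tiny Python sketch of that decomposition, using a fabricated 8x8 grayscale "image" as a list of rows: each band of rows is an independent work item, exactly the shape of problem that maps well onto GPU work-groups:

```python
# A fake 8x8 grayscale image: rows of pixel intensities 0-255.
image = [[(x * y) % 256 for x in range(8)] for y in range(8)]

def invert_band(rows):
    """Process one independent range of rows -- the unit of work a
    GPU work-group (or a CPU worker) would own."""
    return [[255 - px for px in row] for row in rows]

# Split into bands of 2 rows, process each band independently, then
# stitch the results back together in order.
bands = [image[i:i + 2] for i in range(0, len(image), 2)]
processed = [row for band in bands for row in invert_band(band)]

print(processed[0][:4])
```

Because no band reads another band’s pixels, the bands can run in any order, on any number of workers, and the stitched result is identical to processing the whole image at once.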

I had dabbled in OpenCL as far back as June 2012, working on the OpenCL version of jcBENCH, and I did some further research back in January/February of this year (2014) in preparation for a large project at work – a project for which I ended up using the TPL extensively instead. The problem wasn’t OpenCL’s performance, but my mindset at the time. Before the project began, I thought I knew the problem inside and out, but really I only knew it as a human would think about it – not as a machine that only knows 1s and 0s would. The problem wasn’t a simple task, nor was it something I had ever even attempted previously, so I gave myself some slack when, two months in, it finally hit me what I was really trying to solve: teaching a computer to think like a human. Therefore, when pursuing heterogeneous programming as a possible solution, ensure you have a 100% understanding of the problem and what you are ultimately trying to achieve; in many cases it might make sense to utilize OpenCL instead of a traditional parallel model like the TPL.

So why OpenCL, outside of the speed boost? Think about the last laptop or desktop you bought; chances are you have an OpenCL 1.x-compatible APU and/or GPU in it (i.e., you aren’t required to spend any more money – you are just utilizing what is already available to you). On the portable side in particular, laptops and Ultrabooks already have lower-performing CPUs than your desktop, so why burden the CPU when the GPU could offload some of that work?

The only big problem with OpenCL for C# programmers is the lack of an officially supported interop library from AMD, Apple or any of the other members of the OpenCL working group. Instead you’re at the mercy of using one of the freely available wrapper libraries, like OpenCL.NET, or simply writing your own wrapper. I haven’t made up my mind yet as to which path I will go down – but I know at some point a middleware makes sense. Wouldn’t it be neat to have a generic work item and be able to simply pass it off to your GPU(s) whenever you wanted?
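To make that middleware idea concrete, here is a purely hypothetical sketch in Python of what a generic work-item dispatcher might look like; the WorkItem and dispatch names are my own invention, and a real version would route to an OpenCL backend rather than the CPU fallback shown here:

```python
from concurrent.futures import ThreadPoolExecutor

class WorkItem:
    """A hypothetical generic unit of work: a kernel (a plain function
    here, OpenCL source in the imagined middleware) plus its input data."""
    def __init__(self, kernel, data):
        self.kernel = kernel
        self.data = data

def dispatch(items):
    # The imagined middleware would pick a GPU backend when one exists;
    # this sketch only implements the CPU fallback with a thread pool.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda w: [w.kernel(x) for x in w.data], items))

results = dispatch([WorkItem(lambda x: x * x, [1, 2, 3]),
                    WorkItem(lambda x: x + 1, [10, 20])])
print(results)
```

The appeal is that callers describe *what* to compute and the dispatcher decides *where*, which is exactly the abstraction missing from the current wrapper libraries.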

As far as where to begin with OpenCL in general, I strongly suggest reading the OpenCL Programming Guide. For those who have done OpenGL and are familiar with the “Red Book,” this book follows a similar pattern, with similar expectations and end results.


Could I be way off? Sure – it’s hard to predict the future while staying grounded in the past that brought us here; it’s hard to let go of how we as programmers and technologists have evolved over the last 5 years to satisfy not only current consumer demand but our own, and to anticipate what is next. What I am more curious to hear from is programmers outside of the CLR, in particular the C++, Java and Python crowds – where do they feel the industry is heading, and how do they see their programming languages handling the future? Please leave comments.
I started playing around with the compiler statistics of the KernelAnalyzer (mentioned in this morning's post) some more. Without having done more research, it looks as though using the built-in pow function is much, much slower than simply multiplying the number by itself. Take this line for instance:
thirdSide[index] = sqrt((double)(x * x) + (double)(y * y));
The KernelAnalyzer estimates the throughput to be about 10 million threads/sec. Adjusting the line to this:
thirdSide[index] = sqrt(pow((double)x, 2) + pow((double)y, 2));
The estimate drops to 2 million threads/sec. Is the overhead of the math library function call really enough to lose 5X the performance? Adjusting the line to use pow for only one of the two terms results in an increase only to 4 million threads/sec. So perhaps there is an initial performance hit when using a library function; maybe the library needs to be loaded for every thread? Or is the library shared across threads, causing a locking condition during the computation? I think this requires more investigation. I wish I were going to the AMD Developer Conference next week...
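To sanity-check that the two formulations really are interchangeable (so any throughput gap comes purely from call overhead, not the math), here is a quick CPU-side sketch in Python; the function-call cost is visible here too, though nothing like a 5X gap:

```python
import math
import timeit

x, y = 3.0, 4.0

# The two kernel formulations compute the same hypotenuse exactly.
via_multiply = math.sqrt(x * x + y * y)
via_pow = math.sqrt(math.pow(x, 2) + math.pow(y, 2))

# Any throughput difference is therefore overhead, not a different answer.
t_mul = timeit.timeit("x * x + y * y", globals={"x": x, "y": y}, number=100_000)
t_pow = timeit.timeit("pow(x, 2) + pow(y, 2)", globals={"x": x, "y": y}, number=100_000)
print(f"multiply: {t_mul:.4f}s  pow: {t_pow:.4f}s")
```

The timings will vary by machine, but the identical results are the point: swapping pow for multiplication is a free optimization in this kernel.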
I've been working on an OpenCL WPF version of jcBENCH for the last 2 or 3 weeks, in between life, thus the lack of posting. However, this morning at the nice hour of 6:30 AM, I found a new tool to assist in OpenCL development: the AMD APP Kernel Analyzer. In my testing code I've just been setting my OpenCL kernel program in a string and then handing it to a ComputeProgram, like so:
string clProgramSource = @"
__kernel void computePythag(global long * num, global double * thirdSide) {
    size_t index = get_global_id(0);

    for (long x = 2; x < num[index] + 2; x++) {
        for (long y = x; y < num[index] + 2; y++) {
            double length = sqrt((double)(x * x) + (double)(y * y));
            length *= length;
            thirdSide[index] = length;
        }
    }

    return;
}";

ComputeProgram program = new ComputeProgram(_cContext, clProgramSource);
program.Build(null, null, null, IntPtr.Zero);
This is problematic, as in the best-case scenario any error isn't caught until runtime – or you just get a BSOD (as I did last night). This is where AMD's Kernel Analyzer comes into play.

[Screenshot: AMD APP Kernel Analyzer]

Note the syntax highlighting and, had there been any errors during compilation, the Compiler Output window, just like in any other development tool. An extra feature whose usefulness I only just realized is the ability to target different generations/platforms of AMD GPUs. I knew the R7XX (Radeon HD 4000 series) only had OpenCL 1.0 support, but I didn't realize (naively) that the same program wouldn't compile cleanly across the board. Luckily I still have 2 R7XX-series GPUs in use (one in my laptop and another in my secondary desktop), but it was interesting nonetheless. Definitely more to come on the OpenCL front tonight...