Jarred Capellman
Putting 1s and 0s to work since 1995

Thursday, July 17, 2014

Mac OS X x86 release of jcBENCH

Posted By Jarred Capellman

Just a quick update: I wrapped up the 0.9.850.0531 release of jcBENCH for Mac OS X/x86. Previously there was a much older port going back to January 2012, so it is nice to finally have it running on the same codebase as the other platforms. If you've got an Apple product, please go run jcBENCH, as I only have an older iMac to compare results with.

On a somewhat funny note, the iMac I do all my Xamarin work on is several times slower than the $60 AMD Athlon 5350 I use in my firewall - funny how technology catches up at a fraction of the cost.

Sunday, July 13, 2014

Conflicting versions of ASP.NET Web Pages detected

Posted By Jarred Capellman

In doing some routine maintenance on this blog, I updated the usual packages (JSON.NET, Entity Framework, etc.). In testing locally afterwards, I came across the following error:
[Screenshot: ASP.NET Web Pages conflict error]

In looking at the Web.config, the NuGet Package did update the dependentAssembly section properly:
[Screenshot: Web.config dependentAssembly section]

However, in the appSettings section, it didn't update the webpages:Version value:
[Screenshot: Web.config appSettings webpages:Version value]

Simply update the webpages:Version value from "2.0.0.0" to "3.0.0.0" and you'll be good to go again.
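
For reference, the corrected appSettings entry should look roughly like this (your Web.config will have other keys alongside it; the webpages:Enabled key shown here is the one the default MVC templates include):

<appSettings>
  <!-- Must match the ASP.NET Web Pages 3.0.0.0 assembly the NuGet update pulled in -->
  <add key="webpages:Version" value="3.0.0.0" />
  <add key="webpages:Enabled" value="false" />
</appSettings>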

Saturday, July 12, 2014

Where do I see the next year taking us in the programming world?

Posted By Jarred Capellman

Intro

For the longest time I've had these great ideas only to keep them in my head and then watch someone else or some company turn around and develop the idea (not to say anyone stole it, but given that there are billions of people on this planet, it is only natural to assume one of them would come up with the same idea). Having watched this happen, as I am sure other developers have since the 70s, I've decided to put my outlook on things here, once a year, every July.

As anyone who has read my blog for a decent amount of time knows, I am very much a polyglot of software and enjoy the system building/configuration/maintenance aspect of hardware. For me, they go hand in hand. The more I know about the platform itself (single-threaded versus multi-threaded performance, disk IOPS, etc.), the better I can program the software I develop. Likewise, the more I know about a specific programming model, the better I know the hardware it is specialized for. To take it a step further, this makes implementation decisions both at work and in my own projects better.

As mentioned in the About Me section, I started out in QBasic, and a little later, when I was 12, I really started getting into custom PC building (which wasn't anywhere near as big as it is today). I dug through the massive Computer Shopper magazines, drooling over the prospect of the highest-end Pentium MMX CPUs, massive (at the time) 8 GB hard drives and 19" monitors, along with the less glamorous 90s PC issues of IRQ conflicts, pass-through 3Dfx Voodoo cards that required a 2D video card (and yet another PCI slot), SCSI PCI controllers and dedicated DVD decoders. Suffice it to say I am glad I experienced all of that, as it created a huge appreciation for USB, PCI Express, SATA and, if nothing else, the stability of running a machine 24/7 under a heavy workload (yes, part of that is also software).

To return to the blog’s title…

Thoughts on the Internet of Things?

Universally I do follow the Internet of Things (IoT) mindset: everything will be interconnected, which raises the question of privacy and what that means for the developers of the hardware and software and for the consumer. As we all know, your data is money. If the lights in your house, for instance, were WiFi enabled and connected to a centralized server in your house with an exposed client on a tablet or phone, I would be willing to bet the hardware and software developers would love to know the total energy usage, which lights in which rooms were on, what types of bulbs were installed and when the bulbs were dying. Marketing data could then be sold to let you know of bundle deals, new "more efficient" bulbs, or how much time is spent in which rooms (if you are in the home theater room a lot, sell the consumer Blu-rays and snacks, for instance). With each component of your home becoming this way, more and more data will be captured, and in some cases it will be possible to predict what you want before you realize it, simply based off your tendencies.

While I don't like the lack of privacy in that model (hopefully some laws can be enacted to resolve those issues), as a software developer I would hate to ever be associated with the backlash of capturing that data; still, this idea of everything being connected will create a whole new programming model. With the recent trend towards REST web services returning GZipped JSON, with WebAPI for instance, submitting and retrieving data has never been easier or more portable across so many platforms. With C# in particular, in conjunction with the HttpClient library available on NuGet, a lot of the grunt work is already done for you in an asynchronous manner. Where I do see a change is in the standardization of an API for your lights, TV, garage door, toaster etc., allowing 3rd party plugins and universal clients to be created rather than having a different app to control each element, or one company providing a proprietary API that only works on their devices, forcing the consumer into the difficult decision of either staying with that provider for consistency or mixing the two, which requires two apps/devices.
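
As a rough sketch of how little code that grunt work takes (assuming the HttpClient NuGet package and JSON.NET; the endpoint and LightStatus class are made up for the light-bulb example above):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class LightStatus
{
    public string Room { get; set; }
    public bool IsOn { get; set; }
}

public class LightClient
{
    // Hypothetical REST endpoint exposed by the in-home server
    private const string BaseUrl = "http://homeserver.local/api/lights";

    public async Task<LightStatus[]> GetStatusAsync()
    {
        using (var client = new HttpClient())
        {
            // Retrieve the JSON asynchronously, then deserialize with JSON.NET
            var json = await client.GetStringAsync(BaseUrl);
            return JsonConvert.DeserializeObject<LightStatus[]>(json);
        }
    }
}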

Where do I see mobile technology going?

Much like where mobile devices have been heading (as I predicted 2 years ago), apps are becoming ever more integrated into your device (for better or for worse). I don't see this trend changing, but I do hope from a privacy standpoint that apps have to become more explicit about what they are accessing. I know there is a fine line for the big three (Apple, Google and Microsoft) in becoming overly explicit before any action (remember Vista?), but I think if an app gets more than your current location, its capabilities should be called out in a bolder or larger font to better convey the app's true access to your device. I don't see this situation getting better from a privacy standpoint, but I do see more and more customer demand for the "native" experience to be like that of Cortana on Windows Phone 8.1. She has access to the data you provide her and will help make your experience better. As the phones provide more and more APIs, this trend will only continue until apps are more like plugins to your base operating system's experience, integrating into services like Yelp, Facebook, Twitter etc.

Where do I see web technology going?

I have enjoyed diving into MVC over the last year and a half. The model definitely feels much more in line with an MVVM XAML project, but it still has an overwhelmingly strong tie to the client side, between the heavy use of jQuery and the level of effort required to keep up with the ever-changing browser space (i.e. browser updates coming out at an alarming rate). While I think we all appreciate it when we go to a site on our phones or desktops and it scales nicely, providing a rich experience no matter the device, I feel the ultimate goal of trying to achieve a native experience in the browser is a waste of effort. I know just about every web developer might stop reading and be in outrage – but what was the goal of the last web site you developed and designed that was also designed for mobile? Was it to convey information to the masses? Or was it simply a stop gap until you had a native team to develop for the big three mobile platforms?

In certain circumstances I agree with the stance of making HTML 5 web apps instead of native apps, especially when the cost of a project is prohibitive. But at a certain point, especially as of late with Xamarin's first-class citizen status with Microsoft, you have to ask yourself: could I deliver a richer experience natively, and possibly faster (especially given the vast range of mobile browsers to contend with on the HTML 5 route)?

If you're a C# developer who wants to deliver a native experience, definitely give the combination of MvvmCross, Xamarin's framework and Portable Class Libraries a try. I wish all of those tools had existed when I first dove into iOS development 4 years ago.
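
As a minimal sketch of why that combination works so well (assuming MvvmCross 3.x and its Cirrious.MvvmCross.ViewModels namespace), the view model lives once in a Portable Class Library and each platform only supplies its own view:

using Cirrious.MvvmCross.ViewModels;

// Lives in the Portable Class Library and is shared untouched
// across the iOS, Android and Windows projects
public class ScoreViewModel : MvxViewModel
{
    private string _score;
    public string Score
    {
        get { return _score; }
        set { _score = value; RaisePropertyChanged(() => Score); }
    }
}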

Where do I see desktop apps going?

In regards to desktop applications, I don't see them going away even in the "app store" world we live in now. I do, however, see a demand for a richer experience, expected by customers after having a rich native experience on their phone or after using a XAML Windows 8.x Store App. The point being, I don't think it will be acceptable for an app to look and feel like the default WinForms grey and black color scheme that we've all used at one point in our careers and where, more than likely, we began our programming (thinking back to classic Visual Basic).

Touch will also play a big factor in desktop applications (even in the enterprise). Recently at work I did a Windows 8.1 Store App for an executive dashboard. I designed the app with touch in mind, and it was interesting how that changes your perspective on interacting with data. The app in question utilized multi-layered graphs and a Bing Map with several layers (heat maps and pushpins). Gone was the unnatural mouse scrolling; instead there was pinching, zooming and rotating, as if one were in a science fiction movie from just 10 years ago.

I see this trend continuing, especially as practical general-purpose devices like laptops gain touch screens at every price point, instead of the premium they previously demanded. All that needs to come about is a killer application for the Windows Store – could your next app be that app?

Where is programming heading in general?

Getting programmers out of the single-threaded, top-to-bottom programming mindset. I am hoping that next July when I do a prediction post this won't even be a discussion point, but sadly I don't see it changing anytime soon. Taking a step back and looking at what this means, generally speaking: programmers aren't utilizing the hardware available to them to its full potential.

Over 5 years ago at this point, I found myself at odds with a consultant who kept asking for more and more CPUs to be added to a particular VM. When he first asked, it seemed reasonable, as there was considerably more traffic coming to a particular ASP.NET 3.5 Web Application as a result of a lot of eagerly awaited functionality he and his team had just deployed. Even after the additional CPUs were added, his solution was still extremely slow under no load. This triggered me to review his Subversion checkins, and I realized the crux of the matter wasn't the server – it was his single-threaded, resource-intensive and time-consuming code. In this case, the code was poorly written on top of trying to do a lot of work on a particular page. For those that remember back to .NET 3.5's implementation of LINQ, it wasn't exactly a strong performer in performance-intensive applications, let alone when a query was looped through multiple times as opposed to one larger LINQ query. The moral of the story being that with single-threaded code, the extra CPUs only helped with handling the increased load, not the performance of a user's experience in a 0% load session.
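
To illustrate the pattern I kept finding in those checkins (the classes and names here are made up, only the shape is real), the same deferred LINQ query was being enumerated over and over instead of being materialized once:

using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public bool IsOverdue { get; set; }
    public DateTime DueDate { get; set; }
    public decimal Total { get; set; }
}

public static class OrderReport
{
    // The anti-pattern: the deferred query is re-executed on every enumeration
    public static void Slow(IEnumerable<Order> orders)
    {
        var overdue = orders.Where(o => o.IsOverdue);
        var count = overdue.Count();                                   // pass 1
        var total = overdue.Sum(o => o.Total);                         // pass 2
        var first = overdue.OrderBy(o => o.DueDate).FirstOrDefault();  // pass 3
    }

    // One larger query, materialized once with ToList()
    public static void Better(IEnumerable<Order> orders)
    {
        var overdue = orders.Where(o => o.IsOverdue)
                            .OrderBy(o => o.DueDate)
                            .ToList();
        var count = overdue.Count;
        var total = overdue.Sum(o => o.Total);
        var first = overdue.FirstOrDefault();
    }
}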

A few months later when .NET 4 came out of beta, and further still when the Task Parallel Library was released, it changed my view on performance (after all, jcBENCH stemmed from my passion for diving into parallel programming on different architectures and operating systems back in January 2012). No longer was I relying on CPUs with high single-threaded performance, but instead writing my code to take advantage of the ever-increasing number of cores available to me at this particular client (for those curious, 2U 24-core Opteron HP G5 rackmount servers).
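
A minimal sketch of that shift in mindset with the TPL (the work item here is hypothetical):

using System.Collections.Generic;
using System.Threading.Tasks;

public static class BatchRunner
{
    // Hypothetical CPU-bound unit of work
    private static void Process(string fileName) { /* parse, transform, save */ }

    public static void Run(IEnumerable<string> fileNames)
    {
        // Before: foreach (var f in fileNames) Process(f);  -- one core busy, the rest idle
        // After: the TPL partitions the work across every available core
        Parallel.ForEach(fileNames, Process);
    }
}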

With .NET 4.5's async/await I was hopeful that more developers I worked with would take advantage of this easy model and no longer lock the UI thread, but I was largely disappointed. If developers couldn't grasp async/await, let alone the TPL, how could they proceed to what I feel is an even bigger breakthrough becoming available to developers: heterogeneous programming, or more specifically OpenCL?
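
As a sketch of how little ceremony that takes (assuming a WPF window's code-behind with GenerateReportButton and ReportTextBox controls and a made-up ReportRepository class):

// Hypothetical WPF code-behind event handler
private async void GenerateReportButton_Click(object sender, System.Windows.RoutedEventArgs e)
{
    GenerateReportButton.IsEnabled = false;

    // Push the CPU-bound work to the thread pool; awaiting keeps the UI
    // thread free to pump messages instead of locking up
    var report = await Task.Run(() => ReportRepository.BuildLargeReport());

    ReportTextBox.Text = report;
    GenerateReportButton.IsEnabled = true;
}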

With parallel programming comes the need to break your problem down into independent sub-problems that all come together at a later time (like breaking down image processing to look at a range of pixels rather than the entire image, for instance). This is where heterogeneous programming can make an even bigger impact, in particular with GPUs (Graphics Processing Units), which have upwards of hundreds of cores to process tasks.
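
A minimal sketch of that decomposition on the CPU side with the TPL, before ever touching a GPU (the grayscale conversion is just an illustrative per-pixel operation; an OpenCL kernel would perform the same per-pixel work with far more cores behind it):

using System.Threading.Tasks;

public static class ImageFilter
{
    // pixels is a flat RGBA buffer, 4 bytes per pixel
    public static void ToGrayscale(byte[] pixels, int width, int height)
    {
        // Each row is independent of the others, so rows can run on any core
        Parallel.For(0, height, y =>
        {
            for (int x = 0; x < width; x++)
            {
                int i = (y * width + x) * 4;
                byte gray = (byte)((pixels[i] * 30 + pixels[i + 1] * 59 + pixels[i + 2] * 11) / 100);
                pixels[i] = pixels[i + 1] = pixels[i + 2] = gray;
            }
        });
    }
}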

I had dabbled in OpenCL as far back as June 2012 when working on the OpenCL version of jcBENCH, and I did some further research back in January/February of this year (2014) in preparation for a large project at work – a project in which I ended up using the TPL extensively instead. The problem wasn't OpenCL's performance, but my mindset at the time. Before the project began, I thought I knew the problem inside out, but really I only knew it as a human would think about it – not as a machine that only knows 1s and 0s. The problem wasn't a simple task, nor was it something I had ever attempted previously, so I gave myself some slack when, two months in, it finally hit me what I was really trying to solve: teaching a computer to think like a human. Therefore, when pursuing heterogeneous programming as a possible solution, ensure you have a 100% understanding of the problem and what you are ultimately trying to achieve; in most cases it might then make sense to utilize OpenCL instead of a traditional parallel model like the TPL.

So why OpenCL, outside of the speed boost? Think about the last laptop or desktop you bought; chances are you have an OpenCL 1.x compatible APU and/or GPU in it (i.e. you aren't required to spend any more money – just utilize what is already available to you). In particular on the portable side, with laptops/Ultrabooks that already have a lower-performing CPU than your desktop, why tax the CPU when the GPU could offload some of that work?

The only big problem with OpenCL for C# programmers is the lack of an officially supported interop library from AMD, Apple or any of the other members of the OpenCL group. Instead you're at the mercy of one of the freely available wrapper libraries like OpenCL.NET, or simply writing your own wrapper. I haven't made up my mind yet as to which path I will go down – but I know at some point a middleware makes sense. Wouldn't it be neat to have a generic work item and be able to simply pass it off to your GPU(s) whenever you wanted?
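
To make that middleware idea concrete, here is the rough shape I have in mind; everything below is hypothetical and would sit on top of whichever wrapper (OpenCL.NET or hand-rolled) ends up underneath:

using System.Threading.Tasks;

// Hypothetical abstraction: a self-describing unit of work that a scheduler
// could hand to a GPU via OpenCL, or fall back to the TPL on the CPU
public interface IWorkItem<TInput, TResult>
{
    string KernelSource { get; }        // OpenCL C source for the GPU path
    TResult RunOnCpu(TInput input);     // fallback when no compatible device exists
}

public interface IComputeScheduler
{
    Task<TResult> RunAsync<TInput, TResult>(IWorkItem<TInput, TResult> item, TInput input);
}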

As far as where to begin with OpenCL in general, I strongly suggest reading the OpenCL Programming Guide. For those who have done OpenGL and are familiar with the "Red Book", this book follows a similar pattern with a similar expectation and end result.

Closing

Could I be way off? Sure – it's hard to predict the future while being grounded in the past that brought us here; it's hard to let go of how we as programmers and technologists have evolved over the last 5 years to satisfy not only consumer demand but our own, and to anticipate what comes next. What I am more curious to hear from is programmers outside of the CLR, in particular the C++, Java and Python crowds: where do they feel the industry is heading, and how do they see their programming language handling the future? Please leave comments.

Monday, July 07, 2014

AMD Athlon 5150 and MSI AM1I

Posted By Jarred Capellman

Over the weekend I picked up an AMD Athlon 5150 APU (4x1.6 GHz) along with an MSI AM1I motherboard from Fry's, on sale for $59. A month or two ago I purchased the 5350 APU (4x2 GHz) and an ASUS AM1-I, which has been working great since I set it up, so I was curious how the 5150 and the MSI motherboard would perform.

[Photo: MSI AM1I and AMD 5150]

The big differences between the two are the inclusion of a Mini-PCIe slot (for a WiFi or SSD card) and a PCIe 2.0 x16 slot (x4 mechanical).

[Photo: AMD 5150]

For those that have followed my blog for a while, I swapped the motherboard into the Lian Li PC-Q03B case that I bought back in 2012. In the time since I set up that machine, I had installed XBMC to stream TV, movies and music from my NAS (to be written up at a later date). Over time it became apparent the low-powered AMD C-60 I had in there wasn't truly up to the task of streaming higher bit-rate 1080p video.

One thing I wasn't a big fan of after booting up the machine was MSI's choice of BIOS in comparison to my ASUS motherboards:
[Screenshot: MSI AM1 BIOS]

BIOS aside, I had no issues with the motherboard; Windows 8.1 updated itself without the need for a reinstall and I was ready to benchmark it.

Interestingly enough, comparing jcBENCH results, the 5150 is roughly 40% slower in both integer and floating point than my 5350. I included a couple of other systems for comparison:

[Screenshot: AMD 5150 benchmark results]

Sunday, July 06, 2014

jcBENCH Release Announcement and New Project Announcement

Posted By Jarred Capellman

First off, things have been a bit hectic since my last posting (sadly almost a month ago). I've been doing extensive volunteer work on an all-new custom ASP.NET MVC 5 web application for Baltimore GiveCamp, to the point that it's become a second job (for better or for worse).

In between that work and my day job, I managed to get jcBENCH its own domain along with an all-new MVC/WebAPI site hosted on Azure. Newly available is a Windows Store port along with an updated Android port (the last update to the Android port was 2.5 years ago – good to get that done). Big changes in the 0.9 release are the ability to upload your results from every platform along with a new scoring system to better show performance across all devices. Every port but IRIX/MIPS3 and Mac OS X/x86 has been updated (hoping to get those done sometime this week). More to come on the web site front in regards to graphs and comparisons (along with some responsive UI fixes on lower resolution displays). As of right now, every current major platform outside of iOS and BlackBerry has a port. If someone wishes to waive the iOS Store fee and the Xamarin Framework license fee, I would be happy to do the iOS port; for a free app, I just can't justify the cost.

In addition, I've resurrected a project from July/August 2008 and finally figured out how to actually implement it (it wasn't a focus of my time, for sure – I can only remember one instance since then where I actually opened Visual Studio to work on it). Technology has definitely advanced far beyond where it was then, and I've added several extremely neat features (at least to me) purely from how far I've come as a programmer in the 6 years since. My goal is to have an alpha released sometime in August. What is neat is that no one else (to my knowledge) has done anything of this scope or functionality – nor at this scale, especially as a single developer.

Along with this project, two smaller libraries will also be released freely with the hope of some adoption, but at the end of the day I am making them to assist myself in all future projects (creating a jcPLATFORM, for lack of a better name). If no one else uses them, there will be no sorrow on my part.

More to come in the days ahead – two long blog posts and the remaining two ports of jcBENCH in particular.