latest posts

As a frequent or infrequent visitor might have noticed, the site has undergone a slight refresh. I kept the Bootstrap menu, but stripped out the right-hand panel (though it might come back in a different form). The biggest undertaking was the complete rewrite of the codebase from ASP.NET MVC 5 and Entity Framework 6.1 to ASP.NET Core and Entity Framework Core. The codebase is now considerably smaller and, as you can probably tell, faster, even without the caching that was previously turned on.

As my time has shifted from blogging to my GitHub projects, I wanted the focus of my blog to shift with it. Over the next couple of weeks I will be adding feeds and milestones for the projects I am working on to the header area. This way a visitor might notice I have not blogged in a while, but can still see active progress on GitHub.

That being said, I have been dividing my time between a couple of different projects. One is bbXP, the codebase that powers this blog. The other is jcFUS, a collaboration tool for businesses and consumers. In the coming weeks expect a lot of coverage of these projects.

One might be asking, where is the updated code for bbXP? I will be polishing it up today and checking it into the GitHub repository.

Some other features coming back at some point:
  • Archives
  • Content Search
  • White Papers
  • My Computers
So stay tuned for more updates and some other posts on the hardware side of my passion.
Going back to 11/26/1997 (scary that it has been almost 18 years), I've been fascinated by artificial intelligence. In November 1997 I was still pretty much doing QBasic with a little Visual Basic, so it is not surprising that the sole surviving code snippet I have from my Jabba Chat application (yes, I was a huge Star Wars fan even back then) is in QBasic:

PRINT "Welcome to JABBA CHAT"
PRINT "Your Jabba friend, Karen"
PRINT "What is your first name", name$
CLS
PRINT "Well HI There "; name%
PRINT "It sure is neat to have You Drop by"
PRINT "Press space bar when ready you're ready to start"


As simple as it may be, this is at a basic level following a pre-programmed path, taking input and "learning" a person's name. Nowadays, with programming languages better structured to handle AI, far more processing power, and overall a better understanding of how we humans think, there has never been a better time to dive into this area.

As stated in a previous post back in July 2014, I've become heavily invested in making true Artificial Intelligence work in C#. In working on a fully automated scheduling system at work I hit a lot of big questions I had never encountered in my 15 years of professional development, one in particular:

How do you not only replicate a human's job, but also make it better by taking advantage of one of the biggest advantages a computer has over a human being: the ability to process huge data sets extremely fast (and consistently produce results)?

The answer wasn't the solution; instead I realized I was asking the wrong question, and only after really deep diving into the complexities of the rather small "artificial intelligence" engine did I come to realize this. The question should have been: what drives a human to make decisions? The simple programmatic answer to that question is to go through and apply conditionals for every scenario. Depending on what the end goal of the project is, that may be a good choice, but if it is a more complex solution, or it hits one of the most common events in a computer program, the unexpected, that approach can't be applied.

This question drove me down a completely different path, thinking about how a human being makes decisions when he or she has yet to encounter a scenario, an unhandled exception if you will. Thinking about decisions I have had to make throughout my life, big or small, I have relied on past experience. An application just going to production has no experience; each nanosecond is a new experience, making it for all intents and purposes a human infant. Remembering back as far as I can to when I was 4, sometimes you would fail or make a mistake, as we all have, and the key, as our parents instilled in us, was to learn from the mistake or failure so we wouldn't make it again.

Applications for the most part haven't embraced this. Most of the time a try/catch is employed, with an even less likely alert to a human notifying them that their program ran into a new (or possibly repeated, if the error is caught numerous times before being patched, if ever) experience. The human learns of the "infant's" mistake and hopefully corrects the issue. The problem here is that the program didn't learn; it simply was told nothing more than to check for a null object or the appropriate way to handle a specific scenario, i.e. a very rigid form of advancement. This has been the accepted practice for as long as I have programmed: a bug arises, a fix is pushed to staging, tested and pushed to production (assuming it wasn't a hotfix). I don't think this is the right approach any longer. Gone are the days of extremely slow x86 CPUs or limited memory. In today's world we have access to extremely fast GPUs and vast amounts of memory that largely go unused, coupled with languages that facilitate anything we as programmers can dream up.

So where does the solution really reside?

I believe the key is to architect applications to become more organic, in that they should learn from paths taken previously. I have been bouncing around this idea for the last several years, looking at self-updating applications where metrics captured during use could be used to automatically update the application's own code. The problem with this is that you're then relying on the original programming both to affect the production code and to update the code doing the updating, let alone ensuring that changes made automatically are tracked and reported appropriately. I would venture most larger applications would also need projections to be performed prior to any change, along with the scope of what was changed.

Something I added to the same platform at work was the tracking of every request by user: what he or she was requesting, the timestamp, and the duration from the initial request to the returning of information or the processing of the request. To me this not only provided the audit trails that most companies desire, and thereby the ability to add levels of security to specific pieces of the information system regardless of the platform, but also the ability to see that "John Doe requests this piece of information and then this piece of information a few seconds later every time". At that point the system would look for these patterns and alert the appropriate party. In that example, is it that the user interface for what John Doe needs to access requires two different pages, or was he simply investigating something? Without this level of granularity you are relying on the user to report these "issues", which rarely happens, as most users get caught up in simply doing their tasks as quickly as possible.
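To make that concrete, the data captured per request boils down to a record along these lines. This is a hypothetical sketch for illustration only; the class and property names are not the actual types from the platform described above:

[csharp]
using System;

// Hypothetical audit record: who requested what, when, and how long it took
public class RequestAuditEntry
{
    public Guid ID { get; set; }                   // unique identifier for the request
    public string Username { get; set; }           // who made the request
    public string RequestedResource { get; set; }  // what was requested
    public DateTimeOffset Timestamp { get; set; }  // when the request started
    public TimeSpan Duration { get; set; }         // initial request to returned result
}
[/csharp]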

Going forward I hope to add automatic reporting of trends and take a proactive approach to performance-ridden areas of the system (if metrics for the last 6 months show a particular request returning in 0.13 seconds on average and then for the last week it jumps to 3.14 seconds on average, the dev team should be alerted so they can investigate the root cause). However, this is only a small piece of my longer term goal of designing a system that truly learns. More on this in the coming months as my next generation ideas come to fruition.
While largely quiet on here since my last post two weeks ago, I have been hard at work on several smaller projects, all of which are on GitHub. As mentioned previously, everything I work on in my free time will be open sourced under the MIT License.

jcAnalytics

The first item I should mention is some new functionality in my jcAnalytics library. Earlier this week I had some ideas for reducing collections of arbitrary data down to distinct elements. For example, if you had 3 objects of data, with 2 of them being identical, my reduction extension methods would return 2 instead of 3. This is one of the biggest problems I find when analyzing data for aggregation or simply reporting, especially when the original amount of data is several hundred thousand elements or more. I attempted the more straightforward single-threaded model first; as expected, its performance as the number of elements increased was dramatically slower than a parallel approach. Wondering if there were any theories on taking a sampling of data quickly to scale as the number of items increased, I was surprised there was not more research on this subject. Doing a Log(n) sample size seemed to be the "go to" method, but I could not find any evidence to support the claim. This is where I think recording patterns of data and then persisting those patterns could actually achieve this goal. Since every problem and dataset is unique, over time the extension methods could in fact learn something along the lines of "I have a collection of 500,000 addresses; the last 10 times I ran I only found 25,000 unique addresses, at an average rate of one every 4 records." On subsequent runs, it could adapt per request, perhaps assigning Guids or another unique identifier to each run and keeping the result patterns on disk, in a SQL database or in Azure Cache. For those curious, I did update the NuGet package with these new extension methods. You can download the compiled NuGet package here on NuGet or via the NuGet Console with PM> Install-Package jcANALYTICS.Lib.
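For illustration only, a reduction extension along those lines can be sketched with PLINQ. This is a minimal example of the parallel distinct idea, not the actual jcANALYTICS implementation:

[csharp]
using System.Collections.Generic;
using System.Linq;

public static class ReductionExtensions
{
    // Reduces a collection down to its distinct elements using all available cores.
    // Relies on T implementing Equals/GetHashCode, or pass a custom comparer.
    public static List<T> ToDistinctList<T>(this IEnumerable<T> collection, IEqualityComparer<T> comparer = null)
    {
        return collection.AsParallel()
                         .Distinct(comparer ?? EqualityComparer<T>.Default)
                         .ToList();
    }
}
[/csharp]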

jcPIOL

A huge topic in my world at work has been offline/online hybrid mobile applications. The idea that one could "sync" and then pull down data for 100% offline use has been on my mind since it was requested several months ago by one of our clients. Knowing the first approach might not be the best, and that I wanted to create a generic portable class library that could be plugged into any mobile application on any platform (iOS, Android, Windows), I figured I would begin my research fully exposed on GitHub and then publish stable releases on NuGet as they were built. This project is of a larger nature in that it could quickly blossom into a framework instead of simply a library. As of right now on GitHub I have the GET, POST and DELETE HTTP verbs working to pull/push data, but not storing the data for offline purposes. I'm still working out the logistics of how I want to achieve everything, but the ultimate goal would be to have any request queued when offline and then automatically sync the data when a network connection is made. Handling multiple versions of data is a big question. Hypothetically, if you edited a piece of information and then edited it again, should it send the request twice or once? If you were online it would have been sent twice, and in some cases you would want the full audit trail (as I do in the large enterprise platform at work). Another question that I have not come up with a great answer for is the source-of-truth question. If you make an edit, then come online, I could see a potential race condition between the data syncing back and a request being made on the same data. Handling the push and pull properly will take some extensive logic and will more than likely be a global option, or configurable down to the request type level. I am hoping to have an early alpha of this working perfectly in the coming weeks.
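To give a sense of the direction, the queuing piece could start out as simple as the sketch below. This is purely illustrative and not the current jcPIOL code; the type names are hypothetical, and retry, conflict handling and persistence of the queue are exactly the open questions mentioned above:

[csharp]
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Hypothetical queued request captured while offline
public class PendingRequest
{
    public Guid ID { get; set; }
    public HttpMethod Method { get; set; }
    public string Url { get; set; }
    public string JsonBody { get; set; }
    public DateTimeOffset Queued { get; set; }
}

public class OfflineQueue
{
    private readonly Queue<PendingRequest> _pending = new Queue<PendingRequest>();

    public void Enqueue(PendingRequest request)
    {
        _pending.Enqueue(request);
    }

    // Replays the queued requests in order once a network connection returns
    public async Task FlushAsync(HttpClient client)
    {
        while (_pending.Count > 0)
        {
            var request = _pending.Dequeue();
            var message = new HttpRequestMessage(request.Method, request.Url);

            if (request.JsonBody != null)
            {
                message.Content = new StringContent(request.JsonBody, Encoding.UTF8, "application/json");
            }

            await client.SendAsync(message);
        }
    }
}
[/csharp]

Whether duplicate edits to the same piece of data get collapsed into one request or replayed individually would then be a policy decision layered on top of a queue like this.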

jcTRENDNET

This project came at the request of my wife, who wanted a way to view Trendnet cameras from her Nokia Lumia 1020 Windows Phone. Trendnet only offered apps for iOS and Android and there were no free apps available in the Windows Phone marketplace, so I spent an evening and wrote one last August (2014). Again going with the Windows 10 Universal approach, I began to rewrite the app to take advantage of all the new XAML and add in the features I had long wanted. In keeping with my open source initiative, all of the code is checked into GitHub. I am hoping to have everything ported from the old Windows Phone 8.1 app along with all of the new functionality this summer.

jcRSS

Another older project that I see a need to fulfill going forward. Since Google Reader faded away, I switched over to Feedly, but I really don't like their interface nor how slow it is. Originally this project was going to be an ASP.NET MVC/WebAPI project with a Windows Phone/Windows Store app. As with my other projects, I knew I wanted to simply port over the work I had done to a Windows 10 Universal App, but as I got into working on it, there was no reason to tie the apps back to a WebAPI service if I did away with the MVC view. Knowing I was going to be freely giving away this application and didn't want to have ads, I also didn't want to incur massive Azure fees if this were to take off. So for the time being this project will exist as a Windows 10 Universal App with full support for multiple devices (i.e. if you read an article on one device, it will be marked as read on the others). You can check out the code on GitHub. I'm hoping for a release in the coming months.

Federation

This was a project I had been slowly designing in my head since the summer of 2012: a turn-based Star Trek game without microtransactions and with the ability to simply keep playing as long as you want. I started coding this in August 2014 and into September 2014, but put it on hold to work on Windows IoT among other topics of interest. Now with Windows 10's release on the immediate horizon, I figured I should wrap up the game and in turn open source the project. As of now I'm in the process of porting over the XAML to Windows 10, as it was originally targeting Windows Phone 8.1. Once that process is complete, I will return to working on the logic and with any luck release it sometime this summer, but in the meantime you can check out the code on GitHub.

jcMATH

I originally wrote this "game" for my boss's child since there was not a dot math game in the Windows Phone marketplace. Seeing as how it got 0 downloads, I open sourced it. I did start porting it over to a Windows 10 Universal Application, but have not finished yet.

Closing

Now that Visual Studio 2015 RC is out, I will more than likely be returning to my open source bbXP project. The only reason I put it on hold was the issues I was running into with NuGet packages in CTP6 of Visual Studio 2015. Coming up in a few weeks is the 20th anniversary of when I wrote my first line of code; expect a retrospective post on that.

Continuing my work deep diving into ASP.NET 5 (vNext), I started going down the path of EntityFramework 7, which, similarly to ASP.NET 5, is like a reboot of the framework itself. For readers interested in diving in, I highly suggest watching the MVA video called What's New with ASP.NET 5, which goes over all of the changes in pretty good detail (though I have a running list of questions to ask at BUILD in a few weeks).

Noting that the EntityFramework 7 beta was included in my ASP.NET 5 project, I hit a roadblock in finding it through the usual method in the NuGet Package Manager. As of this writing, only 6.1.3 was available. In looking around, the answer is to add another NuGet package source. I had done this previously when I set up a private NuGet package server at work to host common libraries used throughout all of our projects. For those unaware, go to Tools->NuGet Package Manager->Package Manager Settings.

Once there, click on Package Sources and then the + icon, enter a descriptive name for the source, paste the following URL: https://www.myget.org/F/aspnetvnext/ and click Update. After you're done, you should have something similar to this:



You can now close out of that window and return to the NuGet Package Manager. Upon switching the Package Source dropdown to ASP.NET vNext (or whatever you called it in the previous screen) you should now see EntityFramework 7 (among other pre-release packages), as shown below.



Hopefully that helps someone out there wanting to deep dive into EntityFramework 7.

Per my announcement on Sunday, I'm working on making bbXP (the CMS that runs this site) generic to the point where anyone could just use it with minimal configuration/customization. Along this journey, I'm going to be utilizing all of the new ASP.NET 5 (vNext) features. This way I'll be able to use my platform as a test bed for all of the new features of ASP.NET 5 and then apply them to production products at work, much like Version 1 was back in the day when I wanted to deep dive into PHP and MySQL in 2003, and MVC in general almost 2 years ago.

Tonight's deep dive was into the new configuration model. If you've been developing for ASP.NET or .NET in general, you're probably accustomed to storing your settings in either the app.config or the web.config.

And then in your app you would do something like this:

[csharp] var siteName = ConfigurationManager.AppSettings["SITE_NAME"]; [/csharp] And if you got a little fancier, you would add a wrapper in your base page or controller to return typed properties for booleans or integers.
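For example, such a wrapper might have looked something like the following. This is a hypothetical helper, shown only to contrast with the new model below:

[csharp]
using System;
using System.Configuration;

public static class ConfigHelper
{
    // Typed accessors over ConfigurationManager.AppSettings
    public static string SiteName
    {
        get { return ConfigurationManager.AppSettings["SITE_NAME"]; }
    }

    public static bool GetBool(string key)
    {
        bool value;
        return Boolean.TryParse(ConfigurationManager.AppSettings[key], out value) && value;
    }

    public static int GetInt(string key)
    {
        int value;
        return Int32.TryParse(ConfigurationManager.AppSettings[key], out value) ? value : 0;
    }
}
[/csharp]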

With ASP.NET 5, configuration is completely new and extremely flexible, but with the same end result. I will assume you have at the very least downloaded and installed the Visual Studio 2015 CTP in addition to launching the ASP.NET 5 template to at least get somewhat comfortable with all the changes. If you are just starting, I highly suggest watching Daniel Roth's introduction video.

To dive into the configuration specifically, you will want to open Startup.cs. You will notice at the top of the class is the Startup constructor. For bbXP I wanted to add my own JSON configuration file, so my constructor looks like:
[csharp] public Startup(IHostingEnvironment env) { Configuration = new Configuration() .AddJsonFile("config.json") .AddJsonFile("bbxpconfig.json"); } [/csharp] Knowing I would not have the same ConfigurationManager.AppSettings access as I am used to, I wanted to make a clean method for which to access these configuration options and go one step further to make it strongly typed and utilize dependency injection. So I came up with a quick approach to dynamically populate a class and then use DI to pass the class to my controllers. To get started I wrote a quick function to populate an arbitrary class:

[csharp] private T readConfig<T>() where T : new() { var tmpObject = Activator.CreateInstance<T>(); var objectType = tmpObject.GetType(); IList<PropertyInfo> props = new List<PropertyInfo>(objectType.GetProperties()); var className = objectType.Name; foreach (var prop in props) { var cfgValue = Configuration.Get(String.Format("{0}:{1}", className, prop.Name)); prop.SetValue(tmpObject, cfgValue, null); } return tmpObject; } [/csharp] And then my arbitrary class:

[csharp] public class GlobalVars { public string SITE_NAME { get; set; } } [/csharp] (The Configuration.Get call above builds keys in the form ClassName:PropertyName, so for this class it looks for a "GlobalVars:SITE_NAME" value in one of the JSON files.) Scrolling down to the ConfigureServices function, also in Startup.cs:

[csharp] public void ConfigureServices(IServiceCollection services) { services.AddMvc(); services.AddWebApiConventions(); var options = readConfig<GlobalVars>(); services.AddSingleton(a => options); } [/csharp] In this method the first 2 lines are unchanged, but the last 2 add my GlobalVars to the DI list and initialize it with the options from my file. Now to see it in action inside a controller:

[csharp] [Activate] private GlobalVars _globalVars { get; set; } public IActionResult Index() { ViewBag.Title = _globalVars.SITE_NAME; return View(); } [/csharp] Notice how clean the access to the option is now, simply using the new Activate attribute on top of the GlobalVars property. Something I'll be adding to this helper going forward is type inference, so the readConfig method would typecast each value to the type of the corresponding property in your arbitrary class.
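That change would likely just amount to swapping the raw string assignment inside readConfig for a conversion based on each property's declared type, roughly along these lines (an untested sketch):

[csharp]
foreach (var prop in props)
{
    var cfgValue = Configuration.Get(String.Format("{0}:{1}", className, prop.Name));

    // Convert the raw string to the property's declared type (int, bool, etc.)
    // instead of assuming every configuration value is a string
    var typedValue = Convert.ChangeType(cfgValue, prop.PropertyType);

    prop.SetValue(tmpObject, typedValue, null);
}
[/csharp]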

Hopefully this helped someone out there in diving into the next version of ASP.NET, more to come for sure.

For as long as I can remember since C# became my language of choice, I've been yearning for the cleanest and most efficient way of getting data from a database (XML, SQL Server etc.) into my C# application, whether it was in ASP.NET, WinForms or a Windows Service. For the most part I was content with Typed DataSets back in the .NET 2.0 days: creating an XML file by hand with all of the different properties, running the command line xsd tool on the XML file and having it generate a C# class I could use in my WinForms application. This had problems later on down the road when Portable Class Libraries (PCLs) became available, eliminating code duplication but to this day lacking Typed DataSet support, not to mention that client<->web service interactions have changed greatly since then, switching to a mostly JSON REST infrastructure.

Ideally I wish there was a clean way to define a class inside a Portable Class Library (made available to Android, iOS, Windows Store, Windows Phone and .NET 4.5+) and have Entity Framework map to those entities. Does this exist today? To the best of my research it does not, at least not without a lot of work upfront (more on this later). So where does that leave us today?

For most projects you probably see a simple Entity Framework model mapped to SQL tables, views and possibly stored procedures living inside the ASP.NET WebForms or MVC solution. While this might have been acceptable 4 or 5 years ago, in the multi-platform world we live in today you can't assume you'd only have a Web client. As I've stated numerous times over the years, investing a little bit more time in the initial development of a project to plan ahead for multiple platforms is key today. Some of you might be saying "Well I know it'll only be a Web project, the client said they'd never want a native mobile app when we asked them." Now ask yourself how many times a client came back and asked for the thing they said they never wanted when you or the PM originally asked. Planning ahead not only saves time later, but delivers a better product for your client (internal or external), bringing a better value add to your service.

Going back to the problem at hand: an Entity Framework model highly coupled to the rest of the ASP.NET application. Below are some possible solutions (not all of them, but in my opinion the most common):

1. Easy way out

Some projects I have worked on had the Entity Framework model (and associated code) in its own library, with the ASP.NET (WebForms, MVC, WebAPI or WCF service) project simply referencing the library. This is better in that if you migrate from the existing project or want a completely different project to reference the same model (a Windows Service perhaps), you don't have to invest the time in moving all of the code and updating all of the namespaces in both the new library and the project(s) referencing it. However, you still have the tight coupling between your project and the Entity Framework model.

2. POCO via Generators

Another possible solution is to use the POCO (Plain Old CLR Object) approach with Entity Framework. There are a number of generators (Entity Framework Power Tools or the EntityFramework Reverse POCO Generator), but both leave Entity Framework dependencies in the clients that reference the POCO classes, thus negating the idea that you'd be able to have one set of classes for both the clients of your platform and Entity Framework.

3. POCO with Reflection

Yet another possible solution is to create a custom attribute and, via reflection, map to a class object defined in your PCL. This approach has the cleanness of the following possible POCO class with custom attributes:
[csharp] [POCOClass] [DataContract] public class UserListingResponseItem { [DataMember] [POCOMember("ID")] public int UserID { get; set; } [DataMember] [POCOMember("Username")] public string Username { get; set; } [DataMember] [POCOMember("FirstName")] public string FirstName { get; set; } [DataMember] [POCOMember("LastName")] public string LastName { get; set; } } [/csharp] The problem with this solution is that, as any seasoned C# developer knows, reflection is extremely slow. If performance wasn't an issue (very unlikely), then this could be a possible solution.
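To make the idea concrete, the attribute and the reflection-based mapper might look roughly like the following. This is a hypothetical sketch of option 3 (with the [POCOClass] marker omitted), not code from an actual library:

[csharp]
using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Property)]
public class POCOMemberAttribute : Attribute
{
    public string SourceName { get; private set; }

    public POCOMemberAttribute(string sourceName)
    {
        SourceName = sourceName;
    }
}

public static class POCOMapper
{
    // Copies values from an Entity Framework entity onto a POCO instance by
    // matching each [POCOMember] attribute to the entity property it names
    public static T Map<T>(object entity) where T : new()
    {
        var result = new T();
        var entityType = entity.GetType();

        foreach (var prop in typeof(T).GetProperties())
        {
            var attribute = prop.GetCustomAttributes(typeof(POCOMemberAttribute), false)
                                .Cast<POCOMemberAttribute>()
                                .FirstOrDefault();

            if (attribute == null)
            {
                continue;
            }

            var sourceProperty = entityType.GetProperty(attribute.SourceName);

            if (sourceProperty != null)
            {
                prop.SetValue(result, sourceProperty.GetValue(entity, null), null);
            }
        }

        return result;
    }
}
[/csharp]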

4. DbContext to the rescue

In doing some research on POCOs with Entity Framework, I came across one approach in which you can retain your existing model untouched, but then define a new class inheriting from DbContext like so:
[csharp]
public class TestModelPocoEntities : DbContext
{
    public DbSet<UserListingResponseItem> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Configure Code First to ignore the PluralizingTableName convention
        // If you keep this convention then the generated tables will have pluralized names.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Entity<UserListingResponseItem>().ToTable("Users");
        modelBuilder.Entity<UserListingResponseItem>().Property(t => t.UserID).HasColumnName("ID");
        modelBuilder.Entity<UserListingResponseItem>().HasKey(t => t.UserID);
    }
}
[/csharp] What this code block does is map the Users table to a POCO class called UserListingResponseItem (the same definition as above). By doing so you can then do the following in your code:
[csharp] using (var entity = new TestModelPocoEntities()) { return entity.Users.ToList(); } [/csharp] As one can see, this is extremely clean on the implementation side, albeit a bit tedious on the backend side. Imagining a recent project at work with hundreds of tables, this could be extremely daunting to maintain, let alone implement in a sizeable existing project.

Unsatisfied with these options, I was curious how a traditional approach would compare performance-wise to Option 4 above, given that it satisfied the requirement of a single class residing in a PCL. For comparison, assume the table is defined as such:

Data Translation User Table

The "traditional" approach:
[csharp] using (var entity = new Entities.testEntities()) { var results = entity.Users.ToList(); return results.Select(a => new UserListingResponseItem { FirstName = a.FirstName, LastName = a.LastName, Username = a.Username, UserID = a.ID }).ToList(); } [/csharp] This returns a List of User Entity Framework objects and then iterates over every item, setting the equivalent property in the UserListingResponseItem class before returning the result.

The Benchmark

For the benchmark I started with the MVC Base Template in Visual Studio 2015 Preview, removed all the extra Views, Controllers and Models and implemented a basic UI for testing:

Data Translation Base UI

A simple population of random data for the Users table and deletion of records before each test run:
[csharp] private void createTestData(int numUsersToCreate) { using (var entity = new Entities.testEntities()) { entity.Database.ExecuteSqlCommand("DELETE FROM dbo.Users"); for (var x = 0; x < numUsersToCreate; x++) { var user = entity.Users.Create(); user.Active = true; user.Modified = DateTimeOffset.Now; user.Password = Guid.Empty; user.LastName = (x%2 == 0 ? (x*x).ToString() : x.ToString()); user.FirstName = (x%2 != 0 ? (x * x).ToString() : x.ToString()); user.Username = x.ToString(); entity.Users.Add(user); entity.SaveChanges(); } } } [/csharp]

Benchmark Results

Below are the results of running the test 3 times for each size data set. For those that are interested, I was running the benchmark on my AMD FX-8350 (8x4GHz), VS 2013 Update 4 and SQL Server 2014, with the database installed on a Samsung 840 Pro SSD.

Data Translation Performance Results

The results weren't too surprising to me, figuring the "traditional" approach would be a factor or so slower than the DbContext approach, but I hadn't thought about it from the standpoint of larger datasets being considerably slower. Granted, we're talking fractions of a second, but multiply that by hundreds of thousands (or millions) of concurrent connections and it is considerable.

Closing thoughts

Having spent a couple hours deep diving into the newer features of Entity Framework 6.x hoping that the golden solution would exist today, I'm having to go back to an idea I had several months ago, jcENTITYFRAMEWORK, in which at compile time the associations would be created mapping the existing classes to the equivalent tables, views and stored procedures, in addition to utilizing the lower level ADO.NET calls instead of simply making Entity Framework calls. Where I left it off, I was still hitting an ASP.NET performance hit on smaller data sets (though on larger data sets my implementation was several factors better). More to come on that project in the coming weeks/months, as database I/O with C# is definitely not going away for anyone and there is clearly a problem with the possible solutions today. At the very least, coming up with a clean and portable way to allow existing POCOs to be mapped to SQL tables and stored procedures is a new priority for myself.

For those interested in the ASP.NET MVC and PCL code used in benchmarking the two approaches, you can download it here. It is not a definitive test, but it is real world enough. If for some reason I missed a possible approach, please comment below; I am very eager to see a solution.
One machine that I had always wanted finally came down to a price point I saw as a good deal. This machine, as expected from this post's title, is a Sun X4600 M2. This machine was originally released in Summer 2008, with the version I have released in late 2008. Luckily the version I have has 8 AMD Opteron 8384 quad-core CPUs for a total of 32 cores. Most configurations I had been seeing on eBay are either dual-core or lower-end quad-core models.

Sun Sunfire X4600 M2 - Front

Sun Sunfire X4600 M2 - Inside

Sun Sunfire X4600 M2 - Hard Drives

I had the Mushkin 120GB and Western Digital Black hard drives lying around from a NewEgg sale a while back. I should note the SAS controller on the X4600 is SATA II only, so SATA III drives will be limited to SATA II's 300MB/sec.

One word of caution: during the initial bootup of the system the decibel level gets quite extreme. After 10 seconds or so the noise drops to a bearable level, but nowhere near "living room" safe levels. As a result, I'm building a little rack in the garage, since it stays around 50°F all year round, and I can just use a Powerline adapter to keep it connected to my other machines.

Having purchased an HP blade thinking I might one day own an HP blade enclosure, I had 32GB of compatible DDR2 ECC RAM lying around. One thing to note: the arrangement of RAM is incredibly picky. If you're populating more than 4 DIMMs per CPU module, they all need to match. I had at first just used the 16 DIMMs to populate the CPU modules as I found empty slots. Upon turning on the X4600 I was presented with only a fraction of the CPU modules and 16GB of RAM (down from the 80GB it should have been). I ended up populating two CPU modules with the 16 2GB sticks I had from the HP blade and then maxing out another with the original RAM.

Sun Sunfire X4600 M2 - Extra ram from HP Blade

Operating system-wise, I chose Windows Server 2012 R2 since I had already started utilizing Hyper-V on my NAS for FreeBSD, OpenSUSE, Windows XP and Solaris VMs. To get Windows Server 2012 R2 installed I had to use an external USB drive and reduce the ECC RAM setting in the BIOS to Good; otherwise, after cycling through the onboard SAS controller, it would hang.

After applying all of the Windows Updates since 2012 R2 hit RTM, I pulled up Task Manager:

Sun Sunfire X4600 M2 - Windows Server 2012 R2 Task Manager

Immediately afterwards, I pulled open jcBENCH to see how it compared to my other systems, not surprisingly it is the current leader in the world:

[bash] jcBENCH 0.9.850.0531(x86/Win32 Edition) (C) 2012-2014 Jarred Capellman Usage: jcBench [Number of Objects] [Number of Threads] Example: jcBench 100000 4 This would process 100000 objects with 4 threads Invalid or no arguments, using default benchmark of 150000 objects using 32 CPUS CPU Information --------------------- Manufacturer: AuthenticAMD Model: Quad-Core AMD Opteron(tm) Processor 8384 Count: 32x2693.27mhz Architecture: x86 --------------------- Running Benchmark.... Integer: 1897 Floating Point: 2265 [/bash] Pretty impressive for a 6 year old machine, I believe. A part of me wishes I was still doing 3D animation and visual effects on a routine basis, as this would have made rendering with 3ds Max and After Effects back in the day so much better. Going forward this will be my primary Hyper-V server.

Is anyone else who's been around long enough to remember buying an AMD Thunderbird, Palomino or Barton feeling the urge to build a new AMD system to replace their existing one?

My primary desktop is still being powered by an FX-8350 I bought in November 2012. For everything I do (mainly Visual Studio and SQL Server), coupled with my Samsung SSDs, Radeons etc., it's fine; it just feels extremely weird to not have an upgrade path (ignoring the FX-9000 series) after years of doing a new build every year or every other year (back in the Athlon XP days it was every couple of months). In the meantime I've built 2 Kabini systems this year, one as a ClearOS firewall and another as an HTPC, and while those have worked out extremely well, there is a void in that my primary machine is relatively unchanged for almost 2 years.

I know there is always the Intel route - an i7 would be a definite upgrade - and I know it is silly, but I just can't bring myself to do it. When I switched to AMD in December 2000 with a 1GHz Athlon it seemed like more of an experiment. AMD wasn't new to making CPUs, but it was new to beating Intel in both price and performance. I had an aging Celeron 300A and tons of new first person shooters to play; the Athlon made sense then, and if I had to do it again, I most definitely would. The summer before, in 1999, after having worked on the Army base doing sysadmin work, I had over $1,000 to burn - I almost pulled the trigger on building the parts for a dual Pentium 3 600MHz system (buying only 1 CPU that summer since they were several hundred dollars each). Thankfully, my dad convinced me to instead buy a 19" iiyama CRT (1600x1200) to replace my 15" CTX CRT (1024x768) - much more bang for the buck. Thus the following Christmas in 2000 I received some money and fortunately had enough to buy an Abit KT7-A, 256MB of PC133 RAM and a 1GHz AMD Athlon (Socket A) - looking back, had I still had my summer job in Germany, I might have built 2 of them since they were so cheap in comparison to the Pentium 3 idea the summer before.

I hope the new AM3+ platform replacement (the latest rumor I read was next year) can really bring AMD back to the top. The FM2+ platform is nice, but the lack of 8-core variants without an iGPU is, I feel, definitely hurting AMD, especially for those sitting on the AM3+ platform like myself who would love to stay with AMD, but have nothing to upgrade to.

In doing some routine maintenance on this blog, I updated the usual JSON.NET, Entity Framework etc. After updating and testing locally, I came across the following error:
ASP.NET Webpages Conflict

In looking at the Web.config, the NuGet Package did update the dependentAssembly section properly:
ASP.NET Webpages Conflict

However, in the appSettings section, it didn't update the webpages:Version value:
ASP.NET Webpages Conflict

Simply update the "2.0.0.0" to "3.0.0.0" and you'll be good to go again.
I'm pleased to announce the first release of the x86/FreeBSD port of jcBENCH. A few notes on this port that I thought would be interesting:

1. At least with FreeBSD 10.0, you need to use clang++ instead of g++.
2. With FreeBSD in Hyper-V I needed to switch to utilizing the Legacy Network Adapter.

You can download the 0.8.755.0504 release here.
[bash] jcBENCH 0.8.755.0505(x86/FreeBSD Edition) (C) 2012-2014 Jarred Capellman CPU Information --------------------- Manufacturer: AuthenticAMD Model: AMD Phenom(tm) II X2 545 Processor Count: 2x1517mhz Architecture: amd64 --------------------- Running Benchmark.... Integer: 14.877 seconds Floating Point: 17.5544 seconds [/bash] I'm hoping to have the x86-Linux release later this week.
A little less than 2 months ago I had some crazy ideas for interacting with a SQL database from C# as opposed to simply using ADO.NET or Microsoft's own Entity Framework. Not sure exactly how I was going to implement some of the features, I shelved the idea until I came up with a clean way to implement it.

With this project I had four goals:
1. Same or similar syntax to Entity Framework - meaning I should be able to simply drop in my framework in place of Entity Framework with little to no changes.
2. Performance should be equal to or better in both console and WebAPI applications - covering both scenarios of desktop applications and normal for today, WebAPI Services returning results and executing SQL server side and then returning results to a client.
3. Implement my own caching syntax that puts the effort of caching on the Framework, not the user of the Framework.
4. Provide an easy way to generate strongly typed classes akin to Microsoft's Entity Framework.

This weekend I was able to achieve #1 and, to some degree, #2.

I was able to achieve an identical syntax to Entity Framework like in the snippet below:
[csharp] using (var jFactory = new jcEntityFactory()) { jFactory.JCEF_ExecuteTestSP(); } [/csharp] In regards to performance, I wrote 2 tests. One simply called a stored procedure with a single insert statement and the other returned several thousand rows. To give somewhat real-world results, I directly referenced the framework in a console application and then wrote a WebAPI service referencing the framework, along with a wrapper function to call the WebAPI service from a console application.

Without further ado, here are the results running it with 10 to 1000 iterations:
[bash] Console App Tests JC EF 10 Iterations with average of 0.00530009 MS EF 10 Iterations with average of 0.05189771 WebAPI Tests JC EF 10 Iterations with average of 0.18459302 MS EF 10 Iterations with average of 0.12075582 Console App Tests JC EF 100 Iterations with average of 0.000740188 MS EF 100 Iterations with average of 0.005783375 WebAPI Tests JC EF 100 Iterations with average of 0.018184102 MS EF 100 Iterations with average of 0.011673686 Console App Tests JC EF 1000 Iterations with average of 0.0002790646 MS EF 1000 Iterations with average of 0.001455153 WebAPI Tests JC EF 1000 Iterations with average of 0.0017801566 MS EF 1000 Iterations with average of 0.0011440657 [/bash] An interesting note is the difference in WebAPI performance compared to the console application. Sadly, with a WebAPI service my framework is nearly twice as slow, but in console applications (presumably WinForms and WPF as well) my framework was considerably faster.

So where does that leave the future of the framework? First off, I am going to investigate further the performance discrepancies between the two approaches. Secondly, I am going to add in caching support with the following syntax (assuming one would want to cache a query result for 3 hours):
[csharp] using (var jFactory = new jcEntityFactory()) { jFactory.Cache(JCEF_ExecuteTestSP(), HOURS, 3); } [/csharp] More to come with my framework as it progresses over the next few weeks. As far as a release schedule, once all four of my main project requirements are completed I will release a pre-release version on NuGet. I don't plan on open-sourcing the framework, but that may change further down the road. One thing is for sure, it will be freely available through NuGet.

After coming back from BUILD 2014 yesterday, I wanted to get my desktop all updated with the Visual Studio 2013 Update 2 RC, the Roslyn .NET Compiler CTP and the .NET Native Compiler.

The update to Visual Studio 2013 installed without a hitch, as did the .NET Native Compiler; however, the Roslyn VSIX file errored out with the following error:
Project Roslyn VSIX failing to install

Guessing the error was related to my recent install of Visual Studio 2010 to do ia64 development (more on that this week), I searched around and could only find one person's suggestion of reinstalling Visual Studio 2013 - definitely not something I was willing to do.

Checking in Windows Explorer with the Open With...Choose default program... option I noticed the Visual Studio 2010 Icon was associated with the Microsoft Visual Studio Version Selector:
Windows Explorer showing VS2010 Version Selector

I clicked More options, and then scrolled down to Look for another app on this PC:
Windows Explorer selecting Look for another app on this PC

I then navigated to my Visual Studio 2013 install (the c:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE folder) and selected VSIXInstaller.exe: Selecting Visual Studio 2013's VSIXInstaller

After selecting the Visual Studio 2013 VSIXInstaller application, Roslyn installed and I should note the icon in Windows Explorer (as expected) updated to the 2013 version:
Windows Explorer reflecting VS2013 VSIXInstaller Icon

For those having issues with a 2012 VSIX, simply use c:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE instead.

Hopefully that saves someone else from having to reinstall Visual Studio, as it did for me.

Having been using Ross Johnson's wonderful POSIX threads (pthreads) Win32 port for some time now, I was surprised to find that an ia64/Win32 port didn't exist. In thinking about it further, with the announcement that Microsoft was phasing out Itanium support nearly 4 years ago to the day, I guess I shouldn't have been too surprised.

For those that are stuck with an ia64/Win32 project or are just now wanting to mess around with the platform, you can download the latest (2.9.1 at the time of this writing) pthreads for ia64 port I did. The pre-compiled DLL is in the Pre-built.2\dll\ia64 folder and the pre-compiled LIB is in the Pre-built.2\lib\ia64 folder. Also included is the VS2013 solution (originally there was only a VC++ 6.0 workspace file).

In a future post I will detail my experience with Windows Server 2008 R2 for Itanium on my HP zx2000 Itanium 2 machine, along with native ia64/Win32 releases of jcBench and jcDBench.

Over the Labor Day weekend, I happened to be in Silicon Valley on a short vacation and Fry's Electronics luckily had an Asus X55U notebook on sale for $258. I had been looking for a long-battery-life laptop that wouldn't kill me if it ever got hurt while traveling (versus my much more expensive HP laptop). On top of that, I had wanted a Linux laptop to do cross-platform testing of jcDB, jcPIMP and the Mode Xngine. Planning ahead a bit, I brought an older Corsair Force 3 90GB SSD that I was no longer using and a Phillips screwdriver (yes folks, screwdrivers are allowed by the TSA, they just can't be more than a few inches long).

Asus X55U Notebook

Asus X55U Notebook box contents

Asus X55U OpenSUSE Boot Menu

Specifications wise the laptop has:
-15.6" 1366x768 Screen (not the greatest quality, but definitely a lot better than expected at the price point)
-AMD E2-1800 (Dual 1.7ghz APU that clocks itself down to 2x850mhz when performance isn't needed)
-4gb DDR3 1066 (1 slot, upgradeable to 8gb)
-Radeon HD 7340 (DX11, 512mb of system ram is used)
-500gb Hitachi SATA Drive
-1 USB 2.0 and 1 USB 3.0 Port
-HDMI Output
-Gigabit Ethernet
-802.11n WiFi
-VGA Output
-Mic and Headphone Jack
-DVD-RW Drive
-SD/SDXC/SDHC Memory Card Slot

I should note, this APU does support AMD's Virtualization, so you can run Hyper-V, Xen, VMware Workstation etc. on this notebook. Coupled with the 8gb of ram support, this could be a decent portable VM notebook for the price.

Fortunately, doing the swap of the hard drive was extremely easy as opposed to some laptops that require taking apart the entire laptop (looking back at the Dell Inspiron I purchased in October 2010). Just 2 screws to pull off the back, which also contains the single DDR3 SO-DIMM slot.

Asus X55U Notebook (Bottom)

Corsair Force 3 90gb SSD

Curious if the system supported 8gb of DDR3 ram (since the manual didn't specify), I bought an 8gb DDR3-1333 Corsair SO-DIMM:

Corsair 8gb DDR3-1333 SO-DIMM

Swapped out the Hynix 4gb with the Corsair:

Asus X55U with the Corsair 8gb DDR3 SO-DIMM installed

And sure enough, the notebook supports 8gb:

Asus X55U with 8gb showing in the BIOS

While in the BIOS, I should mention the charge-when-off feature this notebook has, meaning that with the lid closed you can still charge your phone or media player. I wish my HP had that functionality.

Asus X55U BIOS Options

OpenSUSE 12.3 installed without a hitch (even the WiFi worked right off the bat). After getting the system configured, the first thought I had was to take the recent ia64/Linux port of jcBENCH and port it over to x86/Linux. On the flight back from SFO I ported it over, and thankfully it was only a recompile with a slight tweak to the CPU detection.

How does the system perform?
[bash] jcBENCH 0.6.522.0928(x86/Linux Edition) (C) 2012-2013 Jarred Capellman CPU Information --------------------- Manufacturer: AuthenticAMD Model: AMD E2-1800 APU with Radeon(tm) HD Graphics Count: 2x1700.000mhz Architecture: x86/x86-64 --------------------- Running Benchmark.... Integer: 65.4932 seconds Floating Point: 35.6109 seconds [/bash] In comparison to my Silicon Graphics Prism (2x1.5GHz Itanium 2), it performs a little slower in integer operations, but is nearly 3X faster in floating point operations. In comparison to my HP DV7 laptop (AMD A10), it performs about 2X slower in the same dual-threaded applications, as expected with the slower clock rate and much smaller cache.

Overall, the notebook does exactly what I want and more for a $258 device. Build quality exceeds the Acer netbook and Dell Inspiron I had several years ago, coming close to my HP DV7, if only this Asus used the higher grade plastics. For those curious, battery life is about 4 hours with WiFi enabled and middle-of-the-road screen brightness.
With the ever-changing software development world I (and every other developer) live in, I've found it increasingly harder to keep up with every toolset and every framework, let alone language. In the last few years I have attempted to buckle down and focus on what I enjoy the most: C#. Over the last 6+ years of C# development I've inherited or developed ASP.NET WebForms 1.1 to 4.5 web applications, MVC4, 3.5 to 4.5 WinForms desktop applications, Windows Workflow, custom SharePoint 2010 parts, WPF, iOS, Android, Windows Phone 7/8, web services in WCF and WebAPI, and most recently dove into Windows 8 development. Suffice it to say, I've been around the block in regards to C#.

About 6 months ago, I started doing some early research in preparation for a very large project at work that relied more on mathematical/statistical operations than the traditional "make stuff happen" work that I am used to. Keeping with an open, out-of-the-box mentality, I just happened to be in the Book Buyers Inc. bookstore in downtown Mountain View, California on vacation and picked up Professional F# 2.0 for a few dollars used. Knowing they were already on version 3, I figured it would provide a great introduction to the language and then I would advance my skills through MSDN and future books. I pored over the book on the overly long flight from San Francisco International to Baltimore-Washington International, using my laptop the entire flight back to write quick snippets that I could easily port back and forth between C# and F# to see the benefits and best use cases for F#. When I returned home, I found myself wanting more, and as fate would have it, shortly afterwards SyncFusion was offering the F# Succinctly e-book by Robert Pickering for free.

Eager to read the e-book after my introduction to F#, I ended up finishing it over a short weekend. The e-book, while much shorter than the paperback I purchased, provided a great introduction and solidified many of the concepts I was still cementing in my mind. Like other developers, I am sure, when investing time into a new technology or language you want some guarantee of its success and worthiness of your time (especially if it is coming out of your precious off hours). Be happy to know the author chose to include real-world quotes and links to successes with F# over traditional C# implementations. I should note that while the author does not assume Visual Basic or C# experience, it definitely will help, but I feel that the book provides an in-depth enough explanation and easy to follow examples for anyone with some higher level programming experience to grasp the main concepts and build a solid foundation to grow from.

Another element of the e-book I personally enjoyed was the intuitive and easy to follow progression the author chose to utilize. Early on in the book the author offered an introduction to F# and proceeded to dive into the fundamentals before providing real use cases that a professional software developer would appreciate. Several books provide an introductory chapter only to spend the next half of the book on reference-manual text or snippets that don't jump out at you with real-world applicability, or even a component of it.

If there was one element I wished for in the e-book, it would be for it to be longer or for a part 2 to be written. This "sequel" would build on the concepts provided, assume a solid foundation of F#, and dive into more real-world scenarios where F# would be beneficial over C# or other higher level programming languages - essentially a "best practices" for the C#/F# programmer.

On a related note, during my own investigations into F# I found the Microsoft Try F# site to be of great assistance.

In conclusion, definitely check out the F# Succinctly e-book (and others) in SyncFusion's ever growing library of free e-books.
After updating a large ASP.NET 4.5 WebForms application Friday, this afternoon I started to look into the new features in the release and discovered the "lightweight" rendering mode of the RadWindow control. Previously - going back to 2011/2012 - one of my biggest complaints with the RadWindow was the hacking involved in order to make the popup appear relatively the same across Internet Explorer 9, Chrome and Firefox. Some back and forth with Telerik's support left much to be desired, so I ended up just padding the bottom and it more or less worked. Thus my excitement for a possible fix to this old problem - it turns out the new lightweight mode does in fact solve the issue; across Internet Explorer 11, Firefox and Chrome there are only minimal differences now for my popups' content (W3C-validated DIVs for the most part).

This is where the fun ended for me temporarily - in the Q2 2013 (2013.2.717.45) release, the h6 tag for the Title was empty:

I immediately pulled open the Internet Explorer 11 Developer Tool Inspector and found this curious:
Not enjoying hacking control suites, but needing to implement a fix ASAP so I could continue development, I simply put this CSS override in my WebForm's CSS theme file: [css] .RadWindow_Glow .rwTitleWrapper .rwTitle { width: 100% !important; } [/css] Depending on the skin you selected, you will need to update the name of the class. Hope that helps someone out there - according to the forums, it is a known issue and will be fixed in the Q2 SP1 2013 release, but in the meantime this will correct the issue.
After a few days of development, jcBENCH2 is moving along nicely. Features completed:

1. WebAPI and SQL Server Backend for CRUD Operations of Results
2. Base UI for the Windows Store App is completed
3. New Time Based CPU Benchmark inside a PCL
4. Bing Maps Integration for viewing the top 20 results

Screenshot of the app as of tonight:
jcBENCH2 Day 4

What's left?

7/17/2013 - Social Networking to share results
7/18/2013 - Integrate into the other #dev777 projects
7/19/2013 - Bug fixes, polish and publish

More details on the development process will come after it is complete - right now I would rather focus on actually building the app.
Starting a new (old) project this weekend as part of the #dev777 project: jcBENCH 2. The idea being 7 developers develop 7 apps and have them all communicate with each other on various platforms.

Those that have been following my blog for a while might know I have a little program called jcBENCH that I originally wrote in January 2012 as part of my trip down the Task Parallel Library in C#. Originally I created Mac OS X, IRIX (in C++), Win32 and Windows Phone 7 ports. This year I created a Windows Phone 8 app and a revamped WPF port utilizing a completely new backend.

So why revisit the project? The biggest reason: I'm never 100% satisfied. Because my skill set is constantly expanding, I find myself always wanting to go back and make use of a new technology even if the end user sees no benefit. It's the principle - never let your code rot.

So what is Version 2 going to entail? Or better put, what are some of the issues in the jcBENCH 1.x codebase?

Issues in the 1.x Codebase

Issue 1

As it stands today all of the ports have different code bases. In IRIX's case this was a necessity since Mono hasn't been ported to IRIX (yet). With the advent of PCL (Portable Class Libraries) I can now keep one code base for all but the IRIX port, leaving only the UI and other platform specific APIs in the respective ports.

Issue 2

On quad-core machines or faster, the existing benchmark completes in a fraction of the time. This poses two big problems: it doesn't represent a real test of performance over more than a few seconds (meaning all of the CPUs may not have enough time to be tasked before completion), and on the flip side, on devices that are much slower (like a cell phone) it could take several minutes. The solution? Implement a 16-second timed benchmark and then calculate the performance based on how many objects were processed during that time.
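The core of that timed run could be as simple as the sketch below. This is an illustration of the approach rather than the actual jcBENCH 2 code:

[csharp]
using System;
using System.Diagnostics;

public static class TimedBenchmark
{
    // Runs the supplied work item repeatedly for the given duration and returns
    // how many objects were processed, which then becomes the score
    public static long Run(Action processOneObject, TimeSpan duration)
    {
        long processed = 0;
        var timer = Stopwatch.StartNew();

        while (timer.Elapsed < duration)
        {
            processOneObject();
            processed++;
        }

        return processed;
    }
}
[/csharp]

A 16-second run is then just TimedBenchmark.Run(work, TimeSpan.FromSeconds(16)), with the multi-core variants dispatching the same loop across the selected number of threads.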

Issue 3

When testing multi-processor performance, it was cumbersome to test all of the various scenarios. For instance, if you had an 8-core CPU as I do with my AMD FX-8350, I had to select 1 CPU, run the benchmark, record the result, select 2 CPUs and repeat, so on and so forth. This took a long time when in reality it would make sense to either run the benchmark using all cores by default and allow the end user to select a specific test via an advanced option, or have it do the entire test suite automatically.

Issue 4

No easy way to share the results exists across the board in the current version. In recent versions I added a centralized result database and charting so no matter the device you could see how your device compared, but there was no easy way to get a screenshot of the benchmark, send the results via email or post them on a social network. Where is the fun in a benchmark if you can't brag about it easily? In Version 2 I plan to focus on this aspect.

Proposed Features for Version 2

1. Rewritten from the ground up utilizing the latest approaches to cross-platform development I have learned since jcBENCH's original release 1/2012. This includes the extensive use of MVVMCross and Portable Class Libraries to cut down on the code duplication among ports.

2. Sharing functionality via email and social networking (Twitter and Facebook) will be provided; in addition, a new Bing Map will visually reflect the top performing devices across the globe (if the result is submitted with location access allowed)

3. Using a WebAPI (JSON) backend instead of WCF XML for result submission and retrieval. For this app, since there is no backend processing between servers, WebAPI makes a lot more sense (a rough sketch of what a JSON submission could look like follows after this list)

4. A new time-based benchmark, as opposed to measuring the time to process X amount of tasks

5. Offer an "advanced" mode to allow the entire test suite to be performed or individual tests (by default it will now use all of the cores available)

6. At launch only a Windows Store app will be available, but Windows Phone 7/8 and Mac OS X ports will be released later this month.
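As a minimal illustration of item 3 above, posting a result as JSON could be as simple as the following sketch (the class name, endpoint URL and payload here are placeholders, not the actual jcBENCH 2 service):

[csharp]
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class ResultClient
{
    // Submits a serialized benchmark result to a hypothetical WebAPI endpoint as JSON
    public async Task SubmitAsync(string jsonPayload)
    {
        using (var client = new HttpClient())
        {
            var content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");

            await client.PostAsync("http://example.com/api/results", content);
        }
    }
}
[/csharp]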

Future Features

The ability to benchmark GPUs is something I have been attempting to get working across platforms, and for those that remember, I had a special alpha release last fall using OpenCL. Once the bugs and features for Version 2 are completed, I will shift focus to making this feature a reality.

Implement all of this functionality in an upgraded IRIX port and finally create a Linux port (using Mono). One of the biggest hurdles I was having with keeping the IRIX version up to date was the SOAP C++ libraries not being anywhere near the ease of use a Visual Studio/C# environment offers. By switching over to HTTP/JSON I'm hoping to be able to parse and submit data much more easily.

Next Steps

Given that the project is an app in 7 days, today marks the first day of development. As with any project, the first step was getting a basic feature set down, as mentioned above, and now creating a project timeline based on that functional specification.

As with my WordPress to MVC Project in April, this will entail daily blog posts with my progress.

Day 1 (7/13/2013) - Create the new SQL Server Database Schema and WebAPI Backend
Day 2 (7/14/2013) - Create all of the base UI Elements of the Windows Store App
Day 3 (7/15/2013) - Create the PCL that contains the new Benchmark Algorithms
Day 4 (7/16/2013) - Integrate Bing Maps for the location based view
Day 5 (7/17/2013) - Add Social Networking and Email Sharing Options
Day 6 (7/18/2013) - Integrate with fellow #dev777 projects
Day 7 (7/19/2013) - Bug fixing, polish and Windows Store Submission

So stay tuned for an update later today with the successes and implementation details of the new SQL Server Database Schema and WebAPI Backend.
I started working on a new project at work today and have begun to utilize the free Azure hours bundled with our MSDN account for my development environment. Previously I had set up development environments on production servers to spare the Sys Admins from having to create new VMs and all of the DNS entries in our firewall for external access. Over the years this hasn't caused any problems (they were in their own App Pools and never crashed the server itself), but with the free Azure hours there is no reason to even take that risk.

So I began diving into creating my environment on Azure. I had been working off and on over the last couple of days with a local SQL Server 2012 instance on my desktop, so I had my database schema ready to deploy.

Unfortunately I was met with an error when running the script against Azure.
After some searching around, I uncovered that the issue was with the two ON [PRIMARY] clauses, specifically the ON option included by SQL Server Management Studio's Generate Scripts feature. Looking around, I could not find an option in SQL Server Management Studio to export a script that is safe for Azure - hopefully that comes sooner rather than later. If I missed the option, please post a comment below.
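Since I could not find a safe-export option, one possible workaround (a quick sketch - the file paths and regular expression are illustrative, and this is not an SSMS feature) is to strip the filegroup clauses out of the generated script before running it against Azure:
[csharp]
// Illustrative clean-up of a Generate Scripts output for SQL Azure
var script = System.IO.File.ReadAllText(@"C:\scripts\schema.sql");

// Remove filegroup clauses such as "ON [PRIMARY]" that SQL Azure rejects
var cleaned = System.Text.RegularExpressions.Regex.Replace(
    script, @"\s+ON\s+\[PRIMARY\]", String.Empty,
    System.Text.RegularExpressions.RegexOptions.IgnoreCase);

System.IO.File.WriteAllText(@"C:\scripts\schema_azure.sql", cleaned);
[/csharp]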
This morning I will be presenting at the Maryland Code Camp on the topic of Developing Once and Deploying to Many, specifically talking about practices and patterns I've found helpful for creating rich mobile applications efficiently over the last 3 years of actively developing for the mobile space. WCF, WPF, PCL, MonoDroid, Azure Mobile Services and Windows Phone 8 are to be discussed.

For the PowerPoint 2013 presentation, all of the code mentioned during the session, the SQL, the PSD files and the external libraries used, click here to download the zip file.

In addition, during the session I will be making reference to an app I wrote earlier this year, jcLOG-IT, specifically the Azure Mobile Services and Windows Live Integration elements.

The code block mentioned for Authentication:
[csharp]
public async Task<bool> AttemptLogin(MobileServiceAuthenticationProvider authType)
{
    try
    {
        if (authType == MobileServiceAuthenticationProvider.MicrosoftAccount)
        {
            if (!String.IsNullOrEmpty(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken)))
            {
                App.CurrentUser = await App.MobileService.LoginAsync(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken));
            }
            else
            {
                var liveIdClient = new LiveAuthClient(Common.Constants.APP_AUTHKEY_LIVECONNECT);

                while (_session == null)
                {
                    var result = await liveIdClient.LoginAsync(new[] { "wl.signin" });

                    if (result.Status != LiveConnectSessionStatus.Connected)
                    {
                        continue;
                    }

                    _session = result.Session;

                    App.CurrentUser = await App.MobileService.LoginAsync(result.Session.AuthenticationToken);

                    Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, result.Session.AuthenticationToken);
                }
            }
        }

        Settings.AddSetting(Settings.SETTINGS_OPTIONS.AuthType, authType.ToString());
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.IsFirstRun, false.ToString());

        return true;
    }
    catch (Exception ex)
    {
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, String.Empty);

        return false;
    }
}
[/csharp]
The Settings class:
[csharp]
public class Settings
{
    public enum SETTINGS_OPTIONS
    {
        IsFirstRun,
        LiveConnectToken,
        AuthType,
        LocalPassword,
        EnableLocation
    }

    public static void CheckSettings()
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(SETTINGS_OPTIONS.IsFirstRun.ToString()))
        {
            WriteDefaults();
        }
    }

    public static void AddSetting(SETTINGS_OPTIONS optionName, object value)
    {
        AddSetting(optionName.ToString(), value);
    }

    public static void AddSetting(string name, object value)
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(name))
        {
            settings.Add(name, value);
        }
        else
        {
            settings[name] = value;
        }

        settings.Save();
    }

    public static T GetSetting<T>(SETTINGS_OPTIONS optionName)
    {
        return GetSetting<T>(optionName.ToString());
    }

    public static T GetSetting<T>(string name)
    {
        if (IsolatedStorageSettings.ApplicationSettings.Contains(name))
        {
            if (typeof(T) == typeof(MobileServiceAuthenticationProvider))
            {
                return (T)Enum.Parse(typeof(MobileServiceAuthenticationProvider), IsolatedStorageSettings.ApplicationSettings[name].ToString());
            }

            return (T)Convert.ChangeType(IsolatedStorageSettings.ApplicationSettings[name], typeof(T));
        }

        return default(T);
    }

    public static void WriteDefaults()
    {
        AddSetting(SETTINGS_OPTIONS.IsFirstRun, false);
        AddSetting(SETTINGS_OPTIONS.EnableLocation, false);
        AddSetting(SETTINGS_OPTIONS.LocalPassword, String.Empty);
        AddSetting(SETTINGS_OPTIONS.LiveConnectToken, String.Empty);
        AddSetting(SETTINGS_OPTIONS.AuthType, MobileServiceAuthenticationProvider.MicrosoftAccount);
    }
}
[/csharp]
I had an interesting request at work last week to delete several million rows across the two main SQL Server 2012 databases. For years now, nothing had been deleted, only soft-deleted with an Active flag. In general, anytime I needed to delete rows it usually meant I was testing a migration, so I would simply TRUNCATE the tables and call it a day - thus never utilizing C# and thereby Entity Framework. So what are your options?

Traditional Approach

You could go down the "traditional" approach:
[csharp]
using (var eFactory = new SomeEntities())
{
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL Table etc...

    foreach (var someObject in eFactory.SomeObjects.Where(a => idList.Contains(a.ID)).ToList())
    {
        eFactory.DeleteObject(someObject);
        eFactory.SaveChanges();
    }
}
[/csharp]
This definitely works, but if you have an inordinate number of rows I would highly suggest not doing it this way, as the memory requirements would be astronomical since you're loading all of the SomeObject entities.

Considerably better Approach

[csharp]
using (var eFactory = new SomeEntities())
{
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL Table etc...

    var idStr = String.Join(",", idList);

    // inline the ID list into the DELETE statement (the same String.Format approach used below)
    eFactory.Database.ExecuteSqlCommand(String.Format("DELETE FROM dbo.SomeObjects WHERE ID IN ({0})", idStr));
}
[/csharp]
This approach creates a comma-separated string and then executes the SQL command. This is considerably better than the approach above in that it doesn't load all of those entity objects into memory and doesn't loop through each element. However, depending on the size of idList you could get the following error:

Entity Framework 5 - Rare Event

An even better Approach

What I ended up doing to solve the problems above was to split the list and then process the elements on multiple threads:
[csharp]
private static List<string> getList(List<int> original, int elementSize = 500)
{
    var elementCollection = new List<string>();

    // If there are no elements don't bother processing
    if (original.Count == 0)
    {
        return elementCollection;
    }

    // If the collection fits into a single batch, return it as one string
    if (original.Count <= elementSize)
    {
        elementCollection.Add(String.Join(",", original));

        return elementCollection;
    }

    var elementsToBeProcessed = original.Count;

    while (elementsToBeProcessed != 0)
    {
        var rangeSize = elementsToBeProcessed < elementSize ? elementsToBeProcessed : elementSize;

        elementCollection.Add(String.Join(",", original.GetRange(original.Count - elementsToBeProcessed, rangeSize)));

        elementsToBeProcessed -= rangeSize;
    }

    return elementCollection;
}

private static void removeElements(IEnumerable<string> elements, string tableName, string columnName, DbContext objContext, bool debug = false)
{
    var startDate = DateTime.Now;

    if (debug)
    {
        Console.WriteLine("Removing Rows from Table {0} @ {1}", tableName, startDate.ToString(CultureInfo.InvariantCulture));
    }

    try
    {
        Parallel.ForEach(elements, elementStr => objContext.Database.ExecuteSqlCommand(String.Format("DELETE FROM dbo.{0} WHERE {1} IN ({2})", tableName, columnName, elementStr)));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }

    if (!debug)
    {
        return;
    }

    var endDate = DateTime.Now;

    Console.WriteLine("Removed Rows from Table {0} in {1} seconds", tableName, endDate.Subtract(startDate).TotalSeconds);
}
[/csharp]
To utilize these methods you can do something like this:
[csharp]
using (var eFactory = new SomeEntities())
{
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL Table etc...

    var idStr = getList(idList);

    removeElements(idStr, "SomeObjects", "ID", eFactory);
}
[/csharp]
Note you could simplify this down to:
[csharp]
using (var eFactory = new SomeEntities())
{
    removeElements(getList(/* your Int Collection */), "SomeObjects", "ID", eFactory);
}
[/csharp]
Hopefully that helps someone else out there who runs into issues with deleting massive amounts of rows. Note I did try to utilize the Entity Framework Extended NuGet library, but ran into errors when trying to delete rows.
Working on a new project at work today, I realized that with the number of clients potentially involved with a new WCF Service I would have to adjust my tried and true process of using a WCF Service with Visual Studio's WCF Proxy Generation. I had often wondered about the proxy generation option for reusing types in referenced assemblies, intuitively thinking it would automatically recognize a type referenced in both an OperationContract and a Class Library and not create an entirely new type when generating the WCF proxy.


Take, for instance, something like this defined in a common Class Library:
[csharp]
[DataContract]
public class SomeObject
{
    [DataMember]
    public int ID { get; set; }

    [DataMember]
    public string Name { get; set; }

    public SomeObject() { }
}
[/csharp]
And then in my OperationContract:
[csharp]
[OperationContract]
public List<SomeObject> GetObject(int id);
[/csharp]
Sadly, this is not how it works. Intuitively you would think that since the SomeObject type is referenced in the Class Library, the Operation Contract and your client(s), the proxy generation with that box checked would simply generate the proxy class referencing the existing SomeObject class. So how can this be achieved cleanly?

The best solution I could come up with was to move some code around in my Visual Studio 2012 projects. In short, the interface for my WCF Service and any external classes (i.e. classes used for delivering and receiving data between the Service and Client(s)) were moved to a previously set up Class Library, and a wrapper for the interface was created.

Let's dive in...

Luckily, I had set up my WCF Service with internally and externally used classes in a proper folder structure like so:

Objects
-------->External
------------------>Entities
------------------------->Entity.cs
------------------------->EntityStatus.cs
--------->Internal
------------------>Groups
-------------------------->Group.cs
-------------------------->GroupStatus.cs

So it was simply a matter of relocating and adjusting the namespace references.

After moving only the Interface for my WCF Service (leaving the actual implementation in the WCF Service), I wrote my wrapper:
[csharp]
public class WCFFactory : IDisposable
{
    public IWCFService Client { get; set; }

    public WCFFactory()
    {
        var myBinding = new BasicHttpBinding();
        var myEndpoint = new EndpointAddress(ConfigurationManager.AppSettings["WEBSERVICE_Address"]);

        var cFactory = new ChannelFactory<IWCFService>(myBinding, myEndpoint);

        Client = cFactory.CreateChannel();
    }

    public void Dispose()
    {
        ((IClientChannel)Client).Close();
    }
}
[/csharp]
So then in my code I could reference my Operation Contracts like so:
[csharp]
using (var webService = new WCFFactory())
{
    var someObject = webService.Client.GetSomeObject(1);
}
[/csharp]
All of this is done without creating any references via the "Add Service Reference" option in Visual Studio 2012.

Downsides of this approach? None that I've been able to uncover. One huge advantage of going this route versus the Proxy Generation approach is that when your Interface Changes, you update it in one spot, simply recompile the Class Library and then update all of the Clients with the updated Class Library. If you've got all of your clients in the same Visual Studio Solution, simply recompiling is all that is necessary.

More to come on coming up with ways to make interoperability between platforms better as I progress on this project, as it involves updating SQL Server Reporting Services, .NET 1.1 WebForms, .NET 3.5 WebForms, .NET 4.5 WebForms, two other WCF Services and the Windows Workflow solution I mentioned earlier this month.
After wrapping up Phase One of my migration from WordPress to MVC4, I began diving into the admin side of the migration, trying to replicate a lot of the ease of use WordPress offered while adding my own touches. To begin, I started with the Add/Edit Post form.

After adding in my view:
@model bbxp.mvc.Models.PostModel

@{
    Layout = "~/Views/Shared/_AdminLayout.cshtml";
}

@using (Html.BeginForm("SavePost", "bbxpAdmin", FormMethod.Post)) { if (@Model.PostID.HasValue) { }

Title


Body


Tags


Categories


}
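For reference, a minimal sketch of the field markup that would sit inside the form above, assuming standard HTML helpers bound to the PostModel properties - the markup is illustrative, not the original view:

if (Model.PostID.HasValue)
{
    @Html.HiddenFor(m => m.PostID)
}

<label>Title</label>
@Html.TextBoxFor(m => m.Title)

<label>Body</label>
@Html.TextAreaFor(m => m.Body)

<label>Tags</label>
@Html.TextBoxFor(m => m.Tags)

<label>Categories</label>
@Html.TextBoxFor(m => m.Categories)

<input type="submit" value="Save" />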
And then my code behind in the controller:
[HttpGet]
public ActionResult AddEditPost(int? PostID) {
    if (!checkAuth()) {
        var lModel = new Models.LoginModel();

        lModel.ErrorMessage = "Authentication failed...";

        return View("Index", lModel);
    }

    var pModel = new Models.PostModel();

    if (PostID.HasValue) {
        using (var pFactory = new PostFactory()) {
            var post = pFactory.GetPost(PostID.Value);

            pModel.Body = post.Body;
            pModel.PostID = post.ID;
            pModel.Title = post.Title;
            pModel.Tags = string.Join(", ", post.Tags.Select(a => a.Name).ToList());
            pModel.Categories = String.Empty;
        }
    }

    return View(pModel);
}
My post back ActionResult in my Controller never got hit. After inspecting the outputted HTML I noticed the form's action was empty:
form action="" method="post"

Having a hunch it was a result of a bad route, I checked my Global.asax.cs file and added a specific route to handle the Action/Controller:
routes.MapRoute(name: "bbxpAddEditPost", url: "bbxpadmin/{action}/{PostID}", defaults: new { controller = "bbxpAdmin", action = "AddEditPost"});
Sure enough, immediately after adding the route, the form posted back properly and I was back at work adding additional functionality to the backend. Hopefully that helps someone else out, as I only found one unanswered StackOverflow post on this issue. I should also note a handy feature when utilizing Output Caching as discussed in a previous post: programmatically resetting the cache.

In my case I added the following in my SavePost ActionResult:
Response.RemoveOutputCacheItem(Url.Action("Index", "Home"));
This removes the cached copy of my main Post Listing.
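If you also wanted to evict the cached copy of the post that was just edited, a small sketch using the static HttpResponse helper - here post is a placeholder for the entity being saved and the route values are assumptions based on the blog's year/month/day/postname URL scheme:
[csharp]
// Remove the cached single-post page in addition to the main listing
HttpResponse.RemoveOutputCacheItem(Url.Action("SinglePost", "Home", new
{
    year = post.Created.Year,
    month = post.Created.Month,
    day = post.Created.Day,
    postname = post.URLSafename
}));
[/csharp]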
In today's post I will be diving into adding Search Functionality, Custom Error Pages and MVC Optimizations. Links to previous parts: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6 and Part 7.

Search Functionality

A few common approaches to adding search functionality to a Web Application:

Web App Search Approaches

  1. Pull down all of the data and then search on it using a for loop or LINQ - An approach I loathe because to me this is a waste of resources, especially if the content base you're pulling from is of a considerable amount. Just ask yourself, if you were at a library and you knew the topic you were looking for, would you pull out all of the books in the entire library and then filter down or simply find the topic's section and get the handful of books?
  2. Implement a Stored Procedure with a query argument and return the results - An approach I have used over the years, it is easy to implement and for me it leaves the querying where it should be - in the database.
  3. Creating a Search Class with a dynamic interface and customizable properties to search and a Stored Procedure backend like in Approach 2 - An approach I will be going down at a later date for site wide search of a very large/complex WebForms app.
For the scope of this project I am going with Option #2 since the scope of the content I am searching for only spans the Posts objects. At a later date in Phase 2 I will probably expand this to fit Option #3. However since I will want to be able to search on various objects and return them all in a meaningful way, fast and efficiently. So let's dive into Option #2. Because the usage of virtually the same block of SQL is being utilized in many Stored Procedures at this point, I created a SQL View: [sql] CREATE VIEW dbo.ActivePosts AS SELECT dbo.Posts.ID, dbo.Posts.Created, dbo.Posts.Title, dbo.Posts.Body, dbo.Users.Username, dbo.Posts.URLSafename, dbo.getTagsByPostFUNC(dbo.Posts.ID) AS 'TagList', dbo.getSafeTagsByPostFUNC(dbo.Posts.ID) AS 'SafeTagList', (SELECT COUNT(*) FROM dbo.PostComments WHERE dbo.PostComments.PostID = dbo.Posts.ID AND dbo.PostComments.Active = 1) AS 'NumComments' FROM dbo.Posts INNER JOIN dbo.Users ON dbo.Users.ID = dbo.Posts.PostedByUserID WHERE dbo.Posts.Active = 1 [/sql] And then create a new Stored Procedures with the ability to search content and reference the new SQL View: [sql] CREATE PROCEDURE [dbo].[getSearchPostListingSP] (@searchQueryString VARCHAR(MAX)) AS SELECT dbo.ActivePosts.* FROM dbo.ActivePosts WHERE (dbo.ActivePosts.Title LIKE '%' + @searchQueryString + '%' OR dbo.ActivePosts.Body LIKE '%' + @searchQueryString + '%') ORDER BY dbo.ActivePosts.Created DESC [/sql] You may be asking why not simply add the ActivePosts SQL View to your Entity Model and do something like this in your C# code: [csharp] public List<ActivePosts> GetSearchPostResults(string searchQueryString) { using (var eFactory = new bbxp_jarredcapellmanEntities()) { return eFactory.ActivePosts.Where(a => a.Title.Contains(searchQueryString) || a.Body.Contains(searchQueryString)).ToList(); } } [/csharp] That's perfectly valid and I am not against doing it that, but I feel like code like that should be done at the Database level, thus the Stored Procedure. Granted Stored Procedures do add a level of maintenance over doing it via code. For one, anytime you update/add/remove columns you have to update the Complex Type in your Entity Model inside of Visual Studio and then update your C# code that makes reference to that Stored Procedure. For me it is worth it, but to each their own. I have not made performance comparisons on this particular scenario, however last summer I did do some aggregate performance comparisons between LINQ, PLINQ and Stored Procedures in my LINQ vs PLINQ vs Stored Procedure Row Count Performance in C#. You can't do a 1 to 1 comparison between varchar column searching and aggregate function performance, but my point, or better put, my lesson I want to convey is to definitely keep an open mind and explore all possible routes. You never want to find yourself in a situation of stagnation in your software development career simply doing something because you know it works. Things change almost daily it seems - near impossible as a polyglot programmer to keep up with every change, but when a new project comes around at work do your homework even if it means sacrificing your nights and weekends. The benefits will become apparent instantly and for me the most rewarding aspect - knowing when you laid down that first character in your code you did so with the knowledge of what you were doing was the best you could provide to your employer and/or clients. 
Back to implementing the Search functionality, I added the following function to my PostFactory class: [csharp] public List<Objects.Post> GetSearchResults(string searchQueryString) { using (var eFactory = new bbxp_jarredcapellmanEntities()) { return eFactory.getSearchPostListingSP(searchQueryString).Select(a => new Objects.Post(a.ID, a.Created, a.Title, a.Body, a.TagList, a.SafeTagList, a.NumComments.Value, a.URLSafename)).ToList(); } } [/csharp] You might see the similarity to other functions if you've been following this series. The function exposed in an Operation Contract inside the WCF Service: [csharp] [OperationContract] List<lib.Objects.Post> GetPostSearchResults(string searchQueryString); public List<Post> GetPostSearchResults(string searchQueryString) { using (var pFactory = new PostFactory()) { return pFactory.GetSearchResults(searchQueryString); } } [/csharp] Back in the MVC App I created a new route to handle searching: [csharp] routes.MapRoute("Search", "Search/{searchQueryString}", new { controller = "Home", action = "Search" }); [/csharp] So now I can enter values via the url like so: http://www.jarredcapellman.com/Search/mvc Would search all Posts that contained mvc in the title or body. Then in my Controller class: [csharp] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Search(string searchQueryString) { ViewBag.Title = searchQueryString + " << Search Results << " + Common.Constants.SITE_NAME; var model = new Models.HomeModel(baseModel); using (var ws = new WCFServiceClient()) { model.Posts = ws.GetPostSearchResults(searchQueryString); } ViewBag.Model = model; return View("Index", model); } [/csharp] In my partial view: [csharp] <div class="Widget"> <div class="Title"> <h3>Search Post History</h3> </div> <div class="Content"> @using (Html.BeginForm("Search", "Home", new { searchQueryString = "searchQueryString"}, FormMethod.Post)) { <input type="text" id="searchQueryString" name="searchQueryString" class="k-textbox" required placeholder="enter query here" /> <button class="k-button" type="submit">Search >></button> } </div> </div> [/csharp] When all was done: [caption id="attachment_2078" align="aligncenter" width="252"]Search box in MVC App Search box in MVC App[/caption] Now you might be asking, what if there are no results? Your get an empty view: [caption id="attachment_2079" align="aligncenter" width="300"]Empty Result - Wrong way to handle it Empty Result - Wrong way to handle it[/caption] This leads me to my next topic:

Custom Error Pages

We have all been on sites where we go some place we either don't have access to, doesn't exist anymore or we misspelled. WordPress had a fairly good handler for this scenario: [caption id="attachment_2081" align="aligncenter" width="300"]WordPress Content not found Handler WordPress Content not found Handler[/caption] As seen above when no results are found, we want to let the user know, but also create a generic handler for other error events. To get started let's add a Route to the Global.asax.cs: [csharp] routes.MapRoute("Error", "Error", new { controller = "Error", action = "Index" }); [/csharp] This will map to /Error with a tie to an ErrorController and a Views/Error/Index.cshtml. And my ErrorController: [csharp] public class ErrorController : BaseController { public ActionResult Index() { var model = new Models.ErrorModel(baseModel); return View(model); } } [/csharp] And my View: [csharp] @model bbxp.mvc.Models.ErrorModel <div class="errorPage"> <h2>Not Found</h2> <div class="content"> Sorry, but you are looking for something that isn't here. </div> </div> [/csharp] Now you maybe asking why isn't the actual error going to be passed into the Controller to be displayed? For me I personally feel a generic error message to the end user while logging/reporting the errors to administrators and maintainers of a site is the best approach. In addition, a generic message protects you somewhat from exposing sensitive information to a potential hacker such as "No users match the query" or worse off database connection information. That being said I added a wrapper in my BaseController: [csharp] public ActionResult ThrowError(string exceptionString) { // TODO: Log errors either to the database or email powers that be return RedirectToAction("Index", "Error"); } [/csharp] This wrapper will down the road record the error to the database and then email users with alerts turned on. Since I haven't started on the "admin" section I am leaving it as is for the time being. The reason for the argument being there currently is that so when that does happen all of my existing front end code is already good to go as far as logging. Now that I've got my base function implemented, let's revisit the Search function mentioned earlier: [csharp] public ActionResult Search(string searchQueryString) { ViewBag.Title = searchQueryString + " << Search Results << " + Common.Constants.SITE_NAME; var model = new Models.HomeModel(baseModel); using (var ws = new WCFServiceClient()) { model.Posts = ws.GetPostSearchResults(searchQueryString); } if (model.Posts.Count == 0) { ThrowError(searchQueryString + " returned 0 results"); } ViewBag.Model = model; return View("Index", model); } [/csharp] Note the If conditional and the call to the ThrowError, no other work is necessary. As implemented: [caption id="attachment_2083" align="aligncenter" width="300"]Not Found Error Handler Page in the MVC App Not Found Error Handler Page in the MVC App[/caption] Where does this leave us? The final phase in development: Optimization.

Optimization

You might be wondering why I left optimization for last. I feel as though premature optimization not only leads to a longer debugging period when nailing down initial functionality, but also, if you do things right as you go, your optimizations end up being just tweaking. I've done both approaches in my career and have definitely had more success doing it last. If you've had the opposite experience, please comment below; I would very much like to hear your story. So where do I want to begin?

YSlow and MVC Bundling

For me it makes sense to do the more trivial checks that provide the most bang for the buck. A key tool to assist in this manner is YSlow. I personally use the Firefox Add-on version available here. As with any optimization, you need to do a baseline check to give yourself a basis from which to improve. In this case I am going from a fully featured PHP based CMS, WordPress to a custom MVC4 Web App so I was very intrigued by the initial results below. [caption id="attachment_2088" align="aligncenter" width="300"]WordPress YSlow Ratings WordPress YSlow Ratings[/caption] [caption id="attachment_2089" align="aligncenter" width="300"]Custom MVC 4 App YSlow Results Custom MVC 4 App YSlow Ratings[/caption] Only scoring 1 point less than the battle tested WordPress version with no optimizations I feel is pretty neat. Let's now look into what YSlow marked the MVC 4 App down on. In the first line item, it found that the site is using 13 JavaScript files and 8 CSS files. One of the neat MVC features is the idea of bundling multiple CSS and JavaScript files into one. This not only cuts down on the number of HTTP Requests, but also speeds up the initial page load where most of your content is subsequently cached on future page requests. If you recall going back to an earlier post our _Layout.cshtml we included quite a few CSS and JavaScript files: [csharp] <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.common.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.default.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/kendo/2013.1.319/jquery.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo/2013.1.319/kendo.all.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo/2013.1.319/kendo.aspnetmvc.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo.modernizr.custom.js")"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shCore.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/syntaxhighlighter/shCore.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/syntaxhighlighter/shThemeRDark.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushCSharp.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushPhp.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushXml.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushCpp.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushBash.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushSql.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/lightbox/jquery-1.7.2.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/lightbox/lightbox.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/lightbox/lightbox.css")" rel="stylesheet" type="text/css" /> [/csharp] Let's dive into Bundling all of our JavaScript files. 
First off create a new class, I called it BundleConfig and inside this class add the following static function: [csharp] public static void RegisterBundles(BundleCollection bundles) { // JavaScript Files bundles.Add(new ScriptBundle("~/Bundles/kendoBundle") .Include("~/Scripts/kendo/2013.1.319/jquery.min.js") .Include("~/Scripts/kendo/2013.1.319/kendo.all.min.js") .Include("~/Scripts/kendo/2013.1.319/kendo.aspnetmvc.min.js") .Include("~/Scripts/kendo.modernizr.custom.js") ); bundles.Add(new ScriptBundle("~/Bundles/syntaxBundle") .Include("~/Scripts/syntaxhighlighter/shCore.js") .Include("~/Scripts/syntaxhighlighter/shBrushCSharp.js") .Include("~/Scripts/syntaxhighlighter/shBrushPhp.js") .Include("~/Scripts/syntaxhighlighter/shBrushXml.js") .Include("~/Scripts/syntaxhighlighter/shBrushCpp.js") .Include("~/Scripts/syntaxhighlighter/shBrushBash.js") .Include("~/Scripts/syntaxhighlighter/shBrushSql.js") ); bundles.Add(new ScriptBundle("~/Bundles/lightboxBundle") .Include("~/Scripts/lightbox/jquery-1.7.2.min.js") .Include("~/Scripts/lightbox/lightbox.js") ); } [/csharp] Then in your _Layout.cshtml replace all of the original JavaScript tags with the following 4 lines: [csharp] @Scripts.Render("~/Bundles/kendoBundle") @Scripts.Render("~/Bundles/syntaxBundle") @Scripts.Render("~/Bundles/lightboxBundle") [/csharp] So afterwards that block of code should look like: [csharp] <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.common.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.default.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/syntaxhighlighter/shCore.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/syntaxhighlighter/shThemeRDark.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/lightbox/lightbox.css")" rel="stylesheet" type="text/css" /> @Scripts.Render("~/Bundles/kendoBundle") @Scripts.Render("~/Bundles/syntaxBundle") @Scripts.Render("~/Bundles/lightboxBundle") [/csharp] Finally go to your Global.asax.cs file and inside your Application_Start function add the following line: [csharp] BundleConfig.RegisterBundles(BundleTable.Bundles); [/csharp] So in the end your Application_Start function should look like: [csharp] protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterGlobalFilters(GlobalFilters.Filters); RegisterRoutes(RouteTable.Routes); BundleConfig.RegisterBundles(BundleTable.Bundles); } [/csharp] Now after re-running the YSlow test: [caption id="attachment_2092" align="aligncenter" width="300"]YSlow Ratings after Bundling of JavaScript Files in the MVC App YSlow Ratings after Bundling of JavaScript Files in the MVC App[/caption] Much improved, now we're rated better than WordPress itself. Now onto the bundling of the CSS styles. 
Add the following below the previously added ScriptBundles in your BundleConfig class: [csharp] // CSS Stylesheets bundles.Add(new StyleBundle("~/Bundles/stylesheetBundle") .Include("~/Content/Site.css") .Include("~/Content/lightbox/lightbox.css") .Include("~/Content/syntaxhighlighter/shCore.css") .Include("~/Content/syntaxhighlighter/shThemeRDark.css") .Include("~/Content/kendo/2013.1.319/kendo.common.min.css") .Include("~/Content/kendo/2013.1.319/kendo.dataviz.min.css") .Include("~/Content/kendo/2013.1.319/kendo.default.min.css") .Include("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css") ); [/csharp] And then in your _Layout.cshtml add the following in place of all of your CSS includes: [csharp] @Styles.Render("~/Bundles/stylesheetBundle") [/csharp] So when you're done, that whole block should look like the following: [csharp] @Styles.Render("~/Bundles/stylesheetBundle") @Scripts.Render("~/Bundles/kendoBundle") @Scripts.Render("~/Bundles/syntaxBundle") @Scripts.Render("~/Bundles/lightboxBundle") [/csharp] One thing that I should note is if your Bundling isn't working check your Routes. Because of my Routes, after deployment (and making sure the is set to false), I was getting 404 errors on my JavaScript and CSS Bundles. My solution was to use the IgnoreRoutes method in my Global.asax.cs file: [csharp] routes.IgnoreRoute("Bundles/*"); [/csharp] For completeness here is my complete RegisterRoutes: [csharp] routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); routes.IgnoreRoute("Bundles/*"); routes.MapRoute("Error", "Error/", new { controller = "Error", action = "Index" }); routes.MapRoute("Search", "Search/{searchQueryString}", new { controller = "Home", action = "Search" }); routes.MapRoute("Feed", "Feed", new {controller = "Home", action = "Feed"}); routes.MapRoute("Tags", "tag/{tagname}", new {controller = "Home", action = "Tags"}); routes.MapRoute("PostsRoute", "{year}/{month}", new { controller = "Home", action = "Posts" }, new { year = @"\d+" }); routes.MapRoute("ContentPageRoute", "{pagename}", new {controller = "Home", action = "ContentPage"}); routes.MapRoute("PostRoute", "{year}/{month}/{day}/{postname}", new { controller = "Home", action = "SinglePost" }, new { year = @"\d+", month = @"\d+", day = @"\d+" }); routes.MapRoute("Default", "{controller}", new { controller = "Home", action = "Index" }); [/csharp] Afterwards everything was set properly and if you check your source code you'll notice how MVC generates the HTML: [csharp] <link href="/Bundles/stylesheetBundle?v=l3WYXmrN_hnNspLLaGDUm95yFLXPFiLx613TTF4zSKY1" rel="stylesheet"/> <script src="/Bundles/kendoBundle?v=-KrP5sDXLpezNwcL3Evn9ASyJPShvE5al3knHAy2MOs1"></script> <script src="/Bundles/syntaxBundle?v=NQ1oIC63jgzh75C-QCK5d0B22diL-20L4v96HctNaPo1"></script> <script src="/Bundles/lightboxBundle?v=lOBITxhp8sGs5ExYzV1hgOS1oN3p1VUnMKCjnAbhO6Y1"></script> [/csharp] After re-running YSlow: [caption id="attachment_2095" align="aligncenter" width="300"]YSlow after all bundling in MVC YSlow after all bundling in MVC[/caption] Now we received a score of 96. What's next? Caching.

MVC Caching

Now that we've reduced the amount of data being pushed out to the client and optimized the number of HTTP requests, let's switch gears to reducing the load on the server and enhancing the performance of the site. Without diving into all of the intricacies of caching, I am going to turn on server side caching, specifically Output Caching. At a later date I will dive into other approaches to caching, including the new HTML5 client side caching I recently dove into. That being said, turning on Output Caching in your MVC application is really easy; simply put the OutputCache attribute above your ActionResults like so:
[csharp]
[OutputCache(Duration = 3600, VaryByParam = "*")]
public ActionResult SinglePost(int year, int month, int day, string postname)
{
    -----
}
[/csharp]
In this example, the ActionResult will be cached for one hour (3600 seconds) and setting VaryByParam to * means each combination of arguments passed into the function is cached, versus caching one argument combination and displaying that one result. I've seen developers simply turn on caching and not think about dynamic content - suffice it to say, think about what can be cached and what can't. Common items that don't change often, like your header or sidebar, can be cached without much thought, but think about User/Role specific content and how bad it would be for a "Guest" user to see content as an Admin because an Admin had accessed the page within the cache window before the Guest user had.
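If you do need to cache pages whose content differs by role, one option is the OutputCache attribute's VaryByCustom hook; a minimal sketch, assuming you override GetVaryByCustomString in Global.asax.cs (the "role" key and "Admin" role name are illustrative):
[csharp]
// Global.asax.cs - return a different cache key component per role
public override string GetVaryByCustomString(HttpContext context, string custom)
{
    if (custom == "role")
    {
        // substitute your own role lookup here
        return context.User != null && context.User.IsInRole("Admin") ? "Admin" : "Guest";
    }

    return base.GetVaryByCustomString(context, custom);
}
[/csharp]
The attribute then becomes [OutputCache(Duration = 3600, VaryByParam = "*", VaryByCustom = "role")], giving Admin and Guest users separate cached copies of the same URL.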

Conclusion

In this post I went through the last big three items left in my migration from WordPress to MVC: Search Handling, Custom Error Pages and Caching. That being said I have a few "polish" items to accomplish before switching over the site to all of the new code, namely additional testing and adding a basic admin section. After those items I will consider Phase 1 completed and go back to my Windows Phone projects. Stay tuned for Post 9 tomorrow night with the polish items.
I can't believe it's been a week to the day since I began this project, but I am glad at the amount of progress I have made thus far. Tonight I will dive into adding a WCF Service to act as a layer between the logic and data layers done in previous posts (Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6) and adding RSS support to the site.

Integrating a WCF Service

First off, for those that aren't familiar, WCF (Windows Communication Foundation) is an extremely powerful Web Service Technology created by Microsoft. I first dove into WCF April 2010 when diving into Windows Phone development as there was no support for the "classic" ASMX Web Services. Since then I have used WCF Services as the layer for all ASP.NET WebForms, ASP.NET MVC, Native Mobile Apps and other WCF Services at work since. I should note, WCF to WCF communication is done at the binary level, meaning it doesn't send XML between the services, something I found extremely enlightening that Microsoft implemented. At it's most basic level a WCF Service is comprised of two components, the Service Interface Definition file and the actual implementation. In the case of the migration, I created my Interface as follows: [csharp] [ServiceContract] public interface IWCFService { [OperationContract] lib.Objects.Post GetSinglePost(int year, int month, int day, string postname); [OperationContract] List<lib.Objects.Comment> GetCommentsFromPost(int postID); [OperationContract(IsOneWay = true)] void AddComment(string PersonName, string EmailAddress, string Body, int PostID); [OperationContract] lib.Objects.Content GetContent(string pageName); [OperationContract] List<lib.Objects.Post> GetPosts(DateTime startDate, DateTime endDate); [OperationContract] List<lib.Objects.Post> GetPostsByTags(string tagName); [OperationContract] List<lib.Objects.ArchiveItem> GetArchiveList(); [OperationContract] List<lib.Objects.LinkItem> GetLinkList(); [OperationContract] List<lib.Objects.TagCloudItem> GetTagCloud(); [OperationContract] List<lib.Objects.MenuItem> GetMenuItems(); } [/csharp] The one thing to note, IsOneWay a top of the AddComment function indicates, the client doesn't expect a return value. As noted in last night's post, the end user is not going to want to wait for all the emails to be sent, they simply want their comment to be posted and the Comment Listing refreshed with their comment. By setting the IsOneWay to true, you ensure the client's experience is fast no matter the server side work being done. 
And the actual implementation: [csharp] public class WCFService : IWCFService { public Post GetSinglePost(int year, int month, int day, string postname) { using (var pFactory = new PostFactory()) { var post = pFactory.GetPost(postname)[0]; post.Comments = pFactory.GetCommentsFromPost(post.ID); return post; } } public List<Comment> GetCommentsFromPost(int postID) { using (var pFactory = new PostFactory()) { return pFactory.GetCommentsFromPost(postID); } } public void AddComment(string PersonName, string EmailAddress, string Body, int PostID) { using (var pFactory = new PostFactory()) { pFactory.addComment(PostID, PersonName, EmailAddress, Body); } } public Content GetContent(string pageName) { using (var cFactory = new ContentFactory()) { return cFactory.GetContent(pageName); } } public List<Post> GetPosts(DateTime startDate, DateTime endDate) { using (var pFactory = new PostFactory()) { return pFactory.GetPosts(startDate, endDate); } } public List<Post> GetPostsByTags(string tagName) { using (var pFactory = new PostFactory()) { return pFactory.GetPostsByTags(tagName); } } public List<ArchiveItem> GetArchiveList() { using (var pFactory = new PostFactory()) { return pFactory.GetArchiveList(); } } public List<LinkItem> GetLinkList() { using (var pFactory = new PostFactory()) { return pFactory.GetLinkList(); } } public List<TagCloudItem> GetTagCloud() { using (var pFactory = new PostFactory()) { return pFactory.GetTagCloud(); } } public List<MenuItem> GetMenuItems() { using (var bFactory = new BaseFactory()) { return bFactory.GetMenuItems(); } } } [/csharp] One thing you might be asking, isn't this a security risk? If you're not, you should. Think about it, anyone who has access to your WCF Service could add comments and pull down your data at will. In its current state, this isn't a huge deal since it is only returning data and the AddComment Operation Contract requires a prior approved comment to post, but what about when the administrator functionality is implemented? You definitely don't want to expose your contracts to the outside world with only the parameters needed. So what can you do?
  1. Keep your WCF Service not exposed to the internet - this is problematic in today's world where a mobile presence is almost a necessity. Granted if one were to only create a MVC 4 Mobile Web Application you could keep it behind a firewall. My thought process currently is design and do it right the first time and don't corner yourself into a position where you have to go back and do additional work.
  2. Add a username, password or some token to each Operation Contract and then verify the user - this approach works and I've done it that way for public WCF Services. The problem is that it becomes a lot of extra work on both the client and server side: client side you can create a base class with the token or username/password and simply pass it into each contract, and then server side do a similar implementation (a quick sketch follows this list).
  3. Implement a message level or Forms Membership - This approach requires the most upfront work, but reaps the most benefits as it keeps your Operation Contracts clean and offers an easy path to update at a later date.
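As a quick illustration of option 2 above, a minimal sketch of a token-carrying request base class - AuthenticatedRequest, AddCommentRequest and TokenValidator are placeholders for illustration, not actual bbXP types:
[csharp]
// Shared base class for every request that requires authentication
[DataContract]
public class AuthenticatedRequest
{
    [DataMember]
    public string Token { get; set; }
}

[DataContract]
public class AddCommentRequest : AuthenticatedRequest
{
    [DataMember]
    public int PostID { get; set; }

    [DataMember]
    public string Body { get; set; }
}

// Server side, each contract validates the token before doing any work
public void AddComment(AddCommentRequest request)
{
    // TokenValidator is a placeholder for your own validation logic
    if (!TokenValidator.IsValid(request.Token))
    {
        throw new FaultException("Not authorized");
    }

    // proceed with adding the comment...
}
[/csharp]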
Going forward I will be implementing the 3rd option and of course I will document the process. Hopefully this helps get developers thinking about security and better approaches to problems. Moving on to the second half of the post: creating an RSS Feed.

Creating an RSS Feed

After getting my class in my WCF Service, I created a new Stored Procedure in preparation: [sql] CREATE PROCEDURE dbo.getRSSFeedListSP AS SELECT TOP 25 dbo.Posts.Created, dbo.Posts.Title, LEFT(CAST(dbo.Posts.Body AS VARCHAR(MAX)), 200) + '...' AS 'Summary', dbo.Posts.URLSafename FROM dbo.Posts INNER JOIN dbo.Users ON dbo.Users.ID = dbo.Posts.PostedByUserID WHERE dbo.Posts.Active = 1 ORDER BY dbo.Posts.Created DESC [/sql] Basically this will return the most recent 25 posts and up to the first 200 characters of the post. Afterwards I created a class to translate the Entity Framework Complex Type: [csharp] [DataContract] public class PostFeedItem { [DataMember] public DateTime Published { get; set; } [DataMember] public string Title { get; set; } [DataMember] public string Description { get; set; } [DataMember] public string URL { get; set; } public PostFeedItem(DateTime published, string title, string description, string url) { Published = published; Title = title; Description = description; URL = url; } } [/csharp] And then I added a new Operation Contract in my WCF Service: [csharp] public List<lib.Objects.PostFeedItem> GetFeedList() { using (var pFactory = new PostFactory()) { return pFactory.GetFeedList(); } } [/csharp] Now I am going to leave it up to you which path to implement. At this point you've got all backend work done to return the data you need to write your XML file for RSS. There are many approaches to how you want to go about to proceeding, and it really depends on how you want to serve your RSS Feed. Do you want it to regenerate on the fly for each request? Or do you want to write an XML file only when a new Post is published and simply serve the static XML file? From what my research gave me, there are multiple ways to do each of those. For me I am in favor of doing the work once and writing it out to a file rather than doing all of that work on each request. The later seems like a waste of server resources. Generate Once
  1. One option is the Typed DataSet approach I used in Part 1 - it requires very little work and, if you're like me, you like a strongly typed approach.
  2. Another option is to use the built-in SyndicationFeed class to create your RSS Feed's XML - an approach I hadn't previously researched for generating one
  3. Using the lower level XmlWriter functionality in .NET to build your RSS Feed's XML - I strongly urge you not to do this given that the 2 approaches above are strongly typed. Loosely typed code leads to spaghetti and a debugging disaster when something goes wrong.
Generate On-The-Fly
  1. Use the previously completed WCF OperationContract to simply return the data and then use something like MVC Contrib to return an XmlResult in your MVC Controller.
  2. Set your MVC View to return XML and simply iterate through all of the Post Items
Those are just some ways to accomplish the goal of creating a RSS Feed for your MVC site. Which is right? I think it is up to you to find what works best for you. That being said, I am going to walk through how to do the first 2 Generate Once Options. For both approaches I am going to use IIS's UrlRewrite functionality to route http://www.jarredcapellman.com/feed/ to http://www.jarredcapellman.com/rss.xml. For those interested, all it took was the following block in my web.config in the System.WebService section: [xml] <rewrite> <rules> <rule name="RewriteUserFriendlyURL1" stopProcessing="true"> <match url="^feed$" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> </conditions> <action type="Rewrite" url="rss.xml" /> </rule> </rules> </rewrite> [/xml] To learn more about URL Rewrite go the official site here.

Option 1 - XSD Approach

Utilizing a similar approach to how I got started, utilizing the XSD tool in Part 1, I generated a typed dataset based on the format of an RSS XML file: [xml] <?xml version="1.0"?> <rss version="2.0"> <channel> <title>Jarred Capellman</title> <link>http://www.jarredcapellman.com</link> <description>Putting 1s and 0s to work since 1995</description> <language>en-us</language> <item> <title>Version 2.0 Up!</title> <link>http://www.jarredcapellman.com/2002/05/04/version-2-0-is-up/</link> <description>Yeah in all its glory too, it's far from complete, the forum will be up tonight most likely...</description> <pubDate>5/4/2012 12:00:00 AM</pubDate> </item> </channel> </rss> [/xml] [caption id="attachment_2056" align="aligncenter" width="300"]Generated Typed Data Set for RSS Generated Typed Data Set for RSS[/caption] Then in my HomeController, I wrote a function to handle writing the XML to be called when a new Post is entered into the system: [csharp] private void writeRSSXML() { var dt = new rss(); using (var ws = new WCFServiceClient()) { var feedItems = ws.GetFeedList(); var channelRow = dt.channel.NewchannelRow(); channelRow.title = Common.Constants.SITE_NAME; channelRow.description = Common.Constants.SITE_DESCRIPTION; channelRow.language = Common.Constants.SITE_LANGUAGE; channelRow.link = Common.Constants.URL; dt.channel.AddchannelRow(channelRow); dt.channel.AcceptChanges(); foreach (var item in feedItems) { var itemRow = dt.item.NewitemRow(); itemRow.SetParentRow(channelRow); itemRow.description = item.Description; itemRow.link = buildPostURL(item.URL, item.Published); itemRow.pubDate = item.Published.ToString(CultureInfo.InvariantCulture); itemRow.title = item.Title; dt.item.AdditemRow(itemRow); dt.item.AcceptChanges(); } } var xmlString = dt.GetXml(); xmlString = xmlString.Replace("<rss>", "<?xml version=\"1.0\" encoding=\"utf-8\"?><rss version=\"2.0\">"); using (var sw = new StreamWriter(HttpContext.Server.MapPath("~/rss.xml"))) { sw.Write(xmlString); } } [/csharp] Pretty intuitive code with one exception - I could not find a way to add the version property to the rss element, thus having to use the GetXml() method and then do a more elaborate solution instead of simply calling dt.WriteXml(HttpContext.Server.MapPath("~/rss.xml")). Overall though I find this approach to be very acceptable, but not perfect.

Option 2 - Syndication Approach

Not 100% satisfied with the XSD Approach mentioned above I dove into the SyndicationFeed class. Be sure to include using System.ServiceModel.Syndication; at the top of your MVC Controller. I created the same function as above, but this time utilizing the SyndicationFeed class that is built into .NET: [csharp] private void writeRSSXML() { using (var ws = new WCFServiceClient()) { var feed = new SyndicationFeed(); feed.Title = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_NAME); feed.Description = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_DESCRIPTION); feed.Language = Common.Constants.SITE_LANGUAGE; feed.Links.Add(new SyndicationLink(new Uri(Common.Constants.URL))); var feedItems = new List<SyndicationItem>(); foreach (var item in ws.GetFeedList()) { var sItem = new SyndicationItem(); sItem.Title = SyndicationContent.CreatePlaintextContent(item.Title); sItem.PublishDate = item.Published; sItem.Summary = SyndicationContent.CreatePlaintextContent(item.Description); sItem.Links.Add(new SyndicationLink(new Uri(buildPostURL(item.URL, item.Published)))); feedItems.Add(sItem); } feed.Items = feedItems; var rssWriter = XmlWriter.Create(HttpContext.Server.MapPath("~/rss.xml")); var rssFeedFormatter = new Rss20FeedFormatter(feed); rssFeedFormatter.WriteTo(rssWriter); rssWriter.Close(); } } [/csharp] On first glance you might notice very similar code between the two approaches, with one major exception - there's no hacks to make it work as intended. Between the two I am going to go live with the later approach, not having to worry about the String.Replace ever failing and not having any "magic" strings is worth it. But I will leave the decision to you as to which to implement or maybe another approach I didn't mention - please comment if you have another approach. I am always open to using "better" or alternate approaches. Now that the WCF Service is fully integrated and RSS Feeds have been added, as far as the end user view there are but a few features remaining: Caching, Searching Content, Error Pages. Stay tuned for Part 8 tomorrow.
As mentioned yesterday I began diving back into a large project at work that involves Windows Workflow. At this point it had been almost six months to the day when I last touched the initial code I did, so today involved a lot of getting back into the mindset I had then and what I was trying to accomplish. This digging unfortunately left me figuring out I had left the code in a non-functional state, to the point that the Workflow Service was not connecting to the AppFabric databases properly. Long story short, three hours later I was able to get everything where I thought I had left it. Lesson learned here is that before jumping projects, always make sure you leave it in a usable state or at least document what isn't working properly. In my defense, the original sidetracking project was to be only three weeks. Back to the Windows Workflow development - one issue I was having today with my xamlx/Code Activity was my InArgument variables defined at a global level in my xamlx file I was not able to retrieve or set using the more proper method of: [csharp] InArgument<decimal> SomeID { get; set; protected override Guid Execute(CodeActivityContext context) { SomeID.Get(context); // To get the value SomeID.Set(context, 1234); // Set it to 1234 } [/csharp] No matter what, the value was always 0 when getting the variable's value even though it had been set on the entry point of the Workflow. After trying virtually just about everything I could I came up with a work around that does work. Do note I highly doubt this is the way Microsoft intended for it to be accomplished, but for the time being this is the only way I could get my xamlx defined variables updated/set in the custom CodeActivity. What I did was create as generic set/get functions as possible below: [csharp] private T getValue<T>(CodeActivityContext context, string name) { var properties = context.DataContext.GetProperties()[name]; if (properties == null) { return default(T); } return (T) properties.GetValue(context.DataContext); } private void setValue(CodeActivityContext context, string name, object value) { context.DataContext.GetProperties()[name].SetValue(context.DataContext, value); } [/csharp] And then to use the functionality in your CodeActivity: [csharp] protected override Guid Execute(CodeActivityContext context) { var SomeID = getValue<decimal>(context, "SomeID"); // Get the SomeID variable setValue(context, "SomeID", 1234); // Set SomeID to 1234 } [/csharp] Hopefully that gets someone on the right track and if you do eventually find the "correct" way please let me know. Otherwise I will definitely be asking the Windows Workflow experts at TechED North America in June.
Continuing onto Part 5 of my migration from WordPress to MVC 4, I dove into Content, Comments and Routing tonight. (Other Posts: Part 1, Part 2, Part 3 and Part 4). First thing I did tonight was add a new route to handle pages in the same way WordPress does (YYYY/MM/DD/) for several reasons, though my primary reason is to retain all of the links from the existing WordPress site - something I'd highly suggest you consider doing as well. As noted the other night, your MVC Routing is contained in your Global.asax.cs file. Below is the route I added to accept the same format as WordPress: [csharp] routes.MapRoute("ContentPageRoute", "{pagename}", new {controller = "Home", action = "ContentPage"}); [/csharp] Be sure to put it before the Default Route otherwise the route above will not work. After I got the Route setup, I went back into my _Layout.cshtml and updated the header links to pull from a SQL Table and then return the results to the layout: [csharp] <div class="HeaderMenu"> <nav> <ul id="menu"> <li>@Html.ActionLink("home", "Index", "Home")</li> @{ foreach (bbxp.lib.Objects.MenuItem menuItem in @Model.Base.MenuItems) { <li>@Html.ActionLink(@menuItem.Title, "ContentPage", "Home", new {pagename = @menuItem.URLName}, null)</li> } } </ul> </nav> </div> [/csharp] Further down the road I plan to add a UI interface to adjust the menu items, thus the need to make it programmatic from the start. Next on the list was actually importing the content from the export functionality in WordPress. Thankfully the structure is similar to the actual posts so it only took the following code to get them all imported: [csharp] if (item.post_type == "page") { var content = eFactory.Contents.Create(); content.Active = true; content.Body = item.encoded; content.Created = DateTime.Parse(item.post_date); content.Modified = DateTime.Parse(item.post_date); content.PostedByUserID = creator.ID; content.Title = item.title; content.URLSafename = item.post_name; eFactory.Contents.Add(content); eFactory.SaveChanges(); continue; } [/csharp] With some time to spare, I started work on the Comments piece of the migration. Immediately after the Post creation in the Importer, I added the following to import all of the comments: [csharp] foreach (var comment in item.GetcommentRows()) { var nComment = eFactory.PostComments.Create(); nComment.Active = true; nComment.Body = comment.comment_content; nComment.Created = DateTime.Parse(comment.comment_date); nComment.Modified = DateTime.Parse(comment.comment_date); nComment.PostID = post.ID; nComment.Email = comment.comment_author_email; nComment.Name = comment.comment_author; eFactory.PostComments.Add(nComment); eFactory.SaveChanges(); } [/csharp] And now that there were actual comments in the system, I went back into my partial view for the Posts and added the code to display the Comments Link and Total properly: [csharp] <div class="CommentLink"> @{ object commentLink = @bbxp.mvc.Common.Constants.URL + @Model.PostDate.Year + "/" + @Model.PostDate.Month + "/" + @Model.PostDate.Day + "/" + @Model.URLSafename; <h4><a href="@commentLink">@Model.NumComments @(Model.NumComments == 1 ? "Comment" : "Comments")</a></h4> } </div> [/csharp] After getting the Comments Count displayed I wanted to do some refactoring on the code up to now. Now that I've got a pretty good understanding of MVC architecture I started to create Base objects. The commonly pulled in data for instance (Tag Cloud, Menu Items, Archive List etc.) I now have in a BaseModel and pulled in a BaseController. 
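A minimal sketch of what that base arrangement could look like - the exact properties and population logic here are assumptions for illustration, not the actual bbXP classes:
[csharp]
// Common data every page needs, populated once in the base controller
public class BaseModel
{
    public List<bbxp.lib.Objects.MenuItem> MenuItems { get; set; }
    public List<bbxp.lib.Objects.TagCloudItem> TagCloud { get; set; }
    public List<bbxp.lib.Objects.ArchiveItem> ArchiveList { get; set; }
}

public abstract class BaseController : Controller
{
    protected BaseModel baseModel;

    protected BaseController()
    {
        baseModel = new BaseModel();

        using (var bFactory = new BaseFactory())
        {
            baseModel.MenuItems = bFactory.GetMenuItems();
        }

        using (var pFactory = new PostFactory())
        {
            baseModel.TagCloud = pFactory.GetTagCloud();
            baseModel.ArchiveList = pFactory.GetArchiveList();
        }
    }
}

// Controllers then simply inherit and hand baseModel to their view models
public class HomeController : BaseController { }
[/csharp]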
All Controllers now inherit from this BaseController. I cut down on a good chunk of code and feel pretty confident that as time goes on I will be able to expand upon this baseline architecture very easily. [caption id="attachment_2032" align="aligncenter" width="300"]Migration Project as of Part 5 Migration Project as of Part 5[/caption] So what is on the plate next? Getting the Comments displayed, the ability to post new comments and, on the back end, emailing people when a new comment is entered for a particular post.
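For anyone curious what that baseline looks like in practice, below is a minimal sketch of the BaseModel/BaseController idea; the member names and loader methods are placeholders for illustration rather than the actual bbXP code.

[csharp]
using System.Collections.Generic;
using System.Web.Mvc;

public class BaseModel
{
    // The commonly pulled in data every view needs
    public List<bbxp.lib.Objects.MenuItem> MenuItems { get; set; }
    public Dictionary<string, int> TagCloud { get; set; }
    public List<string> ArchiveList { get; set; }
}

public abstract class BaseController : Controller
{
    // Called by each derived controller so every page model carries the common data
    protected BaseModel GetBaseModel()
    {
        return new BaseModel
        {
            MenuItems = LoadMenuItems(),
            TagCloud = LoadTagCloud(),
            ArchiveList = LoadArchiveList()
        };
    }

    // In the real implementation these would query Entity Framework
    protected abstract List<bbxp.lib.Objects.MenuItem> LoadMenuItems();
    protected abstract Dictionary<string, int> LoadTagCloud();
    protected abstract List<string> LoadArchiveList();
}
[/csharp]

Each page model then exposes the populated BaseModel through a property such as Base, which is what the @Model.Base.MenuItems reference in the layout shown earlier pulls from.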
Today I began a several month project that includes an extensive Windows Workflow implementation, Mail Merging based around a Word Template (dotx) and extensive integrations with WCF Services, WebForms and a WinForms application. Without going into a ton of detail, this project will most likely be my focus for the next 8-9 months at least. That being said, today I dove back into the OpenXML Mail Merging I began last October. Realizing the scope of the Mail Merging was evolving, I looked into possibly using an external library. Aspose's Word Library looked like it fit the bill for what I was planning to achieve, while allowing me to retire some stop gaps I had put in place years ago. Luckily, given the way I had implemented OpenXML myself, it was as easy as replacing 20-30 lines in my main document generation class with the following: [csharp] var docGenerated = new Document(finalDocumentFileName); // Callback to handle HTML Tables docGenerated.MailMerge.FieldMergingCallback = new MailMergeFieldHandler(); // Get all of the Mail Merge Fields in the Document var fieldNames = docGenerated.MailMerge.GetFieldNames(); // Check to make sure there are Mail Merge Fields if (fieldNames != null && fieldNames.Length > 0) { foreach (string fieldName in fieldNames) { var fieldValue = fm.Merge(fieldName); // Replace System.Environment.NewLine with LineBreak if (!String.IsNullOrEmpty(fieldValue)) { fieldValue = fieldValue.Replace(System.Environment.NewLine, Aspose.Words.ControlChar.LineBreak); } // Perform the Mail Merge docGenerated.MailMerge.Execute(new string[] {fieldName}, new object[] {fieldValue}); } } // Save the document to a PDF docGenerated.Save(finalDocumentFileName.Replace(".dotx", ".pdf")); [/csharp] And the Callback Class referenced above: [csharp] public class MailMergeFieldHandler : IFieldMergingCallback { void IFieldMergingCallback.FieldMerging(FieldMergingArgs e) { if (e.FieldValue == null) { return; } // Only do the more extensive merging for Field Values that start with a <table> tag if (!e.FieldValue.ToString().StartsWith("<table")) { e.Text = e.FieldValue.ToString(); return; } // Merge the HTML Tables var builder = new DocumentBuilder(e.Document); builder.MoveToMergeField(e.DocumentFieldName); builder.InsertHtml((string)e.FieldValue); e.Text = ""; } void IFieldMergingCallback.ImageFieldMerging(ImageFieldMergingArgs args) { } } [/csharp] More to come with the Aspose library as I explore more features, but so far I am very pleased with the ease of use and the performance of the library.
A couple weeks back I needed to integrate a WordPress site with a C# WCF Service. Having only interfaced with the older "classic" ASP.NET Web Services (aka asmx) nearly 6 years ago, I was curious if there had been improvements to the SoapClient interface inside of PHP.

Largely it was exactly as I remembered from 2007. The one thing I really wanted to do with this integration was pass a serialized object up to the WCF Service from the PHP code - unfortunately, due to time constraints, this was not achievable - if anyone knows how and can post a snippet, I'd be curious. That being said, the following sample code passes simple data types as parameters to a WCF Operation Contract from PHP.

Given the following WCF Operation Contract definition: [csharp] [OperationContract] string CreateNewUser(string firstName, string lastName, string emailAddress, string password); [/csharp] Those who have done some PHP in the past should be able to understand the code below. I should note that when doing your own WCF integration, the $params->variableName casing needs to match the actual WCF Service. In addition, the result object exposes a property named after the WCF Operation Contract plus the "Result" suffix (CreateNewUserResult below). [php] <?php class WCFIntegration { const WCFService_URL = "http://somesite.com/NameOfYourService.svc?wsdl"; public function addUser($emailAddress, $firstName, $lastName, $password) { try { // Initialize the "standard" SOAP Options $options = array('cache_wsdl' => WSDL_CACHE_NONE, 'encoding' => 'utf-8', 'soap_version' => SOAP_1_1, 'exceptions' => true, 'trace' => true); // Create a connection to the WCF Service $client = new SoapClient(self::WCFService_URL, $options); if ($client == null) { throw new Exception('Could not connect to WCF Service'); } // Build the parameter object based on the argument values $params = new stdClass(); $params->emailAddress = $emailAddress; $params->firstName = $firstName; $params->lastName = $lastName; $params->password = $password; // Submit the $params object $result = $client->CreateNewUser($params); // Check the return value, in this case the WCF Service Operation Contract returns "Success" upon a successful insertion if ($result->CreateNewUserResult === "Success") { return true; } throw new Exception($result->CreateNewUserResult); } catch (Exception $ex) { echo 'Error in WCF Service: '.$ex->getMessage().'<br/>'; return false; } } } ?> [/php] Then to actually utilize this class in your existing code: [php] $wcfClient = new WCFIntegration(); $result = $wcfClient->addUser('test@email.com', 'John', 'Doe', 'password'); if (!$result) { echo 'User creation failed - see the error output above.'; } [/php] Hopefully that helps someone out who might not be as proficient in PHP as they are in C# or vice-versa.
Wrapping up the large MonoDroid application for work, one thing I had been putting off for one reason or another was handling non-image files. I was semi-fearful the experience was going to be like on MonoTouch, but surprisingly it was very similar to what you find on Windows Phone 8, where you simply tell the OS you need to open a specific file type and let the OS handle which program to open it in for you. There are drawbacks to this approach - you don't have an "in-your-app" experience - but it does alleviate the need to write a universal display control in your app; in my case this would have involved supporting iOS, Windows Phone 7.1/8 and Android, something I did not want to develop or maintain. So how do you accomplish this feat? Pretty simple, assuming you have the following namespaces in your class: [csharp] using System.IO; using Android.App; using Android.Content; using Android.Graphics; using Android.OS; using Android.Webkit; using Android.Widget; using File = Java.IO.File; using Path = System.IO.Path; [/csharp] I wrote a simple function to get the Android MimeType; if none exists then return a wildcard: [csharp] public string getMimeType(string extension) { // Note: the extension is expected without the leading '.' if (extension.Length > 0) { if (MimeTypeMap.Singleton != null) { var mimeType = MimeTypeMap.Singleton.GetMimeTypeFromExtension(extension); if (mimeType != null) { return mimeType; } } } return "*/*"; } [/csharp] And then the actual function that accepts your filename, file extension and the byte array of the actual file: [csharp] private void loadFile(string filename, string extension, byte[] data) { var path = Path.Combine(Android.OS.Environment.ExternalStorageDirectory.Path, filename); using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.ReadWrite)) { fs.Write(data, 0, data.Length); fs.Close(); } var targetUri = Android.Net.Uri.FromFile(new File(path)); var intent = new Intent(Intent.ActionView); intent.SetDataAndType(targetUri, getMimeType(extension)); StartActivity(intent); } [/csharp] Granted if you've already got your file on your device you can modify the code appropriately, but in my case I store everything on a NAS and then pull the file into a WCF DataContract and then back down to the phone. This approach has numerous advantages with a little overhead in both complexity and time to deliver the content - however if you have another approach I'd love to hear it. I hope this helps someone out there, I know I couldn't find a simple answer to this problem when I was implementing this functionality today.
Back at MonoDroid Development at work this week and ran into a serious issue with a DataContract and an InvalidDataContractException. Upon logging into my app instead of receiving the DataContract class object, I received this: [caption id="attachment_1943" align="aligncenter" width="300"]MonoDroid InvalidDataContractException MonoDroid InvalidDataContractException[/caption] The obvious suspect would be to verify the class had a getter and setter - sure enough both were public. Digging a bit further MonoTouch apparently had an issue at one point with not preserving all of the DataMembers in a DataContract so I add the following attribute to my class object: [csharp] [DataContract, Android.Runtime.Preserve(AllMembers=true)] [/csharp] Unfortunately it still threw the same exception - diving into the forums on Xamarin's site and Bing, I could only find one other developer who had run into the same issue, but had 0 answers. Exhausting all avenues I turned to checking different combinations of the Mono Android Options form in Visual Studio 2012 since I had a hunch it was related to the linker. After some more time I found the culprit - in my Debug configuration the Linking dropdown was set to Sdk and User Assemblies. [caption id="attachment_1944" align="aligncenter" width="300"]MonoDroid Linking Option in Visual Studio 2012 MonoDroid Linking Option in Visual Studio 2012[/caption] As soon as I switched it back over to Sdk Assemblies Only I was back to receiving my DataContract. Though I should also note - DataContract objects are not treated in MonoDroid like they are in Windows Phone, MVC or any other .NET platform I've found. Null really doesn't mean null, so what I ended up doing was changing my logic to instead return an empty object instead of a null object and both the Windows Phone and MonoDroid apps work perfectly off the same WCF Proxy Class.
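For reference, the null handling change boils down to something like the sketch below on the WCF side; the UserProfile type and the lookup call are placeholders rather than the actual service code.

[csharp]
using System.Runtime.Serialization;

[DataContract]
public class UserProfile
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string EmailAddress { get; set; }
}

public partial class UserService
{
    public UserProfile Login(string username, string password)
    {
        // Placeholder for the existing data access call
        var profile = lookupUserProfile(username, password);

        // Never hand a null DataContract back to the MonoDroid client - return an
        // empty object instead and let the client treat ID == 0 as "not found"
        return profile ?? new UserProfile();
    }

    private UserProfile lookupUserProfile(string username, string password)
    {
        // The real implementation queries the database; null here simulates a failed login
        return null;
    }
}
[/csharp]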
If you've been following my blog posts over the last couple of years you'll know I have a profound love of using XML files for reading and writing for various purposes. The files are small and, because of things like Typed Datasets in C#, you can have clean interfaces to read and write XML files. In Windows Phone however, you do not have Typed Datasets, so you're stuck utilizing the XmlSerializer to read and write. To make that a little easier, going back to last Thanksgiving I wrote some helper classes in my NuGet library jcWPLIBRARY. The end result: within a few lines you can read and write List collections of class objects of your choosing. So why continue down this path? Simple answer: I wanted it to be better. Tonight I embarked on a "Version 2" of this functionality that really makes it easy to keep with your existing Entity Framework knowledge, but provides the functionality of a database on a Windows Phone 8 device, which currently doesn't exist in the same vein it does in an MVC, WinForms, WebForms or Console app. To make this even more of a learning experience, I plan to blog the entire process. The first part of the project: reading all of the objects from an existing file. To begin, I am going to utilize the existing XmlHandler class in my existing library. This code has been battle tested and I feel no need to write something from scratch, especially since I am going to leave the existing classes in the library so as not to break anyone's apps or my own. First thought: what does an XmlSerializer file actually look like when written? Let's assume you have the following pretty basic class: [csharp] public class Test : jcDB.jObject { public int ID { get; set; } public bool Active { get; set; } public string Name { get; set; } public DateTime Created { get; set; } } [/csharp] The output of the file is like so: [xml] <?xml version="1.0" encoding="utf-8"?> <ArrayOfTest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Test> <ID>1</ID> <Active>true</Active> <Name>Testing Name</Name> <Created>2013-04-03T20:47:09.8491958-04:00</Created> </Test> </ArrayOfTest> [/xml] I often forget the XmlSerializer uses an "ArrayOf" prefix on the name of the root object, so when testing with sample data while writing a new Windows Phone 8 app I have to refer back - hopefully that helps someone out. Going back to the task at hand - reading data from an XML file and providing an "Entity Framework" like experience - that requires a custom LINQ Provider and another day of programming it. Stay tuned for Part 2 where I go over creating a custom LINQ Provider bound to an XML File.
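In the meantime, for anyone who just needs the raw XmlSerializer plumbing that the helper classes wrap, a minimal read/write sketch against the Test class above looks roughly like this (desktop-style file IO is used for brevity; on Windows Phone 8 you would open the streams through IsolatedStorageFile instead):

[csharp]
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public static class XmlFileHelper
{
    public static void Write(string fileName, List<Test> items)
    {
        // Serializing a List<Test> is what produces the <ArrayOfTest> root element shown above
        var serializer = new XmlSerializer(typeof(List<Test>));
        using (var stream = File.Create(fileName))
        {
            serializer.Serialize(stream, items);
        }
    }

    public static List<Test> Read(string fileName)
    {
        var serializer = new XmlSerializer(typeof(List<Test>));
        using (var stream = File.OpenRead(fileName))
        {
            return (List<Test>)serializer.Deserialize(stream);
        }
    }
}
[/csharp]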
Working on a new project in my free time and remembered a few years back using the System.ServiceModel.Syndication namespace to pull down an RSS Feed at work. To my surprise in a new MVC 4 application the Assembly is no longer listed in the available assemblies in Visual Studio 2012: [caption id="attachment_1923" align="aligncenter" width="300"].NET 4.5 Framework Assembly List in Visual Studio 2012 .NET 4.5 Framework Assembly List in Visual Studio 2012[/caption] Digging in the base System.ServiceModel assembly I opened the assembly in the Object Browser, sure enough the System.ServiceModel.Syndication namespace exists there now: [caption id="attachment_1924" align="aligncenter" width="278"]System.ServiceModel opened in Visual Studio 2012's Object Browser System.ServiceModel opened in Visual Studio 2012's Object Browser[/caption] So for those coming to this post for a solution simply reference System.ServiceModel like so: [caption id="attachment_1925" align="aligncenter" width="300"]System.ServiceModel checked off in the Reference Manager of Visual Studio 2012 System.ServiceModel checked off in the Reference Manager of Visual Studio 2012[/caption] So then in your code you could do something like this to pull down RSS Items from my site and then in turn display them in your MVC View, Windows Phone 8 page etc. [csharp] var model = new Models.FeedModel(); var sFeed = SyndicationFeed.Load(XmlReader.Create("http://www.jarredcapellman.com/feed/")); model.FeedItems = sFeed.Items.Select(a => new FeedItem { Content = a.Summary.Text, PublicationDate = a.PublishDate.DateTime, Title = a.Title.Text, URL = a.Links.FirstOrDefault().Uri.ToString() }).ToList(); [/csharp]
This morning I was adding a document handling page to an ASP.NET WebForms project that uses ASP.NET's Theming functionality. Part of the document handling functionality is to pull in the file and return the bytes to the end user, all without exposing the actual file path (a huge security hole if you do). Since you're writing via Response.Write in these cases, you want your ASPX markup to be empty, otherwise you'll end up with a "Server cannot set content type after HTTP headers have been sent" exception - if you've done WebForms development you know full well what the cause is. For those that don't, the important thing to remember is that the response needs to contain only the bytes of the file you are returning. That means no HTML markup in your ASPX file. Upon deploying the code to my development server I received this exception: ASP.NET Theme Exception Easy solution? Update your Page directive with EnableTheming="false", StylesheetTheme="" and Theme="" on the page you want to have empty markup. [csharp] <%@ Page Language="C#" AutoEventWireup="true" EnableTheming="false" StylesheetTheme="" Theme="" CodeBehind="FileExport.aspx.cs" Inherits="SomeWebApp.Common.FileExport" %> [/csharp] Not something I run into very often as I roll my own Theming Support, but for those in a legacy situation or with inherited code, as in this case, I hope this helps.
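For completeness, the code behind an empty-markup page like this generally looks something along the lines of the sketch below; the GetDocumentPath lookup is a placeholder for however you resolve the real path server side, which is never sent to the client.

[csharp]
protected void Page_Load(object sender, EventArgs e)
{
    // The client only ever passes an ID - the path is resolved server side
    var documentId = int.Parse(Request.QueryString["id"]);
    var filePath = GetDocumentPath(documentId);

    Response.Clear();
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + System.IO.Path.GetFileName(filePath));
    Response.BinaryWrite(System.IO.File.ReadAllBytes(filePath));
    Response.End();
}

private string GetDocumentPath(int documentId)
{
    // Placeholder - in the real page this maps an ID to a file outside the web root
    return Server.MapPath("~/App_Data/documents/" + documentId + ".pdf");
}
[/csharp]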
After attending a Windows Phone 8 Jumpstart at Chevy Chase, MD earlier today, I got asked about tips for developing cross-platform with as much code re-use as possible. Having been working on Version 2 of a large, platform-ubiquitous application since October, I've had some new thoughts since my August 2012 post, Cross-Platform Mobile Development and WCF Architecture Notes. Back then I was focused on using a TPL-enabled WCF Service to be hit by the various platforms (ASP.NET, Windows Phone, Android, iOS etc.). This approach had a couple of problems for a platform that needs to support an ever growing concurrent client base. The main problem is that there is a single point of failure: if the WCF Service goes down, the entire platform goes with it. In addition, it does not allow more than one WCF server to be involved in the application. The other problem is that while the business logic is hosted in the cloud/on a dedicated server with my August 2012 approach, it doesn't share the actual WCF Service proxies or other common code.

What is an easy solution for this problem of scalability?

Taking an existing WCF Service and then implementing a queuing system where possible. This way the client can get an instantaneous response, thus leaving the main WCF Service resources to process the non-queueable Operation Contracts.

How would you go about doing this?

You could start out by writing a Windows Service to constantly monitor a set of SQL Tables, XML files etc., depending on your situation. To visualize this: [caption id="attachment_1895" align="aligncenter" width="300"]Queue Based Architecture (3/7/2013) Queue Based Architecture (3/7/2013)[/caption] In a recent project at work, in addition to a Windows Service, I added another database and another WCF Service to help distribute the work. The main idea being that each big, typically resource intensive operation is offloaded to another service, with the option to move it to an entirely different server. A good point to make here is that the connection between the WCF Services is done via binary, not JSON or XML.
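To make the queue idea a little more concrete, below is a bare-bones sketch of the kind of Windows Service I am describing; the QueueItems table, connection string and processing logic are all placeholders for your own setup.

[csharp]
using System.Data.SqlClient;
using System.ServiceProcess;
using System.Timers;

public class QueueProcessingService : ServiceBase
{
    private const string ConnectionString = "Data Source=.;Initial Catalog=QueueDB;Integrated Security=True";
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // Poll the queue table every 5 seconds
        _timer = new Timer(5000);
        _timer.Elapsed += (sender, e) => ProcessPendingItems();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    private void ProcessPendingItems()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            var select = new SqlCommand("SELECT ID FROM QueueItems WHERE Processed = 0", connection);
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The resource intensive work happens here, completely off the
                    // main WCF Service that gave the client its instantaneous response
                    ProcessItem(reader.GetInt32(0));
                }
            }
        }
    }

    private void ProcessItem(int queueItemId)
    {
        // Placeholder - do the heavy lifting, then flag the row as Processed = 1
    }
}
[/csharp]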

Increase your code sharing between platforms

Something that has become more and more important for me as I add more platforms to my employer's main application is code reuse. This has several advantages:
  1. Updates to one platform affect all of them - less work and fewer problems, since you no longer have to remember to update every platform when an addition, change or fix occurs
  2. For a one-developer team like myself, it is a huge time saver, especially from a maintenance perspective

What can you do?

In the last couple of months there have been great new approaches to code re-use. A great way to start is to create a Portable Class Library, or PCL. PCLs can be used to create libraries to be compiled for Windows Phone 7/8, ASP.NET, MVC, WinForms, WPF, WCF, MonoDroid and many other platforms. All but MonoDroid are supported out of the box; however, I recently went through how to Create a Portable Class Library in MonoDroid. The best thing about PCLs: your code is entirely reusable, so you can put your WCF Service proxy(ies) and common code such as constants in one place. The one thing to keep in mind is to follow the practice of not embedding your business, presentation and data layers in your platform-specific applications.
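As a trivial illustration of what belongs in that shared PCL - constants, DTOs and thin wrappers around the generated WCF proxy, nothing platform specific - the names below are purely made up:

[csharp]
namespace MyApp.Portable
{
    // Compiled once, referenced by the Windows Phone, MVC and MonoDroid projects alike
    public static class Constants
    {
        public const string ServiceEndpoint = "http://somesite.com/Service.svc";
        public const int DefaultTimeoutSeconds = 30;
    }

    // A DTO shared by every client instead of being redefined per platform
    public class UserResult
    {
        public int ID { get; set; }
        public string EmailAddress { get; set; }
    }
}
[/csharp]

The platform projects then contain only the UI layer and whatever platform glue (file IO, navigation and so on) cannot live in the PCL.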
Diving into MVC 4 this week and going through the default Kendo UI MVC 4 project type in Visual Studio 2012 I noticed quite a few assemblies I knew I wouldn't need for my current project, namely the DotNetOpenAuth assemblies. I removed the 6 DotNetOpenAuth.* assemblies: [caption id="attachment_1890" align="aligncenter" width="304"]MVC 4 Assemblies MVC 4 Assemblies[/caption] In addition you'll need to remove the Microsoft.Web.WebPages.OAuth reference as well. To my surprise, upon building and debugging the new project I received the following exception: [caption id="attachment_1891" align="aligncenter" width="550"]MVC 4 Exception - DotNetOpenAuth Not Found MVC 4 Exception - DotNetOpenAuth Not Found[/caption] I double checked my packages.config and web.config config files for any reference, to no avail. As a last resort I deleted my bin and obj folders, rebuilt the solution and sure enough it started without any issues. Hopefully that helps someone out.
This morning, as I was continuing to dive into porting V2 of the product (I recently wrapped up the ASP.NET and Windows Phone versions), I got the following exception when attempting to populate a few EditText objects after a WCF request completed: [caption id="attachment_1884" align="aligncenter" width="559"]Lovely exception when trying to update EditText Fields on another thread Lovely exception when trying to update EditText Fields on another thread[/caption] The solution, assuming you're on another thread, is to use RunOnUiThread with the function you wish to update the UI in, like so: [csharp] private void setUserProfileFields() { editTextEmailAddress = FindViewById<EditText>(Resource.Id.txtBxProfileEmailAddress); editTextEmailAddress.Text = App.ViewModel.CurrentUserProfile.EmailAddress; } void MainModel_PropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) { switch (e.PropertyName) { case "UP_LOADED": RunOnUiThread(setUserProfileFields); break; } } [/csharp]
Ran into a fun MonoTouch error inside of Visual Studio 2012 today: [caption id="attachment_1877" align="aligncenter" width="550"]Could not load file or assembly 'moscorlib' Could not load file or assembly 'moscorlib'[/caption] Oddly enough, it only started doing that prior to a successful remote deployment. The solution is to go into your Visual Studio 2012 project properties, Build Tab and switch Generate serialization assembly to off. [caption id="attachment_1878" align="aligncenter" width="661"]Visual Studio 2012 Generate Serialization Assembly Dropdown Visual Studio 2012 Generate Serialization Assembly Dropdown[/caption]
Jumping back into MonoDroid development the last couple days at work after having not touched it in almost a year, I knew I was going to be rusty. Interestingly enough I'm finding it much closer to Windows Phone development than I remembered. Having had no luck in finding MonoDroid for Windows Phone Developers I figured I'd start an ongoing post.

Open a Browser

In Windows Phone you would open a Web Browser with the following function: [csharp] private void openBrowser(string url) { var task = new WebBrowserTask(); task.Uri = new Uri(url); task.Show(); } [/csharp] However in MonoDroid you have to do this: [csharp] private void openBrowser(string url) { var browserIntent = new Intent(Intent.ActionView, Android.Net.Uri.Parse(url)); StartActivity(browserIntent); } [/csharp]

New Line in TextView

In Windows Phone in your XAML you'd do something like this to insert a new line into your TextBlock: [csharp] <TextBlock>This line is awesome<LineBreak/>but this one is better</TextBlock> [/csharp] However in MonoDroid you need to be sure to set the singleLine property to false like so: [csharp] <TextView android:layout_width="fill_parent" android:singleLine="false" android:text="This line is awesome\r\nbut this one is better" /> [/csharp]

Login form (aka tapping enter goes to next field, with the last field hiding the keyboard)

In your XAML on Windows Phone you might have something like the following: [csharp] <StackPanel Orientation="Vertical"> <TextBox x:Name="TextBoxUsername" /> <TextBox x:Name="TextBoxPassword" /> </StackPanel> [/csharp] And then in your code behind: [csharp] public LoginPage() { InitializeComponent(); TextBoxUsername.KeyDown += TextBoxUsername_KeyDown; TextBoxPassword.KeyDown += TextBoxPassword_KeyDown; } void TextBoxUsername_KeyDown(object sender, KeyEventArgs e) { if (e.Key == Key.Enter) { TextBoxPassword.Focus(); } } void TextBoxPassword_KeyDown(object sender, KeyEventArgs e) { if (e.Key == Key.Enter) { Focus(); } } [/csharp] Basically upon hitting the enter key while in the TextBoxUsername field it will set the focus to the TextBoxPassword field. Upon hitting enter in the TextBoxPassword field, it will set the focus to the main page and close the keyboard. For MonoDroid, it is a little different. In your axml: [csharp] <EditText android:id="@+id/TextBoxUsername" android:imeOptions="actionNext" android:singleLine="true" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <EditText android:id="@+id/TextBoxPassword" android:imeOptions="actionDone" android:singleLine="true" android:layout_width="fill_parent" android:layout_height="wrap_content" /> [/csharp] The key part is the android:imeOptions values, actionNext will move the focus to the next EditText field and the actionDone will send the done command back to your keylisteners etc. In your Activity code behind, you need to add this override. Update the Resource.Id.xxxxxxx with the name of the field you want the keyboard to hide upon hitting enter: [csharp] public override bool DispatchKeyEvent(KeyEvent e) { if (CurrentFocus.Id == Resource.Id.TextBoxPasswordKey && (e.KeyCode == Keycode.NumpadEnter || e.KeyCode == Keycode.Enter)) { var imm = GetSystemService(Context.InputMethodService) as InputMethodManager; if (imm != null) { imm.HideSoftInputFromWindow(this.CurrentFocus.WindowToken, 0); } return true; } return base.DispatchKeyEvent(e); } [/csharp] I should also note you'll need the using Android.Views.InputMethods; line added to your code behind as well.

Capturing Images via the Camera

In Windows Phone capturing images from your Library or Taking a new picture is pretty trivial, assuming you have a Button to choose/take the picture: [csharp] PhotoChooserTask _pcTask = null; private byte[] _pictureBytes; public PictureUpload() { InitializeComponent(); _pcTask = new PhotoChooserTask(); _pcTask.Completed += new EventHandler<PhotoResult>(_pcTask_Completed); } void _pcTask_Completed(object sender, PhotoResult e) { if (e.TaskResult == TaskResult.OK) { MemoryStream ms = new MemoryStream(); e.ChosenPhoto.CopyTo(ms); _pictureBytes = ms.ToArray(); ms.Dispose(); } } private void btnChooseImage_Click(object sender, RoutedEventArgs e) { _pcTask.ShowCamera = true; _pcTask.Show(); } [/csharp] From there just upload the _pictureBytes to your WCF Service or wherever. In MonoDroid as expected is a little different, assuming you have a button click event to take the picture and an ImageView to display the image: [csharp] private string _imageUri; // Global variable to access the image's Uri later void btnChooseImage_Click(object sender, EventArgs e) { var uri = ContentResolver.Insert(isMounted ? Android.Provider.MediaStore.Images.Media.ExternalContentUri : Android.Provider.MediaStore.Images.Media.InternalContentUri, new ContentValues()); _imageUri = uri.ToString(); var i = new Intent(Android.Provider.MediaStore.ActionImageCapture); i.PutExtra(Android.Provider.MediaStore.ExtraOutput, uri); StartActivityForResult(i, 0); } protected override void OnActivityResult(int requestCode, Result resultCode, Intent data) { if (resultCode == Result.Ok && requestCode == 0) { imageView = FindViewById<ImageView>(Resource.Id.ivThumbnail); imageView.DrawingCacheEnabled = true; imageView.SetImageURI(Android.Net.Uri.Parse(_imageUri)); } } [/csharp] At this point you have the picture taken and the Uri of the image. In your Layout: [csharp] <Button android:text="Choose Image" android:id="@+id/btnChooseImage" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <ImageView android:id="@+id/ivThumbnail" android:layout_width="300dp" android:layout_height="150dp" /> [/csharp]

Loading a picture from local storage and avoiding the dreaded java.lang.outofmemory exception

A fairly common scenario, maybe pulling an image from the code described above, now you want to upload it some where? In the Windows Phone above in addition to taking/capturing the picture, we have a byte[] with the data, on MonoDroid it is a little different. A situation I ran into on my HTC Vivid was a java.lang.outofmemory exception. Further investigation, apparently Android has a 24mb VM limit per app (and some devices it is set to 16mb). Doing some research, I came across Twig's post. As expected it was in Java, so I converted it over to MonoDroid and added some additional features to fit my needs. So literally this function will return a scaled Bitmap object for you to turn around and convert to a Byte[]. The function: [csharp] private Android.Graphics.Bitmap loadBitmapFromURI(Android.Net.Uri uri, int maxDimension) { var inputStream = ContentResolver.OpenInputStream(uri); var bfOptions = new Android.Graphics.BitmapFactory.Options(); bfOptions.InJustDecodeBounds = true; var bitmap = Android.Graphics.BitmapFactory.DecodeStream(inputStream, null, bfOptions); inputStream.Close(); var resizeScale = 1; if (bfOptions.OutHeight > maxDimension || bfOptions.OutWidth > maxDimension) { resizeScale = (int)Math.Pow(2, (int)Math.Round(Math.Log(maxDimension / (double)Math.Max(bfOptions.OutHeight, bfOptions.OutWidth)) / Math.Log(0.5))); } bfOptions = new Android.Graphics.BitmapFactory.Options(); bfOptions.InSampleSize = resizeScale; inputStream = ContentResolver.OpenInputStream(uri); bitmap = Android.Graphics.BitmapFactory.DecodeStream(inputStream, null, bfOptions); inputStream.Close(); return bitmap; } [/csharp] For a practical use, loading the image, scaling if necessary and then getting a Byte[]: [csharp] var bitmap = loadBitmapFromURI(Android.Net.Uri.Parse(_imageUri), 800); var ms = new MemoryStream(); bitmap.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms); [/csharp] At this point doing a ms.ToArray() will get you to the same point the Windows Phone code above did, so if you had a WCF Service, you could at this point upload the byte array just as you could with a Windows Phone above.

Transparent Background on a ListView, LayoutView etc?

In Windows Phone you can set the Background or Foreground properties to simply Transparent like so: [csharp] <StackPanel Background="Transparent" Orientation="Vertical"> <TextBlock>Transparency is awesome</TextBlock> </StackPanel> [/csharp] In MonoDroid it's simply "@null", like so for a ListView: [csharp] <ListView android:background="@null" android:layout_width="match_parent" android:layout_height="match_parent"> <TextView android:text="Transparency is awesome" android:gravity="center" /> </ListView> [/csharp]

Locking Orientation

In Windows Phone you can set your page's orientation to be forced into Landscape or Portrait in your xaml like so: [csharp] <phone:PhoneApplicationPage SupportedOrientations="Portrait" Orientation="Portrait"> [/csharp] In MonoDroid I couldn't figure out a better way to do it than the following line inside your Activity's OnCreate function like so: [csharp] protected override void OnCreate(Bundle bundle) { base.OnCreate(bundle); RequestedOrientation = ScreenOrientation.Portrait; } [/csharp] On a side note, you'll need to add this line to the top of your Activity if it isn't already there: using Android.Content.PM;
For the last 2.5 years I've been using the default MonoDevelop Web Reference in my MonoTouch applications for work, but it's come to the point where I really need and want to make use of the WCF features that I do in my Windows Phone applications. Even with the newly released iOS Integration in Visual Studio 2012, you still have to generate the proxy class with the slsvcutil included with the Silverlight 3.0 SDK. If you're like me, you probably don't have Version 3.0 of the Silverlight SDK; you can get it from Microsoft here. When running the tool you might get the following error: Error: An error occurred in the tool. Error: Could not load file or assembly 'C:\Program Files (x86)\Microsoft Silverlight\5.1.10411.0\System.Runtime.Serialization.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Basically the tool is incorrectly trying to pull the newer 4.0 or 5.0 Silverlight assemblies. To make it easy, I created a config file to simply drop into your c:\Program Files (x86)\Microsoft SDKs\Silverlight\v3.0\Tools folder; you can download it here. From a command line (remember the shortcut of holding shift down and right clicking in the folder to open a command prompt): [caption id="attachment_1853" align="aligncenter" width="593"]Silverlight 3 WCF Proxy Generation Silverlight 3 WCF Proxy Generation[/caption] Enter the following, assuming you want to create a proxy for a localhost WCF Service and output it to your c:\tmp folder: [powershell] SlSvcUtil.exe http://localhost/Service.svc?wsdl /noconfig /d:c:\tmp [/powershell] Though I should note, this will generate Array collections and not List or ObservableCollection collections. If you want to generate your Operation Contracts with return types of those collections, simply add for List collections: /collectionType:System.Collections.Generic.List`1 or for ObservableCollection: /collectionType:System.Collections.ObjectModel.ObservableCollection`1
Ran into a frustrating issue this morning with an app using MVVM in Windows Phone 7. I had a couple of textboxes, 2 with multi-line support, and an Application Bar with a Save Icon. The idea being if you wanted to save your changes in the textboxes, tap the Save Icon and everything would save. Little did I know, TextBoxes only trigger an update to their binding when losing focus. So if the end user left the textbox focused, the trigger wouldn't occur and the save functionality in your View Model would not have the newly entered text. A clean workaround for this problem is to create a generic OnTextChanged event handler function to trigger the update and then map that function to each of your textboxes. Here's some sample code in my XAML file: [csharp] <telerikPrimitives:RadTextBox TextChanged="Rtb_OnTextChanged" Text="{Binding Option.Title, Mode=TwoWay}" x:Name="rtbTitle" Watermark="enter your title here" Header="cover page title" HideWatermarkOnFocus="True" /> <telerikPrimitives:RadTextBox TextChanged="Rtb_OnTextChanged" TextWrapping="Wrap" Text="{Binding Option.CoverPageText, Mode=TwoWay}" x:Name="rtbCoverPage" Height="200" Watermark="enter your cover page text here" Header="cover page text" HideWatermarkOnFocus="True" /> <telerikPrimitives:RadTextBox TextChanged="Rtb_OnTextChanged" TextWrapping="Wrap" Text="{Binding Option.SummaryPageText, Mode=TwoWay}" x:Name="rtbSummaryPage" Height="200" Watermark="enter your summary page text here" Header="summary page text" HideWatermarkOnFocus="True" /> [/csharp] And then in your code behind: [csharp] private void Rtb_OnTextChanged(object sender, TextChangedEventArgs e) { var bindingExpression = ((TextBox) sender).GetBindingExpression(TextBox.TextProperty); if (bindingExpression != null) { bindingExpression.UpdateSource(); } } [/csharp]
Had an interesting situation with a pretty extensive Ajax/Telerik ASP.NET 4.5 page where one of the RadPageViews could grow to a pretty sizeable height and the requirement was to have the Submit Button at the very bottom (not sure why in retrospect). Part of the problem was the RadAjaxLoadingPanel wasn't expanding to the full expanded height of the RadPageView. So what you got was only the original height of the RadPageView and, since you were at the bottom of the page, you had no indication that the post-back was actually occurring. I am sure you could probably update the height of the RadAjaxLoadingPanel through some JavaScript, but I chose the easier approach: simply scroll to the top and then do the post-back on my Submit Button. If you had a ton of buttons on a page, you would probably want to rework the JavaScript function to accept the Button's ID and simply call the JavaScript in your Button's OnClientClick event. Somewhere in your ASPX file: [javascript] <telerik:RadCodeBlock runat="server"> <script type="text/javascript"> function focusTopAndSubmit() { window.scrollTo(0, 0); __doPostBack("<%=btnSubmit.UniqueID %>", ""); } </script> </telerik:RadCodeBlock> [/javascript] And then in your actual ASP:Button or Telerik:RadButton definition: [csharp] <asp:Button ID="btnSubmit" OnClientClick="focusTopAndSubmit(); return false;" OnClick="btnSubmit_Click" runat="server" Text="Submit" /> [/csharp] Update the JavaScript postback line with the name of your Submit Button, but other than that, it's drop-in ready.
Diving into some of my really old work prior to 2005, I randomly recalled writing a ePSXe software GPU plugin back in the Spring of 2001 (April to be exact). I was really big into playing PSX games via an emulator back then (and yes all were legit store bought games, something to be said about those black colored PSX discs). Doing a random search on NGEmu (previously PSXEmu), sure enough they still had my plugin hosted with 908 downloads and an A+ rating, not bad for a 15 year old's work, nearly 12 years ago :) Launching ePSXe, I had forgotten how much I loved lens flares and difference clouds in Photoshop back then: [caption id="attachment_1815" align="aligncenter" width="330"]About screen for a software GPU ePSXe plugin I wrote back in 2001 About screen for a software GPU ePSXe plugin I wrote back in 2001[/caption] Sadly, the email I originally sent to Bobbi is lost and for the time being so is the original readme file I wrote 12 years ago, but thanks to the Way Back Machine, I got the original post announcing it on April 9th 2001: [caption id="attachment_1816" align="aligncenter" width="736"]Announcement of the first release of my software gpu plugin for ePSXe Announcement of the first release of my software gpu plugin for ePSXe[/caption] This weekend I'll try and find a CD backup made around April 2001, though that was during a time when I wasn't as good with backups sadly :(
Working on a XNA/XAML game tonight I wanted to have a consistent UI experience between the XAML and XNA views. A key part of that was matching the Photoshop graphics and the in-game font. Having never used a TrueType Font (TTF) in Windows Phone 7.x I was curious how hard it would be to do in Windows Phone 8 where I have found many tasks to be streamlined. Unsurprisingly the process is pretty straight forward:
  1. Copy your TrueType Font to your solution folder
  2. Make sure the Build Action is set to Content and Copy to Output Directory to Copy if newer
  3. Reference the TrueType Font in your XAML with the file path#font name like so: [csharp] <TextBlock HorizontalAlignment="Center" FontFamily=".\Data\OCRAExt.ttf#OCR A Extended" FontSize="80" Text="TTFS ROCK" /> [/csharp] Note you need to keep any spaces found in the font name; if you aren't sure, double click on the font and make sure your XAML matches what I've circled in red: [caption id="attachment_1810" align="aligncenter" width="572"]TrueType Font Name in Windows 8 TrueType Font Name in Windows 8[/caption] Something weird I did find with a couple of TrueType Fonts I tried was that some font creators put a space at the end. If you're like me that'll drive you nuts, especially when referencing the custom font multiple times in your XAML. If you find a case like that, download TTFEdit from SourceForge, trim the space off the end and save it. If you follow those steps properly, you'll have your TTF in your Windows Phone 8 app: [caption id="attachment_1805" align="aligncenter" width="480"]TTF in Windows Phone 8 TTF in Windows Phone 8[/caption]
I had an unusual issue that came to my attention this week in regards to using Telerik's RadGrid for displaying a long list of items, in this case 83, when the pagination size was set to 100, as expected the page height grew exponentially over the initial 425px height for this particular RadPageView. On this particular RadGrid my far right hand column had an edit column in which a RadWindow would open to a one field, two button page. Redirecting to an entirely new page made no sense in this situation, thus why I went with the RadWindow control in the first place. Before I dive into the issue and the solution I came up with, you can download the full source code for this post here. For this example I am using a pretty common situation, listing all of the users with their email address and "IDs": [caption id="attachment_1791" align="aligncenter" width="300"]Telerik RadGrid with a PageSize of 10 Telerik RadGrid with a PageSize of 10[/caption] With the Edit User LinkButton opening a RadWindow indicating you can Edit the User's ID: [caption id="attachment_1794" align="aligncenter" width="300"]Telerik RadGrid with RadWindow Telerik RadGrid with RadWindow[/caption] So where does the problem lie? When you have a PageSize high or any content that expands far more than what is visible initially and you open your RadWindow: [caption id="attachment_1795" align="aligncenter" width="300"]Telerik RadGrid with PageSize set to 50 Telerik RadGrid with PageSize set to 50[/caption] The RadWindow appears where you would expect it to if you were still at the top of the page. So can you fix this so the RadWindow appears in the center of the visible area of your browser no matter how far down you are? On your OnItemDataBound code behind: [csharp] protected void rgMain_OnItemDataBound(object sender, GridItemEventArgs e) { if (!(e.Item is GridDataItem)) { return; } var linkButton = (LinkButton) e.Item.FindControl("lbEdit"); var user = ((e.Item as GridDataItem).DataItem is USERS ? (USERS) (e.Item as GridDataItem).DataItem : new USERS()); linkButton.Attributes["href"] = "javascript:void(0);"; linkButton.Attributes["onclick"] = String.Format("return openEditUserWindow('{0}');", user.ID); } [/csharp] The important line here is the linkButton.Attributes["href"] = "javascript:void(0);";. Something else I choose to do in these scenarios where I have a popup is to offer the user a cancel button and a save button, but only refreshing the main window object that needs to be updated. In this case a RadGrid. To achieve this, you need to pass an argument back to the RadWindow from your Popup ASPX Page to indicate when a refresh is necessary. 
The ASPX for your popup: [csharp] <div style="width: 300px;"> <telerik:RadAjaxPanel runat="server"> <telerik:RadTextBox runat="server" Width="250px" Label="UserID" ID="rTxtBxUserID" /> <div style="padding-top: 10px;"> <div style="float:left;"> <asp:Button ID="btnCancel" runat="server" Text="Cancel" OnClientClick="Close();return false;" /> </div> <div style="float:right"> <asp:Button ID="btnSave" runat="server" OnClientClick="CloseAndSave(); return false;" OnClick="btnSave_Click" Font-Size="14px" Text="Save User" /> </div> </div> </telerik:RadAjaxPanel> </div> [/csharp] The JavaScript in your ASPX Popup Page: [jscript] <telerik:RadScriptBlock ID="RadScriptBlock1" runat="server"> <script type="text/javascript"> function GetRadWindow() { var oWindow = null; if (window.radWindow) { oWindow = window.radWindow; } else if (window.frameElement.radWindow) { oWindow = window.frameElement.radWindow; } return oWindow; } function Close() { GetRadWindow().close(); } function CloseAndSave() { __doPostBack("<%=btnSave.UniqueID %>", ""); GetRadWindow().close(1); } </script> </telerik:RadScriptBlock> [/jscript] Then in your main page's ASPX: [jscript] <telerik:RadScriptBlock ID="RadScriptBlock1" runat="server"> <script type="text/javascript"> function openEditUserWindow(UserID) { var oWnd = radopen('/edituser_popup.aspx?UserID=' + UserID, "rwEditUser"); } function OnClientClose(oWnd, args) { var arg = args.get_argument(); if (arg) { var masterTable = $find('<%= rgMain.ClientID %>').get_masterTableView(); masterTable.rebind(); } } </script> </telerik:RadScriptBlock> [/jscript] With this your RadGrid will only refresh if the user hits save on your popup as opposed to doing a costly full post back even if the user didn't make any changes. I hope that helps someone out there who struggled to achieve everything I mentioned in full swoop. Telerik has some great examples on their site, but occasionally it can take some time getting them all working properly. As mentioned above, you can download the full source code for this solution here.
Finally had some time to revisit jcBENCH tonight and found a few issues on the new WPF and Windows Phone 8 releases that I didn't find over the weekend. Unfortunately for the Windows Phone 8 platform, there is a delay between publishing and actually hitting the store. So in the next 24-48 hours, please check out the Windows Phone 8 App Store for the release. You can alternatively just download the current release and await for the Store to indicate there is an update. But for those on Windows, please download the updated release here.
Recently I upgraded a fairly large Windows Forms .NET 4 app to the latest version of the Windows Forms Control Suite (2012.3.1211.40) and got a few bug reports from end users saying that when they were doing an action that updated the Tree View Control it was throwing an exception. At first I thought maybe the Clear() function no longer worked as intended so I tried the following: [csharp] if (treeViewQuestions != null && treeViewQuestions.Nodes != null && treeViewQuestions.Nodes.Count > 0) { for (int x = 0; x < treeViewQuestions.Nodes.Count; x++) { treeViewQuestions.Nodes[x].Remove(); } } [/csharp] No dice. Digging into the error a bit further, I noticed the "UpdateLine" function was the root cause of the issue: Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLine(TreeNodeLineElement lineElement, RadTreeNode node, RadTreeNode nextNode, TreeNodeElement lastNode)\r\n at Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLines()\r\n at Telerik.WinControls.UI.TreeNodeLinesContainer.Synchronize()\r\n at Telerik.WinControls.UI.TreeNodeElement.Synchronize()\r\n at Telerik.WinControls.UI.RadTreeViewElement.SynchronizeNodeElements()\r\n at Telerik.WinControls.UI.RadTreeViewElement.Update(UpdateActions updateAction)\r\n at Telerik.WinControls.UI.RadTreeViewElement.ProcessCurrentNode(RadTreeNode node, Boolean clearSelection)\r\n at Telerik.WinControls.UI.RadTreeNode.OnNotifyPropertyChanged(PropertyChangedEventArgs args)\r\n at Telerik.WinControls.UI.RadTreeNode.SetBooleanProperty(String propertyName, Int32 propertyKey, Boolean value)\r\n at Telerik.WinControls.UI.RadTreeNode.set_Current(Boolean value)\r\n at Telerik.WinControls.UI.RadTreeNode.ClearChildrenState()\r\n at Telerik.WinControls.UI.RadTreeNode.set_Parent(RadTreeNode value)\r\n at Telerik.WinControls.UI.RadTreeNodeCollection.RemoveItem(Int32 index)\r\n at System.Collections.ObjectModel.Collection`1.Remove(T item)\r\n at Telerik.WinControls.UI.RadTreeNode.Remove() Remembering I had turned on the ShowLines property, I humored the idea of turning the lines off for the clearing/removing of the nodes and then turning them back on like so: [csharp] treeViewQuestions.ShowLines = false; treeViewQuestions.Nodes.Clear(); treeViewQuestions.ShowLines = true; [/csharp] Sure enough that cured the problem. The last word I got back from Telerik was that this is the approved workaround, but no ETA on a true fix. Hopefully that helps someone else out there.
Last week I implemented the storing of images in a SQL Server database for the purpose of having the images readily available to my WCF Service to then deploy to all of the clients (Android, iPhone, Windows Phone, Windows Forms and ASP.NET). The idea being that a user could upload an image from one location and sync it to all of the other devices. You could use a more traditional file based system, storing the images on a NAS or SAN, and have your ASP.NET server and/or WCF Services read from them and hand them off as needed, but I felt that wasn't as clean as simply doing a SQL query to pull down the byte[]. Doing some searches I came across a lot of people saying it wasn't a good idea to store your images in a SQL Server database for various reasons:
  1. It makes your SQL Server database larger
  2. It makes queries on the Table with the image data slower to read from
  3. It was slower overall
These points to me without any sample code or test results made me really wonder if someone at one point said it was a bad idea and it just trickled down over time. So in typical fashion, I wrote a sample application and did my own testing. To be as comprehensive as I could, I had a base of 10 images of various sizes ranging from 6kb to 1065kb to give a more real world scenario. From there I tracked how long it took to clear the SQL Tables, populate each scenario and then retrieve them to a different location to simulate the server side handling of the files. I did this from 10 images to 1000 images. The SQL Server Table Schema: [caption id="attachment_1696" align="aligncenter" width="265"] SQL Server Tables used in testing[/caption] I kept the usual columns I would expect to find for tables in a real world example. In the more traditional approach, it simply writes the file location, while the other approach also stores the byte[] with the filename. In a real world scenario I'd probably have a foreign key relationship to each of those tables depending on the project. For instance if I had Users uploading these images, I'd have a Users2Images relational table, but you may have your own process/design when doing SQL so I won't get into my thoughts on SQL Database design. So what were the results? [caption id="attachment_1695" align="aligncenter" width="300"] SQL Server Image Storing vs NTFS Image Storing[/caption] The biggest thing that struck me was the linearity of retrieval in addition to the virtually identical results between the 2 approaches. But as is always the case, that's only part of a transaction in today's applications. You also have to consider the initial storing of the data. This is where simply writing the bytes to the hard drive versus writing the image's bytes to SQL Server is quite a bit different in performance. If you have a UI thread blocking action on your Windows Phone app, ASP.NET app or even WCF call when performing the upload this could be a big deal. I hope everyone is using Async at this point or at least not writing UI Thread Blocking code, but there could be some valid reasons for not using Async, for instance being stuck in .NET 1.1 where there isn't an easy upgrade path to 4.5. Even at 10 images (1.3mb), SQL Server took over 5X longer than the traditional approach for storing the images. Granted this was fractions of a second, but when thinking about scalability you can't disregard this unless you can guarantee a specific # of concurrent users at a time. Other factors to consider that you may have not thought about:
  1. Is your SQL Server much more powerful than your NAS, SAN or Web Server in regards to File I/O? More specifically, do you use SSDs in your SQL Server and Mechanical for your Web Server or NAS?
  2. How often are images being uploaded to your sites?
  3. Are you considering going to a queue implementation where the user gets an instant kick back or doing the processing Asynchronously?
  4. How big is your SQL Server database now? Do you pay extra in your cloud environment for a SQL Server versus a simple storage or webserver?
  5. Does the added security offered by storing the image in a database outweigh the costs, both financial and performance-wise, of storing it initially?
  6. Are you going to be overwriting the images a lot?
Before I close, I should note I performed these tests on a RAID 0 Stripe on 2 Corsair Force 3 GT 90gb SSDs. I used SQL Server 2012 not SP1 (updating to that tomorrow) and the SQL Server Database was also on this Stripe for those curious. I purposely did not use TPL (i.e Parallel.Foreach) for these tests because I wanted to simulate a worse case scenario server side. I imagine though that you will be severely I/O limited and not CPU, especially on a mechanical drive. At a later date I may do these tests again in a TPL environment and on a mechanical drive to see how those factors change things. As Levar Burton said when I was growing up on Reading Rainbow, don't take my word for it, download the code with the SQL schema here. I'd love to hear some other points on this subject (both for and against) as I think there are some very good real world scenarios that could benefit from a SQL Server Storage approach versus a traditional approach. But like everything, there is but 1 answer to every problem in every scenario: do your homework and find the perfect one for that situation. So please write a comment below or tweet to me on Twitter!
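For reference, the two storage paths compared above boil down to something like the sketch below; the table, column and connection string are placeholders rather than the actual test code, which is in the download linked above.

[csharp]
using System.Data.SqlClient;
using System.IO;

public class ImageStore
{
    private readonly string _connectionString;
    private readonly string _imageRootPath;

    public ImageStore(string connectionString, string imageRootPath)
    {
        _connectionString = connectionString;
        _imageRootPath = imageRootPath;
    }

    // SQL Server approach - the bytes live in the table alongside the filename
    public void SaveToSql(string fileName, byte[] data)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            var command = new SqlCommand("INSERT INTO Images (Filename, Data, Created) VALUES (@Filename, @Data, GETDATE())", connection);
            command.Parameters.AddWithValue("@Filename", fileName);
            command.Parameters.AddWithValue("@Data", data);
            command.ExecuteNonQuery();
        }
    }

    // Traditional approach - only the path is stored elsewhere, the bytes go to disk
    public void SaveToDisk(string fileName, byte[] data)
    {
        File.WriteAllBytes(Path.Combine(_imageRootPath, fileName), data);
    }
}
[/csharp]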
If you simply want to download the library, click here or download the sample + library, click here. Like many folks who have been doing Windows Phone development for a while now, I was upset to find DataSets were not part of the supported runtime. For those that don't know, there is a powerful tool called xsd that has come with Visual Studio for quite some time now. xsd is a command line tool that can take an XML file and create a typed dataset class for you to include in your C# applications. I have used this feature extensively for a large offline Windows Forms application I originally wrote in the Spring of 2009 and am still using it today to read and write to XML files. To use the xsd tool I typically launch the Visual Studio Developer Command Prompt as shown below, or you can execute it directly from C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools (if I remember correctly it was under Visual Studio\SDK\bin prior to VS2012). [caption id="attachment_1658" align="aligncenter" width="264"] Visual Studio 2012 Developer Command Prompt[/caption] The premise of using the xsd tool is to take an XML file like so: [xml] <?xml version="1.0" standalone="yes"?> <CPUSockets> <CPUSocket> <Name>Socket F</Name> <NumberOfPins>1307</NumberOfPins> <IsLGA>True</IsLGA> </CPUSocket> <CPUSocket> <Name>Socket A</Name> <NumberOfPins>462</NumberOfPins> <IsLGA>False</IsLGA> </CPUSocket> </CPUSockets> [/xml] And run the following command line: [powershell] xsd cpusockets.xml /d [/powershell] This will generate an xsd file. From there you can open this xsd file in Visual Studio to set the column variable types, nullable values etc. and then run the command below, or simply run it as-is and it will generate all string column types in your typed dataset class: [powershell] xsd cpusockets.xsd /d [/powershell] The benefit of using Typed Data Sets when working with XML files is that you don't have any hard coded values; everything is extremely clean. For instance, to read in the XML shown above all it requires is: [csharp] CPUSockets cpuSockets = new CPUSockets(); cpuSockets.ReadXml("cpusockets.xml"); [/csharp] Very clean, and at that point you can iterate over the rows like you would any DataSet (i.e. with TPL, a foreach loop, LINQ etc). One advantage I found in using straight XML files with the xsd tool is that a non-programmer can assist with the creation of the XML files' schema, and the files can be updated with just Notepad later on. Being the sole programmer at an 80+ employee company leads me to make things very easy to maintain. Which brings me back to my dilemma (and most likely that of a lot of other Windows Phone 7.x/8 developers). Server side I have been able to generate XML and read it in my WCF services using the above method, but I wanted a universal code base between Windows Phone and WCF, or as close to it as possible. Previously I had written an abstract class with some abstract functions and two implementations: one simply using XmlReader for Windows Phone and the other using Typed Data Sets. Definitely not an ideal situation. Thinking about this problem yesterday, I came up with a clean solution and am sharing it with anyone who wishes to use it. Figuring I am going to have a lot of these epiphanies, I am simply bundling the classes into a library called jcWPLIBRARY, keeping with my other project naming conventions. So how does this solution work? First off I am going to assume you have a class you're already using or something similar to keep the structure of your XML file.
With my solution you're going to need to inherit from the jcWPLIBRARY.XML.XMLReaderItem class and add your own properties like so: [csharp] public class CPUSockets : jcWPLIBRARY.XML.XMLReaderItem { public string Name { get; set; } public int NumberOfPins { get; set; } public bool IsLGA { get; set; } } [/csharp] How does reading and writing from XML files look? I created a small sample to show the steps. It is a pretty simple Windows Phone 8 application with one button to write XML with some hard coded test data and the other button to read that file back in from IsolatedStorage. [caption id="attachment_1660" align="alignleft" width="180"] Sample jcWPLIBRARY XML application[/caption] [caption id="attachment_1662" align="aligncenter" width="180"] Sample jcWPLIBRARY XML application[/caption] Going along with everything I have done in the last year or two, I've tried to make it universal and easy to use. The source code behind the Write XML button: [csharp] List<Objects.CPUSockets> cpuSockets = new List<Objects.CPUSockets>(); cpuSockets.Add(new Objects.CPUSockets { IsLGA = true, Name = "Socket F", NumberOfPins = 1207 }); cpuSockets.Add(new Objects.CPUSockets { IsLGA = false, Name = "Socket AM3+", NumberOfPins = 942 }); cpuSockets.Add(new Objects.CPUSockets { IsLGA = false, Name = "Socket AM3", NumberOfPins = 941 }); cpuSockets.Add(new Objects.CPUSockets { IsLGA = false, Name = "Socket A", NumberOfPins = 462 }); jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets> xmlHandler = new jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets>(fileName: "cpusockets.xml"); var result = xmlHandler.WriteFile(data: cpuSockets); if (result.HasError) { MessageBox.Show(result.ErrorString); return; } MessageBox.Show("File written successfully"); [/csharp] Pretty simple: create your XMLHandler object using the class you inherited previously and optionally pass in the filename you're going to want to use later. Note: you can pass a different filename to the WriteFile function; by default it uses the filename set in the constructor. The WriteFile function returns an XMLResult object. This object has a HasError property; if it is set to true, the ErrorString property will contain exception text for you to read. This also applies to the LoadFile function mentioned below. The source code behind the Read XML Button: [csharp] jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets> xmlHandler = new jcWPLIBRARY.XML.XMLHandler<Objects.CPUSockets>(fileName: "cpusockets.xml"); var xmlresult = xmlHandler.LoadFile(); if (xmlresult.HasError) { MessageBox.Show(xmlresult.ErrorString); return; } StringBuilder stringBuilder = new StringBuilder(xmlresult.Result.Count()); foreach (var cpuSocket in xmlresult.Result) { stringBuilder.Append(cpuSocket.Name + " | " + cpuSocket.IsLGA + " | " + cpuSocket.NumberOfPins); } MessageBox.Show(stringBuilder.ToString()); [/csharp] Pretty simple hopefully. My next task (no pun intended) will be to add in support for the Task-based Asynchronous Pattern (TAP), as everything I want to be doing going forward will be non-blocking. You can download the initial release of jcWPLIBRARY here. Or if you wish to also download the sample mentioned above, you can get it here. Note, the sample includes the library in the References folder so you don't need to download both to get started on your project with the library. Comments and suggestions, whether positive or negative - I would like to hear them.
This morning I finally retired my AMD Phenom II X6 1090T CPU from my primary desktop. I had been using it since April 30th 2010, right when it first came out. Looking back, it's interesting to compare the power that $309 bought back then with what the $185 FX-8350 brings today. Just from a numerical standpoint, 6x3.2ghz (19.2ghz) versus 8x4ghz (32ghz) is mind blowing in my opinion. 12 years ago nearly to the day I was about to buy my first 1ghz AMD Athlon "Thunderbird" Socket A CPU. What is also interesting is that 2.5 years later AMD is still using AM3/AM3+, which for a consumer is great. Knowing that with a simple BIOS update I can run the latest CPUs is great to know. In my case, that meant doing a BIOS update on my ASUS M5A99X EVO to get support for the just recently released Vishera series of FX CPUs from AMD. [caption id="attachment_1639" align="aligncenter" width="300"] AMD FX-8350 Tin[/caption] [caption id="attachment_1641" align="aligncenter" width="300"] AMD FX-8350 installed into my ASUS M5A99X[/caption] [caption id="attachment_1642" align="aligncenter" width="169"] AMD FX-8350 installed into my ASUS M5A99X[/caption] After installation, with no surprise, the FX-8350 showed up properly and automatically increased my memory speed to 1866mhz (previously with my Phenom II the max available was 1600mhz). [caption id="attachment_1643" align="aligncenter" width="300"] AMD FX-8350 showing in the UEFI bios of my ASUS M5A99X[/caption] [caption id="attachment_1644" align="aligncenter" width="300"] AMD FX-8350 Detailed Info showing in the UEFI bios of my ASUS M5A99X[/caption] CPU-Z: [caption id="attachment_1645" align="aligncenter" width="300"] AMD FX-8350 in CPU-Z[/caption] And now the most interesting aspect of any upgrade: can I justify the cost, especially when applications hadn't seemed sluggish? Integer Benchmark Results: [caption id="attachment_1647" align="aligncenter" width="300"] jcBENCH Integer Benchmarks[/caption] Floating Point Benchmark Results: [caption id="attachment_1648" align="aligncenter" width="300"] jcBENCH Floating Point Benchmark[/caption] I included a few extra CPUs recently benchmarked for comparison. First thoughts: Integer performance over the Phenom II X6 is over 200% across the board for single to 8 core applications/games, meaning the FX-8350 can do what the Phenom II X6 did with half the CPUs, leaving the other half for other tasks, or theoretically making multi-threaded tasks 200% faster. This is also shown with the A10-4655M CPU: at 4 threads, my laptop was actually faster than my desktop as far as integer-only work is concerned. Kudos to AMD for making such a dramatic difference in integer performance. Floating Point results were a bit more interesting. Having seen quite a bit of drop off in comparison to the Integer results, I was curious if the FX-8350 would hit the same hurdles. Sure enough, because of the move away from the 1 to 1 relationship between Integer Cores and Floating Point Cores in the Phenom II architecture in favor of a 2 to 1 ratio in the latest generations of AMD's CPUs, the Phenom II actually beat out the much higher clocked FX-8350, albeit the more threads in use, the less of an impact it made. Definitely more benchmarks will ensue with real world tests of Visual Studio 2012 compiling and After Effects CS6 rendering. Stay tuned.
You might be getting this error:
web.ui.webresource.axd The status code returned from the server was: 500
in your Telerik ASP.NET Web application. It took some searching/trial and error, but one of these steps may solve your problem:
  1. Try making sure <%@ Page> directive has ValidateRequest="false"
  2. Try switching your IIS ASP.NET 4 to run in Classic Mode
  3. Remove all RadAjaxManager, RadAjaxLoadingPanels and RadAjaxPanels from your ASPX page
For my solution, all I needed to do was remove the RadAjax controls to get a particular page with a couple of RadDatePickers, RadButtons and a RadGrid working again. I still don't know why it occurred, but I'm glad it is working now. Hopefully this helps someone.
As months and years go by and devices come and go, I've seen (as most have) an increasing demand to provide a universal experience no matter what device you are on, i.e. mobile, desktop, laptop, tablet, website etc. This has driven a lot of companies to pursue ways to deliver that functionality efficiently, both from a monetary standpoint and a performance perspective. A common practice is to provide a Web Service, SOAP or WCF for instance, and then consume the functionality on the device/website. This provides a good layer between your NAS & Database server(s) and your clients. However, you don't want to provide the exact same view on every device. For instance, you're not going to want to edit 500 text fields on a 3.5" mobile screen, nor do you have the ability to upload non-isolated storage documents on mobile devices (at least currently). This brings up a possible problem: do you have the same Operation Contract with a DataContract Class Object and then, based on the device that sent it, know server side what to expect? Or do you handle the translation on the most likely slower client side CPU? So for me, there are 2 possible solutions:
  1. Create another layer between the OperationContract and the server side classes to handle device translations
  2. Come up with something outside the box
Option #1 has pros and cons. It leaves the client side programming relatively the same across all platforms and leaves the work to the server side, so pushing out fixes would be relatively easy and would most likely affect all clients if written to use as much common code as possible. However, it does leave room for unintended consequences, like forgetting to update all of the device specific code and then having certain clients not get the functionality expected. Furthermore, devices evolve; for instance, the iPhone 1-4S had a 3.5" screen while the iPhone 5 has a much larger 4" screen. Would this open the door to a closer-to-iPad/Tablet experience? This of course depends on the application and customer base, but it is something to consider. And if it makes sense to have differing functionality passed to iPhone 5 users versus iPhone 4 users, there is more complexity in coding to specific platforms. A good route to solve those complexities, in my opinion, would be to create a Device Profile like class based on the global functionality; then when a request comes in to push or get data, the Factory classes in your Web Service would know what to do without having tons of if (Device == "IPHONE") conditionals. As more devices arrive, create a new profile server side and you'd be ready to go. Depending on your application this could be a very manageable path to go down. Option #2, thinking outside the box, is always interesting to me. I feel like many developers (I am guilty of this too) approach things based on previous experience and go through an iterative approach with each project. While this is a safer approach and I agree with it in most cases, I don't think developers can afford to think this way too much longer. Software being as interconnected with external APIs, web services, integrations (Facebook, Twitter etc.) and countless devices is vastly different than the 90s class library solution. Building a robust and future proof system to me is much more important than the client itself. That being said, what could you do? In working on Windows Workflow Foundation last month and really breaking apart what software does at the most basic level, it really is simply:
  1. Client makes a request
  2. Server processes request (possibly doing multiple requests of it's own to databases or file storage)
  3. Return the data the Client expects (hopefully incorporating error handling in the return)
So how does this affect my thinking on architecting Web Services and Client applications? I am leaning towards creating a generic interface for certain requests to get/set data between the Server and Client. This creates a single funnel to process and return data, thus eliminating duplicate code and making the code much more manageable. However, you're probably thinking about the overhead in translating a generic request to "GetObject" to what the client is actually expecting. I definitely agree, and I don't think it should be taken literally, especially when considering the performance of both server side resources and the amount of data transferring back and forth. What I am implying is doing something like this with your OperationContract definition: [csharp] [OperationContract] Factory.Objects.Ticket GetTicket(ClientToken clientToken, int TokenID); [/csharp] Your implementation: [csharp] public Factory.Objects.Ticket GetTicket(ClientToken clientToken, int TokenID) { return new Factory.TicketFactory(Token: clientToken).GetObject(ID: TokenID); } [/csharp] Then your Factory interface (note an interface cannot declare constructors or access modifiers on its members): [csharp] public interface Factory<T> where T : FactoryObject { T GetObject(int ID); FactoryResult AddObject(T item); } [/csharp] Then implement that Factory interface for each object class. I should note implementing a Device Profile layer could be done at the concrete Factory's constructor level. Simply pass in the Device Type inside the ClientToken object. Then a simple check against the Token class, for instance: [csharp] public FactoryResult AddObject(Ticket item) { if (Token.Device.HasTouchScreen) { // do touch screen specific stuff } return new FactoryResult(); } [/csharp] You could also simply store Device Profile data in a database, text file or xml file and then cache it server side. Obviously this is not a solution for all applications, but it has been successful in my implementations. Comments, suggestions, improvements, please let me know below.
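As a postscript, here is a rough sketch of what one concrete factory might look like. The TicketFactory, ClientToken and FactoryResult names come from the snippets above; the body is purely illustrative and assumes Objects.Ticket derives from FactoryObject and that the Device Profile rides along on the ClientToken as described.

[csharp]
public class TicketFactory : Factory<Objects.Ticket>
{
    private readonly ClientToken _token;

    // The Device Profile layer lives at the constructor level as noted above
    public TicketFactory(ClientToken Token)
    {
        _token = Token;
    }

    public Objects.Ticket GetObject(int ID)
    {
        // Placeholder for the actual data access (database, cache etc.)
        Objects.Ticket ticket = new Objects.Ticket();

        if (!_token.Device.HasTouchScreen)
        {
            // e.g. trim touch-only payload before it goes back over the wire
        }

        return ticket;
    }

    public FactoryResult AddObject(Objects.Ticket item)
    {
        // Validate/persist here, branching on the Device Profile as needed
        return new FactoryResult();
    }
}
[/csharp]

The Operation Contract stays a thin pass-through, and supporting a new device becomes a new profile rather than another round of conditionals.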
Something I had been meaning to do for quite some time now is move as much of IRIX as possible off of my Maxtor ATLAS II 15k 73gb Ultra 320 SCSI drive, in hopes of keeping the small stockpile I have of them alive and well for years to come. The ideal solution would be to buy one of those SCA -> SATA II adapters for 2.5" SATA II/III Drives, but those are still $200+, which for me is not worth it. Looking around for alternative solutions, I came across an LSI SAS3041X-R 4 Port SATA PCI-X card that is IRIX 6.5 supported. Granted I wish it had SATA II ports, but alas for $25 shipped I really can't complain. I should note, this card is 133mhz/PCI-X, which means the maximum bandwidth is 1.06 gb/s. Silicon Graphics Origin 300s only have 64bit/66mhz PCI-X slots, so you'll be looking at a maximum bandwidth of 533.3mb/sec, while the Origin 350 has 2 64bit/100mhz PCI-X slots providing up to 800mb/sec. Keep that in mind if you need more than 533.3mb/sec of bandwidth. [caption id="attachment_1532" align="aligncenter" width="300"] What you'll need for adding SATA support and an SSD to your SGI Origin 300[/caption] [caption id="attachment_1533" align="aligncenter" width="225"] Y 4 pin Molex adapter[/caption] If you can't tell from the pictures, you'll need:
  1. The LSI SAS3041X-R card
  2. SATA cable, the longer the better; I used the standard length and had just enough
  3. Y Cable that accepts the standard 4 pin Molex connector and provides 2 SATA Power connections
  4. Y Cable that accepts the standard 4 pin Molex connectors and provides 2 additional 4 pin Molex connectors
  5. SSD or other SATA drive; because of the space constraints I'd highly suggest at most a 2.5" mechanical drive.
LSI SAS3041X-R installed in the middle PCI-X slot: [caption id="attachment_1534" align="aligncenter" width="300"] LSI SAS3041X-R in the middle PCI-X slot of the SGI Origin 300[/caption] Back view: [caption id="attachment_1535" align="aligncenter" width="300"] Back view LSI SAS3041X-R in the middle PCI-X slot of the SGI Origin 300[/caption] Top view: [caption id="attachment_1536" align="aligncenter" width="300"] SGI Origin 300 with LSI SAS3041X-R installed[/caption] Unfortunately, Silicon Graphics didn't think about those wishing to add internal drives that wouldn't be driven off the built-in Ultra 160 controller. Getting creative, I found a spot in the far left fan grille that the actual fan blades don't hit while running. You'll have to remove the fan to get the SATA connector through, and while you're in there I should mention again that you can swap the fans for much, much quieter ones; I detailed the process in a post here back in February of this year. I took a picture of the SATA Cable going through the grille: [caption id="attachment_1537" align="aligncenter" width="225"] Running the SATA cable through the unobstructed 80mm fan grille[/caption] Once the cable is through, you can pop the 80mm fan back into place and then connect both the Y power cables and your SATA drive as shown in the picture below: [caption id="attachment_1538" align="aligncenter" width="225"] 240gb Sandisk Extreme SSD installed into an SGI Origin 300[/caption] After putting the Origin 300 back into my rack and starting IRIX, IRIX found the LSI card without a need for driver installation: [bash] IRIX Release 6.5 IP35 Version 07202013 System V - 64 Bit Copyright 1987-2006 Silicon Graphics, Inc. All Rights Reserved. NOTICE: Initialising Guaranteed Rate I/O v2 (Jul 20 2006 18:47:01) NOTICE: pcibr_attach: /hw/module/001c01/Ibrick/xtalk/15/pci Bus holds a usb part - settingbridge PCI_RETRY_HLD to 4 Setting hub ixtt.rrsp_ps field to 0x4e20 NOTICE: /hw/module/001c01/Ibrick/xtalk/14/pci/1a/scsi_ctlr/0: 949X fibre channel firmware version 1.3.23.0 NOTICE: /hw/module/001c01/Ibrick/xtalk/14/pci/1b/scsi_ctlr/0: 949X fibre channel firmware version 1.3.23.0 NOTICE: /hw/module/001c01/Ibrick/xtalk/14/pci/2/scsi_ctlr/0: 1064 SAS/SATA firmware version 1.6.0.0 [/bash] Having been away from IRIX for a few months, I preferred to do the disk initialization with the GUI over a TightVNC connection. You can initialize a disk through the System Manager, which can be accessed from the Toolchest System option like so: [caption id="attachment_1544" align="aligncenter" width="195"] IRIX System Manager[/caption] Then click on "Hardware and Devices" and then "Disk Manager" like so: [caption id="attachment_1547" align="aligncenter" width="300"] IRIX System Manager[/caption] [caption id="attachment_1539" align="aligncenter" width="300"] IRIX Disk Manager[/caption] With the SSD/SATA drive selected, click on Initialize and then click Next through the series of questions. Afterwards you can close that window, bringing you back to the System Manager window. Click on Files and Data and then Filesystem Manager like so: [caption id="attachment_1549" align="aligncenter" width="300"] Getting to the IRIX File System Manager[/caption] Afterwards you'll be presented with the Filesystem Manager: [caption id="attachment_1540" align="aligncenter" width="300"] IRIX Filesystem Manager[/caption] Click on Mount Local... in the bottom far left of the window, choose the newly initialized disk and the place where you want the drive mounted, and click through the wizard.
After a few minutes, you'll be presented with a refreshed Filesystem Manager window showing your SSD/SATA drive like in the picture above. Intrigued by how much of a difference the SSD would make in comparison to the Maxtor Atlas II 15k Ultra 320 drive, I ran diskperf and stacked the images side by side. The SSD is on the left and the Maxtor Ultra 320 drive is on the right. [caption id="attachment_1541" align="aligncenter" width="300"] 240gb SanDisk Extreme SSD vs 73gb Maxtor Atlas II 15k Ultra 320 SCSI Drive[/caption] I actually expected the Maxtor to perform a bit better, but am not surprised overall by the results. At about 30 minutes of work, the majority of that simply pulling the server out of the rack, I think it is fair to say that if you're running a later generation Silicon Graphics machine with PCI-X slots (ie Fuel, Tezro, Origin 300, Origin 350 etc) you will see a huge performance boost. Next on my todo list is to move everything possible off of the Maxtor, leaving only the minimum boot data (as you cannot boot from an addon card in IRIX). I will post the details of that process at a later time.
After diving into OpenXML Friday in order to replace a Word 2007 .NET 1.1 Interop cluster of awfulness, I realized there wasn't a complete example from a dotx or docx Word file with Merge Fields, parsing the Merge Fields and then writing the merged file to a docx. So I'm writing this blog article to demonstrate a working example. I don't claim this to be a perfect example nor the "right" way to do it, but it works. If you have a "right" way to do it, please let me know by posting a comment below. To make life easier or if you simply want the code in front of you, I've zipped up the complete source code, sample Word 2013 docx and dotx file here. Use it as a basis for you app or whatever you feel like. I'd like to hear any success stories with it or if you have suggestions though. I am going to assume you have already downloaded the OpenXML SDK 2.0 and installed it. I should note I tested this on Visual Studio 2012 running Windows 8 and the Office 2013 Preview. Word Template with 2 Mail Merge Fields: [caption id="attachment_1524" align="aligncenter" width="300"] Word Template before Mail Merge[/caption] Word Document after Mail Merge: [caption id="attachment_1525" align="aligncenter" width="300"] Word Document after Mail Merge[/caption] So without further adieu lets dive into the code. You'll need to add a reference to DocumentFormat.OpenXml and WindowsBase. In your code you'll need to add these 3 lines to include the appropriate namespaces: [csharp] using DocumentFormat.OpenXml; using DocumentFormat.OpenXml.Packaging; using DocumentFormat.OpenXml.Wordprocessing; [/csharp] The bulk of the document generation code is in this function, I tried to document as much as possible on the "unique" OpenXML code [csharp] public RETURN_VAL GenerateDocument() { try { // Don't continue if the template file name is not found if (!File.Exists(_templateFileName)) { throw new Exception(message: "TemplateFileName (" + _templateFileName + ") does not exist"); } // If the file is a DOTX file convert it to docx if (_templateFileName.ToUpper().EndsWith("DOTX")) { RETURN_VAL resultValue = ConvertTemplate(); if (!resultValue.Value) { return resultValue; } } else { // Otherwise make a copy of the Word Document to the targetFileName File.Copy(_templateFileName, _targetFileName); } using (WordprocessingDocument docGenerated = WordprocessingDocument.Open(_targetFileName, true)) { docGenerated.ChangeDocumentType(WordprocessingDocumentType.Document); foreach (FieldCode field in docGenerated.MainDocumentPart.RootElement.Descendants<FieldCode>()) { var fieldNameStart = field.Text.LastIndexOf(FieldDelimeter, System.StringComparison.Ordinal); var fieldname = field.Text.Substring(fieldNameStart + FieldDelimeter.Length).Trim(); var fieldValue = GetMergeValue(FieldName: fieldname); // Go through all of the Run elements and replace the Text Elements Text Property foreach (Run run in docGenerated.MainDocumentPart.Document.Descendants<Run>()) { foreach (Text txtFromRun in run.Descendants<Text>().Where(a => a.Text == "«" + fieldname + "»")) { txtFromRun.Text = fieldValue; } } } // If the Document has settings remove them so the end user doesn't get prompted to use the data source DocumentSettingsPart settingsPart = docGenerated.MainDocumentPart.GetPartsOfType<DocumentSettingsPart>().First(); var oxeSettings = settingsPart.Settings.Where(a => a.LocalName == "mailMerge").FirstOrDefault(); if (oxeSettings != null) { settingsPart.Settings.RemoveChild(oxeSettings); settingsPart.Settings.Save(); } 
docGenerated.MainDocumentPart.Document.Save(); } return new RETURN_VAL { Value = true }; } catch (Exception ex) { return new RETURN_VAL { Value = false, Exception = "DocumentGeneration::generateDocument() - " + ex.ToString() }; } } [/csharp] In my scenario I had tons of dotx files and thus needed to convert them properly (renaming won't do for OpenXML merging) [csharp] private RETURN_VAL ConvertTemplate() { try { MemoryStream msFile = null; using (Stream sTemplate = File.Open(_templateFileName, FileMode.Open, FileAccess.Read)) { msFile = new MemoryStream((int)sTemplate.Length); sTemplate.CopyTo(msFile); msFile.Position = 0L; } using (WordprocessingDocument wpdFile = WordprocessingDocument.Open(msFile, true)) { wpdFile.ChangeDocumentType(DocumentFormat.OpenXml.WordprocessingDocumentType.Document); MainDocumentPart docPart = wpdFile.MainDocumentPart; docPart.AddExternalRelationship("http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate", new Uri(_templateFileName, UriKind.RelativeOrAbsolute)); docPart.Document.Save(); } // Flush the MemoryStream to the file File.WriteAllBytes(_targetFileName, msFile.ToArray()); msFile.Close(); return new RETURN_VAL { Value = true }; } catch (Exception ex) { return new RETURN_VAL { Value = false, Exception = "DocumentGeneration::convertTemplate() - " + ex.ToString() }; } } [/csharp] In my actual application I have a pretty elaborate Mail Merge process pulling in from various sources (SQL Server and WCF Services), but to demonstrate a working application I wrote out a simple switch/case function. [csharp] private string GetMergeValue(string FieldName) { switch (FieldName) { case "CurrentDate": return DateTime.Now.ToShortDateString(); case "CPU_Count": return Environment.ProcessorCount.ToString(); default: throw new Exception(message: "FieldName (" + FieldName + ") was not found"); } } [/csharp]
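For completeness, here is roughly how the class gets called. The DocumentGeneration class name is taken from the exception strings above, but the constructor signature (template and target file names) is an assumption on my part since that piece isn't shown.

[csharp]
// Hypothetical usage; GenerateDocument and RETURN_VAL are from the code above
var generator = new DocumentGeneration(templateFileName: @"C:\Temp\SystemReport.dotx",
                                       targetFileName: @"C:\Temp\SystemReport.docx");

RETURN_VAL result = generator.GenerateDocument();

if (!result.Value)
{
    // The Exception property carries the full exception text from whichever step failed
    Console.WriteLine(result.Exception);
}
[/csharp]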
After a considerably longer than expected development time, the big new 0.5 release of jcBENCH is finally available. You can download it from here. [caption id="attachment_1510" align="aligncenter" width="300"] jcBENCH WPF 0.5 Release[/caption] With this new release comes an all new WPF GUI and a new C++ Library to help facilitate cross-platform development. The idea being to use the same C++ code across all platforms and just develop a frontend for each. Also new is a built in viewer of the Top 10 Results and the return of the ability to submit results. Some future enhancements down the road:
  1. OpenCL Benchmark, with my restructuring I have to re-code my C# OpenCL code -> C++ OpenCL code and then add support to the app to determine if OpenCL drivers exist
  2. More comprehensive comparing of results, filtering down to similar spec machines, comparing CPUs used, Manufacturer etc
  3. Ability to run the whole test suite at once (ie if you have a 6 core CPU, benchmark it with each core count used)
  4. IRIX 6.5 and Windows Phone 8 clients
If you have suggestions/requests, please let me know, I'm definitely interested in hearing what people have to say.
In working on the new version of jcBench, I made a decision to continue having the actual benchmarking code in C++ (ie unmanaged code) to promote cross-platform deployments. With Windows Phone 8 getting native code support and my obsession with Silicon Graphics IRIX machines I think this is the best route. That being said, the frontends for jcBench will still definitely be done in C# whenever possible. This brings me to my next topic, getting your C++ Library code to be available to your C# application whether that is a console app, WPF, Windows 8 etc. Surprisingly there is a lot of information out there on this, but none of the examples worked for me. With some trial and error I got it working and figured it might help someone out there. So in your C++ (or C) source file: [cpp] extern "C" { __declspec( dllexport ) float runIntegerBenchmark(long numObjects, int numThreads); float runIntegerBenchmark(long numObjects, int numThreads) { CPUBenchmark cpuBenchmark = CPUBenchmark(numObjects, numThreads); return cpuBenchmark.runIntegerBenchmark(); } __declspec( dllexport ) float runFloatingPointBenchmark(long numObjects, int numThreads); float runFloatingPointBenchmark(long numObjects, int numThreads) { CPUBenchmark cpuBenchmark = CPUBenchmark(numObjects, numThreads); return cpuBenchmark.runFloatingPointBenchmark(); } } [/cpp] Notice the __declspec( dllexport ) function declaration, this is key to telling your C# (or any other language) that this function is exposed externally in the DLL. Something else to keep in mind is the difference in types between variables in C++ and C#. A long for instance in C++ is an Int32 in the CLR. Something to keep in mind if you get something like this thrown in your C# application:
This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature
Then in your C# code: [csharp] [DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)] public static extern float runIntegerBenchmark(Int32 numObjects, int numThreads); [DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)] public static extern float runFloatingPointBenchmark(Int32 numObjects, int numThreads); [/csharp] To execute the function, call it like you would a normal function: [csharp] lblInteger.Content = runIntegerBenchmark(100000, 6) + " seconds"; [/csharp]
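To make the type mapping concrete: on Windows a C++ long is 32 bits wide, while a C# long is always 64 bits, so the usual suspects behind that error are a mismatched calling convention or mismatched parameter sizes. A hedged before/after against the same export shown above:

[csharp]
// Wrong: C# long is 64 bits, but the native export expects a 32-bit value
// [DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)]
// public static extern float runIntegerBenchmark(long numObjects, int numThreads);

// Right: map the native long to a 32-bit integer on the managed side
[DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern float runIntegerBenchmark(Int32 numObjects, int numThreads);
[/csharp]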
About a year and a half ago I started going down the path of Windows Phone 7, WCF and WPF Development. My hope back then was to get a few Windows Phone 7 games out the door using WCF as the backend and then do WCF and WPF development at work. Unfortunately, WPF simply never took off at work; I did a simple Fax Reader application that connected to an Exchange 2010 mailbox, parsed a Code 39 Barcode and then uploaded it to a NAS. Luckily I did do one professional Content Management System on Windows Phone 7 back in January (along with iOS and Android native apps), and if you have been following my blog posts since January, WCF has really become the mainstay at work thanks to my persistence in using it as a replacement for the traditional class library approach. Going back to WPF and my plans to finally make a game in my free time outside of my WCF/ASP.NET/WinForm development at work: in May 2011 I got a fairly simple tile based top down arcade flight game up and running. You could choose from several airplanes and it would randomly throw enemy fighters at you. You could launch Sidewinder Missiles and the enemy fighters would disappear, but there was no collision detection between you and the fighters. For various reasons this project was sidelined, the biggest being the lack of WPF development at work. It seemed like a waste of time to focus on development that wasn't benefiting both personal and work projects. Something I have wanted to do since I was 9 years old (and what got me into programming in the first place) is actually program a game I could be proud of. All of those years doing QBasic programming were somewhat rewarding, and then when I got into 2D VGA DOS programming in 1999 I was finally able to display 320x200 (Mode X) 8bit Bitmaps. At that point though, the industry was at Quake III-era curved surfaces and multi-texturing, and on the cusp of Vertex and Pixel Shaders. Not exactly keeping up with the latest development shops back then. Fast forward to 2003-2005, when I spent a lot of time doing C++/OpenGL/SDL work on my Infinity Engine (I believe the sourceforge page is still up), but I was trying to achieve too much. Then in May this year I got back into OpenGL and C++ to work on a Windows/IRIX Wolfenstein 3D like game. This started out as a realistic goal, but I quickly learned that balancing 2 different languages, C# at work on various versions of .NET and platforms (WCF, ASP.NET, WinForms, Console etc) and then C++ at night, was becoming hard to manage while keeping up with my C# development. Fast forward another couple of months and a few death marches at work, and I'm now at a point where I need to focus on something enjoyable. Using WPF 4.5 to make a Real-Time Strategy game, something I had tried before, seems like the best route to go: I advance my C# skills and get some extensive WPF experience. Last night and Friday night I started work on an improved jcgeEDITOR along with starting from scratch on a WPF 4.5 game engine using the multi-threaded techniques I have been applying and advancing at work. For a Real-Time Strategy game, from an engine perspective you need the following:
  1. Map support with various tiles that allow all Unit types, some Unit Types and no Unit Types
  2. Drag and drop support for Unit Buildings
  3. Collision detection of Units to Units, Units to Tiles, Buildings to Buildings and Units to Buildings
Of course this is excluding AI, Sound and Networking. However, I am going to be designing it from the ground up with Network Play in mind. It's one thing to play against a computer player, but an entirely different game when played with a human or multiple human players. A very early test of the Tile Engine and early Map Format: [caption id="attachment_1496" align="aligncenter" width="300"] Early jcGE WPF screenshot on 9.16.2012[/caption] More to come this afternoon when I get the left hand side bar included.
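As a starting point for requirement #3 above, collision detection in a top-down 2D engine largely reduces to axis-aligned rectangle overlap tests between units, tiles and buildings. A minimal sketch follows; the Bounds type and pixel units are my own placeholders, not anything already in jcGE.

[csharp]
public struct Bounds
{
    public double X;
    public double Y;
    public double Width;
    public double Height;

    // True when two axis-aligned rectangles overlap on both axes
    public bool Intersects(Bounds other)
    {
        return X < other.X + other.Width &&
               other.X < X + Width &&
               Y < other.Y + other.Height &&
               other.Y < Y + Height;
    }
}
[/csharp]

Units-to-tiles checks then become a lookup of which tiles a unit's Bounds overlaps, keeping the per-frame cost tied to the unit count rather than the map size.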
With the very common practice of taking some Class or Object Collection and moving it into a new collection I've been wondering if there were any clear cut performance differences between the commonly used methods: for loop, foreach, LINQ Expression, PLINQ etc. So getting right into the code, I took my tried and true Users Object: [csharp] public class Users { public int ID { get; set; } public string Name { get; set; } public string Password { get; set; } public Users(int id, string name, string password) { ID = id; Name = name; Password = password; } } [/csharp] Had a struct equivalent to copy the data into: [csharp] public struct Users_Struct { public int ID; public string Name; public string Password; public Users_Struct(int id, string name, string password) { ID = id; Name = name; Password = password; } } [/csharp] And then the various methods to get the task done: [csharp] private static void LinQMethod(List<Users> mainList) { var startTime = DateTime.Now; var linqList = mainList.Select(item => new Users_Struct(item.ID, item.Name, item.Password)).ToList(); Console.WriteLine("LinQ Method: " + DateTime.Now.Subtract(startTime).TotalSeconds); } private static void PLinQMethod(List<Users> mainList) { var startTime = DateTime.Now; var plinqList = mainList.AsParallel().Select(item => new Users_Struct(item.ID, item.Name, item.Password)).ToList(); Console.WriteLine("PLinQ Method: " + DateTime.Now.Subtract(startTime).TotalSeconds); } private static void ForMethod(List<Users> mainList) { var startTime = DateTime.Now; var forList = new List<Users_Struct>(); for (var x = 0; x < mainList.Count; x++) { forList.Add(new Users_Struct(mainList[x].ID, mainList[x].Name, mainList[x].Password)); } Console.WriteLine("For Method: " + DateTime.Now.Subtract(startTime).TotalSeconds); } private static void ForEachMethod(List<Users> mainList) { var startTime = DateTime.Now; var foreachList = new List<Users_Struct>(); foreach (var user in mainList) { foreachList.Add(new Users_Struct(user.ID, user.Name, user.Password)); } Console.WriteLine("Foreach Method: " + DateTime.Now.Subtract(startTime).TotalSeconds); } private static void PForEachMethod(List<Users> mainList) { var startTime = DateTime.Now; var pforeachQueue = new ConcurrentQueue<Users_Struct>(); Parallel.ForEach(mainList, user => { pforeachQueue.Enqueue(new Users_Struct(user.ID, user.Name, user.Password)); }); ; Console.WriteLine("Parallel Foreach Method: " + DateTime.Now.Subtract(startTime).TotalSeconds); } private static void PForEachExpression(List<Users> mainList) { var startTime = DateTime.Now; var plinqQueue = new ConcurrentQueue<Users_Struct>(); Parallel.ForEach(mainList, user => plinqQueue.Enqueue(new Users_Struct(user.ID, user.Name, user.Password))); Console.WriteLine("Parallel LinQ Expression Method: " + DateTime.Now.Subtract(startTime).TotalSeconds); } [/csharp] And onto the results: [caption id="attachment_1491" align="aligncenter" width="300"] Moving Data From One Collection to Another in various methods[/caption] Sadly to some extent, no method is particularly faster than another. It isn't until you hit 1,000,000 objects that there is sizable difference between the various methods. From clean code factor I've switched over to using an expression. Further investigation is needed I believe, looking at the IL code will probably be next task to see exactly how the C# code is being translated into IL.
Continuing my pursuit towards the "perfect" WCF Data Layer, I thought I would check out alternatives to returning List Collections of Structs and consider switching over to a DataContract to return a Class object instead. Using my tried and true Users test case, I defined my Struct and DataContract Class: [csharp] [Serializable] public struct USERSTRUCT { public int ID; public string Username; public string Password; public bool IsAdmin; } [DataContract] public class UserClass { private int _ID; private bool _IsAdmin; private string _Username; private string _Password; [DataMember] public string Password { get { return _Password; } set { _Password = value; } } [DataMember] public int ID { get { return _ID; } set { _ID = value; } } [DataMember] public bool IsAdmin { get { return _IsAdmin; } set { _IsAdmin = value; } } [DataMember] public string Username { get { return _Username; } set { _Username = value; } } } [/csharp] I populated my Users Table with 5,000 test rows and then ran it for 1 to 5,000 results. The DataContract was 230 bytes versus the Struct of 250 bytes, but the return result time differences were marginal at best and actually went back and forth between which was faster. It was also interesting about the allocation time in populating the collection of each being virtually identical. I was kind of disappointed in the fact the DataContract wasn't faster or that much smaller size with the additional code needed. So for the mean time I'll continue to use Structs. I should note, the struct is a Value Type versus the DataContract which is a Reference Type so the normal rules apply in that regard.
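One small consolation on the code-size front: the DataContract above can be trimmed down with auto-implemented properties and serializes exactly the same way, so the extra-code penalty mostly disappears even if the wire size and speed don't improve. A minimal sketch of the equivalent class:

[csharp]
[DataContract]
public class UserClass
{
    [DataMember]
    public int ID { get; set; }

    [DataMember]
    public string Username { get; set; }

    [DataMember]
    public string Password { get; set; }

    [DataMember]
    public bool IsAdmin { get; set; }
}
[/csharp]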
In doing my refactoring/cleaning up of jcBench, I really wanted to make the new platform very dynamic. Something that has bugged me about my own development sometimes and a lot of other companies is the close dependency between the Application and their Class Libraries. To me an Application should have a set interface to execute code, with the actual implementation or implementations of the interface in Class Libraries. I hate to keep going back to Quake II, but to me that was really good, logical architecture. You had the Quake II engine in the executable, the actual game code in it's own dll and then the OpenGL & Software Rendering code in their own DLLs. Very clean in my opinion. In C#, since introducing the dynamic keyword in .NET 4.0, I've wanted to implement some sort of dynamic class loading where I could follow a similar path that Quake II did. This wasn't on the top of my weekend research projects so what better time than when the next revision of .NET came out :) So let's dive into the code. I'll be using some of the new jcBench code to demonstrate the technique. First off you'll need an interface object to define the required properties, methods etc. For this post I'll provide a simplified IBenchmarkBase Interface from jcBench: [csharp] public interface IBenchmarkBase { Common.BENCHMARK_TYPE BenchmarkType { get; } string BenchmarkShortName { get; } string BenchmarkName { get; } int BenchmarkRevisionNumber { get; } int MaxCPUs { get; set; } long NumberObjects { get; set; } jcBenchLib.jcBenchReference.jcBenchSubmittedResult runBenchmark(); } [/csharp] Pretty standard run of the mill Interface. I highly suggest you consider switching over to an Interface object versus creating an Abstract class and inheriting/implementing from that for future projects. There's quite a few articles going over the pros and cons of both, so at least give it a chance in doing some research on them for yourself. And then one implementation of that Interface: [csharp] public class TPLBenchmark : IBenchmarkBase { public string BenchmarkName { get { return "Task Parallel Library CPU Benchmark"; } } public string BenchmarkShortName { get { return "TPL"; } } public int BenchmarkRevisionNumber { get { return 1; } } public Common.BENCHMARK_TYPE BenchmarkType { get { return Common.BENCHMARK_TYPE.CPU; } } private long _numObjects; public long NumberObjects { get { return _numObjects; } set { _numObjects = value; } } private int _maxCPUs; public int MaxCPUs { get { return _maxCPUs; } set { _maxCPUs = value; } } public TPLBenchmark() { } public jcBenchReference.jcBenchSubmittedResult runBenchmark() { jcBenchReference.jcBenchSubmittedResult result = new jcBenchReference.jcBenchSubmittedResult(); DateTime startTime = DateTime.Now; // do the benchmark work here result.TimeTaken = DateTime.Now.Subtract(startTime).TotalSeconds; // fill in the rest of the jcBenchSubmittedResult object here // return the result to the executable to send over the WCF Service return result; } } [/csharp] Again pretty simple implementation (I removed a lot of the error handling and implementation code since it wasn't relevant to this post). So now onto the fun part of this post. So let's say you had a couple more implementations and in my case I have an OpenCL benchmark as well. Since they follow the same interface path, my application (Win32 Command Line, WinForm, WPF, Windows 8, Windows Phone 8 etc) won't need to care about specific benchmarks, it will actually only be coded to look for implementations of that IBenchmarkBase Interface. 
Pretty nice and clean thought process right? So onto the "new and fun stuff". For this example to make it easier for a blog post, I am using a standard C# Win32 Command Prompt Visual Studio 2012 project type. Inside my Program.cs: [csharp] public static List<jcBenchLib.Benchmarks.IBenchmarkBase> getModules() { List<jcBenchLib.Benchmarks.IBenchmarkBase> modules = new List<jcBenchLib.Benchmarks.IBenchmarkBase>(); Assembly moduleAssembly = Assembly.LoadFile(Directory.GetCurrentDirectory() + "\\jcBenchLib.dll"); Type[] assemblyTypes = moduleAssembly.GetTypes(); foreach (Type type in assemblyTypes) { if (type.GetInterface("IBenchmarkBase") != null) { var module = (jcBenchLib.Benchmarks.IBenchmarkBase)Activator.CreateInstance(type); modules.Add(module); Console.WriteLine("Loaded " + module.BenchmarkName + " Module..."); } } return modules; } [/csharp] What this does is load my WinRT Portable Library off the disk, get all of the Types defined in that Assembly, parse through each one until it finds a match on my IBenchmarkBase Interface. When it finds a match, it adds the Interface to a List collection, writes the quick message to the Console and then returns the List Collection. So now that the List Collection of "Benchmarks" is loaded, what's next? This largely is based on the UI of the application. Would you populate a ComboBox in a WinForm application or a Listpicker in a Windows Phone 7/8 application? In this case since I am doing it in a command line application that the user will specify it as an argument. And here's the Main Function: [csharp] static void Main(string[] args) { var bModule = getModules().Where(a => a.BenchmarkShortName == args[0]).FirstOrDefault(); if (bModule == null) { Console.WriteLine("Module (" + args[0] + ") was not found in the WinRT Portable Library"); Console.ReadKey(true); return; } bModule.NumberObjects = Convert.ToInt64(args[1]); bModule.MaxCPUs = Convert.ToInt32(args[2]); Console.WriteLine("Using " + bModule.BenchmarkName + " (Version " + bModule.BenchmarkRevisionNumber + ")..."); bModule.runBenchmark(); Console.ReadKey(true); } [/csharp] So there you have it minus any sort of error handling or helpful instructions for missing or invalid arguments. Pretty simple lookup from an argument passed into the application off the List Collection function above. Afterwards initializing the class object with the other parameters before executing the "benchmark" function. Depending on your app you could simply have a getModule function instead that takes the proposed BenchmarkShortName however in jcBench I display all of the available modules so it makes sense in my case to store it to a List Collection for later use. I'm not a fan if I have the memory available to do constant lookups whether it is a SQL Query or data loading like in this case, but to each his or her own. Hopefully someone finds this useful in their applications. Coupled with my post about dynamic/self-learning code on Saturday I think this area of C# (and other languages) is going to be a really good mindset to have in the years ahead, but I could wrong. With the ever increasing demands for rapid responsiveness in both physical and web applications coupled with the also ever increasing demand for new features, a clear separation of Application & Class Interface and dynamic approach is what I am going to focus on going forward.
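For reference, the getModule variant mentioned above might look something like the sketch below, reusing the same reflection approach (and the same usings) as getModules; the method name and shape are just one possibility.

[csharp]
public static jcBenchLib.Benchmarks.IBenchmarkBase getModule(string benchmarkShortName)
{
    Assembly moduleAssembly = Assembly.LoadFile(Directory.GetCurrentDirectory() + "\\jcBenchLib.dll");

    foreach (Type type in moduleAssembly.GetTypes())
    {
        if (type.GetInterface("IBenchmarkBase") == null)
        {
            continue;
        }

        // Instantiate each implementation just long enough to check its short name
        var module = (jcBenchLib.Benchmarks.IBenchmarkBase)Activator.CreateInstance(type);

        if (module.BenchmarkShortName == benchmarkShortName)
        {
            return module;
        }
    }

    return null;
}
[/csharp]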

Introduction

I've been with the same company now for nearly 5.5 years (if you add the time as a consultant to the last 2.75 years of being a direct employee). Over that time period I've watched the original ASP.NET 1.1 enterprise web application that was put in place in September 2006 deteriorate quite rapidly in the last 2-3 years. There are a lot of factors contributing to it, but the main 4 I think are:
  1. New features tacked on rather than really thought out, leading to spaghetti code and a complete break down in Data and UI layers
  2. Other applications going online that replaced areas of the Web Application, leading to not keeping the older .NET 1.1 application up to date. This is especially true of the external .NET 3.5 Web Application, which displays much of the same data just differently, but doesn't follow the same business logic for adding/updating/removing data
  3. Architecture itself doesn't lend itself to a multi-threaded environment, throwing more cores at it does little to speed it up
  4. Lack of time and resources to make a proper upgrade in between requested functionality and maintenance
There's quite a few other attributes, but to sum it up, it should have been refitted at the very least in 2008, instead of tacking on additional functionality that didn't make the first or second releases. The time wasn't all a waste however, as things would waver in speed/functionality, I would make notes of areas that I wanted to address in Version 3, just kept it locked away until the time was given to do it. Last August (2011), it was approved to do a brand new architecture and platform for the largest part of the older system. The other parts would remain in the ASP.NET 1.1 Web Application and be moved in at a later date. Without going into too much detail, there was some disagreements over how this should be implemented especially with the 6 other .NET projects that would need to communicate with this new platform either a traditional Class Library method or a Web Service. This went on for a few months, with a pretty hard deadline of the end of the year. In the end in February 2012 I took a look at what the outside consulting firm's team had done and started from nearly ground zero (I did keep their Master Page Template as it wasn't too bad). Development went on until this week (with a couple of other projects getting mixed in and day to day maintenance) when I hit what I call a true BETA. So why the long introduction/semi-rant? In the IT industry we're lucky to get the time to do something right once, let alone maintain and upgrade the code to keep it from falling into code dystrophy. So why not as part of the initial development add support for the code to optimize itself based on current and future work loads? As time goes on (at least from my experience), you find new ways of doing things, whether it is parsing text, aggregating objects etc. In the end there are patterns for both your older code and your newer methodology. Wouldn't it be neat if as you were programming your latest and greatest code, your new programming patterns could be inherited into your older code? In addition, from experience, your first couple weeks/months of usage with a new project won't give you a real feel for how the product is going to be used. There's the initial week or so of just getting the employees or customers to even use the product as they have already invested the time to learn the old system. It is only after using it for a few weeks that they begin to learn and see the enhancements that went into making their daily tool better and find out how exactly they are using it. It is fascinating to me to hear "I do this, this and this to get to X" from an end user, especially internal employees when the intention was for them to follow a different path. This is where getting User Experience experts involved I think is key, especially for larger developments where the difference between the old and new systems are vast. So where to begin?

Breakdown of Steps

Something I think I am fairly good at is being able to break down complex problems into much more manageable tasks. It gives me the label at work as being able to accomplish anything thrown at me while not feeling (or showing) myself being overwhelmed (like when I single-handedly took over and started from scratch on the project mentioned above that should have been a team of 3-4 programmers). So I broke this rather large problem into 5 less large problems. So if you wanted to have your code in place to monitor the current paths taken, adjust the code and then start over again what would be necessary?

Step 1 - Get the "profiler" data

In order to be able to self-refactor code you will need to gather as much data as you can. The type of data depends on your application. If you find a lot of people never using a performance impacting ComboBox, then maybe load it on demand; or vice versa, if it constantly is being used, load it on Page Load or figure out a better way to present the data to the end-user. If you have a clean interface to your Business Layer this wouldn't be too hard to accomplish, recording the path to a SQL Server database for instance or dumping it to another service. This part is pretty application scenario specific. If you have a largely offline WinForms application, you're going to want to record it to an XML file possibly and send it along with the regular data. Whereas if you have a WCF Service you want to profile, you can simply record the path at the Operation Contract level. The particular case I am thinking of doing this for is an ASP.NET 4.5 Web Application that acts as a front end to a WCF Service. I am very satisfied with the WCF Service's performance at this point, although I want to do some refactoring of some of the older February/March 2012 code written before the project expanded to encompass many more features than originally intended (the definition of Scope Creep if there ever was one), to make it cleaner going forward. The big thing to be conscious of is that your regular performance profiler takes a good bit of performance out of your application. While great at pinpointing otherwise hidden performance hits from a misplaced for loop or something else (especially easy with Red Gate's ANTS Profiler if you haven't used it), it does slow your application down. Finding that perfect balance of getting enough data logged while keeping your application running as close to 100% speed as possible should be a prioritized task, I believe, in order to successfully "profile" your code.
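As a deliberately lightweight illustration of the kind of logging I have in mind, here is a minimal sketch; the UsageProfiler name and the pipe-delimited format are placeholders, and Drain's output could just as easily go to SQL Server, an XML file or another service depending on the scenario.

[csharp]
public static class UsageProfiler
{
    private static readonly System.Collections.Concurrent.ConcurrentQueue<string> _entries =
        new System.Collections.Concurrent.ConcurrentQueue<string>();

    // Wraps a unit of work, recording what was called and how long it took
    public static T Record<T>(string operationName, Func<T> work)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            return work();
        }
        finally
        {
            stopwatch.Stop();
            _entries.Enqueue(operationName + "|" + stopwatch.ElapsedMilliseconds);
        }
    }

    // Called periodically (on a timer, or at the end of a request) to flush what has been collected
    public static System.Collections.Generic.List<string> Drain()
    {
        var drained = new System.Collections.Generic.List<string>();
        string entry;

        while (_entries.TryDequeue(out entry))
        {
            drained.Add(entry);
        }

        return drained;
    }
}
[/csharp]

An Operation Contract body would then just wrap its work in UsageProfiler.Record("GetTicket", () => ...), keeping the overhead to a Stopwatch and an in-memory enqueue.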

Step 2 - Create Patterns to replace existing code with

So now that we have our profile data, we need to be able to take it and apply changes to our code without human interaction. This in particular is the most interesting aspect of this topic for me: programming the system to find trends in the usage via the profile data and then try several different programming models to address any possible performance problems. If no benefit could be found based on the pattern, maybe then escalate it to the actual human programmers to look into. The advantage of this route is that the programmer can focus on new features, while in the background the system does the performance maintenance. This goes back to my ideal environment of a concurrent development process, making the best of the qualities of both a machine and a human programmer. How exactly would these patterns be done? Regular Expressions against the existing code base and templated replacement code perhaps? If it found a code block like this: [csharp] private bool PersonExists(string Name) { for (int x = 0; x < _objects.Count(); x++) { if (_objects[x].Name == Name) { return true; } } return false; } [/csharp] It would see the code could be optimized based on a templated "find a result in a List Collection" pattern to become: [csharp] private bool PersonExists(string Name) { return _objects.AsParallel().Where(a => a.Name == Name).Count() > 0; } [/csharp] As a starting point I would probably begin with easier patterns like this until I got more comfortable with the pattern matching.
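Before any replacement is accepted, the engine would also need to prove that a candidate behaves the same and is actually faster on representative data. A rough sketch of that check, with every name being illustrative:

[csharp]
public static bool CandidateIsBetter<TInput, TResult>(TInput input,
    Func<TInput, TResult> original, Func<TInput, TResult> candidate, int iterations)
{
    // The two implementations must agree before speed even matters
    if (!System.Collections.Generic.EqualityComparer<TResult>.Default.Equals(original(input), candidate(input)))
    {
        return false; // escalate to a human programmer instead
    }

    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    for (var x = 0; x < iterations; x++) { original(input); }
    var originalTime = stopwatch.Elapsed;

    stopwatch.Restart();
    for (var x = 0; x < iterations; x++) { candidate(input); }

    return stopwatch.Elapsed < originalTime;
}
[/csharp]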

Step 3 - Automated Unit Testing of changes prior to deployment

Keeping with the automated process mentality, after a particular pattern is found and a new method is found to be faster, prior to deployment the new code should be put through Unit Testing. Even with great tools out there to assist in Unit Testing, from what I've seen they are little used, whether from lack of time to invest in learning a Unit Testing tool or to develop the tests in conjunction with the main development. In a perfect world you would already have Unit Tests for the particular functions and functionality, so these Unit Tests could be run without changes to verify the results are the same as with the existing code.
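Using the PersonExists example from Step 2, the generated Unit Test really only has to assert that the rewritten method agrees with the original over the same inputs. A minimal MSTest-style sketch (the test names, the string list and the choice of MSTest are my own assumptions):

[csharp]
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PersonExistsRewriteTests
{
    private readonly List<string> _names = new List<string> { "Alice", "Bob", "Charlie" };

    [TestMethod]
    public void RewrittenVersionMatchesOriginal()
    {
        // Hit, miss and empty-string cases; both versions must agree on every one
        foreach (var candidate in new[] { "Bob", "Zed", "" })
        {
            Assert.AreEqual(PersonExistsOriginal(candidate), PersonExistsRewritten(candidate));
        }
    }

    private bool PersonExistsOriginal(string Name)
    {
        for (int x = 0; x < _names.Count; x++) { if (_names[x] == Name) { return true; } }
        return false;
    }

    private bool PersonExistsRewritten(string Name)
    {
        return _names.AsParallel().Where(a => a == Name).Count() > 0;
    }
}
[/csharp]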

Step 4 - Scheduling the self-compiling and deployment

Assuming the code got this far, the next logical step would be to recompile and deploy the new code to your server(s). You probably wouldn't want this to be done during a peak time, so off-hours during a weekend might be the best route. Or better yet in conjunction with your System Administrator's monthly or weekly maintenance to apply Service Packs, Hotfixes etc. In addition checking the code into your source control Subversion or Team Foundation Server would probably be a good idea too once it got to this point to ensure the human developers got the changes from the automated code changes.

Step 5 - Record the changes being made to the code and track the increase

After the code has been deployed, it might be a good idea to alert the human developers that a change was made and what it was, to ensure everyone is on the same page, especially if a developer had that code checked out for a fix. In addition, like with everything, I enjoy seeing before and after results. Having it profile the code for a few days against the new version would give performance numbers like "FunctionX now runs 4X faster".

Conclusion

If you've gotten this far I hope this makes you think about the potential for something along these lines, or something completely different. It seems like this might be a good idea, or at the very least an interesting weekend project. I'll be working on this as I get time; it definitely isn't a one weekend project and I really want to get the next release of jcBench out the door :)
Update on 8/18/2012 8:41 AM EST - Expanded my result commentary, added data labels in the graph and made the graph image larger An interesting question came up on Twitter this morning about how the overhead in calling Parallel.ForEach vs the more traditional foreach would impact performance. I had done some testing in .NET 4 earlier this year and found the smaller collections of objects < 10 not worth the performance hit with Parallel.ForEach. So I wrote a new test in .NET 4.5 to test this question on a fairly standard task: taking the results of a EF Stored Procedure and populating a collection of struct objects in addition to populating a List collection for a 1 to many relationship (to make it more real world). First thing I did was write some code to populate my 2 SQL Server 2012 Tables: [csharp] using (TempEntities eFactory = new TempEntities()) { Random rnd = new Random(1985); for (int x = 0; x < int.MaxValue; x++) { Car car = eFactory.Cars.CreateObject(); car.MakerName = x.ToString() + x.ToString(); car.MilesDriven = x * 10; car.ModelName = x.ToString(); car.ModelYear = (short)rnd.Next(1950, 2012); car.NumDoors = (short)(x % 2 == 0 ? 2 : 4); eFactory.AddToCars(car); eFactory.SaveChanges(); for (int y = 0; y < rnd.Next(1, 5); y++) { Owner owner = eFactory.Owners.CreateObject(); owner.Age = y; owner.Name = y.ToString(); owner.StartOfOwnership = DateTime.Now.AddMonths(-y); owner.EndOfOwnership = DateTime.Now; owner.CarID = car.ID; eFactory.AddToOwners(owner); eFactory.SaveChanges(); } } } [/csharp] I ran it long enough to produce 121,501 Car rows and 190,173 Owner rows. My structs: [csharp] public struct OWNERS { public string Name; public int Age; public DateTime StartOfOwnership; public DateTime? EndOfOwnership; } public struct CAR { public string MakerName; public string ModelName; public short ModelYear; public short NumDoors; public decimal MilesDriven; public List<OWNERS> owners; } [/csharp] Then for my Parallel.Foreach code: [csharp] static List<OWNERS> ownersTPL(int CarID) { using (TempEntities eFactory = new TempEntities()) { ConcurrentQueue<OWNERS> owners = new ConcurrentQueue<OWNERS>(); Parallel.ForEach(eFactory.getOwnersFromCarSP(CarID), row => { owners.Enqueue(new OWNERS() { Age = row.Age, EndOfOwnership = row.EndOfOwnership, Name = row.Name, StartOfOwnership = row.StartOfOwnership }); }); return owners.ToList(); } } static void runTPL(int numObjects) { using (TempEntities eFactory = new TempEntities()) { ConcurrentQueue<CAR> cars = new ConcurrentQueue<CAR>(); Parallel.ForEach(eFactory.getCarsSP(numObjects), row => { cars.Enqueue(new CAR() { MakerName = row.MakerName, MilesDriven = row.MilesDriven, ModelName = row.ModelName, ModelYear = row.ModelYear, NumDoors = row.NumDoors, owners = ownersTPL(CarID: row.ID) }); }); } } [/csharp] My foreach code: [csharp] static void runREG(int numObjects) { using (TempEntities eFactory = new TempEntities()) { List<CAR> cars = new List<CAR>(); foreach (getCarsSP_Result row in eFactory.getCarsSP(numObjects)) { List<OWNERS> owners = new List<OWNERS>(); foreach (getOwnersFromCarSP_Result oRow in eFactory.getOwnersFromCarSP(row.ID)) { owners.Add(new OWNERS() { Age = oRow.Age, EndOfOwnership = oRow.EndOfOwnership, Name = oRow.Name, StartOfOwnership = oRow.StartOfOwnership }); } cars.Add(new CAR() { MakerName = row.MakerName, MilesDriven = row.MilesDriven, ModelName = row.ModelName, ModelYear = row.ModelYear, NumDoors = row.NumDoors, owners = owners }); } } } [/csharp] Onto the results, I ran this on an AMD Phenom II X6 1090T (6x3.2ghz) CPU, 
16gb of DDR3-1600 running Windows 8 RTM to give this a semi-real world feel, having a decent amount of ram and 6 cores, although a better test would be on an Opteron or Xeon (testing slower but more numerous cores versus the fewer, faster cores of my desktop CPU). [caption id="attachment_1444" align="aligncenter" width="300"] Parallel.ForEach vs foreach (Y-Axis is seconds taken to process, X-Axis is the number of objects processed)[/caption] Surprisingly, for a fairly real world scenario .NET 4.5's Parallel.ForEach actually beat out the more traditional foreach loop in every test. Even more interesting is that until around 100 objects Parallel.ForEach wasn't visibly faster (the difference only being .05 seconds for 10 objects, but on a large scale/highly active data service where you're paying for CPU/time that could add up). Which does bring up an interesting point: I haven't looked into Cloud CPU Usage/hr costs, and I wonder where the line between the performance of using n number of CPUs/cores in your Cloud environment and the cost comes into play. Is an extra 0.5 seconds at lower average CPU usage OK in your mind and to your customers? Or will you deliver the best possible experience to your customers and either ignore the costs incurred or offload the costs to them? This would be a good investigation I think, and relatively simple with the ParallelOptions.MaxDegreeOfParallelism property. I didn't show it, but I also ran the same code using a regular for loop, as I had read an article several years ago (probably 5 at this point) that showed the foreach loop being much slower than a for loop. Surprisingly, the results were virtually identical to the foreach loop. Take that for what it is worth. Feel free to do your own testing for your own scenarios to see if a Parallel.ForEach loop is ever slower than a foreach loop, but I am pretty comfortable saying that it seems like the Parallel.ForEach loop has been optimized to the point where it should be safe to use it in place of a foreach loop for most scenarios.
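For anyone wanting to experiment with that cost/performance line, capping the loop is a one-liner via ParallelOptions. A minimal sketch (the cap of 2 is arbitrary and would be tuned against whatever per-core pricing applies; it reuses the same System.Threading.Tasks, System.Collections.Concurrent and System.Linq namespaces as the samples above):

[csharp]
var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
var results = new ConcurrentQueue<long>();

// Same Parallel.ForEach pattern as above, just restricted to at most 2 concurrent tasks
Parallel.ForEach(Enumerable.Range(1, 1000000), options, value =>
{
    results.Enqueue((long)value * value);
});
[/csharp]

Re-running the benchmark above at different caps would show exactly where the extra cores stop paying for themselves.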
After attending the "What's New in WCF 4.5" session at Microsoft's TechED North America 2012 I got fascinated by the opportunities with using WebSockets for my next projects both at work and for my personnel projects. The major prerequisite for WebSockets is to be running on Windows 8 or Server 2012 specifically IIS 8.0. In Windows 8, go to your Programs and Features (Windows Key + x to bring up the shortcut menu, click Control Panel) and then click on "Turn Windows features on or off" as shown in the screenshot below: [caption id="attachment_1416" align="aligncenter" width="300"] Turn Windows features on or off[/caption] Then expand Internet Information Services and make sure WebSocket Protocol is checked as shown in the screenshot below: [caption id="attachment_1415" align="aligncenter" width="300"] Windows 8 Windows Feature to enable WebSocket Development[/caption] After getting my system up and running, I figured I would convert something I had done previously with a WPF application a few years back: Monitoring an Exchange 2010 Mailbox and then parse the EmailMessage object. The difference for this test is that I am simply going to kick back the email to a console application. Jumping right into the code my ITestService.cs source file: [csharp] [ServiceContract] public interface ITestCallBackService { [OperationContract(IsOneWay = true)] Task getEmail(TestService.EMAIL_MESSAGE email); } [ServiceContract(CallbackContract = typeof(ITestCallBackService))] public interface ITestService { [OperationContract(IsOneWay = true)] Task MonitorEmailBox(); } [/csharp] My TestService.svc.cs source file: [csharp] [Serializable] public struct EMAIL_MESSAGE { public string Body; public string Subject; public string Sender; public bool HasAttachment; } public async Task MonitorEmailBox() { var callback = OperationContext.Current.GetCallbackChannel<ITestCallBackService>(); Microsoft.Exchange.WebServices.Data.ExchangeService service = new Microsoft.Exchange.WebServices.Data.ExchangeService(Microsoft.Exchange.WebServices.Data.ExchangeVersion.Exchange2010_SP1); service.Credentials = new NetworkCredential(ConfigurationManager.AppSettings["ExchangeUsername"].ToString(), ConfigurationManager.AppSettings["ExchangePassword"].ToString(), ConfigurationManager.AppSettings["ExchangeDomain"].ToString()); service.Url = new Uri(ConfigurationManager.AppSettings["ExchangeWSAddress"].ToString()); Microsoft.Exchange.WebServices.Data.ItemView view = new Microsoft.Exchange.WebServices.Data.ItemView(100); Microsoft.Exchange.WebServices.Data.SearchFilter sf = new Microsoft.Exchange.WebServices.Data.SearchFilter.IsEqualTo(Microsoft.Exchange.WebServices.Data.EmailMessageSchema.IsRead, false); while (((IChannel)callback).State == CommunicationState.Opened) { Microsoft.Exchange.WebServices.Data.FindItemsResults<Microsoft.Exchange.WebServices.Data.Item> fiItems = service.FindItems(Microsoft.Exchange.WebServices.Data.WellKnownFolderName.Inbox, sf, view); if (fiItems.Items.Count > 0) { service.LoadPropertiesForItems(fiItems, new Microsoft.Exchange.WebServices.Data.PropertySet(Microsoft.Exchange.WebServices.Data.ItemSchema.HasAttachments, Microsoft.Exchange.WebServices.Data.ItemSchema.Attachments)); foreach (Microsoft.Exchange.WebServices.Data.Item item in fiItems) { if (item is Microsoft.Exchange.WebServices.Data.EmailMessage) { Microsoft.Exchange.WebServices.Data.EmailMessage eMessage = item as Microsoft.Exchange.WebServices.Data.EmailMessage; eMessage.IsRead = true; 
eMessage.Update(Microsoft.Exchange.WebServices.Data.ConflictResolutionMode.AlwaysOverwrite); EMAIL_MESSAGE emailMessage = new EMAIL_MESSAGE(); emailMessage.HasAttachment = eMessage.HasAttachments; emailMessage.Body = eMessage.Body.Text; emailMessage.Sender = eMessage.Sender.Address; emailMessage.Subject = eMessage.Subject; await callback.getEmail(emailMessage); } } } } } [/csharp] Only additions to the stock Web.config are the protocolMapping properties, but here is my full web.config: [csharp] <?xml version="1.0"?> <configuration> <appSettings> <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" /> </appSettings> <system.web> <compilation debug="true" targetFramework="4.5" /> </system.web> <system.serviceModel> <protocolMapping> <add scheme="http" binding="netHttpBinding"/> <add scheme="https" binding="netHttpsBinding"/> </protocolMapping> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true"/> <directoryBrowse enabled="true"/> </system.webServer> </configuration> [/csharp] I then created my test Windows Console application, added a reference to the WCF Service created above, here's my program.cs: [csharp] static void Main(string[] args) { InstanceContext context = new InstanceContext(new CallbackHandler()); using (WSReference.TestServiceClient tsClient = new WSReference.TestServiceClient(context)) { tsClient.MonitorEmailBox(); Console.ReadLine(); } } private class CallbackHandler : WSReference.ITestServiceCallback { public async void getEmail(WSReference.TestServiceEMAIL_MESSAGE email) { Console.WriteLine("From: " + email.Sender + "\nSubject: " + email.Subject + "\nBody: " + email.Body + "\nAttachments: " + (email.HasAttachment ? "Yes" : "No")); } } [/csharp] Nothing fancy, but I think it provides a good beginning thought process for using WebSockets in combination with the new async keyword. For me it is like taking a WinForm, ASP.NET, WP7 (etc) Event Handler across the Internet or Intranet. This brings me one step further towards truly moving from the idea of keeping code inside a DLL or some other project to keeping it in Azure or hosted on an IIS server somewhere for all of my projects to consume. One source of logic to maintain and if there are bugs generally in the WCF Service itself (at least in my experience using a WCF Service like a Web DLL in the last 8 months) and not having to remember to recompile a project/redeploy each project that uses your common code/framework.
I downloaded Entity Framework 5 two days ago after reading the MSDN entry: Performance Considerations for Entity Framework 5. Excited by the Object Caching support, I dove right in. To get Entity Framework 5, run this command from your NuGet Console (Tools->Library Package Manager->Package Manager Console): Install-Package EntityFramework -Pre After that, you should see the EF 5.x DbContext Generator for C# as shown below: [caption id="attachment_1406" align="aligncenter" width="300"] EF 5.x Visual Studio 2012 Item Type[/caption] Type in the name of your DbContext and it will automatically create your new DbContext driven EF5 classes from a database or Entity Model. A small sample: [csharp] using (var db = new Models.TempContext()) { db.Configuration.AutoDetectChangesEnabled = false; return db.Users.Find(ID); } [/csharp] The syntax is very similar to EF4, but be careful about using Object Caching without reading up on the implications of the AutoDetectChangesEnabled property. The above example uses the new Find() function, which returns the User object with a primary key value of ID. As stated in the MSDN article, using this for random queries will often hurt performance rather than help it. That brings me to my findings. Continuing my pursuit of performance from my LINQ vs PLINQ vs Stored Procedure Row Count Results blog post, I wanted to check out how EF5 compared to EF4. In the planning phases for my next Data Layer Architecture, I want to be fully versed in all of the latest techniques. So without further ado: this test involved returning the same User Entity Object I used in my previous post, but this time grabbing 2500 of them in various scenarios. A screenshot of the output: [caption id="attachment_1407" align="aligncenter" width="300"] EF4 vs EF5 vs SP Initial Findings[/caption] It is interesting to see EF4 LINQ actually besting EF5 LINQ. Not sure if it is a maturity problem or some setting I am unaware of (if so please let me know).
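As a side note on AutoDetectChangesEnabled, here is a minimal sketch, assuming the same TempContext model as above, of scoping the setting so it cannot silently affect later change tracking if the context outlives the read: [csharp]
using (var db = new Models.TempContext())
{
    // Disable change tracking only for this read-heavy block
    db.Configuration.AutoDetectChangesEnabled = false;

    try
    {
        var user = db.Users.Find(ID); // Find() checks the context's local cache before hitting SQL
        // additional read-only work here
    }
    finally
    {
        // Restore the default so later inserts/updates on this context are tracked as expected
        db.Configuration.AutoDetectChangesEnabled = true;
    }
}
[/csharp]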
Something that drives me crazy with software companies is that, from what I've seen, they do not properly handle cross-platform development. While at Microsoft's TechED North America Conference back in June this year, I spoke with several developers who were basically starting from scratch for each platform or doing some sort of convoluted dance of putting some code in a DLL and then referencing it for each platform. Done right, you can create iOS (iPad/iPhone), Android and Windows Phone 7.x applications where the only platform specific code is taking the data and binding it to each platform's UI.

My approach is to leave the data and business logic where it should be (in your data center/cloud) and leave the presentation and interaction to the device (iPhone, iPad, Droid, Windows Phone etc). To me, every developer should be applying this ideal. Platforms are coming and going so fast; wouldn't it suck if you spent all of this time programming in a platform specific language (like Objective-C on iOS) only to find out from your CEO that he or she promised a port of your application or game to Platform XYZ in a month?

Having a true 3 tier architecture, like in any other software development, should be embraced even if there is added startup time to get that first pixel displayed on your client device.

My Spring 2012 developed Mobile architecture consists of:
-SQL Server 2008 R2 Database
-SQL Stored Procedures for nearly all I/O to the WCF Service
-Serialized Struct Containers for database/business objects
-Task Parallel Library usage for all conversions to and from the Serialized Structs to the SQL Stored Procedures
-Operation Contracts for all I/O (ie Authentication, Dropdown Box objects etc)

For example, assume you had a Help Ticket System in your Mobile Application. A Help Ticket typically has several 1 to many relationship Tables associated with it. For instance you could have a history with comments, status changes, multiple files attached etc. Pulling all of this information across a web service in multiple calls is costly especially with the latency involved with 3G and 4G connections. It is much more efficient to do one bigger call, thus doing something like this is the best route I found:

[csharp] [Serializable] public struct HT_BASE_ITEM { public string Description; public string BodyContent; public int CreatedByUserID; public int TicketID; public List<HT_COMMENT_ITEM> Comments; public RETURN_STATUS returnStatus; } public HT_BASE_ITEM getHelpTicket(int HelpTicketID) { using (SomeModel eFactory = new SomeModel()) { HT_BASE_ITEM htBaseItem = new HT_BASE_ITEM(); getHelpTicketSP_Result dbResult = eFactory.getHelpTicketSP(HelpTicketID).FirstOrDefault(); if (dbResult == null) { htBaseItem.returnStatus = RETURN_STATUS.NullResult; return htBaseItem; } htBaseItem.Description = dbResult.Description; // Setting the rest of the Struct's properties here return htBaseItem; } } public RETURN_STATUS addHelpTicket(HT_BASE_ITEM newHelpTicket) { using (SomeModel eFactory = new SomeModel()) { HelpTicket helpTicket = eFactory.HelpTicket.CreateObject(); helpTicket.Description = newHelpTicket.Description; // Setting the rest of the HelpTicket Table's columns here eFactory.HelpTicket.AddObject(helpTicket); eFactory.SaveChanges(); // Error handling here otherwise return Success back to the client return RETURN_STATUS.SUCCESS; } } [/csharp] As you can see, the input and output are very clean, if more functionality is desired, i.e. a new field to capture, update the Struct & the input/output functions in the WCF Service, update the WCF reference in your device(s) and add the field to your UI to each device. Very quick and easy in my opinion.
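On the device side, consuming that Operation Contract stays just as small; a quick hypothetical example (HelpTicketServiceClient standing in for whatever proxy name your service reference generates): [csharp]
// HelpTicketServiceClient is a hypothetical proxy name; use whatever your service reference generated
using (var client = new HelpTicketServiceClient())
{
    HT_BASE_ITEM ticket = client.getHelpTicket(1234);

    if (ticket.returnStatus != RETURN_STATUS.NullResult)
    {
        // Bind ticket.Description, ticket.Comments etc to the platform specific UI here
    }
}
[/csharp]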

I've gotten in the habit of adding an Enum property to my returned object to cover the possibility of a Null result or some other problem during the data grabbing or setting operations in my WCF Services. It makes tracking down bugs a lot easier; often if an error occurs I simply record it to a SQL Table, along with the logged in user, the version of the client app and the version of the WCF Service (captured via AssemblyInfo), and add a front end for it inside the main ASP.NET Web Application. End users aren't the most reliable at reporting issues, so being proactive, especially in the mobile realm, is key from what I've found.
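For completeness, here is a minimal sketch of what that status Enum and logging call might look like; the names and the addErrorLogSP import are hypothetical, not pulled from the actual project: [csharp]
public enum RETURN_STATUS
{
    SUCCESS,
    NullResult,
    DatabaseError,
    UnhandledException
}

// Called from the catch blocks inside the WCF Service's Operation Contracts
public void logError(int userID, string clientVersion, string serviceVersion, string message)
{
    using (SomeModel eFactory = new SomeModel())
    {
        // addErrorLogSP is a hypothetical stored procedure import that writes to the error table
        eFactory.addErrorLogSP(userID, clientVersion, serviceVersion, message, DateTime.Now);
    }
}
[/csharp]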

This approach does take some additional time upfront, but I was able to create a 100% feature to feature port to Windows Phone from iOS in 3 days for a fairly complex application because I only had to worry about creating a good looking UI in XAML and hooking up my new UI to those previously existing Operation Contracts in my WCF Service.

This architecture probably won't work for everyone, but I took it a step further for a recent enterprise application where there were several ASP.NET, WinForm and Classic SOAP Services on various .NET versions. An easy solution would have been to simply create a common DLL with the commonly used functionality and reference that in the other platforms. The problem with that is a fix would need to be deployed to all of the clients, and I haven't yet tried a .NET 4.5 Class Library in an ASP.NET 1.1 solution, though I can't imagine that would work too well if at all. Creating all of the functionality in a WCF Service and having the clients consume this service has been a breeze. One fix in the WCF Service generally fixes all of the platforms, which is great. Those looking to centralize business logic and database access should really look into this approach.

A pretty common task I run across is counting the number of occurrences for a specific string, Primary Key ID etc. A few examples: checking a valid username/password combination or an existing value in the database to prevent duplicate/redundant data. Typically if there were no joins involved I would typically just do something like the following: [csharp] public bool DoesExist(string someValue) { using (SomeEntity eFactory = new SomeEntity()) { return eFactory.SomeTable.Where(a => a.Value == someValue).Count() > 0; } } [/csharp] Or use the Parallel PLINQ version if there were a considerable amount of rows assuming the overhead involved in PLINQ would negate any performance advantage for smaller tables: [csharp] public bool DoesExist(string someValue) { using (SomeEntity eFactory = new SomeEntity()) { return eFactory.SomeTable.AsParallel().Where(a => a.Value == someValue).Count() > 0; } } [/csharp] However if there were multiple tables involved I would create a Stored Procedure and return the Count in a Complex Type like so: [csharp] public bool DoesExist(string someValue) { using (SomeEntity eFactory = new SomeEntity()) { return eFactory.SomeTableSP(someValue).FirstOrDefault().Value > 0; } } [/csharp] Intrigued on what the real performance impact was across the board and to figure out what made sense depending on the situation I created a common scenario, a Users Table like so: [caption id="attachment_1377" align="aligncenter" width="267"] Users SQL Server Table Schema[/caption] Populated this table with random data from 100 to 4000 rows and ran the above coding scenarios against it averaging 3 separate times to rule out any fluke scores. In addition I tested looking for the same value run 3X and a random number 3X to see if the row's value position would affect performance (if it was at the near the end of the table or closer to the beginning). I should note this was tested on my HP DV7 laptop that has an A10-4600M (4x2.3ghz CPU) running Windows 8 x64 with 16GB of ram and a Sandisk Extreme 240GB SSD. [caption id="attachment_1378" align="aligncenter" width="300"] LINQ vs PLINQ vs Stored Procedure Count Performance Graph[/caption] The most interesting aspect for me was the consistent performance of the Stored Procedure across the board no matter how many rows there were. I imagine the results are the same for 10,000, 20,000 etc. I'll have to do those tests later. In addition I imagine as soon as table joins come into the picture the difference between a Stored Procedure and a LINQ query would be even greater. So bottom line, use a Stored Procedure for counts. The extra time to create a Stored Procedure, import it into Visual Studio (especially in Visual Studio 2012 where it automatically creates the Complex Type for you) is well worth it.
Been banging my head against a recent Azure WCF Service I've been working on to connect to my new Windows Phone 7 project. To my surprise, it worked flawlessly, or so it seemed. When I went to use the WCF Service, I noticed the proxy hadn't been generated. Sure enough the Reference.cs was empty: [csharp] //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.17626 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ [/csharp] Vaguely remembering this was fixed with a simple checkbox, I went to the Configure Service Reference section and changed the options circled in red: [caption id="attachment_1370" align="aligncenter" width="300"] WCF Configure Reference[/caption] Note: you only need to uncheck "Reuse types in referenced assemblies"; I just prefer generic List objects versus an ObservableCollection.
Something I never understood why it wasn't part of the base TextBox control is the ability to override the GotFocus colors with an easy to use property, like FocusForeground or FocusBackground. You could do something like this: [csharp] private void txtBxPlayerName_GotFocus(object sender, RoutedEventArgs e) { Foreground = new SolidColorBrush(Colors.Black); } [/csharp] By default, the Background on a Focus event of a Windows Phone TextBox control sets it to White, so if you had White Text previously, upon entering Text, your Text would be invisible. The workaround above isn't pretty, nor handles the LostFocus property (ie to revert back to your normal colors after entering Text. If you have 2,3,4 or more TextBox controls, this could get tedious/clutter up your code. My "fix", extending the TextBox Control: [csharp] using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; namespace WPGame { public class jcTextBox : TextBox { #region New Background and Foreground Properties public static readonly DependencyProperty FocusForegroundProperty = DependencyProperty.Register("FocusForeground", typeof(Brush), typeof(jcTextBox), null); public Brush FocusForeground { get { return base.GetValue(FocusForegroundProperty) as Brush; } set { base.SetValue(FocusForegroundProperty, value); } } public static readonly DependencyProperty FocusBackgroundProperty = DependencyProperty.Register("FocusBackground", typeof(Brush), typeof(jcTextBox), null); public Brush FocusBackground { get { return base.GetValue(FocusBackgroundProperty) as Brush; } set { base.SetValue(FocusBackgroundProperty, value); } } public static readonly DependencyProperty BaseForegroundProperty = DependencyProperty.Register("BaseForeground", typeof(Brush), typeof(jcTextBox), null); public Brush BaseForeground { get { return base.GetValue(BaseForegroundProperty) as Brush; } set { base.SetValue(BaseForegroundProperty, value); } } public static readonly DependencyProperty BaseBackgroundProperty = DependencyProperty.Register("BaseBackground", typeof(Brush), typeof(jcTextBox), null); public Brush BaseBackground { get { return base.GetValue(BaseBackgroundProperty) as Brush; } set { base.SetValue(BaseBackgroundProperty, value); } } #endregion public jcTextBox() { BaseForeground = Foreground; BaseBackground = Background; } #region Focus Event Overrides protected override void OnGotFocus(RoutedEventArgs e) { Foreground = FocusForeground; Background = FocusBackground; base.OnGotFocus(e); } protected override void OnLostFocus(RoutedEventArgs e) { Foreground = BaseForeground; Background = BaseBackground; base.OnLostFocus(e); } #endregion } } [/csharp] To use it, add it to your project and then you can just drag and drop the control from the Toolbox and use it like so: [csharp] <my:jcTextBox x:Name="txtBxPlayerName" Background="#1e1e1e" Foreground="White" BaseBackground="#1e1e1e" BaseForeground="White" FocusForeground="Black" FocusBackground="#2e2e2e" AcceptsReturn="False" /> [/csharp] The output: [caption id="attachment_1354" align="aligncenter" width="180"] Unfocused jcTextBox Controls[/caption] [caption id="attachment_1355" align="aligncenter" width="180"] Visible Text with jcTextBox on Focus[/caption] [caption id="attachment_1356" align="aligncenter" width="180"] LostFocus jcTextBox Text with original colors[/caption] So there you have it, a "fix" 
for a common problem (at least in my experience). Feel free to use, rewrite, whatever with the jcTextBox class.
I started playing around with the compiler statistics of the KernelAnalyzer (mentioned in this morning's post) some more. Without doing more research, it looks as though using the built in pow function versus simply multiplying the number by itself is much, much slower. Take this line for instance: [csharp] thirdSide[index] = sqrt((double)(x * x) + (double)(y * y)); [/csharp] The KernelAnalyzer estimates the throughput to be about 10 Million Threads\Sec. Adjusting the line to this: [csharp] thirdSide[index] = sqrt(pow((double)x, 2) + pow((double)y, 2)); [/csharp] The estimate drops to 2 Million Threads\Sec. Is the math library function call overhead really enough to lose 5X the performance? Adjusting the line to use pow for only one of the two terms brings the estimate back up to only 4 Million Threads\Sec. So perhaps there is an initial performance hit when using a library function; maybe the library needs to be loaded for every thread? Or is the library shared across threads and thus causing a locking condition during the compute? I think this requires more investigation, wish I was going to the AMD Developer Conference next week...
Got some more results added to the comparison: [caption id="attachment_1218" align="aligncenter" width="300" caption="jcBench Expanded Integer Results - 5/15/2012"][/caption] [caption id="attachment_1219" align="aligncenter" width="300" caption="jcBench Expanded Floating Point Results - 5/15/2012"][/caption] Tonight I am hoping to add a quick comparison page so anyone could answer questions like: how does a Dual 600mhz Octane perform against a 600mhz O2?
Below are some interesting results showing the floating point performance differences between MIPS and AMD cpus. [caption id="attachment_1105" align="aligncenter" width="300" caption="jcBench Floating Point Performance"][/caption] The biggest thing to note is the effect Level 2 cache has on Floating Point performance. The 4mb Level 2 cache in the R16000 clearly helps to compensate for the massive difference in clock speed. Nearly a 1 to 1 relationship between the 6x3.2ghz Phenom II and the 4x800mhz MIPS R16k. So bottom line, Level 2 cache makes up for megahertz almost by a factor of 4 in these cases. It's a shame the fastest MIPS R16000 only ran at 1ghz and is extremely rare. More benchmarking later this week...
I was able to add several more machines to the comparison with the help of a friend over at Nekochan. [caption id="attachment_1102" align="aligncenter" width="300" caption="jcBench Integer Comparison 2"][/caption] Very interesting to see how MIPS scales and how much of a difference 100mhz and double the Level 2 cache make on speed.
Just finished getting the features for jcBench 0.2 completed. The big addition is separate tests for integer and floating point numbers. The reason for adding this is that I heard years ago that the size of the Level 2 cache directly affected Floating Point performance. You would always hear of RISC cpus having several MegaBytes of cache, while my first 1ghz Athlon (Thunderbird) from December 2000 only had 256kb. As I get older, I get more and more skeptical of things I hear now or had heard in the past, thus the need to prove it to myself one way or the other. I'm still working on going back and re-running the floating point tests, so those will come later today, but here are the integer performance results. Note the y-axis is the number of seconds taken to complete the test, so lower is better. [caption id="attachment_1097" align="aligncenter" width="300" caption="jcBench 0.2 integer comparison"][/caption] Kind of a wide range of CPUs, ranging from a netbook cpu in the C-50, to a mobile cpu in the P920, to desktop cpus. Based on my current findings the differences vary much more greatly with floating point operations. A few key things I got from this data:
  1. Single threaded performance across the board was ridiculously slow, even with AMD's Turbo Core technology that ramps up a core or two and slows down the unused cores; another unsettling fact for developers who continue to not write parallel programs.
  2. The biggest jump was from 1 thread to 2 threads across the board
  3. The 600mhz MIPS R14000A is slightly faster than a C-50 in both single and 2 threaded tests. I finally found a very nearly equal comparison; I'm wondering if Turbo Core on the C-60 brings it in line.
  4. numalink really does scale; even over the no-longer-considered-fast numalink 3 connection, scaling across 2 Origin 300s using all 8 cpus really did increase performance (44 seconds versus 12 seconds).
More to come later today with floating point results...
Just got the initial C port of jcBench completed. Right now there are IRIX 6.5 MIPS IV and Win32 x86 binaries working; I'm hoping to add additional functionality and then merge the changes back into the original 4 platforms. I should note the performance numbers between the two will not be comparable. I rewrote the actual benchmarking algorithm to be solely integer based. That's not to say I won't add a floating point test, but going integer-only made sense after porting the C# code to C. That being said, after finding out a while back how the Task Parallel Library (TPL) really works, my multi-threading implementation using POSIX threads does things a little differently.

Where the TPL starts off with one thread and dynamically adds threads as processing continues, my implementation simply takes the number of threads specified via the command line, divides the work (in my case the number of objects) by the number of threads and kicks off all of the threads from the start. While the TPL's implementation is great for work where you don't know whether it will even hit the maximum number of cpus/cores efficiently, in my case it actually hinders performance. I'm now wondering if you can specify from the start how many threads to kick off? If not, Microsoft, maybe add support for that? I've got a couple scenarios I know would benefit from at least 4-8 threads initially, especially data migration work that I prefer to do in C# versus SSIS (call me a control freak).
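To illustrate the difference in C# terms, here is a minimal sketch of the static partitioning approach the C port takes (all threads created up front, work divided evenly); doWork is a stand-in for the actual benchmark kernel: [csharp]
// All threads are created and started immediately, each owning an equal slice of the work,
// mirroring the POSIX implementation described above (requires System.Threading)
static void runStaticPartition(int numberObjects, int threadCount, Action<int, int> doWork)
{
    int chunk = numberObjects / threadCount;
    List<Thread> threads = new List<Thread>();

    for (int i = 0; i < threadCount; i++)
    {
        int start = i * chunk;
        int end = (i == threadCount - 1) ? numberObjects : start + chunk; // last thread takes the remainder

        Thread worker = new Thread(() => doWork(start, end));
        threads.Add(worker);
        worker.Start();
    }

    // Only report a time once every worker has finished
    foreach (Thread worker in threads)
    {
        worker.Join();
    }
}
[/csharp]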

Back to jcBench: at least with the current algorithm, it appears that a 600mhz MIPS R14000A with 4MB of L2 cache is roughly equivalent to a 1200mhz Phenom II with 512kb of L2 cache and 6mb of L3 cache, at least in integer performance. This is based on a couple runs of the new version of jcBench. It'll be interesting to see whether numalink continues this 1 to 2 ratio. I'm hoping to see how different generations of AMD cpus compare to the now 10 year old MIPS cpu.
Kind of scratching my head as to why Silicon Graphics didn't include gigabit on their IO8 PCI-X card that comes with an Origin 300. I guess maybe back in 2000-2001, the demand for gigabit Ethernet wasn't enough? Personally, I had just upgraded to Fast Ethernet (100mbit) if only half duplex on a Hub. Scored an official Silicon Graphics Gigabit card off eBay for next to nothing, installed it with no problems and upon rebooting IRIX recognized it and am now only using the gigabit connection to the rest of my network. [caption id="attachment_1063" align="aligncenter" width="300" caption="Silicon Graphics Gigabit PCI-X Card"][/caption] Next up was another great find on eBay for ~$30 I got a Dual Channel 4gb LSI Logic PCI-X card that has built in IRIX support. Just waiting on a PCI Express 4gb card to put into my SAN. [caption id="attachment_1064" align="aligncenter" width="225" caption="LSI Dual Channel 4gb Fibre Channel"][/caption] [caption id="attachment_1065" align="aligncenter" width="225" caption="SGI Gigabit and LSI Logic Dual Channel 4gb Fibre PCI-X cards installed"][/caption]
I had been wondering what effect syntax would have on performance. Thinking the compiler might handle each form differently, I wanted to test my theory.

Using .NET 4.5 with a Win32 Console Application project type, I wrote a little application doing a couple trigonometric manipulations on 1 Billion Double variables.

For those that are not aware, using the Task Parallel Library you have 3 syntaxes to loop through objects:

Option #1 - Code within the loop's body
[csharp] Parallel.ForEach(generateList(numberObjects), item => { double tmp = (Math.Tan(item) * Math.Cos(item) * Math.Sin(item)) * Math.Exp(item); tmp *= Math.Log(item); }); [/csharp] Option #2 - Calling a function within a loop's body
[csharp] Parallel.ForEach(generateList(numberObjects), item => { compute(item); }); [/csharp] Option #3 - Calling a function inline
[csharp] Parallel.ForEach(generateList(numberObjects), item => compute(item)); [/csharp] That being said, here are the benchmarks for the 3 syntaxes run 3 times:
Option #1: 4.0716071 seconds, 3.9156058 seconds, 4.009207 seconds
Option #2: 4.0376657 seconds, 4.0716071 seconds, 3.9936069 seconds
Option #3: 4.040407 seconds, 4.3836076 seconds, 4.3056075 seconds
Unfortunately nothing conclusive, so I figured I'd make the operation more complex.

With the more complex operation in place, here are the benchmarks for the 3 syntaxes, this time run twice:
Option #1: 5.4444095 seconds, 5.7313278 seconds
Option #2: 5.5848097 seconds, 5.5633182 seconds
Option #3: 5.8793363 seconds, 5.6793248 seconds
Still nothing obvious, maybe there really isn't a difference?
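For reference, here is a minimal sketch of the sort of Stopwatch harness that could produce timings like the ones above; treat generateList here as a reconstruction rather than the exact helper I ran, while compute matches Option #1's body: [csharp]
static List<double> generateList(int numberObjects)
{
    List<double> items = new List<double>(numberObjects);

    for (int x = 1; x <= numberObjects; x++)
    {
        items.Add(x);
    }

    return items;
}

static void compute(double item)
{
    double tmp = (Math.Tan(item) * Math.Cos(item) * Math.Sin(item)) * Math.Exp(item);
    tmp *= Math.Log(item);
}

static void timeOption(string label, Action option)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    option();
    sw.Stop();

    Console.WriteLine(label + " " + sw.Elapsed.TotalSeconds + " seconds");
}
[/csharp] Each option then becomes a call like timeOption("Option #3", () => Parallel.ForEach(generateList(numberObjects), item => compute(item)));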
Found this blog post from 3/14/2012 by Stephen Toub on MSDN, which answers a lot of questions I had; it was also nice to have an approach I was considering earlier validated:
Parallel.For doesn’t just queue MaxDegreeOfParallelism tasks and block waiting for them all to complete; that would be a viable implementation if we could assume that the parallel loop is the only thing doing work on the box, but we can’t assume that, in part because of the question that spawned this blog post. Instead, Parallel.For begins by creating just one task. When that task is executed, it’ll first queue a replica of itself, and will then enlist in the processing of the loop; at this point, it’s the only task processing the loop. The loop will be processed serially until the underlying scheduler decides to spare a thread to process the queued replica. At that point, the replica task will be executed: it’ll first queue a replica of itself, and will then enlist in the processing of the loop.
So based on that response, at least in the current implementation of the Task Parallel Library in .NET 4.x, the approach is to slowly create parallel threads as resources allow, forking off new threads as soon, and in as great a number, as possible.
Started on a new, well, old project I've wanted to do since 2002-2003. Back then there was no real way to handle and manage network renderings with 3ds max 4.x/5.x aside from the very basic Backburner application. So I came up with netMAX, a Perl based manager that would email me when renders were done along with an ETA, so while I was at High School I could be notified and check on renderings. Fast forward 9-10 years: I've been out of 3D Animation/Rendering for several years, but after acquiring several rackmount servers I had been looking for a way to really utilize them. I had purchased Maya 2011 nearly 2 years ago, but hadn't really used it as I had always been a 3ds max guy. Looking around my old boxes of stuff, I uncovered my old 3ds max 4.2 and Maya 6.5 boxes. Happy I found Maya 6.5, as it was the last version to be supported on IRIX, I immediately looked into whether Maya 2011 could export to 6.5. Fortunately, by saving the Maya scene files in ASCII format and then adjusting several lines (or hundreds depending on the scene), I was able to write a C# script to parse out elements that 6.5 couldn't understand. Copying that Maya ASCII scene file (.ma) to my Origin 300 and running the following from the /usr/aw/maya6.5/bin folder: [bash] Render test.ma [/bash] Produced test.tiff: [caption id="attachment_1006" align="aligncenter" width="300" caption="jcPIDR First Test Image"][/caption] Still got a good bit of work left:
  1. Web Service to handle all of the traffic
  2. SQL Database Schema to store the job history
  3. Web Interface to submit/cancel/manage jobs
  4. Handle Textures properly
And if I get fancy, a built in way to export to jcPIDR from Maya.
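As for the C# clean-up script mentioned above, here is a hypothetical sketch of the general idea; the unsupported prefixes listed are placeholders, since the real list depends on whatever nodes Maya 6.5 rejects, and a production version would need to handle multi-line node blocks: [csharp]
// Strip lines a Maya 6.5 renderer would choke on from a Maya ASCII (.ma) scene exported by Maya 2011
// (requires System.IO; the prefixes below are placeholders, not the actual list)
static void cleanMayaAscii(string inputPath, string outputPath)
{
    string[] unsupportedPrefixes = { "requires \"stereoCamera\"", "createNode containerBase" };

    List<string> keptLines = new List<string>();

    foreach (string line in File.ReadAllLines(inputPath))
    {
        bool unsupported = false;

        foreach (string prefix in unsupportedPrefixes)
        {
            if (line.TrimStart().StartsWith(prefix))
            {
                unsupported = true;
                break;
            }
        }

        if (!unsupported)
        {
            keptLines.Add(line);
        }
    }

    File.WriteAllLines(outputPath, keptLines.ToArray());
}
[/csharp]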
Might have noticed the lack of postings in the last 2 weeks; I can sum it up in three words: Mass Effect 3. Having cut my gaming time considerably over the last year and a half, when a game like Mass Effect 3 comes out I fully indulge in it. I'm at around 33 hours at the moment, on the second to last mission. While I haven't finished it, I can safely say this is the best game I have ever played. While I miss the inventory system and scouting for objectives in the first game and some of the character driven recruitment aspects of the second, I feel like it took the best of the series and rolled it into an amazing game. On so many levels, I've never felt the way I do about Mass Effect 3. Knowing I've only got a few hours left in a 2.5 year journey with my Shepard and crew saddens me. That's something that has never happened to me previously. Maybe it is because I've never been attached to characters like I have in Mass Effect, or the fact that I've created my "own" story in the Mass Effect universe, versus the linearity of Final Fantasy or "rails" shooters like Call of Duty. Or maybe it is because it combines my favorite genre, Science Fiction, with an incredible story blended with action and drama. Hopefully DLC will come out to continue the story like Mass Effect 2 did, and even more so an Expansion Pack like what Bioware did for Dragon Age. [caption id="attachment_998" align="aligncenter" width="246" caption="Mass Effect 3 Collector's Edition"][/caption]
It has been years since I messed with MySQL on a non-Windows platform, so I had forgotten 2 simple commands after setup on my Origin 300: [sql] CREATE USER 'dbuser'@'%' IDENTIFIED BY 'sqlisawesome'; [/sql] [sql] GRANT ALL PRIVILEGES ON *.* TO 'dbuser'@'%' WITH GRANT OPTION; [/sql] The wildcard (%) after the username is key if you want access from other machines (in my case I wanted to use MySQL Workbench on my Windows 7 machine).
After having used PLINQ and the Concurrent collections for nearly 2 months now, I can say without a doubt, it is definitely the way of the future. This last week I used it extensively in writing a WCF Service that manipulated a lot of data and needed to return it to an ASP.NET client very quickly. And on the flip side it needed to execute a lot of SQL Insertions based on business logic pretty quickly. As of February 25th, 2012, I think the best approach to writing a data layer is:
  1. Expose all Data Layer access through a WCF Service, ensuring a clear separation between UI and Data Layers
  2. Use of ADO.NET Entity Models tied to SQL Stored Procedures that return Complex Types for objects rather than doing a .Where(a => a.Active).ToList()
  3. Process larger result sets with PLINQ, using Concurrent Collections (ie ConcurrentQueue or ConcurrentDictionary) and returning them to the Client (ASP.NET, WP7 etc) - see the sketch after this list
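To put the third point in more concrete terms, here is a minimal sketch of the shape such an Operation Contract could take; SomeEntity, getActiveUsersSP and USER_ITEM are placeholder names rather than anything from the actual service: [csharp]
[Serializable]
public struct USER_ITEM
{
    public int ID;
    public string Name;
}

public List<USER_ITEM> getActiveUsers()
{
    using (SomeEntity eFactory = new SomeEntity())
    {
        ConcurrentQueue<USER_ITEM> users = new ConcurrentQueue<USER_ITEM>();

        // The Stored Procedure returns a Complex Type; PLINQ fans the mapping out across cores
        eFactory.getActiveUsersSP().AsParallel().ForAll(row =>
        {
            users.Enqueue(new USER_ITEM { ID = row.ID, Name = row.Name });
        });

        return users.ToList();
    }
}
[/csharp]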
Next step in my opinion would be to add in intelligent AppFabric caching, like what the Smarty Template Engine did for PHP: just a clean way to cache pages while providing flexible ways to invalidate the cache. I am so glad I found that back in 2006 when I was still doing a lot of PHP work.
Finally got around to replacing the 4 80mm 40 decibel fans in my Origin 300 this morning. The noise from this one server was enough to travel from the basement inside a rack all the way to the 3rd floor Master Bedroom. Suffice it to say, I definitely couldn't run the server 24/7. Hunting around on Amazon, I found these 80mm Cooler Master fans, not too bad price wise and still put out decent air flow. [caption id="attachment_963" align="aligncenter" width="225" caption="New Cooler Master 80mm replacement fans for my SGI Origin 300"][/caption] Prep for the swap: [caption id="attachment_964" align="aligncenter" width="225" caption="My Quad R14k SGI Origin 300"][/caption] The original fan in case someone needed a part number: [caption id="attachment_965" align="aligncenter" width="225" caption="SGI Origin 300 Stock Fan"][/caption] As I was swapping in the new fans, I realized the fan connector was not the standard ATX style. Stock Connector: [caption id="attachment_966" align="aligncenter" width="300" caption="Stock SGI Origin 300 Fan Connector"][/caption] Versus the standard ATX connector: [caption id="attachment_967" align="aligncenter" width="300" caption="Repacement Cooler Master Fan ATX Connector"][/caption] Luckily the standard 4 pin Molex power connector for the 2 Ultra 160 drives is right next to the fans, so a little wiring job and voila: [caption id="attachment_968" align="aligncenter" width="300" caption="SGI Origin 300 with replacement fans installed"][/caption] Note, doing it this way will throw an error in the L1 Console and will shut your machine down. A way around it is to simply connect to the Origin 300 over a console connection and type: env off. This is dangerous though as the server will not shutdown automatically if a fan fails or the server overheats. Having said that, it came to my attention that IRIX does not install a Serial/Terminal client by default. The common cu is on the IRIX 6.5 Foundation CD 1 disk. Turn on the Subsystems Only in the IRIX Software Manager and scroll down until you see it. Chances are you're not running a base 6.5 install so you'll also need the first disk of your overlays (6.5.30 Overlay CD 1 in my case) in order to install it to resolve the package conflicts. After installing you may receive a "CONNECT FAILURE: NO DEVICES AVAILABLE". Open up vi or your favorite text editor and open up /etc/uucp/Devices Add in a line: Direct ttyd2 - 38400 direct Make sure the spaces are there. You can also try setting it up via the Serial Manager under the System Manager application. Afterwards, simply running: cu -l /dev/ttyd2 -s38400 Allowed me into my L1 console to turn off environment monitoring. Then hit Control + D to get back into the PROM Monitor and hit "1" to start IRIX.
Continuing my work on my secret project, I've been really intrigued by nmap's ability to determine the Operating System and Web Server of the host you are scanning. In every test I had done with it, it had always returned exactly what was running, even on more obscure hosts like those running IRIX. So I started my research into what would be necessary. From what I have read so far, you really have two main options for detection. You can either use the return values of an ICMP request or, if the host is running IIS, use the WebResponse Headers to determine the version of IIS running (Apache will return something like Apache/1.3.23). Digging into ICMP, I realized that would require a good bit more reading, so I chose the latter for Stage 1 of my detection mechanism. First off you need to retrieve the Server header from the WebResponse: [csharp] private string getHttpServerHeader(string ipAddress) { WebRequest webRequest = WebRequest.Create("http://" + ipAddress); WebResponse webResponse = null; string ServerHeader = String.Empty; try { webResponse = webRequest.GetResponse(); // Capture the Server header on a successful response as well ServerHeader = webResponse.Headers["Server"]; } catch (WebException ex) { if (ex.Response != null && ex.Response.Headers != null) { ServerHeader = ex.Response.Headers["Server"]; } } finally { if (webResponse != null) { webResponse.Close(); } } return ServerHeader; } [/csharp] Then using the string result of that function: [csharp] private string getWebServerName(string server) { // IIS Detection from HTTP.SYS if (server.StartsWith("Microsoft-HTTPAPI")) { switch(server.Split('/')[1]) { case "1.0": return "IIS 6.0"; case "2.0": return "IIS 6.0/7.x"; } } // IIS Detection not from HTTP.SYS if (server.StartsWith("Microsoft-IIS")) { return "IIS " + server.Split('/')[1]; } // If no conditional has trapped the Server entry, most likely the Web Server is either Apache or a masked IIS Server return server; } [/csharp] In reading about Fingerprinting it occurred to me that within IIS itself you can mask the version with a custom Server Header like "WS" for instance. I doubt it would prevent a true attack, but it might save a couple bytes per connection over Microsoft-IIS 7.5 :)
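Tying the two helpers together is then a one-liner; a quick hypothetical usage example: [csharp]
// Hypothetical scan target; substitute whatever host you are probing
string serverHeader = getHttpServerHeader("192.168.1.50");
Console.WriteLine("Detected web server: " + getWebServerName(serverHeader));
[/csharp]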
Came across an interesting problem yesterday: I was previously using HTTP for my WCF Mobile Service and a potential client wanted it encrypted. Rather than write custom encryption, I simply applied an SSL certificate to the IIS application and created an HTTPS binding in IIS. Afterwards I knew I had to update my Endpoint and Behavior to use HTTPS and temporarily allow Meta-Data so Visual Studio could update my Service Reference. This is where everything went downhill. The Service Configuration file in my WP7 application got wiped and the actual Reference.cs was virtually empty. After trial and error for several hours I ended up doing the following:
  1. Added a new WCF Reference using a brand new name
  2. Saved the Project/Solution
  3. Exited Visual Studio 2010
  4. Removed the new Reference
  5. Added the original WCF Reference using the original name
After all that it worked, so hopefully that saves someone hours of headache.
Last night while working on my Silicon Graphics Origin 300 and suffering with an old version of Mozilla circa 2005, as seen below: [caption id="attachment_896" align="aligncenter" width="300" caption="Mozilla 1.7.12 on IRIX"][/caption] I started wondering: these machines can function as a Web Server, MySQL server, firewall etc, especially my Quad R14k Origin 300, yet web browsing is seriously lacking on them. Firefox 2 is available over at nekoware, but that is painfully slow. Granted, I don't use my Origin for web browsing, but when I was using an R12k 400mhz Octane as my primary machine a few years ago (as I am sure others around the world still are) it was painful. I don't think this problem is solely for those on EOL'd Silicon Graphics machines, but for any older piece of hardware that does everything decently except web browsing. Thinking back to the Amazon Silk platform: using less powerful hardware but a brilliant software platform, Amazon is able to deliver more with less. The problem for the rest of the market is the diversity of the PC/Workstation space. The way I see it you've got 2 approaches to a "universal" cloud web renderer. You could either:
  1. Write a custom lightweight browser tied to an external WCF/Soap Web Service
  2. Write a packet filter inspector for each platform to intercept requests and return them from a WCF/Soap service either through Firefox Extensions or a lower level implementation, almost like a mini-proxy
Plan A has major problems because you've got various incarnations of Linux, IRIX, Solaris, VMS, Windows etc, all with various levels of Java and .NET/Mono support (if any), so a Java or .NET/Mono implementation is probably not the right choice. Thus you're left trying to make a portable C/C++ application. To cut down on work, I'd probably use a platform independent library like gSOAP to handle the web service calls. But either way the amount of work would be considerable. Plan B I've never done anything like before, but I would imagine it would be a lot less work than Plan A. I spent 2 hours this morning playing around with a WCF service and a WPF application doing something kind of like Plan A. [caption id="attachment_897" align="aligncenter" width="300" caption="jcW3CLOUD in action"][/caption] But instead of writing my own browser, I simply used the WebBrowser control, which is just Internet Explorer. The Web Service itself is simply: [csharp] public JCW3CLOUDPage renderPage(string URL) { using (WebClient wc = new WebClient()) { JCW3CLOUDPage page = new JCW3CLOUDPage(); if (!URL.StartsWith("http://")) { URL = "http://" + URL; } page.HTML = wc.DownloadString(URL); return page; } } [/csharp] It simply makes a web request based on the URL from the client, converts the HTML page to a String object and passes it into a JCW3CLOUDPage object (which would also contain images, although I did not implement image support). Client side (ignoring the WPF UI code): [csharp] private JCW3CLOUDReference.JCW3CLOUDClient _client = new JCW3CLOUDReference.JCW3CLOUDClient(); var page = _client.renderPage(url); int request = getUniqueID(); StreamWriter sw = new StreamWriter(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html"); sw.Write(page.HTML); sw.Close(); wbMain.Navigate(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html"); [/csharp] It simply makes the WCF request based on the URL, returns the HTML and writes it to a temporary HTML file for the WebBrowser control to read from. Nothing special; you'd probably want to add handling for specific pages, images and caching, but this was as far as I wanted to take it. Hopefully it'll help someone get started on something cool. It does not handle requests made from within the WebBrowser control, so you would need to override that as well (see the sketch below, after the list of thoughts). Otherwise only the initial request would be returned from the "Cloud", while subsequent requests would be made normally. This project would be way too much for myself to handle, but it did bring up some interesting thoughts:
  1. Handling Cloud based rendering, would keeping images/css/etc stored locally and doing modified date checks on every request be faster than simply pulling down each request fully?
  2. Would the extra costs incurred to the 3G/4G providers make it worthwhile?
  3. Would zipping content and unzipping it outweigh the processing time on both ends (especially if there was very limited space on the client)?
  4. Is there really a need/want for such a product? Who would fund such a project, would it be open source?
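Going back to the client piece for a moment: the missing override I mentioned above (requests made from inside the WebBrowser control) could look something like this rough sketch, which cancels the control's own navigation and re-routes the request through the WCF service: [csharp]
// Wired up in the constructor: wbMain.Navigating += wbMain_Navigating;
private void wbMain_Navigating(object sender, System.Windows.Navigation.NavigatingCancelEventArgs e)
{
    // Local temp files are our own already-rendered pages, let those through untouched
    if (e.Uri == null || e.Uri.IsFile)
    {
        return;
    }

    // Cancel the control's own web request and pull the page from the "Cloud" instead
    e.Cancel = true;

    var page = _client.renderPage(e.Uri.ToString());

    int request = getUniqueID();
    string tempFile = System.AppDomain.CurrentDomain.BaseDirectory + request + ".html";
    System.IO.File.WriteAllText(tempFile, page.HTML);

    wbMain.Navigate(tempFile);
}
[/csharp]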
After playing around with Google Charts and doing some extensive C#/SQL integration with it for a dashboard last summer, I figured I'd give Telerik's Kendo a shot. If you're not familiar with Telerik, they produce very useful controls for WinForm, WPF, WP7 and ASP.NET (in addition to many others). If you do .NET programming, their products will save you time and money, guaranteed. That being said, I started work on the first module for jcDAL last night and wanted to add some cool bar graphs to the web interface for the analyzer. After about 15 minutes of reading through one of their examples I had data coming over a WCF service into the Kendo API to display this: [caption id="attachment_880" align="aligncenter" width="621" caption="jcDBAnalyzer Screengrab showcasing Kendo"][/caption] So far so good; I'll report back with any issues, but so far I am very pleased. A lot of the headaches I had with Google Charts I haven't had yet (+1 for Telerik).
After some additional work getting used to the XAML-ish layout, I'm done with the initial jcBENCH Android port. You can download it from here. You will need Android 2.2 or higher for it to run. [caption id="attachment_874" align="aligncenter" width="244" caption="jcBENCH Android"][/caption] I've only tested this on a Dual 1.2ghz HTC Vivid. However, the results were interesting. Comparing single-threaded and multi-threaded operations produced curious numbers: running it in multi-threaded mode was actually 3 times slower. I'm not sure if the Task Parallel Library implementation on Monodroid was done poorly, or if there is a bug in Mono's detection of how many cores/cpus there are, but something isn't right. Single threaded, versus my HTC Titan's 1.5ghz SnapDragon, it lost out by ~23%, which makes sense given the 300mhz (or 20%) difference when comparing single cores to each other. All that being said, I'm content with jcBENCH for the moment until I hear feedback or come up with more features to add.
After some more thought about jcBENCH and what its real purpose is, I am going to drop the Solaris and IRIX ports. Solaris has a Mono port, but I only have a Sun Blade 100, which has a single cpu; not expecting a ton of performance from that. IRIX on the other hand, I have a Quad R14k 500 Origin 300, but no port of Mono exists. I could port it to Java, but then you really couldn't compare benchmarks with the Mono/.NET versions. I am about 50% done with the Android port and am just waiting for the OpenSuse 12.1 compatible MonoDevelop release so I can get started on the Linux port. After those 2 ports are completed I am thinking of starting something entirely new that I have been thinking about the last couple years. Those that deal with a SQL database and write a data layer for their .NET projects know the shortcomings of doing either:
  1. Using an ADO.NET Entity Model, adding your Tables, Views and Stored Procedures and then use that as is or extend it with some business logic
  2. Use an all custom data layer using the base DataTable, DataRows etc, wrap your objects with partial classes and create a "factory"
Both approaches have their pros and cons: the first takes a lot less time, but you also have a lot less control and it could be costly with all of the overhead. Both however will eventually fall apart down the road. The reason: they were built for one audience and one production server (or set of servers). How many times have you gone to your IT Manager and asked for a new Database server because it was quicker than really going back to the architecture of your data layer? As time goes on, this could happen over and over again. I have personally witnessed such an event. A system was designed and built for around 50 internal users, on a single cpu web server and a dual Xeon database server. Over 5 years later, the code has remained the same yet it's been moved to 6 different servers with ever increasing speed. Times have changed and will continue to change, workloads vary from day to day, servers are swapped in and out, so my solution is an adaptive, dynamic data layer. One that profiles itself and uses that data to decide whether to run single threaded LINQ queries or PLINQ queries, based on whether the added overhead of using PLINQ would outweigh the time it would take using only one cpu. In addition, it would use Microsoft's AppFabric to cache the commonly used, intensive queries that maybe only get run once an hour and whose data doesn't change for 24 hours. This doesn't come without a price of course; having only architected this model in my head, I can't say for certain how much overhead the profiling will add. Over the next couple months I'll be developing this, so stay tuned. jcBENCH, as you might have guessed, was kind of an early test scenario for testing various platforms and how they handled multi-threaded tasks of varying intensity.
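To make the profiling idea a little more concrete, here is a very rough sketch of the kind of decision the data layer would make; the row threshold is a placeholder for whatever the self-profiling actually measures on a given server: [csharp]
// _parallelThreshold is a placeholder; the real value would be tuned by recorded profiling data per server
private static int _parallelThreshold = 5000;

public static List<T> runFiltered<T>(IEnumerable<T> source, Func<T, bool> predicate)
{
    List<T> rows = source.ToList();

    // Small result sets: PLINQ's setup overhead outweighs any gain, stay single threaded
    if (rows.Count < _parallelThreshold)
    {
        return rows.Where(predicate).ToList();
    }

    // Larger result sets: spread the work across every available core
    return rows.AsParallel().Where(predicate).ToList();
}
[/csharp]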
With this new release I've fixed a bunch of things in Web Service and the main library that powers all of the clients. In addition I'm tracking a bit more information like the platform and benchmark version (thinking more down the road when specific versions are going to change in the logic in the actual algorithm). Also this release marks the first release of the GUI Mac OS X 10.6 (and later) client. [caption id="attachment_848" align="aligncenter" width="565" caption="jcBENCH on Mac OS X"][/caption] You'll need GTK 2.12 or newer and Mono 2.10.8 or newer to run it. Being my first Mac OS X application ever outside of iOS development, I can without a doubt say I cannot stand development on anything Apple. KVC has got to be the most bloated way of doing things I have ever seen. I am so glad I do not have to do native Mac OS X applications for a living. That being said though, I think it turned out pretty well in matching the WPF version.
Found a bug in either Mono or WCF, not sure which one. If you go to add a Web Reference in MonoDevelop (2.8.5) and have an Entity Model object as a parameter, some of its properties will be null on the service side. Debugging had me scratching my head, as the object was populated prior to being sent to the WCF Service. I hadn't tried it as a native WCF Service in MonoDevelop because I've had prior problems with that route. The solution I came up with was to create a serializable object in your WCF Service that wraps the object like so: [csharp] [Serializable] public class BenchResult { public string CPUName; .... } [/csharp] Then in your application your reference will contain the object you defined (in my case a BenchResult object), and upon populating it and passing it to your WCF Service all of the data will be populated. Nearly 2 hours gone down the drain on that issue; hopefully it helps someone else.
Found this just now going through some really old stuff. I originally posted this on July 7th, 2002. I'd probably do it quite a bit differently now in CS5, but someone might find it interesting, especially if they are on version 4.x or 5.x. Without further ado: About a week ago (5/25/02), I started work on a Star Wars movie. I knew I'd have to do lightsabers and force power effects, but doing it properly and making it look good took a long time the way I was doing it before I learned how to use masks. So you get the easy and better looking version. Have fun, and if you have any more questions just post them in the forum. [caption id="attachment_809" align="aligncenter" width="360" caption="How it should look when we are done"][/caption]
  • 1st step: Start After Effects (duh)
  • 2nd step: Make a new composition (control+n), select the resolution that your footage/image is
  • 3rd step: Add your image/footage (control+i)
Now your screen should look like this: [caption id="attachment_810" align="aligncenter" width="640" caption="Starting point for the Lightsaber Composition"][/caption]
  • 4th step: Add a solid (control+y), make it white and your composition size
  • 5th step: Now in the timeline with the solid 1 highlighted hit the little icon that looks like an eye (like in the screen cap)
[caption id="attachment_811" align="aligncenter" width="327" caption="Lightsaber Tutorial - Timeline after the 5th Step"][/caption]
  • 6th step: Hit G to bring up the pen mouse pointer, now put a point in each of the four vertices of your lightsaber
  • 7th step: Now if you have your solid shown by hitting the little place where the eye was, your to be lightsaber should be solid white
Now your screen should look like this: [caption id="attachment_814" align="aligncenter" width="640" caption="Lightsaber Composition #2"][/caption]
  • 8th step: Now make another solid (control+y), this time black, then in the timeline drag it so it's under the first solid with our lightsaber mask (like in the screen cap)
  • 9th step: then highlight your first solid (the lightsaber one), and hit control+d three times; you're duplicating it
  • 10th step: Now click the little arrow in the timeline right next to "Solid 1" and bring out the effects (like in the screen cap on the lower right)
  • 11th step: Then make the feathering value equal to 10
  • 12th step: Repeat this step for each of the solids, by increasing each feathering value by 10, except Solid 2
Now your screen should look like this: [caption id="attachment_815" align="aligncenter" width="640" caption="Lightsaber Composition #3"][/caption] And your timeline should look like this: [caption id="attachment_816" align="aligncenter" width="561" caption="Lightsaber Composition Timeline #3"][/caption]
  • 13th step: Now make another composition (control+n) and then copy your first composition and then your footage from the bin to the timeline
  • You probably have a black screen with your white glowing saber huh? that's good
  • 14th step: Now with your solid selected in the timeline goto the menu, select layer->transfer mode->screen, now you should see your footage as well
  • 15th step: You probably have a black screen with your white glowing saber huh?
Now your screen should look like this: [caption id="attachment_820" align="aligncenter" width="640" caption="Lightsaber Composition #4"][/caption] Nobody has a white lightsaber right?
  • 15th step: To add color, highlight your solid and then on the menu go to effect->adjust->color balance
  • 16th step: Now mess with the settings to get the color you want
  • (Possibly) 17th step: If you have more than one lightsaber, just repeat the steps
Now your screen should look like this: [caption id="attachment_821" align="aligncenter" width="640" caption="Lightsaber Composition #5"][/caption]
I don't remember the exact day I switched from Internet Explorer 6 to Phoenix (the codename Firefox went by in its early pre-1.0 days), but today I'm switching to Chrome. Since I work at home nearly every day and use my desktop virtually 24/7, I've been leaving my Firefox 9 browser open with 2 or 3 tabs. After an hour or so of use, Firefox is using over 1.2gb of ram; I'm not sure if it's a memory leak or if it's caching page content. I know Firefox 6 or 7 had a terrible memory leak that forced me to upgrade my wife's ram to 16gb since she leaves 20+ tabs open. I've also noticed in the last month or two that searching Google Images with lots of results brings Firefox to a halt (doesn't happen in IE 9 or Chrome). Much better: [caption id="attachment_801" align="aligncenter" width="470" caption="Chrome with Amazon Cloud Player and 2 other tabs"][/caption]
It's been a very long time since I released something, so without further ado, I present jcBENCH, a floating point CPU benchmark. Down the road I hope to add further tests; this was just something I wrote on the airplane coming back from San Francisco. You'll need .NET 4 installed; if you don't have it, click here or grab it from Windows Update. Click here to download the latest version. I'll make an installer in a few days; for now just unzip it somewhere on your machine and run it. Upon running it, the results get automatically uploaded to my server, and a results page will be created shortly.
Just getting started on a WCF Service that is going to handle all of the business logic for the company I work for. With the amount of data involved, caching was a necessity. I was playing around with Membase last week and couldn't quite get it to work properly, so yesterday afternoon I started with AppFabric, Microsoft's own answer to caching (among other things). It installs pretty easily and the built-in IIS extensions are very cool; however, the setup isn't for the faint of heart. To sum it up:
  1. Install AppFabric on your SQL Server with all of the options
  2. Then install AppFabric on your Web Server and create your WCF and ASP.NET sites
  3. From the PowerShell Cache console type: New-Cache <CacheName> (where <CacheName> is the name you want to call it, so it could be: New-Cache RandomTexels)
  4. Verify it got created with: Get-Cache. If you don't see it or you get an error, make sure the AppFabric Caching Service is running: open the Run window (Windows Key + R) and type services.msc; it should be one of the top items depending on your setup.
  5. After configuring your web server for .NET 4 and the usual web site permissions and settings in IIS, run this command: Grant-CacheAllowedClientAccount DOMAIN\WEBSERVER$, replacing DOMAIN with your domain name (i.e. MOJO) and WEBSERVER with the physical name of your web server. So for instance: Grant-CacheAllowedClientAccount MOJO\BIGWS$ for a domain called MOJO and a web server called BIGWS.
There are plenty of code examples out there, but as I build out the architecture for this WCF Service I'll discuss my findings.
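In the meantime, here's a minimal sketch of what the client side of the cache can look like, assuming the RandomTexels cache from step 3 and a hypothetical cache host named BIGWS (22233 is the default cache port); the types come from the Microsoft.ApplicationServer.Caching assemblies that ship with AppFabric:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

public class TexelCacheClient
{
    private readonly DataCache _cache;

    public TexelCacheClient()
    {
        // Point the factory at the cache host; the server name and port
        // here are assumptions for this sketch.
        var config = new DataCacheFactoryConfiguration
        {
            Servers = new List<DataCacheServerEndpoint>
            {
                new DataCacheServerEndpoint("BIGWS", 22233)
            }
        };

        var factory = new DataCacheFactory(config);
        _cache = factory.GetCache("RandomTexels");
    }

    // Cache-aside helper: return the cached value if present, otherwise
    // load it, cache it for 10 minutes and return it.
    public T GetOrLoad<T>(string key, Func<T> load) where T : class
    {
        var cached = _cache.Get(key) as T;
        if (cached != null)
            return cached;

        var value = load();
        _cache.Put(key, value, TimeSpan.FromMinutes(10));
        return value;
    }
}

The cache-aside pattern above (check the cache, fall back to the real load, then Put) is the sort of thing I expect most of the WCF operations to end up wrapping.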
I recently got a Rackable Systems 2U dual Dual-Core Opteron system off eBay for dirt cheap. The problem, however, as I noted previously, was the noise. I got 3 Antec fans and had my Antec 500w ATX power supply connected externally, which was much quieter, but I didn't feel comfortable leaving my ESXi server open like that, so I scrambled. I found the original 4U file server case I purchased back in 2009 and started the migration. [caption id="attachment_4001" align="aligncenter" width="300" caption="Old 4U File Server case on the left, 2U Rackable Systems Case on the right"][/caption]
[caption id="attachment_4002" align="aligncenter" width="225" caption="Motherboard inside 4U File Server Case"][/caption]
[caption id="attachment_4003" align="aligncenter" width="225" caption="Transplant powered on"][/caption]
[caption id="attachment_4004" align="aligncenter" width="300" caption="Revised Server Stack"][/caption]
I've been diving into WPF the last couple of weeks, focusing on the 2D animation elements, but today I started on the 3D side.  Easy enough; it's accelerated through Direct3D, so I can probably throw far more polygons/textures at it than the old GDI+ method I'm used to.  This morning I added 3D support to the new engine I started writing a few weeks ago.  It didn't take much, I just had to re-learn the 3D thought process as far as programming is concerned, since it had been a while since I had done 3D graphics programming. I ran into a small hiccup though when rendering out some 3D terrain: [caption id="attachment_4017" align="aligncenter" width="300" caption="Very Bland Looking Terrain"][/caption] It was almost as if the texture coordinates from 3ds max didn't carry over to the xaml file. After opening 3ds max 2011 back up, I noticed I hadn't created a UV Map for it; I was simply using the standard 3ds max texture coordinates. Voila... [caption id="attachment_4018" align="aligncenter" width="300" caption="Better looking Terrain"][/caption] Not bad, but not 2011 good either.  512x512 textures were acceptable 5-6 years ago, but 2048x2048 is the new "standard" from what I gather. Googling around a bit, I found a 2048x2048 uncompressed texture and applied it over the terrain... [caption id="attachment_4019" align="aligncenter" width="300" caption="Much better terrain"][/caption] It looks kind of weird with the trees not parallaxing, but the level of detail is much improved.  I still need to add in support for detail texturing, but it's a good base.
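Since the engine code itself isn't posted, here's a minimal sketch of the piece that actually bit me: wiring TextureCoordinates onto a WPF MeshGeometry3D. It's just a flat quad with a hypothetical terrain_2048.png texture standing in for the exported terrain:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

public static class TerrainSample
{
    public static Viewport3D BuildViewport()
    {
        var mesh = new MeshGeometry3D();

        // Four vertices of a flat quad standing in for a terrain tile.
        mesh.Positions.Add(new Point3D(-1, 0, -1));
        mesh.Positions.Add(new Point3D( 1, 0, -1));
        mesh.Positions.Add(new Point3D( 1, 0,  1));
        mesh.Positions.Add(new Point3D(-1, 0,  1));

        // UV coordinates, the piece that was missing when the 3ds max
        // export had no UV map applied.
        mesh.TextureCoordinates.Add(new Point(0, 0));
        mesh.TextureCoordinates.Add(new Point(1, 0));
        mesh.TextureCoordinates.Add(new Point(1, 1));
        mesh.TextureCoordinates.Add(new Point(0, 1));

        // Two triangles covering the quad.
        mesh.TriangleIndices = new Int32Collection(new[] { 0, 1, 2, 0, 2, 3 });

        // The texture file name is hypothetical.
        var material = new DiffuseMaterial(
            new ImageBrush(new BitmapImage(new Uri("terrain_2048.png", UriKind.Relative))));

        var viewport = new Viewport3D();
        viewport.Children.Add(new ModelVisual3D { Content = new GeometryModel3D(mesh, material) });
        viewport.Children.Add(new ModelVisual3D { Content = new AmbientLight(Colors.White) });
        viewport.Camera = new PerspectiveCamera(
            new Point3D(0, 2, 4), new Vector3D(0, -0.5, -1), new Vector3D(0, 1, 0), 60);
        return viewport;
    }
}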
Just scored 2 brand new AMD Opteron 2360SEs off eBay for $69.99 each shipped.  These are the highest-clocked quad core Opterons out there at 2.5ghz.  I had a dual quad core Opteron 2344HE setup back in Spring 2009, albeit with 1.7ghz cores for a total of 13.6ghz, versus my new server at 20ghz. It's kind of interesting thinking about it now: my main rig, a 6-core Phenom II, has 19.2ghz of aggregate power.  Might it have been cheaper to simply get another 1090T, an 890FX/990FX series AMD motherboard, and 16gb of ram and throw it all in the same case? Let's see...
  • Phenom II 1090T (3.2ghzX6) - $190
  • Asus Sabertooth 990FX motherboard - $210
  • 16gb G.Skill DDR3-1600 - $180
For a total of $580. Versus...
  • Rackable Systems with 2U Case, 2X2214HE Opterons - $150
  • 2 Opteron 2360SEs - $140
  • 10gb Kingston DDR2-5300 ECC - $90
For a total of $380, and since I plan on selling the original 2U case and Opterons I'll probably end up spending closer to $320. That's $200 saved off the bat, plus whatever I can get back.  In addition, I have plenty of room for more ram: 16 slots versus 4 slots on the 990FX motherboard.  Worth it?  I think so.
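For anyone who wants to double check the math, a throwaway snippet using only the figures listed above:

using System;

class BuildComparison
{
    static void Main()
    {
        // Prices come straight from the two part lists above.
        int phenomBuild  = 190 + 210 + 180;  // 1090T + Sabertooth 990FX + 16gb DDR3
        int opteronBuild = 150 + 140 + 90;   // 2U Rackable + 2x 2360SE + 10gb DDR2 ECC

        Console.WriteLine($"Phenom II build:  ${phenomBuild}");                    // $580
        Console.WriteLine($"Opteron build:    ${opteronBuild}");                   // $380
        Console.WriteLine($"Savings up front: ${phenomBuild - opteronBuild}");     // $200

        // Aggregate clock: cores times clock speed.
        Console.WriteLine($"Phenom II: {6 * 3.2}ghz, Opterons: {2 * 4 * 2.5}ghz"); // 19.2 vs 20
    }
}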
In case someone else is searching: the default login for Nexenta (at least in 3.0.5) is username root, password nexenta.