Taking a break from ASP.NET 5 (and Visual Studio 2015 CTP6) until the next CTP release, I've gone back to working on a project I started, for all intents and purposes, back in 1995: the MODEXngine. As mentioned back on August 8th, 2013, the MODEXngine is intended to be not only a game engine with the usual graphics, audio, input handling etc., but also a cloud platform. With cell phones and tablets having established themselves as viable platforms over the last several years, one can no longer simply focus on the PC (Win32/Linux/Mac OSX). Mobility also brings things that a traditional "PC" developer wouldn't run into: vastly different platform APIs, network drops, "5 minute or less" gameplay and supporting an eco-system that crosses all of those platforms. Coming at this as an enterprise developer who actively develops on virtually every platform (Windows, the Web, Android, iOS, Windows Phone and Windows Store), I feel as though I bring a fairly unique perspective on how everything can coexist while bringing the experience to each platform natively. I am putting myself in the shoes of someone who wants to develop a game for every platform but wants full native control over the platform, as opposed to an "Apache Cordova" approach, which solves the bigger problem of quickly delivering to multiple platforms but stalls when you have a feature that needs more native functionality (let alone speed). Another advantage I hope to bring is ease of use. Wrapping native-level calls with generic wrappers across the board should cut down on the issues of "how do I do that on platform XYZ", similar to how Xamarin Forms has made wrappers for iOS, Android and Windows Phone, but hopefully with fewer issues.

With the introduction and overall goals out of the way, let's dive into the details.

Language and Tech Details

A big decision (one that I am still not 100% decided on) is the overall language used for the platform. Keeping to just one language has the advantage that if someone knows the language I choose, he or she can develop against the entire platform. However, one language for a project of this scope goes against my "use the best tool for the job" principle. By utilizing my language of choice, C#, I would be committing people to utilizing Xamarin for iOS and Android deployments. For smaller projects, they could simply get away with the free license, but that would put an extra burden on the developer (or development team), who also has to incur the costs of simply getting into the various stores for iOS, Android and Windows. In the same breath, with Microsoft's big push for cross-platform development to Linux and Mac OSX, this might be overlooked (I am hoping that one day the license is just bundled with Visual Studio, so this point would be moot for the most part).

The bigger question that has been pestering me for quite some time is the graphics API to use. When Carmack gave his observations on Direct3D back in the mid-90s, when asked why there was only an OpenGL port of Quake, I chose to follow his path of using OpenGL. It made sense at the time and still does. It is supported by almost every platform, and its focus solely on graphics is something I appreciated far more (and still do) than the "I can do everything" model that DirectX followed. While it might be unfair to continue that mentality almost 20 years later, the idea still holds true. I can utilize OpenGL on Linux, iOS, Android, Mac OSX and regular Windows desktop applications, all in C#. The only platforms I would be excluding would be Windows Phone and Windows Store, which, as followers of this blog know, I love from both a consumer's and a developer's perspective in every aspect but Microsoft's stance on not allowing OpenGL to be natively supported, as they have done since Windows NT. Doing some research into this issue, I came across the ANGLE (Almost Native Graphics Layer Engine) project, which translates OpenGL ES calls to DirectX 9 or 11 calls for Windows Phone and Windows Store apps. As of right now I haven't dived into this library to see its full capabilities, but from the MSDN blog posts on it, this approach has been used in production-grade apps.

For the time being, I think utilizing C# across the board is the best approach. Web Developers who know ASP.NET would find the WebAPI service and libraries accessible, while Windows Phone/Store developers would find the engine libraries no different than utilizing a NuGet package.

The area where I want to be a lot more flexible is in the CRUD operations on data locally and in the cloud. In my mind, whether the data is on a device or in the cloud, during retrieval it should make no difference, akin to how easy Quake III made it to download levels from the game's server without having to leave and come back (as was the case in other games of that era). Obviously, if one isn't connected to the internet or the connection drops, handling needs to be in place to cover a hybrid situation, but for all intents and purposes such an implementation really isn't a huge endeavor if one designs the architecture with it in mind.
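To make that idea concrete, here is a rough sketch of what such a retrieval layer could look like. Everything below is hypothetical (none of these interfaces or names exist in MODEXngine yet, and a real version would be asynchronous with retry logic); it just illustrates the caller not caring where the data lives:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: the game asks for an asset by name and does not care
// whether it comes from local storage or the cloud.
public interface IAssetStore
{
    byte[] Fetch(string assetName);
}

public class HybridAssetStore : IAssetStore
{
    private readonly Dictionary<string, byte[]> _localCache = new Dictionary<string, byte[]>();
    private readonly IAssetStore _cloud;

    public HybridAssetStore(IAssetStore cloud) { _cloud = cloud; }

    public byte[] Fetch(string assetName)
    {
        byte[] cached;
        if (_localCache.TryGetValue(assetName, out cached))
        {
            return cached; // already on the device, no network needed
        }

        // In a real implementation this is where network-drop handling,
        // retries and offline fallbacks would live.
        var downloaded = _cloud.Fetch(assetName);
        _localCache[assetName] = downloaded;
        return downloaded;
    }
}

// Stand-in "cloud" used only for this demonstration.
public class StubCloudStore : IAssetStore
{
    public byte[] Fetch(string assetName)
    {
        Console.WriteLine("cloud fetch: " + assetName);
        return new byte[] { 1, 2, 3 };
    }
}

public static class Demo
{
    public static void Main()
    {
        var store = new HybridAssetStore(new StubCloudStore());
        store.Fetch("q3dm17.bsp"); // first call hits the "cloud"
        store.Fetch("q3dm17.bsp"); // second call is served locally, no fetch line printed
        Console.WriteLine("done");
    }
}
```

Whether the level shipped with the game or lives on the game's server, the caller just calls Fetch, much like Quake III's seamless level downloads.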

Along those same lines, a big question in my mind is the storage of user data, statistics, levels and other game content. A traditional .NET developer approach would be to utilize SQL Server 2014 and possibly Azure File Storage for the content (textures, audio files etc.). Open source developers coming from Python or PHP might be drawn to MySQL or MongoDB in place of SQL Server. My goal is to make the calls abstract so that you, the developer, can utilize whatever you wish. I more than likely will be using SQL Server for user data at the very least, but planning ahead for potentially billions of concurrent users, storing the rest of the data in that fashion would be extremely inefficient. Databases like Redis or ArangoDB might be a better choice for concurrent data, or perhaps even my own distributed key/value database, jcDB. Seeing as I am still setting up the overall architecture, this will evolve, and it will be interesting to start doing simulated performance tests while also taking into account how easy it is to interact with each of the databases for CRUD operations.
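A sketch of how that abstraction might look (again, these names are hypothetical); each backing store, whether SQL Server, MySQL, MongoDB, Redis, ArangoDB or jcDB, would supply its own implementation behind one interface:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical CRUD abstraction - game code only ever sees this interface,
// never the concrete database behind it.
public interface IGameStore
{
    void Save<T>(string key, T value);
    T Load<T>(string key);
    void Delete(string key);
}

// Trivial in-memory implementation, handy for unit tests and the kind of
// simulated performance comparisons mentioned above.
public class InMemoryGameStore : IGameStore
{
    private readonly Dictionary<string, object> _data = new Dictionary<string, object>();

    public void Save<T>(string key, T value) { _data[key] = value; }

    public T Load<T>(string key) { return (T)_data[key]; }

    public void Delete(string key) { _data.Remove(key); }
}

public static class StoreDemo
{
    public static void Main()
    {
        IGameStore store = new InMemoryGameStore();
        store.Save("player:42:score", 1337);
        Console.WriteLine(store.Load<int>("player:42:score"));
    }
}
```

Swapping SQL Server for Redis would then be a configuration decision rather than a rewrite.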


Even before my announcement in August of 2013, the year prior, in August of 2012, I had seen a huge disconnect between mobile/console games and PC games: the ability to mod. For myself, and I imagine others, back in the 90s the moddability of Doom and Quake (among others) expanded those games' communities in a way little else could. Whether it was as "simple" as a new deathmatch level or as extravagant as mods like Quake Rally, it made a huge difference in between major game releases. To this day I am not aware of any cross-platform games that support modding the way id Software did back in the 90s. Since I came up with that idea, technology has changed dramatically, but the idea is the same. Instead of a WCF Service and thinking small scale, I would use a WebAPI service hosted on Azure, using Azure Storage with containers for each game. With security being an even bigger issue now than it was almost 3 years ago, I would more than likely employ a human element of reviewing submitted mods prior to implementing a fully automated security scan.
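The WebAPI endpoint itself would be straightforward; the review pipeline is the interesting part. A hypothetical model of the submission flow (none of this is written yet, and the names are illustrative only) might look like:

```csharp
using System;

// Hypothetical mod submission flow: every upload lands in a pending state
// inside the game's storage container and only goes live after human review.
public enum ModStatus { PendingReview, Approved, Rejected }

public class ModSubmission
{
    public string GameId { get; private set; }
    public string ModName { get; private set; }
    public ModStatus Status { get; private set; }

    public ModSubmission(string gameId, string modName)
    {
        GameId = gameId;
        ModName = modName;
        Status = ModStatus.PendingReview; // nothing is ever live by default
    }

    public void Approve() { Status = ModStatus.Approved; }

    public void Reject() { Status = ModStatus.Rejected; }
}

public static class ModDemo
{
    public static void Main()
    {
        var mod = new ModSubmission("some-game-id", "rally-mod");
        Console.WriteLine(mod.Status);
        mod.Approve();
        Console.WriteLine(mod.Status);
    }
}
```

An automated security scan could later slot in as an extra state between PendingReview and Approved without changing callers.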

Release and what to look forward to

Those are the main talking points in my mind at this point, but as I get further into development they more than likely will expand, and the "features" list will need its own index.

I imagine at this point a big question on your mind is how soon this will be made available, even in an alpha state. Well, the good news is that as I develop the engine, I am committing all my code to GitHub under the MIT License (meaning you can use the code freely, but it comes without any warranty). Later on, when it is further along, if you do find it useful, a ping back would be appreciated, especially if you have ideas for ways to make it better.

As for a specific release date: knowing my free time is extremely unstable, and that I still have to deep dive into OpenGL ES far more than I have, I would not expect this to come to fruition until much later this year, especially with my bbXP project also competing for my free time (not to mention my master's program).

For any questions, comments or suggestions, please leave them in the comments section below or email me at jarred at jarredcapellman dot com.

Last night I presented at CMAP's main meeting on using SQL Server's Geospatial Data functionality with MVC and Windows Phone, with a focus on getting exposed to the data types and the associated SQLCLR functions.

As promised, you can download the full Visual Studio 2013 solution (PCL, MVC/WebAPI application and the Windows Phone 8.1 application), in addition to the SQL script to run on your database and the Powerpoint presentation itself here.

Included in the zip file as well is a Readme.txt with the instructions for what you would need to update before running the solution:
- Update the Database Connection String in the Web.Config
- Update the WebAPI Service's connection string in the PCL
- Update the Bing Maps API Key in the Index/View of the MVC App

Any questions, concerns or comments, leave them below or email me at jarred at jarredcapellman dot com. As stated during the presentation, this is not a production-ready solution (there is no caching, error handling, etc.), but merely a demo for anyone who wants to see how to begin using Spatial Data in their applications.
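For anyone who cannot dig through the full solution, the core of the demo boils down to the GEOGRAPHY data type and its SQLCLR methods. A minimal, hypothetical example (the table and column names here are made up for illustration, not taken from the demo):

```sql
-- A point near Baltimore (SRID 4326 = WGS 84, the GPS coordinate system)
DECLARE @here GEOGRAPHY = GEOGRAPHY::Point(39.2904, -76.6122, 4326);

-- Five closest stores, using the SQLCLR STDistance method (returns meters)
SELECT TOP 5
       StoreName,
       Location.STDistance(@here) AS MetersAway
FROM dbo.Stores
ORDER BY Location.STDistance(@here);
```

The WebAPI layer in the demo essentially wraps queries of this shape and hands the results to the MVC view and the Windows Phone app.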
I recently needed to migrate a database from SQL Server to MySQL. Knowing the reverse could be easily done with the Microsoft SQL Server Migration Assistant tool, I wasn't entirely sure how to achieve the opposite. Figuring MySQL Workbench (in my opinion the best "SQL Management Studio" equivalent for MySQL) had something similar built in, I was pleasantly surprised to find the Migration menu option. I entered my SQL Server credentials and my local MySQL database details, but was presented with a nondescript error, so I proceeded to the logs:
[bash]
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 186, in getCatalogNames
    return [ row[0] for row in execute_query(connection, query) ]
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 62, in execute_query
    return get_connection(connection_object).cursor().execute(query, *args, **kwargs)
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Could not find stored procedure 'sp_databases'. (2812) (SQLExecDirectW)")
Traceback (most recent call last):
  File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\workbench\", line 192, in thread_work
    self.func()
  File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 439, in task_fetch_schemata
    self.main.plan.migrationSource.doFetchSchemaNames(only_these_catalogs)
  File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 241, in doFetchSchemaNames
    catalog_names = self.getCatalogNames()
  File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\", line 205, in getCatalogNames
    return self._rev_eng_module.getCatalogNames(self.connection)
SystemError: ProgrammingError("('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Could not find stored procedure 'sp_databases'. (2812) (SQLExecDirectW)")"): error calling Python module function DbMssqlRE.getCatalogNames
ERROR: Retrieve schema list from source: ProgrammingError("('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Could not find stored procedure 'sp_databases'. (2812) (SQLExecDirectW)")"): error calling Python module function DbMssqlRE.getCatalogNames
Failed
[/bash]
Needing to get this migration performed ASAP, I proceeded to line 186 in C:\Program Files (x86)\MySQL\MySQL Workbench 6.1 CE\modules\, which read:
[bash]query = 'exec sp_databases'[/bash]
Knowing this was wrong I adjusted it to:
[bash]query = 'exec sys.sp_databases'[/bash]
I saved the file and restarted the Migration Tool: no errors, and my migration proceeded to work perfectly. Hopefully that helps someone else out there. I know versions 6.0 and 6.1 of MySQL Workbench are affected by this bug.

A little less than 2 months ago I had some crazy ideas for interacting with a SQL database from C#, as opposed to simply using ADO.NET or Microsoft's own Entity Framework. Not sure exactly how I was going to implement some of the features, I shelved it until I came up with a clean way to do so.

With this project I had four goals:
1. Same or similar syntax to Entity Framework - meaning I should be able to simply drop in my framework in place of Entity Framework with little to no changes.
2. Performance should be equal to or better in both console and WebAPI applications - covering both scenarios of desktop applications and normal for today, WebAPI Services returning results and executing SQL server side and then returning results to a client.
3. Implement my own caching syntax that puts the effort of caching on the Framework, not the user of the Framework.
4. Provide an easy way to generate strongly typed classes akin to Microsoft's Entity Framework.

This weekend I was able to achieve #1 and, to some degree, #2.

I was able to achieve an identical syntax to Entity Framework like in the snippet below:
[csharp]
using (var jFactory = new jcEntityFactory())
{
    jFactory.JCEF_ExecuteTestSP();
}
[/csharp]
In regards to performance, I wrote 2 tests: one that simply called a stored procedure with a single insert statement, and another that returned several thousand rows. To give somewhat real results, I directly referenced the framework in a console application and then wrote a WebAPI Service referencing the framework, along with a wrapper function to call the WebAPI Service from a console application.
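The timing harness itself was nothing fancy; conceptually it was along these lines (the loop body below is a placeholder for this sketch, where the real tests called the stored procedure through each framework):

```csharp
using System;
using System.Diagnostics;

public static class BenchmarkHarness
{
    // Runs the action the given number of times and returns the average seconds per call.
    public static double AverageSeconds(Action action, int iterations)
    {
        var stopwatch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            action();
        }
        stopwatch.Stop();
        return stopwatch.Elapsed.TotalSeconds / iterations;
    }

    public static void Main()
    {
        // Placeholder workload standing in for jFactory.JCEF_ExecuteTestSP()
        var average = AverageSeconds(() =>
        {
            var sum = 0;
            for (var i = 0; i < 1000; i++) { sum += i; }
        }, 100);

        Console.WriteLine(average >= 0 ? "ok" : "error");
    }
}
```

Stopwatch rather than DateTime subtraction keeps the measurements honest at sub-millisecond scales.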

Without further ado, here are the results running it with 10 to 1000 iterations:
[bash]
Console App Tests
JC EF 10 Iterations with average of 0.00530009
MS EF 10 Iterations with average of 0.05189771

WebAPI Tests
JC EF 10 Iterations with average of 0.18459302
MS EF 10 Iterations with average of 0.12075582

Console App Tests
JC EF 100 Iterations with average of 0.000740188
MS EF 100 Iterations with average of 0.005783375

WebAPI Tests
JC EF 100 Iterations with average of 0.018184102
MS EF 100 Iterations with average of 0.011673686

Console App Tests
JC EF 1000 Iterations with average of 0.0002790646
MS EF 1000 Iterations with average of 0.001455153

WebAPI Tests
JC EF 1000 Iterations with average of 0.0017801566
MS EF 1000 Iterations with average of 0.0011440657
[/bash]
An interesting note is the WebAPI performance difference compared to the console application. Sadly, with a WebAPI Service my framework is nearly twice as slow, but in console applications (and presumably WinForms and WPF as well) my framework was considerably faster.

So where does that leave the future of the framework? First off, I am going to investigate the performance discrepancies between the two approaches further. Secondly, I am going to add in caching support with the following syntax (assuming one would want to cache a query result for 3 hours):
[csharp]
using (var jFactory = new jcEntityFactory())
{
    jFactory.Cache(JCEF_ExecuteTestSP(), HOURS, 3);
}
[/csharp]
More to come with my framework as it progresses over the next few weeks. As far as a release schedule, once all four of my main project requirements are completed I will release a pre-release version on NuGet. I don't plan on open-sourcing the framework, but that may change further down the road. One thing is for sure, it will be freely available through NuGet.

I had an interesting request at work last week: perform deletions on several million rows in the two main SQL Server 2012 databases. For years, nothing had been deleted, only soft-deleted with an Active flag. In general, any time I needed to delete rows it usually meant I was testing a migration, so I would simply TRUNCATE the tables and call it a day, thus never utilizing C# and thereby Entity Framework. So what are your options?

Traditional Approach

You could go down the "traditional" approach:
[csharp]
using (var eFactory = new SomeEntities())
{
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL table, etc...

    foreach (var someObject in eFactory.SomeObjects.Where(a => idList.Contains(a.ID)).ToList())
    {
        eFactory.DeleteObject(someObject);
        eFactory.SaveChanges();
    }
}
[/csharp]
This definitely works, but if you have an inordinate amount of rows I would highly suggest not doing it this way as the memory requirements would be astronomical since you're loading all of the SomeObject entities.

Considerably better Approach

[csharp]
using (var eFactory = new SomeEntities())
{
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL table, etc...

    string idStr = String.Join(",", idList);

    eFactory.Database.ExecuteSqlCommand(String.Format("DELETE FROM dbo.SomeObjects WHERE ID IN ({0})", idStr));
}
[/csharp]
This approach creates a comma-separated string and then executes the SQL command. This is considerably better than the approach above in that it doesn't load all of those entity objects into memory and doesn't loop through each element. However, depending on the size of idList, you could get the following error:

Entity Framework 5 - Rare Event

An even better Approach

What I ended up doing to solve the problems of those above was to split the list and then process the elements on multiple threads.
[csharp]
private static List<string> getList(List<int> original, int elementSize = 500)
{
    var elementCollection = new List<string>();

    // If there are no elements, don't bother processing
    if (original.Count == 0)
    {
        return elementCollection;
    }

    // If the collection fits within a single element, return it whole
    if (original.Count <= elementSize)
    {
        elementCollection.Add(String.Join(",", original));
        return elementCollection;
    }

    var elementsToBeProcessed = original.Count;

    while (elementsToBeProcessed != 0)
    {
        var rangeSize = elementsToBeProcessed < elementSize ? elementsToBeProcessed : elementSize;

        elementCollection.Add(String.Join(",", original.GetRange(original.Count - elementsToBeProcessed, rangeSize)));

        elementsToBeProcessed -= rangeSize;
    }

    return elementCollection;
}
[/csharp]
[csharp]
private static void removeElements(IEnumerable<string> elements, string tableName, string columnName, DbContext objContext, bool debug = false)
{
    var startDate = DateTime.Now;

    if (debug)
    {
        Console.WriteLine("Removing Rows from Table {0} @ {1}", tableName, startDate.ToString(CultureInfo.InvariantCulture));
    }

    try
    {
        Parallel.ForEach(elements, elementStr =>
            objContext.Database.ExecuteSqlCommand(String.Format("DELETE FROM dbo.{0} WHERE {1} IN ({2})", tableName, columnName, elementStr)));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }

    if (!debug)
    {
        return;
    }

    var endDate = DateTime.Now;

    Console.WriteLine("Removed Rows from Table {0} in {1} seconds", tableName, endDate.Subtract(startDate).TotalSeconds);
}
[/csharp]
To utilize these methods you can do something like this:
[csharp]
using (var eFactory = new SomeEntities())
{
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL table, etc...

    var idStr = getList(idList);

    removeElements(idStr, "SomeObjects", "ID", eFactory);
}
[/csharp]
Note you could simplify this down to:
[csharp]
using (var eFactory = new SomeEntities())
{
    removeElements(getList(/* your int collection */), "SomeObjects", "ID", eFactory);
}
[/csharp]
Hopefully that helps someone else out there who runs into issues with deleting a massive amount of rows. Note I did try to utilize the Entity Framework Extended NuGet library, but ran into errors when trying to delete rows.
Continuing from last night's post, Part 1 of Migrating WordPress to MVC4, I spent some time today working on the visual display of the migrated posts.

Like most programmers using WordPress, I utilize the excellent SyntaxHighlighter JavaScript/CSS library for all of my code blocks. The caveat: those tags now exist throughout my posts, going back a year or more at this point. Luckily, my Regular Expression skills have gone up considerably with projects like my Windows Phone 8 app, jcCMAP, which utilizes XPath and Regular Expressions extensively.

So where do you begin? Like many migrations, you have a choice: do you migrate the data as-is into the new structure, or do you manipulate it, in this case preprocessing the tags? Being a firm believer in storing data in as bare a form as possible and worrying about the UI in my business and presentation layers, I am choosing to leave the tags as they exist. Luckily, the tags follow a very easy-to-parse syntax with brackets and the name of the language.

The first step from last night was some refactoring of the data layer, splitting it into a true 3-tier architecture. I first created a PostFactory class to interface with Entity Framework in my Windows Class Library:
[csharp]
public class PostFactory : IDisposable
{
    public List<Objects.Post> GetPosts(DateTime startDate, DateTime endDate)
    {
        using (var eFactory = new bbxp_jarredcapellmanEntities())
        {
            return eFactory.getPostListingSP(startDate, endDate)
                           .Select(a => new Objects.Post(a.ID, a.Created, a.Title, a.Body))
                           .ToList();
        }
    }
    ....
}
[/csharp]
This block grabs all of the posts for a given date range from the getPostListingSP stored procedure and then uses LINQ to translate them into a Post object that resides in my PCL library. The Post object exists in the Portable Class Library (PCL) so it can be utilized by both the MVC4 app and the eventual Windows Phone 8 app. Planning ahead and doing things right from the get-go will save you time; don't rush your initial architecture, or you'll pay for it later.

Next I created my Post object, which encapsulates the properties I want exposed to the clients (the MVC4 app and Windows Phone). Some might find it silly not to simply reuse the Entity Framework complex type object that the stored procedure mentioned above returns. I find that approach to be a lack of separation of concerns, crossing tiers between the data and UI layers. For a simple site I might overlook it, but for 99% of the things I do, I always have an object that acts as a middle man between the data and UI layers. Now onto the code:
[csharp]
public class Post
{
    // Properties
    public int ID { get; set; }

    public DateTime PostDate { get; set; }

    public string Title { get; set; }

    public string Body { get; set; }

    public string PostBy { get; set; }

    public Post(int id, DateTime postDate, string title, string body)
    {
        ID = id;
        PostDate = postDate;
        Title = title;
        Body = parsePost(body);
    }

    // Parse the SyntaxHighlighter tags and replace them with the SyntaxHighlighter <pre> tags
    private static string parsePost(string content)
    {
        var matches = Regex.Matches(content, @"\[(.*[a-z])\]");

        foreach (Match match in matches)
        {
            var syntaxTag = new SyntaxTag(match.Value);

            if (!syntaxTag.IsParseable)
            {
                continue;
            }

            if (syntaxTag.IsClosingTag)
            {
                content = content.Replace(syntaxTag.FullTagName, "</pre>");
            }
            else
            {
                content = content.Replace(syntaxTag.FullTagName, "<pre class=\"brush: " + syntaxTag.NameOnly + ";\">");
            }
        }

        return content;
    }
}
[/csharp]
Pretty stock code; the only "interesting" part is the regular expression that grabs all of the SyntaxHighlighter tags. For those writing Regular Expressions, I find it incredibly useful to use a tool like Regex Hero to build them, since you can test input on the fly without having to constantly rebuild your code and test. Next on the "to code" list was the SyntaxTag object.
[csharp]
public class SyntaxTag
{
    public SyntaxTag(string value)
    {
        FullTagName = value;
    }

    public string NameOnly
    {
        get { return FullTagName.Replace("[/", "").Replace("[", "").Replace("]", ""); }
    }

    public bool IsClosingTag
    {
        get { return FullTagName.StartsWith("[/"); }
    }

    public string FullTagName { get; private set; }

    // Acceptable syntax tags (there are more, but this is all I used previously)
    private enum SYNTAXTAGS
    {
        csharp, xml, sql, php, c, bash, shell, cpp, js, java, ps, plain
    }

    public bool IsParseable
    {
        get
        {
            SYNTAXTAGS tag;
            return Enum.TryParse(NameOnly, out tag);
        }
    }
}
[/csharp]
Again, a pretty basic class. Based on the full tag, it provides a clean interface to the Post class (or others down the road) without mucking up other areas of the code. One thing I did that many might find strange is using an enumeration to eliminate false positives. I am a huge fan of strongly typed code (thus why I shy away from languages that aren't), so it made perfect sense to utilize that approach here as well. As I add new tags, for whatever reason, the logic is contained only here, so I won't be hunting around for where to update it. Another less "clean" approach would be to put these in the web.config or in your SQL database, though I find both of those more performance-intensive and not necessary in this case. Now that the business and data layers are good to go for the time being, let's go back to our MVC4 app. Inside my controller, the code is still pretty simple for my Index:
[csharp]
public ActionResult Index()
{
    var model = new Models.HomeModel();

    using (var pFactory = new PostFactory())
    {
        model.Posts = pFactory.GetPosts(new DateTime(2001, 1, 1), new DateTime(2013, 4, 13));
    }

    ViewBag.Model = model;

    return View(model);
}
[/csharp]
At the moment I don't have my WCF Service written, so for the time being I am simply referencing the Windows Class Library mentioned above, which is why the PostFactory class appears directly in the controller. Then in my View:
[csharp]
@model bbxp.mvc.Models.HomeModel

@{
    ViewBag.Title = "Jarred Capellman";
}

@foreach (var post in Model.Posts)
{
    @Html.Partial("PartialPost", post)
}

<script type="text/javascript">
    SyntaxHighlighter.all()
</script>
[/csharp]
As I loop through each post I call out to my partial view, PartialPost. And for my partial view:
[csharp]
@model bbxp.lib.Objects.Post

<div class="post">
    <div class="Date">
        <h3>@Model.PostDate.ToLongDateString()</h3>
    </div>
    <div class="Content">
        <div class="Title">
            <h2>@Model.Title</h2>
        </div>
        <div class="Body">@(new MvcHtmlString(@Model.Body))</div>
    </div>
</div>
[/csharp]
The @(new MvcHtmlString(@Model.Body)) line is very important; otherwise your HTML tags will not be parsed as you would expect.

When all was said and done, I went from last night's result (the initial conversion, with the raw tags showing) to tonight's properly highlighted posts after applying the regular expressions to the post content.

Next up is creating the WordPress sidebar history and extending the functionality of the "engine" to support single post views.
Ran into an interesting scenario this afternoon while doing some house cleaning on my SQL Server 2008 R2 install. I had been bad last year: I had 7 databases that were no longer in use, but I hadn't detached them. Today I went through and detached them one by one, mainly because I was running out of memory on my web server. I assumed that as the databases were detached (one of them was actively being used all the way up to this morning), SQL Server would gradually free the cached queries that were no longer needed. To my surprise, it didn't. Even after leaving only 2 databases, both with only a few hundred rows, it still hovered at the exact same memory usage. It took restarting the SQL Server service to bring it back down to 74 MB. Is there a reason for this design? If this had been a critical production environment, I wouldn't have liked having to restart the entire service. Maybe Denali will "correct" this.
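For what it's worth, there are ways to coax SQL Server into giving memory back without bouncing the service. A couple of options (run with care on a production box, since clearing caches causes a temporary performance hit while they warm back up):

```sql
-- Throw away cached query plans
DBCC FREEPROCCACHE;

-- Throw away clean (unmodified) pages from the buffer pool
DBCC DROPCLEANBUFFERS;

-- Or cap the instance's memory so the buffer pool shrinks on its own
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 1024;
RECONFIGURE;
```

By design, SQL Server holds on to as much memory as it is allowed to (the buffer pool doesn't shrink just because a database is detached), so capping max server memory is the supported way to keep it in bounds.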
Learned a little "gotcha" yesterday when I was debugging a stored procedure that used COALESCE with both a parameter and the column I was writing a conditional on. I had something like the following:

[sql]
CREATE PROCEDURE dbo.getPeopleSP
    (@LocationID INT = NULL)
AS
SELECT dbo.People.ID, dbo.People.Name, dbo.People.LocationID
FROM dbo.People
WHERE dbo.People.LocationID = COALESCE(@LocationID, dbo.People.LocationID)
[/sql]

I was thinking that since @LocationID was optional, the COALESCE would take care of the rest if @LocationID was NULL. This is NOT the case. In order to do what I had wanted, you have to do this:

[sql]
CREATE PROCEDURE dbo.getPeopleSP
    (@LocationID INT = NULL)
AS
SELECT dbo.People.ID, dbo.People.Name, dbo.People.LocationID
FROM dbo.People
WHERE (@LocationID IS NULL OR dbo.People.LocationID = @LocationID)
[/sql]

Not a deal breaker for me, but I wish COALESCE had an option to handle it the way you would think it would, with NULL being treated as equal to NULL.
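The reason the COALESCE version misbehaves is worth spelling out: when @LocationID is NULL, the predicate collapses to LocationID = LocationID, and for any row whose LocationID is itself NULL that comparison evaluates to UNKNOWN, so those rows are silently filtered out:

```sql
-- NULL is never equal to NULL under SQL's three-valued logic:
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;
-- returns 'not equal' (the comparison is UNKNOWN, so the ELSE branch wins)

-- Which is why, with @LocationID = NULL, this predicate:
--     WHERE dbo.People.LocationID = COALESCE(@LocationID, dbo.People.LocationID)
-- drops every person whose LocationID IS NULL, while the
-- (@LocationID IS NULL OR ...) version keeps them.
```

So the rewritten procedure isn't just stylistic; it is the only one of the two that returns the NULL-location rows when the parameter is omitted.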