
Introduction

Taking a break from ASP.NET 5 (and Visual Studio 2015 CTP6) until the next CTP release, I've gone back to working on a project I started, for all intents and purposes, back in 1995: the MODEXngine. As mentioned back on August 8th, 2013, the MODEXngine is intended to be not only a game engine with the usual graphics, audio, input handling and so on, but also a cloud platform. With cell phones and tablets having established themselves as viable platforms over the last several years, one can no longer simply focus on the PC (Win32/Linux/Mac OSX). Mobility also brings things a traditional "PC" developer wouldn't run into: vastly different platform APIs, network drops, "5 minute or less" gameplay and supporting an ecosystem that crosses all of those platforms.

Coming at this as an enterprise developer who actively develops on virtually every platform (Windows, the Web, Android, iOS, Windows Phone and Windows Store), I feel I bring a fairly unique perspective on how everything can coexist while delivering the experience natively on each platform. I am putting myself in the shoes of someone who wants to develop a game for every platform but wants full native control, as opposed to an "Apache Cordova" approach, which solves the bigger problem of quickly delivering to multiple platforms but stalls when you have a feature that needs more native functionality (let alone speed). Another advantage I hope to bring is ease of use. By wrapping native-level calls with generic wrappers across the board, it should cut down on the "how do I do that on platform XYZ" issues, similar to how Xamarin Forms has made wrappers for iOS, Android and Windows Phone, but hopefully with fewer issues.

With the introduction and overall goals out of the way, let's dive into the details.

Language and Tech Details

A big decision (one that I am still not 100% decided on) is the overall language used for the platform. Keeping to just one language has the advantage that if someone knows the language I choose, he or she can develop against the entire platform. However, one language for a project of this scope goes against my "use the best tool for the job" principle. By utilizing my language of choice, C#, I would be committing people to utilizing Xamarin for iOS and Android deployments. For smaller projects they could simply get away with the free license, but that would put an extra burden on the developer (or development team), which already has to incur the costs of simply getting into the various stores for iOS, Android and Windows. In the same breath, with Microsoft's big push for cross-platform development to Linux and Mac OSX, this might be overlooked (I am hoping that one day that license is just bundled with Visual Studio, so this point would be moot for the most part).

The bigger question that has been pestering me for quite some time is the graphics API to use. When Carmack gave his observations on Direct3D back in the mid-90s, when asked why there was only an OpenGL port of Quake, I chose to follow his path of using OpenGL. It made sense at the time and still does. It is supported by almost every platform, and its focus purely on graphics is something I appreciated far more (and still do) than the "I can do everything" model that DirectX followed. While it might be unfair to continue that mentality almost 20 years later, the idea still holds true. I can utilize OpenGL on Linux, iOS, Android, Mac OSX and regular Windows desktop applications, all in C#. The only platforms I would be excluding are Windows Phone and Windows Store, which, as followers of this blog know, I love from both a consumer's and a developer's perspective in every aspect but Microsoft's stance of not supporting OpenGL natively on them, as they have on the desktop since Windows NT. Doing some research into this issue, I came across the ANGLE (Almost Native Graphics Layer Engine) project, which translates OpenGL ES calls to DirectX 9 or 11 calls for Windows Phone and Windows Store apps. As of right now I haven't dived into this library to see its full capabilities, but from the MSDN blog posts on it, this approach has been used in production-grade apps.

For the time being, I think utilizing C# across the board is the best approach. Web Developers who know ASP.NET would find the WebAPI service and libraries accessible, while Windows Phone/Store developers would find the engine libraries no different than utilizing a NuGet package.

The area where I want to be a lot more flexible is in the CRUD operations on data locally and in the cloud. In my mind, whether the data is on a device or in the cloud, retrieval should make no difference - akin to how easy Quake III made it to download levels from the game's server without having to leave and come back (as was the case in other games of that era). Obviously, if one isn't connected to the internet or drops the connection, handling needs to be in place for that hybrid situation, but for all intents and purposes such an implementation really isn't a huge endeavor if one designs the architecture with it in mind.

Along those same lines, a big question in my mind is the storage of user data, statistics, levels and other game content. A traditional .NET developer's approach would be to utilize SQL Server 2014 and possibly Azure File Storage for the content (textures, audio files etc.). Open source developers coming from Python or PHP might be drawn to MySQL or MongoDB in place of SQL Server. My goal is to make the calls abstract so that you, the developer, can utilize whatever you wish. I more than likely will be using SQL Server for user data at the very least, but planning ahead for potentially billions of concurrent users, storing the rest of the data in that fashion would be extremely inefficient. Databases like Redis or ArangoDB might be a better choice for concurrent data - or perhaps even my own distributed key/value database, jcDB. Seeing as I am still setting up the overall architecture, this will evolve, and it will be interesting to start doing simulated performance tests while also taking into account how easy it is to interact with each of the databases for CRUD operations.
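To give a rough idea of what "making the calls abstract" could look like, here is a minimal sketch in C#. All of the names (IStorageProvider, InMemoryProvider) are hypothetical placeholders, not actual MODEXngine APIs - the point is only that the engine would code against an interface while SQL Server, Redis, jcDB or anything else plugs in behind it:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical storage abstraction: the engine only ever sees this
// interface; concrete providers (SQL Server, Redis, jcDB, etc.)
// implement it behind the scenes.
public interface IStorageProvider<T>
{
    void Save(string key, T value);
    T Load(string key);
    bool Exists(string key);
}

// A trivial in-memory provider, handy for unit tests or as an
// offline fallback when the device has no connectivity.
public class InMemoryProvider<T> : IStorageProvider<T>
{
    private readonly Dictionary<string, T> _store = new Dictionary<string, T>();

    public void Save(string key, T value)
    {
        _store[key] = value;
    }

    public T Load(string key)
    {
        return _store[key];
    }

    public bool Exists(string key)
    {
        return _store.ContainsKey(key);
    }
}
```

Swapping SQL Server for Redis (or a cloud store for local storage) then becomes a matter of registering a different provider rather than rewriting game code.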

Moddability

Even before my announcement in August of 2013, the year prior, in August of 2012, I had seen a huge disconnect between mobile/console games and PC games: the ability to mod. For myself, and I imagine others, back in the 90s the moddability of Doom and Quake (among others) expanded those games' communities in a way nothing else could. Whether it was as "simple" as a new deathmatch level or as extravagant as mods like Quake Rally, it made a huge difference in between major game releases. To this day I am not aware of any cross-platform games that support modding the way id Software did back in the 90s. Since I came up with the idea, technology has changed dramatically, but the idea is the same. Instead of a WCF service and thinking small scale, I would use a WebAPI service hosted on Azure, using Azure Storage with containers for each game. With security being an even bigger issue now than it was almost 3 years ago, I would more than likely employ a human element of reviewing submitted mods prior to implementing a fully automated security scan.
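The submit-then-review flow above can be sketched in a few lines of C#. Everything here (ModSubmission, ModQueue) is an illustrative assumption on my part, not MODEXngine code - in a real deployment the queue would live behind the WebAPI service and the mod payloads in Azure Storage containers:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record for a submitted mod awaiting review.
public class ModSubmission
{
    public string Name { get; set; }
    public string Game { get; set; }
    public bool Approved { get; set; }
}

// Hypothetical moderation queue: mods enter unapproved and only
// become visible to game clients after a human reviewer signs off.
public class ModQueue
{
    private readonly List<ModSubmission> _queue = new List<ModSubmission>();

    // New submissions are never auto-published; an automated security
    // scan could run here before a reviewer ever sees the mod.
    public void Submit(string name, string game)
    {
        _queue.Add(new ModSubmission { Name = name, Game = game, Approved = false });
    }

    // Called once a human reviewer deems the mod safe.
    public void Approve(string name)
    {
        _queue.First(m => m.Name == name).Approved = true;
    }

    // Game clients only ever query this view.
    public IEnumerable<ModSubmission> Published()
    {
        return _queue.Where(m => m.Approved);
    }
}
```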

Release and what to look forward to

Those are the main talking points in my mind at this point, but as I get further into development these will more than likely expand, and the "features" list will need its own index.

I imagine at this point a big question on your mind is how soon this will be made available, even in an alpha state. The good news is that as I develop the engine, I am committing all my code to GitHub under the MIT License (meaning you can use the code freely, but it comes without any warranty). Later on, when it is further along, if you do find it useful, a ping back would be appreciated, especially if you have ideas for ways to make it better.

As for a specific release date: knowing my free time is extremely unstable, and that I still have to deep dive into OpenGL ES far more than I have, I would not expect to see this come to fruition until much later this year, especially with my bbXP project also competing for my free time (not to mention my master's program).

Any questions, comments or suggestions please leave them in the comments section below or email me at jarred at jarredcapellman dot com.

In working on the mips/NetBSD ports of jcBench and jcDBench, I realized I never did a Solaris port of either while working on my Sun Blade 2500 last fall. Without further ado, I am proud to announce the initial releases of both jcBench and jcDBench for sparc/Solaris, compiled with g++ 4.8.0. Both have zero dependencies, so just extract and run. Man pages and a true installer will come with the big 1.0 release (more on that at a later date).

You can download jcBench 0.8.753.0318 here.

You can download jcDBench 0.1.48.0318 here.

No known issues in my testing on Solaris 5.10 with my Sun Blade 2500.

Any suggestions, issues, comments etc. just leave a comment below and I'll get back to you as soon as possible.
After a little bit of work getting the platform-specific code working perfectly, I am proud to announce the initial releases of both jcBench and jcDBench for mips/NetBSD. Both have zero dependencies, so just extract and run. Man pages and a true installer will come with the big 1.0 release (more on that at a later date).

You can download jcBench 0.8.752.0306 here.

You can download jcDBench 0.1.47.0312 here.

No known issues in my testing on NetBSD 5.2.2 with my Cobalt Qube 2.

Any suggestions, issues, comments etc. just leave a comment below and I'll get back to you as soon as possible.
Continuing my work on my Cobalt Qube 2, I had some time tonight to re-install NetBSD on a Corsair NOVA 30GB SSD.

A fairly trivial hardware installation with a PATA<->SATA adapter:
Gateway Qube 2 - PATA<->SATA Adapter

Gateway Qube 2 - Corsair NOVA SSD

After installation I ran jcDBench, having already run it with the "stock" Seagate Barracuda ATA IV PATA drive. Expecting a huge boost in performance, I was disappointed to see these results:

[bash]
$ ./jcDBench
Running with no arguments...
#-----------------------------------------------
# jcDBench mips/NetBSD (0.1.47.0312)
# (C)2013 Jarred Capellman
#
# Test Date     : 3-13-2014 18:48:14
# Starting Size : 4096
# Maximum Size  : 4194304
# Iterations    : 100
# Filename      : testfile
#-----------------------------------------------
# test size        write        read
#   (bytes)        (MB/s)       (MB/s)
#-----------------------------------------------
       4096      1.83MB/s     6.17MB/s
       8192      3.01MB/s     10.3MB/s
      16384      6.77MB/s     13.2MB/s
      32768       8.5MB/s     11.4MB/s
      65536      3.55MB/s     10.9MB/s
     131072      3.83MB/s     11.6MB/s
     262144      3.78MB/s     12.1MB/s
     524288      3.87MB/s     12.4MB/s
    1048576       3.9MB/s     7.47MB/s
    2097152      3.91MB/s     7.56MB/s
    4194304      3.93MB/s     7.58MB/s
Benchmark Results Uploaded
[/bash]

The "stock" drive performed reads very similarly and performed writes considerably better than the theoretically much faster SSD. Convinced something else was wrong, I went through the dmesg output. Lo and behold:
[bash]
wd0 at atabus0 drive 0:
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 28626 MB, 58161 cyl, 16 head, 63 sec, 512 bytes/sect x 58626288 sectors
wd0: 32-bit data port
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd0(viaide0:0:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA)
Kernelized RAIDframe activated
boot device: wd0
root on wd0a dumps on wd0b
root file system type: ffs
viaide0:0:0: lost interrupt
	type: ata tc_bcount: 16384 tc_skip: 0
viaide0:0:0: bus-master DMA error: missing interrupt, status=0x20
wd0: transfer error, downgrading to Ultra-DMA mode 1
wd0(viaide0:0:0): using PIO mode 4, Ultra-DMA mode 1 (using DMA)
wd0a: DMA error reading fsbn 28640352 of 28640352-28640383 (wd0 bn 34144032; cn 33873 tn 0 sn 48), retrying
wd0: soft error (corrected)
viaide0:0:0: lost interrupt
	type: ata tc_bcount: 16384 tc_skip: 0
viaide0:0:0: bus-master DMA error: missing interrupt, status=0x20
wd0: transfer error, downgrading to PIO mode 4
wd0(viaide0:0:0): using PIO mode 4
wd0a: DMA error reading fsbn 31656736 of 31656736-31656767 (wd0 bn 37160416; cn 36865 tn 7 sn 55), retrying
wd0: soft error (corrected)
[/bash]

For whatever reason the drive got downgraded to PIO mode 4 - 16.6MB/sec, down from the 25MB/sec Ultra-DMA mode 1 offers. Doing some research on the issue, two suggestions came up: 1) the drive is failing (possible, but unlikely considering I just purchased it) and 2) the controller or adapter is faulty. Having never used this adapter in another system, I am leaning towards the latter - I will have to pull out the adapter I am using for my DEC Personal Workstation 433a, which I know works perfectly.

More to come on this issue...
Hoping to wrap up the Windows Store, ARM/Windows Phone 8, SPARC/Solaris 10 and ia64/Linux ports this week, but in the meantime here is another port: MIPS/Irix:

MIPS/Irix

In case someone missed the other releases:
x86/Mac OS X
x86/Linux
Power PPC/Mac OS X
x86/Windows

In related news, jcBENCH will have its big official 1.0 release across the same platforms.
A day later than I wanted, but here are the links to the x86/Linux and x86/Mac OS X binaries of the initial jcDBench release:
x86/Mac OS X
x86/Linux

And for those that hadn't seen the initial post, here are the ppc/Mac OS X and x86/Win32 binaries:
Power PPC/Mac OS X
x86/Windows

After a few weeks of development, I'm proud to announce the initial release of jcDBench, a cross-platform disk benchmarking tool with the ability to upload results. The results are anonymous; the only things submitted are the block sizes, your platform and your scores. In the next version I will add the ability to compare your results with others'.

[bash]
#-----------------------------------------------
# jcDBench x86/Win32 (0.1.45.1029)
# (C)2013 Jarred Capellman
#
# Test Date     : 10-28-2013 20:28:2
# Starting Size : 4096
# Maximum Size  : 4194304
# Iterations    : 10
# Filename      : testfile
#-----------------------------------------------
# test size        write        read
#   (bytes)        (MB/s)       (MB/s)
#-----------------------------------------------
       4096     0.625MB/s   0.0391MB/s
       8192    0.0781MB/s   0.0781MB/s
      16384     0.156MB/s    0.156MB/s
      32768         5MB/s    0.313MB/s
      65536        10MB/s    0.625MB/s
     131072      1.25MB/s     1.25MB/s
     262144        40MB/s       40MB/s
     524288        80MB/s       80MB/s
    1048576        80MB/s      160MB/s
    2097152       107MB/s      160MB/s
    4194304       128MB/s      107MB/s
Benchmark Results Uploaded
[/bash]

For this initial release I made native binaries for the following platforms:
Power PPC/Mac OS X
x86/Windows

More platforms to come tomorrow, please leave feedback below.
For frequent visitors to this site, you might have noticed the "metro" look starting to creep into the site design. Going back to April, my game plan was to simply focus on getting the site functioning and to duplicate the look and feel of the WordPress theme I had been using. The "refit" started a little later than expected, but I think when it is completed it will be pretty cool. For now, I know the site only looks "perfect" in IE 11; there's some tweaking to be done for Chrome and Firefox, but that's on the back burner until I round out the overall update.

In addition, the My Computers page has been updated to include a breakout of the non-x86 systems I have. As I get time, I'm going to flesh out the detail design and start to get content in there for each system. The later-generation Silicon Graphics machines seem to have little to no documentation available as far as what upgrades are possible, things to look for and min/max configurations, not to mention the "loudness" factor. Look for a big update to that section in the coming weeks.

Lastly, in regards to development, jcDB is progressing well on both ia64/Linux and x86/Win32. I am still on track to wrap up a 0.1 release by the end of the year, with a C# library to assist with interfacing with the database. MODEXngine is also progressing well; I spent a little bit of time over the last two weeks working on the SDL interface for the platforms that don't support Mono.

This week overall, I expect to post a few DEC Alpha related posts and with any luck get some time to play with NT4/Windows 2000 on one of them.
If you've been following my blog posts over the last couple of years, you'll know I have a profound love of using XML files for reading and writing for various purposes. The files are small, and because of things like Typed Datasets in C# you can have clean interfaces to read and write XML files. On Windows Phone, however, you do not have Typed Datasets, so you're stuck utilizing the XmlSerializer to read and write. To make that a little easier, going back to last Thanksgiving I wrote some helper classes in my NuGet library, jcWPLIBRARY. The end result: within a few lines you can read and write List collections of class objects of your choosing.

So why continue down this path? Simple answer: I wanted it better. Tonight I embarked on a "Version 2" of this functionality that makes it easy to keep using your existing Entity Framework knowledge, but provides the functionality of a database on a Windows Phone 8 device, which currently doesn't exist in the same vein it can in an MVC, WinForm, WebForm or Console app. To make this even more of a learning experience, I plan to blog the entire process, starting with the first part of the project: reading all of the objects from an existing file.

To begin, I am going to utilize the existing XmlHandler class in my existing library. This code has been battle tested, and I feel no need to write something from scratch, especially since I am going to leave the existing classes in the library so as not to break anyone's apps or my own. First thought: what does an XmlSerializer file actually look like when written to? Let's assume you have the following fairly basic class:
[csharp]
public class Test : jcDB.jObject
{
    public int ID { get; set; }
    public bool Active { get; set; }
    public string Name { get; set; }
    public DateTime Created { get; set; }
}
[/csharp]
The output of the file looks like so:

[xml]
<?xml version="1.0" encoding="utf-8"?>
<ArrayOfTest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Test>
    <ID>1</ID>
    <Active>true</Active>
    <Name>Testing Name</Name>
    <Created>2013-04-03T20:47:09.8491958-04:00</Created>
  </Test>
</ArrayOfTest>
[/xml]

I often forget the XmlSerializer uses the "ArrayOf" prefix on the name of the root object, so when testing with sample data while writing a new Windows Phone 8 app I have to refer back - hopefully that helps someone out. Going back to the task at hand - reading data from an XML file and providing an "Entity Framework"-like experience - that requires a custom LINQ Provider and another day of programming. Stay tuned for Part 2, where I go over creating a custom LINQ Provider bound to an XML file.
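For anyone who wants to try the round trip themselves, here is a minimal, self-contained sketch of reading that exact XML back into a List with the stock .NET XmlSerializer. I've dropped the jcDB.jObject base class so the snippet compiles on its own, and the ReadTests helper name is mine, not jcWPLIBRARY's:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Test
{
    public int ID { get; set; }
    public bool Active { get; set; }
    public string Name { get; set; }
    public DateTime Created { get; set; }
}

public static class XmlReadDemo
{
    // The exact shape XmlSerializer writes for a List<Test> -
    // note the "ArrayOfTest" root element.
    public const string SampleXml = @"<?xml version=""1.0"" encoding=""utf-8""?>
<ArrayOfTest xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
  <Test>
    <ID>1</ID>
    <Active>true</Active>
    <Name>Testing Name</Name>
    <Created>2013-04-03T20:47:09.8491958-04:00</Created>
  </Test>
</ArrayOfTest>";

    // Deserialize the whole collection in one call; the serializer is
    // constructed for List<Test>, which is what produces the
    // ArrayOfTest root element on the write side.
    public static List<Test> ReadTests(string xml)
    {
        var serializer = new XmlSerializer(typeof(List<Test>));
        using (var reader = new StringReader(xml))
        {
            return (List<Test>)serializer.Deserialize(reader);
        }
    }
}
```

Writing is the mirror image: call Serialize with the same List<Test> and you get the ArrayOfTest document back out.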
After playing around with Google Charts and doing some extensive C#/SQL integration with it for a dashboard last summer, I figured I'd give Telerik's Kendo a shot. If you're not familiar with Telerik, they produce very useful controls for WinForm, WPF, WP7 and ASP.NET (in addition to many others). If you do .NET programming, their products will save you time and money, guaranteed. That being said, I started work on the first module for jcDAL last night and wanted to add some cool bar graphs to the web interface for the analyzer. After about 15 minutes of reading through one of their examples, I had data coming over a WCF service into the Kendo API to display this:

[caption id="attachment_880" align="aligncenter" width="621" caption="jcDBAnalyzer Screengrab showcasing Kendo"][/caption]

So far so good. I'll report back with any issues, but so far I am very pleased. A lot of the headaches I had with Google Charts I haven't had yet (+1 for Telerik).