
Going back to 11/26/1997 (scary that it has been almost 18 years), I've been fascinated by artificial intelligence. In November 1997 I was still pretty much doing QBasic with a little Visual Basic, so it is not surprising that the sole surviving code snippet I have from my Jabba Chat application (yes, I was a huge Star Wars fan even back then) is in QBasic:

PRINT "Welcome to JABBA CHAT"
PRINT "Your Jabba friend, Karen"
PRINT "What is your first name", name$
CLS
PRINT "Well HI There "; name%
PRINT "It sure is neat to have You Drop by"
PRINT "Press space bar when ready you're ready to start"


As simple as it may be, at a basic level this is following a pre-programmed path, taking input and "learning" a person's name. Nowadays, with programming languages better structured for AI, far more processing power, and a better overall understanding of how we humans think, there has never been a better time to dive into this area.

As stated in a previous post back in July 2014, I've become heavily invested in making true Artificial Intelligence work in C#. While working on a fully automated scheduling system at work, I ran into a lot of big questions I had never encountered in my 15 years of professional development, with one in particular:

How do you not only replicate a human's job, but also make it better by taking advantage of one of the biggest advantages a computer has over a human being: the ability to process huge data sets extremely fast (and consistently produce results)?

The answer wasn't the solution; instead I realized I was asking the wrong question, and only after really deep diving into the complexities of the rather small "artificial intelligence" engine did I come to realize this. The question should have been: what drives a human to make decisions? The simple programmatic solution to that question is to go through and apply conditionals for every scenario. Depending on the end goal of the project, that may be a good choice, but if the solution is more complex or hits one of the most common events in a computer program, the unexpected, that approach can't be applied.

This question drove me down a completely different path, thinking about how a human being makes decisions when he or she has yet to encounter a scenario, an unhandled exception if you will. Thinking about decisions I have had to make throughout my life, big or small, I have relied on past experience. An application just going to production has no experience; each nanosecond is a new experience, for all intents and purposes a human infant. Remembering back as far as I can to when I was 4, sometimes you would fail or make a mistake, as we all have; the key, as our parents instilled in us, was to learn from the mistake or failure so we wouldn't make it again.

Applications for the most part haven't embraced this. Most of the time a try/catch is employed, with an even less likely alert to a human notifying them that their program ran into a new experience (or possibly a repeated one, if the error is caught numerous times before being patched, if ever). The human learns of the "infant's" mistake and hopefully corrects the issue. The problem here is that the program didn't learn; it simply was told nothing more than to check for a null object or the appropriate way to handle a specific scenario, i.e. a very rigid form of advancement. This has been the accepted practice for as long as I have programmed: a bug arises, a fix is pushed to staging, tested and pushed to production (assuming it wasn't a hotfix). I don't think this is the right approach any longer. Gone are the days of extremely slow x86 CPUs or limited memory. In today's world we have access to extremely fast GPUs and vast amounts of memory that largely go unused, coupled with languages that facilitate anything we as programmers can dream up.

So where does the solution really reside?

I believe the key is to architect applications to become more organic, in that they should learn from paths taken previously. I have been bouncing this idea around for the last several years, looking at self-updating applications where metrics captured during use could be used to automatically update the application's own code. The problem with this is that you're relying on both the original programming to affect the production code and on the code it would be updating. Not to mention ensuring that changes made automatically are tracked and reported appropriately. I would venture most larger applications would also need projections to be performed prior to any change, along with the scope of what was changed.

Something I added to the same platform at work was tracking of every request: the user, what he or she was requesting, the timestamp and the duration from the initial request to the returning of information or the processing of the request. To me this not only provides the audit trails that most companies desire, and thereby the ability to add levels of security to specific pieces of information regardless of the platform, but also the ability to surface patterns like "John Doe requests this piece of information and then this other piece of information a few seconds later, every time." At that point the system could look for these patterns and alert the appropriate party. In that example, is it that the user interface for what John Doe needs requires two different pages, or was he simply investigating something? Without this level of granularity you are relying on the user to report these "issues", which rarely happens, as most users get caught up in simply completing their tasks as quickly as possible.
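To make that concrete, a minimal sketch of the kind of audit record being described might look like the following (the class and property names here are illustrative, not the actual platform's schema):

[csharp]
using System;

// Illustrative only: one entry per request, capturing who asked for what,
// when, and how long it took end to end.
public class RequestAuditEntry
{
    public int UserID { get; set; }
    public string RequestedItem { get; set; }
    public DateTime RequestedOn { get; set; }
    public TimeSpan Duration { get; set; }
}
[/csharp]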

Going forward I hope to add automatic reporting of trends and take a pro-active approach to performance problem areas of the system (if metrics for the last 6 months are returning a particular request in .13 seconds on average and then for the last week it jumps to 3.14 seconds on average, the dev team should be alerted so they can investigate the root cause). However, these features are still a far cry from my longer-term goal of designing a system that truly learns. More on this in the coming months as my next-generation ideas come to fruition.
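As a rough sketch of what that check could look like (building on the illustrative RequestAuditEntry above; the one-week/six-month windows and the 2x threshold are arbitrary choices of mine, not settings from the actual system):

[csharp]
using System;
using System.Collections.Generic;
using System.Linq;

public static class PerformanceTrends
{
    // Compares the average duration of a request over the prior six months
    // against the average for the last week and flags a large regression.
    public static bool HasRegressed(IEnumerable<RequestAuditEntry> entries, string requestName)
    {
        var now = DateTime.UtcNow;

        var history = entries.Where(e => e.RequestedItem == requestName).ToList();

        var baseline = history
            .Where(e => e.RequestedOn >= now.AddMonths(-6) && e.RequestedOn < now.AddDays(-7))
            .Select(e => e.Duration.TotalSeconds)
            .DefaultIfEmpty(0)
            .Average();

        var lastWeek = history
            .Where(e => e.RequestedOn >= now.AddDays(-7))
            .Select(e => e.Duration.TotalSeconds)
            .DefaultIfEmpty(0)
            .Average();

        // Alert when the last week is dramatically slower than the six-month
        // baseline (e.g. 0.13s on average jumping to 3.14s).
        return baseline > 0 && lastWeek > baseline * 2;
    }
}
[/csharp]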

Intro

Yesterday I posted about my GitHub repo work over the last couple of weeks. One project I intentionally did not include was jcLSL. Followers of my blog know I had been toying with writing my own scripting language for quite some time, but kept hitting roadblocks in parsing and handling anything somewhat complex. This led me to think about what I would really want in a scripting language and what value it would bring to the already feature-rich C# ecosystem that I live and work in every day.

The Problem

At the end of the day I found that I really only want a scripting language for doing mail merges on strings. Expanding on that, how many times have you needed to take a base template and populate it with a class object, or even apply some simple business logic? I'm sure we've all done something like:

[csharp]
public string ParseString(string sourceString, string stringToReplace, string stringValue)
{
    return sourceString.Replace(stringToReplace, stringValue);
}
[/csharp]

While at a simplistic level this is acceptable, and a better approach than not wrapping the Replace call at all, it isn't ideal, especially if you are working with POCOs (Plain Old CLR Objects); then your code gets a lot more dirty. Assuming you wrapped the calls like in the above function, let's say you have a basic User class definition like so:

[csharp]
public partial class User
{
    public int ID { get; set; }
    public string Name { get; set; }
}
[/csharp]

And then in your parsing code, assuming it also lives in the User class:

[csharp]
public string ToParsedString(string sourceString)
{
    sourceString = sourceString.Parse("ID", this.ID);
    sourceString = sourceString.Parse("Name", this.Name);

    return sourceString;
}
[/csharp]

As you can guess, not only is that code an eyesore, it doesn't scale as more properties are added to your class, and you'd end up adding something similar to every POCO in your code base - not ideal. This brought me to my first objective: solving this problem in an extremely clean fashion.

Earlier this week I got it to the point where any POCO with a jcLSLMemberAttribute decorating a property will be automatically parsed, handling the situation of code changing over time (some properties could go away, be renamed or added). With my current implementation, all you need is to define a jcLSLGenericParser and then call the Run method, passing in the string to mail merge and the class object you wish to merge from, like so:

[csharp]
var TEST_STRING = "Hello {Name} This is a test of awesomeness with User ID #{ID}";

var user = new User { ID = 1, Name = "Testing" };

var gParser = new jcLSLGenericParser();
var parsedString = gParser.Run(TEST_STRING, user);
[/csharp]

After running that block of code, parsedString will contain: Hello Testing This is a test of awesomeness with User ID #1. There is an optional event on the jcLSLGenericParser class as well if more custom parsing needs to be achieved.
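For reference, the decorated version of the User class from above would presumably look something along these lines (the exact attribute usage may differ slightly from what is in the repo):

[csharp]
public partial class User
{
    [jcLSLMember]
    public int ID { get; set; }

    [jcLSLMember]
    public string Name { get; set; }
}
[/csharp]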

Going forward

One of the first things to do is to add some more options for different scenarios. Maybe you've got a large code base and going through and adding the jcLSLMemberAttribute decoration would be a task in itself. One approach to solve this scenario is to add an optional parameter to the jcLSLGenericParser constructor to simply iterate through all of the properties. This takes a performance hit, as one would expect, but it would keep the coupling to this library to a minimum. If you have thoughts on this, please post a comment.
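For the curious, iterating every property via reflection is roughly what I have in mind for that opt-in mode; a minimal standalone sketch (not the library's actual code) would be:

[csharp]
// Illustrative only: merge every public property of an object into a template,
// with no attribute decoration required.
public static string MergeAllProperties(string template, object source)
{
    foreach (var property in source.GetType().GetProperties())
    {
        var value = property.GetValue(source, null);
        template = template.Replace("{" + property.Name + "}", value == null ? string.Empty : value.ToString());
    }

    return template;
}
[/csharp]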

On the larger scale, my next major goal is to add support for Streams and other output options. Let's say you had a static HTML template that needs to be populated with a News Post, for instance. The HTML template could be read in, mail merged against the database and then output to a string, streamed to a file or returned as binary. Trying to handle every scenario isn't realistic or feasible, but handling the 90% scenario or better is my goal.
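Using only what exists today, the string-based version of that scenario could look something like the following (NewsPost is a hypothetical POCO decorated as described above; the stream and binary outputs are what I still need to build):

[csharp]
using System.IO;

var template = File.ReadAllText("NewsPostTemplate.html");

// Hypothetical POCO standing in for a record pulled from the database.
var newsPost = new NewsPost { Title = "jcLSL Update", Body = "Stream support is coming." };

var parser = new jcLSLGenericParser();
var html = parser.Run(template, newsPost);

File.WriteAllText("NewsPost.html", html);
[/csharp]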

Another item that I hate implementing with mail merge fields is error handling. One approach is to simply return the exception or error in the mail merge itself. This isn't good when you have external customers and they see something like Object Reference Exception or, worse yet, a full stack trace; the credibility of your product will go down quickly. However, I think a standardized {JCLSL_ERROR} merge field to store the error for display on an exception page or email would make sense, handling both objectives of error handling: being aware of the error, but also handling it gracefully.
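As a rough sketch of how I picture that working (this is the proposed behavior, not something in the library today), building on the earlier Run example:

[csharp]
string parsedString;

try
{
    parsedString = gParser.Run(TEST_STRING, user);
}
catch (Exception ex)
{
    // Surface the failure through a friendly template rather than a raw exception,
    // while still logging it internally so the team knows the error occurred.
    var errorTemplate = "Something went wrong processing your request: {JCLSL_ERROR}";
    parsedString = errorTemplate.Replace("{JCLSL_ERROR}", ex.Message);
}
[/csharp]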

Further down the road I hope to start adding in support for "true" scripting so you could have conditionals within the merges or other logic you would want to be able to change on the fly without having to deploy an updated Mobile App, ASP.NET MVC Web App, WebAPI Service or whatever platform you're using with this library.

Where to get it

As mentioned earlier, you can grab the code and samples on GitHub, or grab the binary from NuGet or the NuGet Console with: PM> Install-Package jcLSL. As I check code into GitHub and get to stable releases, I'll update the NuGet package.

While largely quiet on here since my last post two weeks ago, I have been hard at work on several smaller projects, all of which are on GitHub. As mentioned previously, everything I work on in my free time will be open sourced under the MIT License.

jcAnalytics

The first item I should mention is some new functionality in my jcAnalytics library. Earlier this week I had some ideas for reducing collections of arbitrary data down to distinct elements. For example, if you had 3 objects of data with 2 of them being identical, my reduction extension methods would return 2 instead of 3. This is one of the biggest problems I find when analyzing data for aggregation or simply reporting, especially when the original amount of data is several hundred thousand elements or more. I attempted the more straightforward single-threaded model first; as expected, performance as the number of elements increased was dramatically slower than a parallel approach.

Wondering if there were any theories on quickly taking a sampling of data as the number of items scales up, I was surprised there was not more research on the subject. Doing a Log(n) sample size seemed to be the "go to" method, but I could not find any evidence to support the claim. This is where I think recording patterns of data and then persisting those patterns could actually achieve this goal. Since every problem and dataset is unique, over time the extension methods could in fact learn something along the lines of "I have a collection of 500,000 addresses; the last 10 times I ran I only found 25,000 unique addresses, at an average rate of one every 4 records." On subsequent runs it could adapt per request, perhaps assigning a Guid or another unique identifier to each run and persisting the result patterns on disk, in a SQL database or in Azure Cache. For those curious, I did update the NuGet package with these new extension methods. You can download the compiled NuGet package here on NuGet or via the NuGet Console with PM> Install-Package jcANALYTICS.Lib.
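The pattern-recording piece is still an idea, but the core of the reduction itself boils down to something like this (the method name and shape here are illustrative, not the exact jcAnalytics API):

[csharp]
using System.Collections.Generic;
using System.Linq;

public static class ReductionExtensions
{
    // Reduce a collection to its distinct elements, using PLINQ so large
    // datasets are processed across multiple cores.
    public static List<T> ReduceToDistinct<T>(this IEnumerable<T> items)
    {
        return items.AsParallel().Distinct().ToList();
    }
}
[/csharp]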

jcPIOL

A huge topic in my world at work has been offline/online hybrid mobile applications. The idea that one could "sync" and then pull down data for 100% offline use has been on my mind since it was requested several months ago by one of our clients. Knowing the first approach might not be the best, and that I wanted to create a generic portable class library that could be plugged into any mobile application on any platform (iOS, Android, Windows), I figured I would begin my research fully exposed on GitHub and then publish stable releases on NuGet as they are built. This project is of a larger nature in that it could quickly blossom into a framework instead of simply a library. As of right now on GitHub I have the GET, POST and DELETE HTTP verbs working to pull/push data, but not yet storing the data for offline purposes. I'm still working out the logistics of how I want to achieve everything, but the ultimate goal would be to have any request queued when offline and then, when a network connection is made, automatically sync the data. Handling multiple versions of data is a big question. Hypothetically, if you edited a piece of information and then edited it again, should it send the request twice or once? If you were online it would have sent it twice, and in some cases you would want the full audit trail (as I do in the large enterprise platform at work). Another question that I have not come up with a great answer for is the source-of-truth question: if you make an edit and then come online, I could see a potential race condition between the data syncing back and a request being made on the same data. Handling the push and pull properly will take some extensive logic, and more than likely it will be a global option or configurable down to the request-type level. I am hoping to have an early alpha of this working perfectly in the coming weeks.
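To give a feel for the queueing idea, here is a bare-bones sketch (the class and method names below are hypothetical, not the jcPIOL API, and the conflict resolution discussed above is deliberately left out):

[csharp]
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// One pending request captured while the device was offline.
public class PendingRequest
{
    public string Verb { get; set; }   // "GET", "POST" or "DELETE"
    public string Url { get; set; }
    public string Body { get; set; }
    public DateTime QueuedAt { get; set; }
}

public class OfflineRequestQueue
{
    private readonly Queue<PendingRequest> _pending = new Queue<PendingRequest>();

    public void Enqueue(PendingRequest request)
    {
        _pending.Enqueue(request);
    }

    // Replay queued requests in order once connectivity returns; a real implementation
    // would need retry, de-duplication and source-of-truth handling.
    public async Task ReplayAsync(HttpClient client)
    {
        while (_pending.Count > 0)
        {
            var request = _pending.Dequeue();

            switch (request.Verb)
            {
                case "POST":
                    await client.PostAsync(request.Url, new StringContent(request.Body ?? string.Empty));
                    break;
                case "DELETE":
                    await client.DeleteAsync(request.Url);
                    break;
                default:
                    await client.GetAsync(request.Url);
                    break;
            }
        }
    }
}
[/csharp]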

jcTRENDNET

This project came at the request of my wife, who wanted a way to view Trendnet cameras from her Nokia Lumia 1020 Windows Phone. Trendnet only offered apps for iOS and Android and there were no free apps available in the Windows Phone marketplace, so I spent an evening and wrote one last August (2014). Again going with the Windows 10 Universal approach, I began to rewrite the app to take advantage of all the new XAML and add in the features I had long wanted. In keeping with my open source initiative, all of the code is checked into GitHub. I am hoping to have everything ported from the old Windows Phone 8.1 app, along with all of the new functionality, this summer.

jcRSS

Another older project that I see a need to fulfill going forward. Since Google Reader faded away, I switched over to Feedly, but I really don't like their interface nor how slow it is. Originally this project was going to be an ASP.NET MVC/WebAPI project with a Windows Phone/Windows Store app. As with my other projects, I knew I wanted to simply port over the work I had done to a Windows 10 Universal App, but as I got into working on it, there was no reason to tie the apps back to a WebAPI service if I did away with the MVC view. Knowing I was going to be giving this application away freely and didn't want to have ads, I also didn't want to incur massive Azure fees if it were to take off. So for the time being this project will exist as a Windows 10 Universal App with full support for multiple devices (i.e. if you read an article on one device, it will be marked as read on the others). You can check out the code on GitHub. I'm hoping for a release in the coming months.

Federation

This was a project I had been slowly designing in my head since the summer of 2012 - a turn-based Star Trek game without microtransactions and with the ability to simply keep playing as long as you want. I started coding this in August 2014 and into September 2014, but put it on hold to work on Windows IoT among other topics of interest. Now with Windows 10's release on the immediate horizon, I figured I should wrap up the game and open source the project as well. As of now I'm in the process of porting the XAML over to Windows 10, as it was originally targeting Windows Phone 8.1. Once that process is complete, I will return to working on the logic and with any luck release it sometime this summer, but in the meantime you can check out the code on GitHub.

jcMATH

I originally wrote this "game" for my boss's child since there was not a dot math game in the Windows Phone marketplace. Seeing as how it got 0 downloads, I open sourced it. I did start porting it over to a Windows 10 Universal Application, but have not finished yet.

Closing

Now that Visual Studio 2015 RC is out, I will more than likely be returning to my open source bbXP project. The only reason I put it on hold was the issues I was running into with NuGet packages in CTP6 of Visual Studio 2015. Coming up in a few weeks is the 20th anniversary of when I wrote my first line of code; expect a retrospective post on that.

Continuing my work deep diving into ASP.NET 5 (vNext), I started going down the path of EntityFramework 7, which, similarly to ASP.NET 5, is like a reboot of the framework itself. For readers interested in diving in, I highly suggest watching the MVA video called What's New with ASP.NET 5, which goes over all of the changes in pretty good detail (though I have a running list of questions to ask at BUILD in a few weeks).

Noting that the EntityFramework 7 beta was included in my ASP.NET 5 project, I hit a roadblock in finding it through the usual method in the NuGet Package Manager; as of this writing, only 6.1.3 was available. In looking around, the answer is to add another NuGet Package Source. I had done this previously when I set up a private NuGet package server at work to host common libraries used throughout all of our projects. For those unaware, go to Tools -> NuGet Package Manager -> Package Manager Settings.

Once there, click on Package Sources and then the + icon, enter a descriptive name, paste the following URL as the source: https://www.myget.org/F/aspnetvnext/ and click Update. After you're done, you should have something similar to this:



You can now close out of that window and return to the NuGet Package Manager; upon switching the Package Source dropdown to ASP.NET vNext (or whatever you called it in the previous screen), you should now see EntityFramework 7 (among other pre-release packages) as shown below.



Hopefully that helps someone out there wanting to deep dive into EntityFramework 7.

Last fall I wrote my first Windows Service in C# to assist with a Queue Module add-on for a time-intensive server-side task. 6-7 months have gone by and I had forgotten a few of the details involved, so I'm writing them up here for myself and others who might run into the same issue.

First off, I'll assume you've created your Windows Service and are ready to deploy it to your Staging or Production environment. The first thing you'll need to do is place the contents of your release or debug folder on your server or workstation. Secondly, you'll need to open an elevated command prompt and go to the Framework folder for the version of .NET your Windows Service targets in order to run the installutil application. In my case I am running a .NET 4.5 Windows Service, so my path is:

[powershell]
C:\Windows\Microsoft.NET\Framework\v4.0.30319
[/powershell]

NOTE: If you do not elevate the command prompt you'll see this exception:

[powershell]
An exception occurred during the Install phase.
System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.
[/powershell]

Once in the Framework folder, simply type the following, assuming your service is located at c:\windows_services\newservice\wservice.exe:

[powershell]
installutil "c:\windows_services\newservice\wservice.exe"
[/powershell]

After running the above command with the path of your service, you should receive the following:

[powershell]
The Install phase completed successfully, and the Commit phase is beginning.
See the contents of the log file for the c:\windows_services\newservice\wservice.exe assembly's progress.
The file is located at c:\windows_services\newservice\wservice.InstallLog.
Committing assembly 'c:\windows_services\newservice\wservice.exe'.
Affected parameters are:
   logtoconsole =
   logfile = c:\windows_services\newservice\wservice.InstallLog
   assemblypath = c:\windows_services\newservice\wservice.exe
The Commit phase completed successfully.
The transacted install has completed.

C:\Windows\Microsoft.NET\Framework\v4.0.30319>
[/powershell]

At this point, after opening services.msc via Windows Key + R, you should see your service listed with the option to start it.
Recently I upgraded a fairly large Windows Forms .NET 4 app to the latest version of the Telerik Windows Forms Control Suite (2012.3.1211.40) and got a few bug reports from end users saying that an action which updated the Tree View control was throwing an exception. At first I thought maybe the Clear() function no longer worked as intended, so I tried the following:

[csharp]
if (treeViewQuestions != null && treeViewQuestions.Nodes != null && treeViewQuestions.Nodes.Count > 0)
{
    for (int x = 0; x < treeViewQuestions.Nodes.Count; x++)
    {
        treeViewQuestions.Nodes[x].Remove();
    }
}
[/csharp]

No dice. Digging into the error a bit further, I noticed the "UpdateLine" function was the root cause of the issue:

Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLine(TreeNodeLineElement lineElement, RadTreeNode node, RadTreeNode nextNode, TreeNodeElement lastNode)
   at Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLines()
   at Telerik.WinControls.UI.TreeNodeLinesContainer.Synchronize()
   at Telerik.WinControls.UI.TreeNodeElement.Synchronize()
   at Telerik.WinControls.UI.RadTreeViewElement.SynchronizeNodeElements()
   at Telerik.WinControls.UI.RadTreeViewElement.Update(UpdateActions updateAction)
   at Telerik.WinControls.UI.RadTreeViewElement.ProcessCurrentNode(RadTreeNode node, Boolean clearSelection)
   at Telerik.WinControls.UI.RadTreeNode.OnNotifyPropertyChanged(PropertyChangedEventArgs args)
   at Telerik.WinControls.UI.RadTreeNode.SetBooleanProperty(String propertyName, Int32 propertyKey, Boolean value)
   at Telerik.WinControls.UI.RadTreeNode.set_Current(Boolean value)
   at Telerik.WinControls.UI.RadTreeNode.ClearChildrenState()
   at Telerik.WinControls.UI.RadTreeNode.set_Parent(RadTreeNode value)
   at Telerik.WinControls.UI.RadTreeNodeCollection.RemoveItem(Int32 index)
   at System.Collections.ObjectModel.Collection`1.Remove(T item)
   at Telerik.WinControls.UI.RadTreeNode.Remove()

Remembering I had turned on the ShowLines property, I humored the idea of turning it off for the clearing/removing of the nodes and then turning it back on, like so:

[csharp]
treeViewQuestions.ShowLines = false;
treeViewQuestions.Nodes.Clear();
treeViewQuestions.ShowLines = true;
[/csharp]

Sure enough that cured the problem. The last word I got back from Telerik was that this is the approved workaround, but there is no ETA on a true fix. Hopefully that helps someone else out there.