
A pretty common task I run across is counting the number of occurrences of a specific string, primary key ID, etc. A few examples: checking a valid username/password combination, or checking for an existing value in the database to prevent duplicate/redundant data. Typically, if there were no joins involved, I would just do something like the following:

[csharp]
public bool DoesExist(string someValue)
{
    using (SomeEntity eFactory = new SomeEntity())
    {
        return eFactory.SomeTable.Where(a => a.Value == someValue).Count() > 0;
    }
}
[/csharp]

Or use the PLINQ version if there were a considerable number of rows, assuming the overhead involved in PLINQ would negate any performance advantage for smaller tables:

[csharp]
public bool DoesExist(string someValue)
{
    using (SomeEntity eFactory = new SomeEntity())
    {
        return eFactory.SomeTable.AsParallel().Where(a => a.Value == someValue).Count() > 0;
    }
}
[/csharp]

However, if there were multiple tables involved, I would create a Stored Procedure and return the count in a Complex Type like so:

[csharp]
public bool DoesExist(string someValue)
{
    using (SomeEntity eFactory = new SomeEntity())
    {
        return eFactory.SomeTableSP(someValue).FirstOrDefault().Value > 0;
    }
}
[/csharp]

Intrigued by what the real performance impact was across the board, and to figure out what made sense depending on the situation, I created a common scenario: a Users table like so:

[caption id="attachment_1377" align="aligncenter" width="267"] Users SQL Server Table Schema[/caption]

I populated this table with random data, from 100 to 4000 rows, and ran the above coding scenarios against it, averaging 3 separate runs to rule out any fluke scores. In addition, I tested looking for the same value 3 times and a random value 3 times to see if the row's position would affect performance (i.e. whether the value was near the end of the table or closer to the beginning).
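As a side note on the first pattern: in LINQ-to-Entities, Any() generally translates to an EXISTS query that can stop at the first matching row, whereas Count() > 0 asks the server to count every match. A sketch against the same hypothetical SomeEntity model used above:

[csharp]
public bool DoesExist(string someValue)
{
    using (SomeEntity eFactory = new SomeEntity())
    {
        // Any() becomes an EXISTS(...) query, which can short-circuit
        // on the first match instead of counting all matching rows
        return eFactory.SomeTable.Any(a => a.Value == someValue);
    }
}
[/csharp]

For small tables the difference is negligible, but for large tables with many matches it avoids a full count.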
I should note this was tested on my HP DV7 laptop with an A10-4600M (4x2.3GHz) CPU, running Windows 8 x64 with 16GB of RAM and a SanDisk Extreme 240GB SSD.

[caption id="attachment_1378" align="aligncenter" width="300"] LINQ vs PLINQ vs Stored Procedure Count Performance Graph[/caption]

The most interesting aspect for me was the consistent performance of the Stored Procedure across the board, no matter how many rows there were. I imagine the results would be the same for 10,000, 20,000, etc.; I'll have to run those tests later. In addition, I imagine as soon as table joins come into the picture, the difference between a Stored Procedure and a LINQ query would be even greater. So bottom line: use a Stored Procedure for counts. The extra time to create a Stored Procedure and import it into Visual Studio (especially in Visual Studio 2012, which automatically creates the Complex Type for you) is well worth it.
Diving into multi-threading the last couple of nights, but not in C# like I had previously; instead, in C. Long ago I had played with SDL's built-in threading when I was working on the Infinity Project. Back then I had just gotten a dual Athlon-XP Mobile (Barton) motherboard, so it was my first chance to play with multi-CPU programming. Fast forward 7 years: my primary desktop has 6 cores and most cell phones have at least 2 CPUs. Everything I've written this year has been with multi-threading in mind, whether it is an ASP.NET Web Application, a Windows Communication Foundation Web Service, or a Windows Forms Application. Continuing my "going back to the basics" quest from last weekend, I decided my next quest would be to dive back into C and attempt to port jcBench to Silicon Graphics' IRIX 64-bit MIPS IV platform (it was on the original list of platforms). The first major hurdle was programming C like C#: not having classes or the keyword "new", the syntax for certain things being completely different (structs, for instance), and having to initialize arrays with malloc, only to be reminded by segmentation faults how easily mismanaging them corrupts the heap (the list goes on). I've gotten "lazy" with my almost exclusive use of C#, it seems, declaring a collection like:

[csharp]
ConcurrentQueue<SomeObject> cqObjects = new ConcurrentQueue<SomeObject>();
[/csharp]

After the "reintroduction" to C, I started to map out what would be necessary to make an equivalent approach to the Task Parallel Library: not necessarily the syntax, but how it handles nearly all of the work for you.
Doing something like the following (note you don't need to assign the return value from the Entity Model; it could simply be put in the first argument of Parallel.ForEach, I just kept it there for the example):

[csharp]
// Use the ToList method to ensure there is no lazy loading
List<SomeEntityObject> lObjects = someEntity.getObjectsSP().ToList();

ConcurrentQueue<SomeGenericObject> cqGenericObjects = new ConcurrentQueue<SomeGenericObject>();

Parallel.ForEach(lObjects, result =>
{
    if (result.SomeProperty > 1)
    {
        cqGenericObjects.Enqueue(new SomeGenericObject(result));
    }
});
[/csharp]

A few things off the bat you'd have to "port":
  1. Concurrent Object Collections to support modification of collections in a thread safe manner
  2. Knowing how many cores/CPUs are available and constantly allocating new threads as threads complete (i.e. with 6 cores and 1200 tasks, kick off at least 6 threads, handle when those threads complete, and "always" maintain a 6-thread count)
The latter, I imagine, is going to be a decent-sized task in itself, as it will involve platform-specific system calls to determine the CPU count, breaking the task down dynamically, and then managing all of the threads. At first thought, the easiest solution might simply be:
  1. Get number of CPUs/Cores, n
  2. Divide the number of "tasks" by the number of cores and allocate those tasks to each core, thus only kicking off n threads
  3. When all tasks complete, resume normal application flow
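The three steps above can be sketched in C# for brevity (the actual port would use raw threads in C); the work items here are made up for illustration:

[csharp]
using System;
using System.Threading;

class StaticPartitionDemo
{
    static void Main()
    {
        int[] tasks = new int[1200];            // 1200 units of "work"
        for (int i = 0; i < tasks.Length; i++) tasks[i] = i;

        int n = Environment.ProcessorCount;      // step 1: number of CPUs/cores
        int chunk = (tasks.Length + n - 1) / n;  // step 2: tasks per core, rounded up
        Thread[] threads = new Thread[n];
        long total = 0;

        for (int t = 0; t < n; t++)
        {
            int start = t * chunk;
            int end = Math.Min(start + chunk, tasks.Length);
            threads[t] = new Thread(() =>
            {
                long local = 0;
                for (int i = start; i < end; i++) local += tasks[i];
                Interlocked.Add(ref total, local);
            });
            threads[t].Start();
        }

        foreach (Thread t in threads) t.Join(); // step 3: wait, then resume normal flow
        Console.WriteLine(total);                // sum of 0..1199 = 719400
    }
}
[/csharp]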
The problem with that (or at least one of them) is that if the actual data for certain objects is considerably more complex than for others, you could have 1 or more CPUs finish before the others, which would be wasteful. You could, I guess, infer from a sampling of the data: maybe kick off 1 thread to "analyze" the data at various indexes in the passed-in array, calculate the average time taken to complete, and then anticipate the variation in task completion time to more evenly space out the tasks. Current CPU utilization should also be taken into account, as many operating systems pin Operating System tasks to 1 CPU; giving CPU 1 (or whichever CPU carries the Operating System usage) fewer tasks to begin with might make more sense to truly optimize the threading "manager". Hopefully I can dig up some additional information on how the TPL allocates its threads, to possibly give a 3rd alternative, since I've noticed it handles larger tasks very well across multiple threads. Definitely will post back with my findings....
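For what it's worth, the TPL already exposes part of this decision through System.Collections.Concurrent.Partitioner: Partitioner.Create over a list defaults to static range partitioning, while the load-balancing overload hands out smaller chunks on demand, so a thread that draws "easy" items picks up extra work instead of idling. A minimal sketch (the data is made up for illustration):

[csharp]
using System;
using System.Collections.Generic;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class PartitionerDemo
{
    static void Main()
    {
        var items = new List<int>();
        for (int i = 0; i < 1200; i++) items.Add(i);

        // loadBalance: true hands out chunks on demand, so threads that
        // finish early grab more work instead of sitting idle
        var partitioner = Partitioner.Create(items, loadBalance: true);

        long total = 0;
        Parallel.ForEach(partitioner, () => 0L,
            (item, state, local) => local + item,          // per-thread running sum
            local => Interlocked.Add(ref total, local));   // merge per-thread sums

        Console.WriteLine(total); // 719400
    }
}
[/csharp]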
After having used PLINQ and the concurrent collections for nearly 2 months now, I can say without a doubt it is the way of the future. This last week I used it extensively in writing a WCF Service that manipulated a lot of data and needed to return it to an ASP.NET client very quickly; on the flip side, it also needed to execute a lot of SQL insertions based on business logic pretty quickly. As of February 25th, 2012, I think the best approach to writing a data layer is:
  1. Expose all Data Layer access through a WCF Service, ensuring a clear separation between UI and Data Layers
  2. Use ADO.NET Entity Models tied to SQL Stored Procedures that return Complex Types for objects, rather than doing a .Where(a => a.Active).ToList()
  3. Process larger result sets with PLINQ, using concurrent collections (i.e. ConcurrentQueue or ConcurrentDictionary), and return them to the client (ASP.NET, WP7, etc.)
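Steps 2 and 3 together might look something like this sketch (the service contract, Stored Procedure, and DTO names are all hypothetical):

[csharp]
[ServiceContract]
public interface IUserService
{
    [OperationContract]
    List<UserDTO> GetActiveUsers();
}

public class UserService : IUserService
{
    public List<UserDTO> GetActiveUsers()
    {
        using (SomeEntity eFactory = new SomeEntity())
        {
            // Step 2: the Stored Procedure filters server-side and returns
            // a Complex Type, rather than .Where(a => a.Active).ToList()
            var rows = eFactory.GetActiveUsersSP().ToList();

            // Step 3: post-process the larger result set with PLINQ,
            // collecting into a thread-safe ConcurrentQueue
            var cqUsers = new ConcurrentQueue<UserDTO>();
            rows.AsParallel().ForAll(row => cqUsers.Enqueue(new UserDTO(row)));

            return cqUsers.ToList();
        }
    }
}
[/csharp]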
Next step, in my opinion, would be to add in intelligent AppFabric caching, like what the Smarty Template Engine did for PHP: a clean way to cache pages while providing flexible ways to invalidate the cache. I am so glad I found that back in 2006 when I was still doing a lot of PHP work.
Spent the afternoon reading IronPython documentation and finally got a chance to play with it a bit. I have to hand it to Microsoft for making it extremely easy:

[csharp]
var ipy = Python.CreateRuntime();
dynamic pyTest = ipy.UseFile("");
[/csharp]

Only 2 lines of code to have access to a Python script. For me this opens new doors for scripting operations I still typically write in PHP because of the simplicity of opening Notepad and pushing the result out to an IIS server running PHP. Now I can write a few-line C# app with an interface to run whichever script I wish. Being curious about performance, I ported jcBENCH to use IronPython. To put it simply, the math operations pale in comparison to native C# math operations. The actual overhead of creating the Python interpreter was negligible, if anyone was curious, even on an AMD C-50 (dual 1GHz) netbook with an OCZ Vertex 2 SSD. Porting it did bring up an interesting idea: this could work for a workflow process, with the ability to process multiple workflows utilizing PLINQ but with the flexibility to adjust the logic in script rather than in the C# code.
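Filling in those 2 lines with a usage example: UseFile returns the script's scope as a dynamic object, so functions defined in the script can be called directly from C#. The script name and function below are made up for illustration:

[csharp]
using IronPython.Hosting;

class ScriptRunner
{
    static void Main()
    {
        var ipy = Python.CreateRuntime();

        // "workflow.py" and CalculateDiscount are hypothetical; UseFile
        // executes the script and exposes its globals as dynamic members
        dynamic script = ipy.UseFile("workflow.py");
        double result = script.CalculateDiscount(100.0);
    }
}
[/csharp]

Swapping in a different .py file changes the business logic without recompiling the C# host.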
After some more thought about jcBENCH and what its real purpose is, I am going to drop the Solaris and IRIX ports. Solaris has a Mono port, but I only have a Sun Blade 100, which has a single CPU; not expecting a ton of performance from that. IRIX, on the other hand: I have a quad-R14k 500 Origin 300, but no port of Mono exists. I could port it to Java, but then you really couldn't compare benchmarks against the Mono/.NET versions. I am about 50% done with the Android port and am just waiting for the OpenSuse 12.1-compatible MonoDevelop release so I can get started on the Linux port. After those 2 ports are completed, I am thinking of starting something entirely new that I have been thinking about for the last couple of years. Those who deal with a SQL database and write a data layer for their .NET project know the shortcomings of doing either:
  1. Using an ADO.NET Entity Model, adding your Tables, Views and Stored Procedures and then use that as is or extend it with some business logic
  2. Using an all-custom data layer built on the base DataTable, DataRow, etc., wrapping your objects with partial classes and creating a "factory"
Both approaches have their pros and cons: the first takes a lot less time, but you also have a lot less control, and it could be costly with all of the overhead. Both, however, will eventually fall apart down the road. The reason: they were built for one audience and one production server (or set of servers). How many times have you gone to your IT Manager and asked for a new database server because that was quicker than really going back to the architecture of your data layer? As time goes on, this can happen over and over again. I have personally witnessed such an event: a system was designed and built for around 50 internal users, on a single-CPU web server and a dual-Xeon database server. Over 5 years later, the code has remained the same, yet it's been moved to 6 different servers of ever-increasing speed. Times have changed and will continue to change, workloads vary from day to day, and servers are swapped in and out. So my solution: an adaptive, dynamic data layer. One that profiles itself and uses that data to decide whether to run single-threaded LINQ queries or PLINQ queries, depending on whether the added overhead of PLINQ would outweigh the time it would take using only one CPU. In addition, it would use Microsoft's AppFabric to cache the commonly used intensive queries that maybe only get run once an hour while the data doesn't change for 24 hours. This doesn't come without a price, of course; having only architected this model in my head, I can't say for certain how much overhead the profiling will add. Over the next couple of months I'll be developing this, so stay tuned. jcBENCH, as you might have guessed, was kind of an early test scenario for seeing how various platforms handle multi-threaded tasks of varying intensity.
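A first pass at the "profile, then choose" idea might look like this sketch. Everything here is hypothetical; a real implementation would persist timings and account for current server load:

[csharp]
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class AdaptiveQueryRunner
{
    // Hypothetical speedup ratio learned from profiling runs
    private static double _parallelSpeedup = 1.0;

    public static List<TResult> Run<T, TResult>(List<T> source, Func<T, TResult> transform)
    {
        // Use PLINQ only when past profiling showed it actually paid off
        if (_parallelSpeedup > 1.1 && source.Count > 1000)
            return source.AsParallel().Select(transform).ToList();
        return source.Select(transform).ToList();
    }

    // Periodically re-profile both paths on a sample and store the ratio
    public static void Profile<T, TResult>(List<T> sample, Func<T, TResult> transform)
    {
        var sw = Stopwatch.StartNew();
        sample.Select(transform).ToList();
        long sequential = sw.ElapsedTicks;

        sw.Restart();
        sample.AsParallel().Select(transform).ToList();
        long parallel = sw.ElapsedTicks;

        _parallelSpeedup = (double)sequential / Math.Max(parallel, 1);
    }
}
[/csharp]

The open question from the post remains how much the Profile pass itself costs relative to the queries it is trying to speed up.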