
This morning I will be presenting at the Maryland Code Camp on the topic of Developing Once and Deploying to Many, specifically discussing practices and patterns I've found helpful for efficiently creating rich mobile applications over the last 3 years I've been actively developing for the mobile space. WCF, WPF, PCL, MonoDroid, Azure Mobile Services and Windows Phone 8 are to be discussed.

For the PowerPoint 2013 presentation, all of the code mentioned during the session, the SQL scripts, the PSD files and the external libraries used, click here to download the zip file.

In addition, during the session I will be making reference to an app I wrote earlier this year, jcLOG-IT, specifically the Azure Mobile Services and Windows Live integration elements.

The code block mentioned for Authentication:
public async Task<bool> AttemptLogin(MobileServiceAuthenticationProvider authType) {
    try {
        if (authType == MobileServiceAuthenticationProvider.MicrosoftAccount) {
            if (!String.IsNullOrEmpty(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken))) {
                App.CurrentUser = await App.MobileService.LoginAsync(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken));
            } else {
                var liveIdClient = new LiveAuthClient(Common.Constants.APP_AUTHKEY_LIVECONNECT);

                while (_session == null) {
                    // NOTE: the scope list was lost in the original post;
                    // "wl.signin" is a typical minimal Live Connect scope
                    var result = await liveIdClient.LoginAsync(new[] { "wl.signin" });

                    if (result.Status != LiveConnectSessionStatus.Connected) {
                        continue;
                    }

                    _session = result.Session;
                    App.CurrentUser = await App.MobileService.LoginAsync(result.Session.AuthenticationToken);
                    Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, result.Session.AuthenticationToken);
                }
            }
        }

        Settings.AddSetting(Settings.SETTINGS_OPTIONS.AuthType, authType.ToString());
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.IsFirstRun, false.ToString());

        return true;
    } catch (Exception) {
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, String.Empty);
        return false;
    }
}
The Settings class:
public class Settings {
    public enum SETTINGS_OPTIONS {
        IsFirstRun,
        LiveConnectToken,
        AuthType,
        LocalPassword,
        EnableLocation
    }

    public static void CheckSettings() {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(SETTINGS_OPTIONS.IsFirstRun.ToString())) {
            WriteDefaults();
        }
    }

    public static void AddSetting(SETTINGS_OPTIONS optionName, object value) {
        AddSetting(optionName.ToString(), value);
    }

    public static void AddSetting(string name, object value) {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(name)) {
            settings.Add(name, value);
        } else {
            settings[name] = value;
        }

        settings.Save();
    }

    public static T GetSetting<T>(SETTINGS_OPTIONS optionName) {
        return GetSetting<T>(optionName.ToString());
    }

    public static T GetSetting<T>(string name) {
        if (IsolatedStorageSettings.ApplicationSettings.Contains(name)) {
            if (typeof(T) == typeof(MobileServiceAuthenticationProvider)) {
                return (T)Enum.Parse(typeof(MobileServiceAuthenticationProvider), IsolatedStorageSettings.ApplicationSettings[name].ToString());
            }

            return (T)Convert.ChangeType(IsolatedStorageSettings.ApplicationSettings[name], typeof(T));
        }

        return default(T);
    }

    public static void WriteDefaults() {
        AddSetting(SETTINGS_OPTIONS.IsFirstRun, false);
        AddSetting(SETTINGS_OPTIONS.EnableLocation, false);
        AddSetting(SETTINGS_OPTIONS.LocalPassword, String.Empty);
        AddSetting(SETTINGS_OPTIONS.LiveConnectToken, String.Empty);
        AddSetting(SETTINGS_OPTIONS.AuthType, MobileServiceAuthenticationProvider.MicrosoftAccount);
    }
}
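To see the Convert.ChangeType pattern used by GetSetting<T> in isolation, here is a minimal sketch — it swaps IsolatedStorageSettings for a plain Dictionary so it can run anywhere, and SettingsSketch is a hypothetical name, not part of the app above:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Stand-in for the Settings class above, backed by a Dictionary instead of
// IsolatedStorageSettings so the conversion logic can be exercised anywhere.
public static class SettingsSketch
{
    private static readonly Dictionary<string, object> _store = new Dictionary<string, object>();

    public static void AddSetting(string name, object value)
    {
        // Add-or-overwrite, mirroring the Contains check in the original
        _store[name] = value;
    }

    public static T GetSetting<T>(string name)
    {
        if (_store.ContainsKey(name))
        {
            // Convert.ChangeType covers the common primitive round-trips,
            // e.g. the string "False" back to a bool
            return (T)Convert.ChangeType(_store[name], typeof(T));
        }

        return default(T);
    }
}
```

For example, storing false.ToString() (as AttemptLogin does for IsFirstRun) and reading it back with GetSetting<bool> yields false again, because Convert.ChangeType parses the string.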
Working on a new project at work today, I realized that with the number of clients potentially involved with a new WCF Service, I would have to adjust my tried-and-true process of using a WCF Service with Visual Studio's WCF proxy generation. I had often wondered about the option circled below, intuitively assuming that when a type referenced in an OperationContract also lives in a shared Class Library, the generated proxy would reuse that type rather than create an entirely new one.


Something like this for instance defined in a common Class Library:
[DataContract]
public class SomeObject {
    [DataMember]
    public int ID { get; set; }

    [DataMember]
    public string Name { get; set; }

    public SomeObject() { }
}
And then in my OperationContract:
[OperationContract] List<SomeObject> GetObject(int id);
Sadly, this is not how it works. Intuitively you would think that since the SomeObject type is referenced in the Class Library, the Operation Contract and your client(s), proxy generation with the box checked above would simply generate the proxy class referencing the existing SomeObject class. So how can this be achieved cleanly?

The best solution I could come up with was to do some moving around of code in my Visual Studio 2012 projects. In short, the interface for my WCF Service and any external classes (i.e. classes used for delivering and receiving data between the Service and Client(s)) were moved to a Class Library previously setup and a wrapper for the Interface was created.

Let's dive in...

Luckily, I had set up my WCF Service with internally used and externally used classes in a proper folder structure like so:


So it was simply a matter of relocating and adjusting the namespace references.

After moving only the Interface for my WCF Service (leaving the actual implementation in the WCF Service), I wrote my wrapper:
public class WCFFactory : IDisposable {
    public IWCFService Client { get; set; }

    public WCFFactory() {
        var myBinding = new BasicHttpBinding();
        var myEndpoint = new EndpointAddress(ConfigurationManager.AppSettings["WEBSERVICE_Address"]);
        var cFactory = new ChannelFactory<IWCFService>(myBinding, myEndpoint);
        Client = cFactory.CreateChannel();
    }

    public void Dispose() {
        ((IClientChannel)Client).Close();
    }
}
So then in my code I could reference my Operation Contracts like so:
using (var webService = new WCFFactory()) {
    var someObject = webService.Client.GetObject(1);
}
All of this is done without creating any references via the "Add Service Reference" option in Visual Studio 2012.

Downsides of this approach? None that I've been able to uncover. One huge advantage of going this route versus the proxy-generation approach is that when your interface changes, you update it in one spot, recompile the Class Library and then update all of the clients with the updated library. If you've got all of your clients in the same Visual Studio solution, simply recompiling is all that is necessary.

More to come on coming up with ways to make interoperability between platforms better as I progress on this project, as it involves updating SQL Server Reporting Services, .NET 1.1 WebForms, .NET 3.5 WebForms, .NET 4.5 WebForms, two other WCF Services and the Windows Workflow solution I mentioned earlier this month.
Can't believe it's been a week to the day since I began this project, but I am glad at the amount of progress I have made thus far. Tonight I will dive into adding a WCF Service to act as a layer between the logic and data layers done in previous posts (Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6) and adding RSS support to the site.

Integrating a WCF Service

First off, for those that aren't familiar, WCF (Windows Communication Foundation) is an extremely powerful web service technology created by Microsoft. I first dove into WCF in April 2010 when getting into Windows Phone development, as there was no support for the "classic" ASMX Web Services. Since then I have used WCF Services as the layer for all ASP.NET WebForms, ASP.NET MVC, native mobile apps and other WCF Services at work. I should note, WCF-to-WCF communication is done at the binary level, meaning it doesn't send XML between the services - something I found extremely enlightening when I learned Microsoft had implemented it. At its most basic level a WCF Service is comprised of two components: the Service Interface Definition file and the actual implementation. For the migration, I created my interface as follows:
[ServiceContract]
public interface IWCFService {
    [OperationContract]
    lib.Objects.Post GetSinglePost(int year, int month, int day, string postname);

    [OperationContract]
    List<lib.Objects.Comment> GetCommentsFromPost(int postID);

    [OperationContract(IsOneWay = true)]
    void AddComment(string PersonName, string EmailAddress, string Body, int PostID);

    [OperationContract]
    lib.Objects.Content GetContent(string pageName);

    [OperationContract]
    List<lib.Objects.Post> GetPosts(DateTime startDate, DateTime endDate);

    [OperationContract]
    List<lib.Objects.Post> GetPostsByTags(string tagName);

    [OperationContract]
    List<lib.Objects.ArchiveItem> GetArchiveList();

    [OperationContract]
    List<lib.Objects.LinkItem> GetLinkList();

    [OperationContract]
    List<lib.Objects.TagCloudItem> GetTagCloud();

    [OperationContract]
    List<lib.Objects.MenuItem> GetMenuItems();
}
One thing to note: the IsOneWay attribute on the AddComment function indicates the client doesn't expect a return value. As noted in last night's post, the end user is not going to want to wait for all the emails to be sent; they simply want their comment posted and the comment listing refreshed with their comment. By setting IsOneWay to true, you ensure the client's experience is fast no matter how much server-side work is being done. And the actual implementation:
public class WCFService : IWCFService {
    public Post GetSinglePost(int year, int month, int day, string postname) {
        using (var pFactory = new PostFactory()) {
            var post = pFactory.GetPost(postname)[0];
            post.Comments = pFactory.GetCommentsFromPost(post.ID);
            return post;
        }
    }

    public List<Comment> GetCommentsFromPost(int postID) {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetCommentsFromPost(postID);
        }
    }

    public void AddComment(string PersonName, string EmailAddress, string Body, int PostID) {
        using (var pFactory = new PostFactory()) {
            pFactory.addComment(PostID, PersonName, EmailAddress, Body);
        }
    }

    public Content GetContent(string pageName) {
        using (var cFactory = new ContentFactory()) {
            return cFactory.GetContent(pageName);
        }
    }

    public List<Post> GetPosts(DateTime startDate, DateTime endDate) {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetPosts(startDate, endDate);
        }
    }

    public List<Post> GetPostsByTags(string tagName) {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetPostsByTags(tagName);
        }
    }

    public List<ArchiveItem> GetArchiveList() {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetArchiveList();
        }
    }

    public List<LinkItem> GetLinkList() {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetLinkList();
        }
    }

    public List<TagCloudItem> GetTagCloud() {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetTagCloud();
        }
    }

    public List<MenuItem> GetMenuItems() {
        using (var bFactory = new BaseFactory()) {
            return bFactory.GetMenuItems();
        }
    }
}
One thing you might be asking: isn't this a security risk? If you're not, you should be. Think about it - anyone who has access to your WCF Service could add comments and pull down your data at will. In its current state this isn't a huge deal, since it only returns data and the AddComment Operation Contract requires a comment to be approved before it is posted, but what about when the administrator functionality is implemented? You definitely don't want to expose contracts to the outside world that require nothing beyond their plain parameters. So what can you do?
  1. Keep your WCF Service off the internet - this is problematic in today's world, where a mobile presence is almost a necessity. Granted, if you were only to create an MVC 4 Mobile Web Application, you could keep it behind a firewall. My current thought process is to design and do it right the first time; don't corner yourself into a position where you have to go back and do additional work.
  2. Add a username, password or some token to each Operation Contract and then verify the user - this approach works and I've done it that way for public WCF Services. The problem is that it creates a lot of extra work on both the client and server side. Client side you can create a base class with the token or username/password and simply pass it into each contract, then server side do a similar implementation.
  3. Implement message-level security or Forms Membership - this approach requires the most upfront work, but reaps the most benefits as it keeps your Operation Contracts clean and offers an easy path to update at a later date.
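To make option 2 concrete, here is a minimal sketch of the base-class idea; BaseRequest, AuthToken and RequestValidator are hypothetical names invented for illustration, not part of an actual service:

```csharp
using System;
using System.Diagnostics;

// Hypothetical base class every request object inherits from, so each
// Operation Contract call carries the caller's token.
public class BaseRequest
{
    public string AuthToken { get; set; }
}

// Example request type for an AddComment-style contract.
public class AddCommentRequest : BaseRequest
{
    public int PostID { get; set; }
    public string Body { get; set; }
}

public static class RequestValidator
{
    // Server side, every Operation Contract implementation would start
    // with this check. A real implementation would verify the token
    // against a membership store; here any non-empty token passes.
    public static bool ValidateToken(BaseRequest request)
    {
        return !String.IsNullOrEmpty(request.AuthToken);
    }
}
```

The repeated ValidateToken call at the top of every contract is exactly the boilerplate that makes option 3 (message-level security or Forms Membership) the cleaner long-term choice.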
Going forward I will be implementing the third option, and of course I will document the process. Hopefully this helps get developers thinking about security and better approaches to problems. Moving onto the second half of the post: creating an RSS feed.

Creating an RSS Feed

After getting my class in my WCF Service, I created a new Stored Procedure in preparation:

[sql]
CREATE PROCEDURE dbo.getRSSFeedListSP
AS
SELECT TOP 25
    dbo.Posts.Created,
    dbo.Posts.Title,
    LEFT(CAST(dbo.Posts.Body AS VARCHAR(MAX)), 200) + '...' AS 'Summary',
    dbo.Posts.URLSafename
FROM dbo.Posts
INNER JOIN dbo.Users ON dbo.Users.ID = dbo.Posts.PostedByUserID
WHERE dbo.Posts.Active = 1
ORDER BY dbo.Posts.Created DESC
[/sql]

Basically this returns the 25 most recent posts with up to the first 200 characters of each post. Afterwards I created a class to translate the Entity Framework Complex Type:
[DataContract]
public class PostFeedItem {
    [DataMember]
    public DateTime Published { get; set; }

    [DataMember]
    public string Title { get; set; }

    [DataMember]
    public string Description { get; set; }

    [DataMember]
    public string URL { get; set; }

    public PostFeedItem(DateTime published, string title, string description, string url) {
        Published = published;
        Title = title;
        Description = description;
        URL = url;
    }
}
And then I added a new Operation Contract in my WCF Service:
public List<lib.Objects.PostFeedItem> GetFeedList() {
    using (var pFactory = new PostFactory()) {
        return pFactory.GetFeedList();
    }
}
Now I am going to leave it up to you which path to implement. At this point you've got all the backend work done to return the data you need to write your RSS XML file. There are many ways to proceed, and it really depends on how you want to serve your RSS feed. Do you want to regenerate it on the fly for each request? Or do you want to write an XML file only when a new Post is published and simply serve the static XML file? From what my research gave me, there are multiple ways to do each of those. For me, I am in favor of doing the work once and writing it out to a file rather than doing all of that work on each request. The latter seems like a waste of server resources.

Generate Once
  1. Use the Typed DataSet approach I used in Part 1 - requires very little work, and if you're like me, you like a strongly typed approach.
  2. Use the built-in SyndicationFeed class to create your RSS feed's XML - an approach I hadn't researched for feed generation prior to this.
  3. Use the lower-level XmlWriter functionality in .NET to build your RSS feed's XML - I strongly urge you not to do this given the two strongly typed approaches above. Weakly typed code leads to spaghetti and a debugging disaster when something goes wrong.
Generate On-The-Fly
  1. Use the previously completed WCF OperationContract to return the data and then use something like MvcContrib to return an XmlResult from your MVC Controller.
  2. Set your MVC View to return XML and simply iterate through all of the Post items.
Those are just some ways to accomplish the goal of creating an RSS feed for your MVC site. Which is right? I think it is up to you to find what works best for you. That being said, I am going to walk through the first two Generate Once options. For both approaches I am going to use IIS's URL Rewrite functionality to route requests for /feed to the generated file. For those interested, all it took was the following block in my web.config, in the system.webServer section:

[xml]
<rewrite>
  <rules>
    <rule name="RewriteUserFriendlyURL1" stopProcessing="true">
      <match url="^feed$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Rewrite" url="rss.xml" />
    </rule>
  </rules>
</rewrite>
[/xml]

To learn more about URL Rewrite, go to the official site here.

Option 1 - XSD Approach

Utilizing a similar approach to how I got started with the XSD tool in Part 1, I generated a typed dataset based on the format of an RSS XML file:

[xml]
<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Jarred Capellman</title>
    <link></link>
    <description>Putting 1s and 0s to work since 1995</description>
    <language>en-us</language>
    <item>
      <title>Version 2.0 Up!</title>
      <link></link>
      <description>Yeah in all its glory too, it's far from complete, the forum will be up tonight most likely...</description>
      <pubDate>5/4/2012 12:00:00 AM</pubDate>
    </item>
  </channel>
</rss>
[/xml]

[caption id="attachment_2056" align="aligncenter" width="300"]Generated Typed Data Set for RSS[/caption]

Then in my HomeController, I wrote a function to handle writing the XML, to be called when a new Post is entered into the system:
private void writeRSSXML() {
    var dt = new rss();

    using (var ws = new WCFServiceClient()) {
        var feedItems = ws.GetFeedList();

        // NOTE: the channel-row lines were partially lost in the original
        // post; reconstructed here from the item-row pattern below
        var channelRow = dt.channel.NewchannelRow();
        channelRow.title = Common.Constants.SITE_NAME;
        channelRow.description = Common.Constants.SITE_DESCRIPTION;
        channelRow.language = Common.Constants.SITE_LANGUAGE;
        channelRow.link = Common.Constants.URL;
        dt.channel.AddchannelRow(channelRow);
        dt.channel.AcceptChanges();

        foreach (var item in feedItems) {
            var itemRow = dt.item.NewitemRow();
            itemRow.SetParentRow(channelRow);
            itemRow.description = item.Description;
            itemRow.link = buildPostURL(item.URL, item.Published);
            itemRow.pubDate = item.Published.ToString(CultureInfo.InvariantCulture);
            itemRow.title = item.Title;
            dt.item.AdditemRow(itemRow);
            dt.item.AcceptChanges();
        }
    }

    var xmlString = dt.GetXml();
    xmlString = xmlString.Replace("<rss>", "<?xml version=\"1.0\" encoding=\"utf-8\"?><rss version=\"2.0\">");

    using (var sw = new StreamWriter(HttpContext.Server.MapPath("~/rss.xml"))) {
        sw.Write(xmlString);
    }
}
Pretty intuitive code, with one exception - I could not find a way to set the version attribute on the rss element, thus having to use the GetXml() method and a String.Replace instead of simply calling dt.WriteXml(HttpContext.Server.MapPath("~/rss.xml")). Overall though I find this approach to be very acceptable, but not perfect.

Option 2 - Syndication Approach

Not 100% satisfied with the XSD Approach mentioned above I dove into the SyndicationFeed class. Be sure to include using System.ServiceModel.Syndication; at the top of your MVC Controller. I created the same function as above, but this time utilizing the SyndicationFeed class that is built into .NET:
private void writeRSSXML() {
    using (var ws = new WCFServiceClient()) {
        var feed = new SyndicationFeed();
        feed.Title = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_NAME);
        feed.Description = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_DESCRIPTION);
        feed.Language = Common.Constants.SITE_LANGUAGE;
        feed.Links.Add(new SyndicationLink(new Uri(Common.Constants.URL)));

        var feedItems = new List<SyndicationItem>();

        foreach (var item in ws.GetFeedList()) {
            var sItem = new SyndicationItem();
            sItem.Title = SyndicationContent.CreatePlaintextContent(item.Title);
            sItem.PublishDate = item.Published;
            sItem.Summary = SyndicationContent.CreatePlaintextContent(item.Description);
            sItem.Links.Add(new SyndicationLink(new Uri(buildPostURL(item.URL, item.Published))));
            feedItems.Add(sItem);
        }

        feed.Items = feedItems;

        var rssWriter = XmlWriter.Create(HttpContext.Server.MapPath("~/rss.xml"));
        var rssFeedFormatter = new Rss20FeedFormatter(feed);
        rssFeedFormatter.WriteTo(rssWriter);
        rssWriter.Close();
    }
}
At first glance you might notice very similar code between the two approaches, with one major exception - there are no hacks to make it work as intended. Between the two I am going to go live with the latter approach; not having to worry about the String.Replace ever failing and not having any "magic" strings is worth it. But I will leave the decision to you as to which to implement - or maybe another approach I didn't mention; please comment if you have one, as I am always open to "better" or alternate approaches. Now that the WCF Service is fully integrated and RSS feeds have been added, as far as the end-user view goes there are but a few features remaining: caching, searching content and error pages. Stay tuned for Part 8 tomorrow.
A couple weeks back I needed to integrate a WordPress site with a C# WCF Service. Having only interfaced with the older "classic" ASP.NET Web Services (aka ASMX) nearly six years ago, I was curious whether there had been improvements to the SoapClient interface inside of PHP.

Largely it was exactly as I remembered going back to 2007. The one thing I really wanted to do with this integration was pass a serialized object up to the WCF Service from the PHP code - unfortunately, due to time constraints, this was not achievable; if anyone knows how and can post a snippet, I'd be curious. That being said, the following sample code passes simple data types as parameters to a WCF Operation Contract from PHP.

Given the following WCF Operation Contract definition:
[OperationContract] string CreateNewUser(string firstName, string lastName, string emailAddress, string password);
Those who have done some PHP in the past should be able to follow the code below. I should note, when doing your own WCF integration, the $params->variableName casing needs to match the actual WCF Service, and the returned object contains a property named after the Operation Contract plus a "Result" suffix (CreateNewUserResult below).

[php]
<?php
class WCFIntegration {
    const WCFService_URL = ""; // service WSDL URL omitted in the original post

    public function addUser($emailAddress, $firstName, $lastName, $password) {
        try {
            // Initialize the "standard" SOAP options
            $options = array('cache_wsdl' => WSDL_CACHE_NONE, 'encoding' => 'utf-8',
                'soap_version' => SOAP_1_1, 'exceptions' => true, 'trace' => true);

            // Create a connection to the WCF Service
            $client = new SoapClient(self::WCFService_URL, $options);

            if ($client == null) {
                throw new Exception('Could not connect to WCF Service');
            }

            // Set the WCF Service parameters based on the argument values
            $params = new stdClass();
            $params->emailAddress = $emailAddress;
            $params->firstName = $firstName;
            $params->lastName = $lastName;
            $params->password = $password;

            // Submit the $params object
            $result = $client->CreateNewUser($params);

            // The WCF Operation Contract returns "Success" upon a successful insertion
            if ($result->CreateNewUserResult === "Success") {
                return true;
            }

            throw new Exception($result->CreateNewUserResult);
        } catch (Exception $ex) {
            echo 'Error in WCF Service: '.$ex->getMessage().'<br/>';
            return false;
        }
    }
}
?>
[/php]

Then to actually utilize this class in your existing code:

[php]
$wcfClient = new WCFIntegration();
$result = $wcfClient->addUser('', 'John', 'Doe', 'password');

if (!$result) {
    echo 'Failed to add user.';
}
[/php]

Hopefully that helps someone out who might not be as proficient in PHP as they are in C# or vice-versa.
Back at MonoDroid development at work this week, I ran into a serious issue with a DataContract and an InvalidDataContractException. Upon logging into my app, instead of receiving the DataContract class object, I received this:

[caption id="attachment_1943" align="aligncenter" width="300"]MonoDroid InvalidDataContractException[/caption]

The obvious suspect would be to verify the class had a getter and setter - sure enough both were public. Digging a bit further, MonoTouch apparently had an issue at one point with not preserving all of the DataMembers in a DataContract, so I added the following attribute to my class object:

[DataContract, Android.Runtime.Preserve(AllMembers=true)]

Unfortunately it still threw the same exception. Diving into the forums on Xamarin's site and Bing, I could only find one other developer who had run into the same issue, and he had zero answers. Exhausting all avenues, I turned to checking different combinations of the Mono Android Options form in Visual Studio 2012, since I had a hunch it was related to the linker. After some more time I found the culprit - in my Debug configuration the Linking dropdown was set to "Sdk and User Assemblies".

[caption id="attachment_1944" align="aligncenter" width="300"]MonoDroid Linking Option in Visual Studio 2012[/caption]

As soon as I switched it back over to "Sdk Assemblies Only" I was back to receiving my DataContract. Though I should also note - DataContract objects are not treated in MonoDroid like they are in Windows Phone, MVC or any other .NET platform I've found. Null really doesn't mean null, so what I ended up doing was changing my logic to return an empty object instead of a null object, and both the Windows Phone and MonoDroid apps work perfectly off the same WCF proxy class.
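The empty-object workaround described above can be sketched like this — UserResult and LoginService are hypothetical names standing in for the real DataContract and service:

```csharp
using System;
using System.Diagnostics;

// Hypothetical DataContract returned by a login Operation Contract.
public class UserResult
{
    public int ID { get; set; }
    public string Name { get; set; }

    // Factory for the "empty" instance returned instead of null, keeping
    // MonoDroid and Windows Phone clients on the exact same code path.
    public static UserResult Empty()
    {
        return new UserResult { ID = 0, Name = String.Empty };
    }

    public bool IsEmpty()
    {
        return ID == 0;
    }
}

public static class LoginService
{
    public static UserResult Login(string userName)
    {
        // On failure, return the empty object rather than null so the
        // client never has to special-case a null DataContract.
        if (String.IsNullOrEmpty(userName))
        {
            return UserResult.Empty();
        }

        return new UserResult { ID = 1, Name = userName };
    }
}
```

Clients then branch on result.IsEmpty() instead of result == null, which behaves identically on both platforms.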
After attending a Windows Phone 8 Jumpstart at Chevy Chase, MD earlier today, I got asked about tips for developing cross-platform with as much code reuse as possible. Having been doing a Version 2 of a large platform-ubiquitous application since October, I've had some new thoughts since my August 2012 post, Cross-Platform Mobile Development and WCF Architecture Notes. Back then I was focused on using a TPL-enabled WCF Service to be hit by the various platforms (ASP.NET, Windows Phone, Android, iOS etc.). This approach has a couple problems for a platform that needs to support an ever-growing concurrent client base. The main problem is a single point of failure: if the WCF Service goes down, the entire platform goes. It also does not allow more than one WCF server to be involved in the application. The other problem is that while the business logic is hosted in the cloud or on a dedicated server with my August 2012 thought process, it doesn't share the actual WCF Service proxies or other common code.

What is an easy solution for this problem of scalability?

Take an existing WCF Service and implement a queuing system where possible. This way the client gets an instantaneous response, leaving the main WCF Service's resources to process the non-queueable Operation Contracts.

How would you go about doing this?

You could start out by writing a Windows Service to constantly monitor a set of SQL tables, XML files etc., depending on your situation. To visualize this:

[caption id="attachment_1895" align="aligncenter" width="300"]Queue Based Architecture (3/7/2013)[/caption]

In a recent project at work, in addition to a Windows Service, I added another database and another WCF Service to help distribute the work. The main idea: for each big operation that is typically a resource-intensive task, offload it to another service, with the option to move it to an entirely different server. A good point to make here is that the connection between WCF Services is done via binary, not JSON or XML.
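The queueing flow described above can be sketched with an in-memory queue standing in for the SQL tables the Windows Service would poll — QueuedOperation and OperationQueue are hypothetical names for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Stand-in for a work-item row the Windows Service would pick up.
public class QueuedOperation
{
    public int ID { get; set; }
    public string Payload { get; set; }
    public bool Processed { get; set; }
}

public class OperationQueue
{
    private readonly Queue<QueuedOperation> _pending = new Queue<QueuedOperation>();

    // Called by the WCF Service: enqueue and return immediately,
    // giving the client an instantaneous response.
    public void Enqueue(QueuedOperation op)
    {
        _pending.Enqueue(op);
    }

    // Called by the Windows Service on its polling interval:
    // drain whatever work has accumulated since the last pass.
    public int ProcessPending(Action<QueuedOperation> handler)
    {
        var processed = 0;

        while (_pending.Count > 0)
        {
            var op = _pending.Dequeue();
            handler(op);
            op.Processed = true;
            processed++;
        }

        return processed;
    }
}
```

In the real architecture the queue would live in SQL tables so multiple servers could share it; the in-memory version only illustrates the enqueue-now, process-later split.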

Increase your code sharing between platforms

Something that has become more and more important for me as I add more platforms to my employer's main application is code reuse. This has several advantages:
  1. Updates to one platform affect all - less work and fewer problems than having to remember to update every platform when an addition, change or fix occurs
  2. For a single developer team like myself, it is a huge time saving principle especially from a maintenance perspective

What can you do?

In the last couple of months there have been great new approaches to code reuse. A great way to start is to create a Portable Class Library, or PCL. PCLs can be used to create libraries compiled by Windows Phone 7/8, ASP.NET, MVC, WinForms, WPF, WCF, MonoDroid and many other platforms. All but MonoDroid is built in; however, I recently went through how to Create a Portable Class Library in MonoDroid. The best thing about PCLs: your code is entirely reusable, so you can share your WCF Service proxy(ies) and common code such as constants. The one thing to keep in mind is to follow the practice of not embedding your business, presentation and data layers in your applications.
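As a sketch of what typically lands in such a PCL — shared constants and the service surface the platform heads compile against; all names here are hypothetical:

```csharp
// Everything below is platform-neutral, so Windows Phone, MonoDroid,
// ASP.NET and WPF heads can all reference the same compiled assembly.

// Shared constants (hypothetical values).
public static class SharedConstants
{
    public const string SITE_NAME = "ExampleApp";
    public const string WCF_ENDPOINT = "http://localhost/Service.svc";
}

// The shared WCF proxy surface; each platform supplies or generates
// a channel that implements this interface.
public interface IAppService
{
    string GetContent(string pageName);
}
```

Because the PCL holds the interface and constants but no platform-specific code, a fix to the contract recompiles once and flows to every head.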
For the last 2.5 years I've been using the default MonoDevelop Web Reference in my MonoTouch applications for work, but it's come to the point where I really need and want to make use of the WCF features that I do in my Windows Phone applications. Even with the newly released iOS integration in Visual Studio 2012, you still have to generate the proxy class with the slsvcutil included with the Silverlight 3.0 SDK. If you're like me, you probably don't have Version 3.0 of the Silverlight SDK; you can get it from Microsoft here.

When running the tool you might get the following error:

Error: An error occurred in the tool. Error: Could not load file or assembly 'C:\Program Files (x86)\Microsoft Silverlight\5.1.10411.0\System.Runtime.Serialization.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Basically the tool is incorrectly trying to pull the newer 4.0 or 5.0 Silverlight assemblies. To make it easy, I created a config file to simply drop into your C:\Program Files (x86)\Microsoft SDKs\Silverlight\v3.0\Tools folder; you can download it here. From a command line (remember the shortcut: hold Shift and right-click in the folder to open a command prompt):

[caption id="attachment_1853" align="aligncenter" width="593"]Silverlight 3 WCF Proxy Generation[/caption]

Enter the following, assuming you want to create a proxy for a localhost WCF Service output to your c:\tmp folder:

SlSvcUtil.exe http://localhost/Service.svc?wsdl /noconfig /d:c:\tmp

Though I should note, this will generate Array collections, not List or ObservableCollection collections. If you want to generate your Operation Contracts with those collection return types, simply add for List collections:

/collectionType:System.Collections.Generic.List`1

or for ObservableCollection:

/collectionType:System.Collections.ObjectModel.ObservableCollection`1
As months and years go by, devices coming and going, I've seen (as most have) an increasing demand to provide a universal experience no matter what device you are on: mobile, desktop, laptop, tablet, website etc. This has driven a lot of companies to pursue ways to deliver that functionality efficiently, both from a monetary standpoint and a performance perspective. A common practice is to provide a Web Service, SOAP or WCF for instance, and then consume the functionality on the device/website. This provides a good layer between your NAS and database server(s) and your clients.

However, you don't want to provide the exact same view on every device. For instance, you're not going to want to edit 500 text fields on a 3.5" mobile screen, nor do you have the ability to upload non-isolated-storage documents on mobile devices (at least currently). This brings up a possible problem: do you have the same Operation Contract with a DataContract class object and then, based on the device that sent it, know server side what to expect? Or do you handle the translation on the most likely slower client-side CPU? For me, there are two possible solutions:
  1. Create another layer between the OperationContract and the server side classes to handle device translations
  2. Come up with something outside the box
Option #1 has pros and cons. It leaves the client side programming relatively the same across all platforms and leaves the work to the server side, so pushing out fixes would be relatively easy and would most likely affect all clients if written to use as much common code as possible. However, it does leave room for unintended consequences: forgetting to update all of the device-specific code and then having certain clients not get the functionality expected. Furthermore, devices evolve; for instance the iPhone 1-4S had a 3.5" screen while the iPhone 5 has a much larger 4" screen. Would this open the door to a closer-to-iPad/tablet experience? That of course depends on the application and customer base, but it is something to consider. And if it makes sense to have differing functionality passed to iPhone 5 users versus iPhone 4 users, there is more complexity in coding to specific platforms. A good route to solve those complexities, in my opinion, would be to create a Device Profile-like class based on the global functionality; then, when a request comes in to push or get data, the Factory classes in your Web Service would know what to do without having tons of if (Device == "IPHONE") conditionals. As more devices arrive, create a new profile server side and you'd be ready to go. Depending on your application this could be a very manageable path to take.

Option #2, thinking outside the box, is always interesting to me. I feel like many developers (I am guilty of this too) approach things based on previous experience and go through an iterative approach with each project. While this is a safer approach and I agree with it in most cases, I don't think developers can afford to think this way too much longer. Software being as interconnected with external APIs, web services, integrations (Facebook, Twitter etc.) and countless devices is vastly different from the 90s class library solution.
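To make the Device Profile idea concrete, here is a minimal sketch of what such a class might look like. The names (DeviceProfile, DeviceProfileCatalog, HasTouchScreen and so on) are my own invention for illustration, not anything from a real library:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical device profile: one instance per supported device family,
// so server-side factories can branch on capabilities instead of
// string comparisons like if (Device == "IPHONE").
public class DeviceProfile
{
    public string Name { get; set; }
    public bool HasTouchScreen { get; set; }
    public bool SupportsFileUpload { get; set; }
    public double ScreenDiagonalInches { get; set; }

    // Derived capability: larger screens may warrant a tablet-like view.
    public bool PrefersTabletLayout
    {
        get { return ScreenDiagonalInches >= 7.0; }
    }
}

public static class DeviceProfileCatalog
{
    private static readonly Dictionary<string, DeviceProfile> _profiles =
        new Dictionary<string, DeviceProfile>(StringComparer.OrdinalIgnoreCase)
    {
        { "IPHONE4", new DeviceProfile { Name = "IPHONE4", HasTouchScreen = true,
            SupportsFileUpload = false, ScreenDiagonalInches = 3.5 } },
        { "IPHONE5", new DeviceProfile { Name = "IPHONE5", HasTouchScreen = true,
            SupportsFileUpload = false, ScreenDiagonalInches = 4.0 } },
        { "DESKTOP", new DeviceProfile { Name = "DESKTOP", HasTouchScreen = false,
            SupportsFileUpload = true, ScreenDiagonalInches = 24.0 } }
    };

    // Fall back to a conservative default when a new device appears
    // before a profile has been added for it.
    public static DeviceProfile Lookup(string deviceName)
    {
        DeviceProfile profile;
        return _profiles.TryGetValue(deviceName ?? string.Empty, out profile)
            ? profile
            : _profiles["DESKTOP"];
    }
}
```

Adding support for a new device then becomes one more catalog entry rather than another conditional scattered through the factories.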
Building a robust, future-proof system is, to me, much more important than any single client. That being said, what could you do? In working with Windows Workflow Foundation last month and really breaking apart what software does at the most basic level, it really is simply:
  1. Client makes a request
  2. Server processes request (possibly making multiple requests of its own to databases or file storage)
  3. Return the data the Client expects (hopefully incorporating error handling in the return)
So how does this affect my thinking on architecting Web Services and client applications? I am leaning towards creating a generic interface for certain requests to get/set data between the server and client. This creates a single funnel to process and return data, thus eliminating duplicate code and making everything much more manageable. However, you're probably thinking about the overhead of translating a generic "GetObject" request into what the client is actually expecting. I definitely agree, and I don't think the idea should be taken literally, especially when considering the performance of server side resources and the amount of data transferring back and forth. What I am implying is doing something like this with your OperationContract definition:
[OperationContract]
Factory.Objects.Ticket GetTicket(ClientToken clientToken, int TokenID);
Your implementation:
public Factory.Objects.Ticket GetTicket(ClientToken clientToken, int TokenID) {
     return new Factory.TicketFactory(Token: clientToken).GetObject<Factory.Objects.Ticket>(ID: TokenID);
}
Then in your Factory class:
public interface IFactory<T> where T : FactoryObject {
     // The ClientToken is passed to the implementing class's constructor,
     // e.g. new TicketFactory(Token: clientToken)
     T GetObject(int ID);
     FactoryResult AddObject(T item);
}
Then implement that Factory pattern for each object class. I should note that the Device Profile layer could be implemented at the Factory constructor level: simply pass the device type inside the ClientToken object, then do a simple check against the Token class, for instance:
public FactoryResult AddObject(Ticket item) {
     if (Token.Device.HasTouchScreen) {
          // do touch screen specific stuff
     }
     return new FactoryResult();
}
You could also simply store Device Profile data in a database, text file or XML file and then cache it server side. Obviously this is not a solution for all applications, but it has been successful in my implementations. Comments, suggestions, improvements, please let me know below.
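Putting the pieces together, a minimal implementation of the pattern for the Ticket object might look like the following; FactoryObject, FactoryResult, ClientToken and the storage call are simplified stand-ins for whatever your service actually uses:

```csharp
using System;
using System.Collections.Concurrent;

// Simplified stand-ins for the types referenced in the post.
public abstract class FactoryObject { public int ID; }
public class Ticket : FactoryObject { public string Subject; }
public class FactoryResult { public bool Success; }
public class ClientToken { public string DeviceName; }

public class TicketFactory
{
    // Cached server side so repeated requests skip the backing store.
    private static readonly ConcurrentDictionary<int, Ticket> _cache =
        new ConcurrentDictionary<int, Ticket>();

    private readonly ClientToken _token;

    public TicketFactory(ClientToken Token) { _token = Token; }

    public Ticket GetObject(int ID)
    {
        // Fall through to the database/file store on a cache miss.
        return _cache.GetOrAdd(ID, id => LoadFromStore(id));
    }

    public FactoryResult AddObject(Ticket item)
    {
        _cache[item.ID] = item;
        return new FactoryResult { Success = true };
    }

    // Stand-in for the real data access layer.
    private Ticket LoadFromStore(int id)
    {
        return new Ticket { ID = id, Subject = "(loaded from store)" };
    }
}
```

Device-specific behavior would then hang off _token (for example, checking a profile looked up by _token.DeviceName) inside GetObject/AddObject.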
After attending the "What's New in WCF 4.5" session at Microsoft's TechEd North America 2012, I got fascinated by the opportunities of using WebSockets in my next projects, both at work and in my personal projects. The major prerequisite for WebSockets is to be running on Windows 8 or Server 2012, specifically IIS 8.0. In Windows 8, go to Programs and Features (Windows Key + X to bring up the shortcut menu, then click Control Panel) and click "Turn Windows features on or off" as shown in the screenshot below: [caption id="attachment_1416" align="aligncenter" width="300"] Turn Windows features on or off[/caption] Then expand Internet Information Services and make sure WebSocket Protocol is checked as shown in the screenshot below: [caption id="attachment_1415" align="aligncenter" width="300"] Windows 8 Windows Feature to enable WebSocket Development[/caption] After getting my system up and running, I figured I would convert something I had done previously in a WPF application a few years back: monitoring an Exchange 2010 mailbox and then parsing the EmailMessage object. The difference for this test is that I am simply going to kick the email back to a console application. Jumping right into the code, my ITestService.cs source file:
[ServiceContract]
public interface ITestCallBackService {
     [OperationContract(IsOneWay = true)]
     Task getEmail(TestService.EMAIL_MESSAGE email);
}

[ServiceContract(CallbackContract = typeof(ITestCallBackService))]
public interface ITestService {
     [OperationContract(IsOneWay = true)]
     Task MonitorEmailBox();
}
My TestService.svc.cs source file:
[Serializable]
public struct EMAIL_MESSAGE {
     public string Body;
     public string Subject;
     public string Sender;
     public bool HasAttachment;
}

public async Task MonitorEmailBox() {
     var callback = OperationContext.Current.GetCallbackChannel<ITestCallBackService>();

     var service = new Microsoft.Exchange.WebServices.Data.ExchangeService(Microsoft.Exchange.WebServices.Data.ExchangeVersion.Exchange2010_SP1);
     service.Credentials = new NetworkCredential(ConfigurationManager.AppSettings["ExchangeUsername"], ConfigurationManager.AppSettings["ExchangePassword"], ConfigurationManager.AppSettings["ExchangeDomain"]);
     service.Url = new Uri(ConfigurationManager.AppSettings["ExchangeWSAddress"]);

     var view = new Microsoft.Exchange.WebServices.Data.ItemView(100);
     var sf = new Microsoft.Exchange.WebServices.Data.SearchFilter.IsEqualTo(Microsoft.Exchange.WebServices.Data.EmailMessageSchema.IsRead, false);

     while (((IChannel)callback).State == CommunicationState.Opened) {
          var fiItems = service.FindItems(Microsoft.Exchange.WebServices.Data.WellKnownFolderName.Inbox, sf, view);
          if (fiItems.Items.Count > 0) {
               service.LoadPropertiesForItems(fiItems, new Microsoft.Exchange.WebServices.Data.PropertySet(Microsoft.Exchange.WebServices.Data.ItemSchema.HasAttachments, Microsoft.Exchange.WebServices.Data.ItemSchema.Attachments));
               foreach (Microsoft.Exchange.WebServices.Data.Item item in fiItems) {
                    if (item is Microsoft.Exchange.WebServices.Data.EmailMessage) {
                         var eMessage = item as Microsoft.Exchange.WebServices.Data.EmailMessage;
                         eMessage.IsRead = true;
                         eMessage.Update(Microsoft.Exchange.WebServices.Data.ConflictResolutionMode.AlwaysOverwrite);

                         var emailMessage = new EMAIL_MESSAGE();
                         emailMessage.HasAttachment = eMessage.HasAttachments;
                         emailMessage.Body = eMessage.Body.Text;
                         emailMessage.Sender = eMessage.Sender.Address;
                         emailMessage.Subject = eMessage.Subject;
                         await callback.getEmail(emailMessage);
                    }
               }
          }
     }
}
The only additions to the stock Web.config are the protocolMapping entries, but here is my full web.config:
<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
  </system.web>
  <system.serviceModel>
    <protocolMapping>
      <add scheme="http" binding="netHttpBinding"/>
      <add scheme="https" binding="netHttpsBinding"/>
    </protocolMapping>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
          <serviceDebug includeExceptionDetailInFaults="false"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
  </system.serviceModel>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
    <directoryBrowse enabled="true"/>
  </system.webServer>
</configuration>
I then created my test Windows Console application, added a reference to the WCF Service created above, here's my program.cs:
static void Main(string[] args) {
     InstanceContext context = new InstanceContext(new CallbackHandler());
     using (WSReference.TestServiceClient tsClient = new WSReference.TestServiceClient(context)) {
          tsClient.MonitorEmailBox();
          Console.ReadLine();
     }
}

private class CallbackHandler : WSReference.ITestServiceCallback {
     public void getEmail(WSReference.TestServiceEMAIL_MESSAGE email) {
          Console.WriteLine("From: " + email.Sender + "\nSubject: " + email.Subject + "\nBody: " + email.Body + "\nAttachments: " + (email.HasAttachment ? "Yes" : "No"));
     }
}
Nothing fancy, but I think it provides a good starting point for using WebSockets in combination with the new async keyword. For me it is like taking a WinForm, ASP.NET or WP7 event handler across the Internet or intranet. This brings me one step further towards truly moving away from keeping code inside a DLL or some other project and towards keeping it in Azure or hosted on an IIS server somewhere for all of my projects to consume: one source of logic to maintain, bugs generally living in the WCF Service itself (at least in my experience using a WCF Service like a Web DLL over the last 8 months), and no need to remember to recompile and redeploy every project that uses your common code/framework.
I've been banging my head against a recent Azure WCF Service I've been working on to connect to my new Windows Phone 7 project. To my surprise it worked flawlessly, or so it seemed. Going to use the WCF Service, I noticed the proxy hadn't been generated. Sure enough, the Reference.cs was empty:
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.17626
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------
Vaguely remembering that this was fixed with a simple checkbox, I went to the Configure Service section and changed the options circled in red: [caption id="attachment_1370" align="aligncenter" width="300"] WCF Configure Reference[/caption] Note: you only need to uncheck "Reuse types in referenced assemblies"; I just prefer generic List objects over an ObservableCollection.
Came across an interesting problem yesterday: I had been using HTTP for my WCF mobile service, and a potential client wanted it encrypted. Rather than write custom encryption, I simply applied an SSL certificate to the IIS application and created an HTTPS binding in IIS. Afterwards I knew I had to update my Endpoint and Behavior to use HTTPS and temporarily allow metadata so Visual Studio could update my Service Reference. This is where everything went downhill. The Service Configuration file in my WP7 application got wiped and the actual Reference.cs was virtually empty. After trial and error for several hours I ended up doing the following:
  1. Added a new WCF Reference using a brand new name
  2. Saved the Project/Solution
  3. Exited Visual Studio 2010
  4. Removed the new Reference
  5. Added the original WCF Reference using the original name
After all that it worked, so hopefully that saves someone hours of headache.
Last night, while working on my Silicon Graphics Origin 300 and suffering with an old version of Mozilla circa 2005, as seen below: [caption id="attachment_896" align="aligncenter" width="300" caption="Mozilla 1.7.12 on IRIX"][/caption] I started wondering: these machines can function as a web server, MySQL server, firewall etc, especially my Quad R14k Origin 300, yet web browsing is seriously lacking on them. Firefox 2 is available over at nekoware, but that is painfully slow. Granted, I don't use my Origin for web browsing, but when I was using an R12k 400mhz Octane as my primary machine a few years ago, as I am sure others around the world are doing, it was painful. I don't think this problem is limited to EOL'd Silicon Graphics machines; it applies to any older piece of hardware that does everything decently except web browsing. Thinking back to the Amazon Silk platform: using less powerful hardware but a brilliant software platform, Amazon is able to deliver more with less. The problem arises for the rest of the market because of the diversity of the PC/workstation market. The way I see it, you've got 2 approaches to a "universal" cloud web renderer. You could either:
  1. Write a custom lightweight browser tied to an external WCF/Soap Web Service
  2. Write a packet filter inspector for each platform to intercept requests and return them from a WCF/Soap service either through Firefox Extensions or a lower level implementation, almost like a mini-proxy
Plan A has major problems because you've got various incarnations of Linux, IRIX, Solaris, VMS, Windows etc, all with various levels of Java and .NET/Mono support (if any), so a Java or .NET/Mono implementation is probably not the right choice. Thus you're left trying to make a portable C/C++ application. To cut down on work, I'd probably use a platform-independent library like gSOAP to handle the web service calls. Either way the amount of work would be considerable. Plan B I've never attempted before, but I would imagine it would be a lot less work than Plan A. I spent 2 hours this morning playing around with a WCF service and a WPF application doing something like Plan A. [caption id="attachment_897" align="aligncenter" width="300" caption="jcW3CLOUD in action"][/caption] But instead of writing my own browser, I simply used the WebBrowser control, which is just Internet Explorer. The Web Service itself is simply:
public JCW3CLOUDPage renderPage(string URL) {
     using (WebClient wc = new WebClient()) {
          JCW3CLOUDPage page = new JCW3CLOUDPage();
          if (!URL.StartsWith("http://")) {
               URL = "http://" + URL;
          }
          page.HTML = wc.DownloadString(URL);
          return page;
     }
}
It simply makes a web request based on the URL from the client, converts the HTML page to a String object and I pass it into a JCW3CLOUDPage object (which would also contain images, although I did not implement image support). Client side (ignoring the WPF UI code):
private JCW3CLOUDReference.JCW3CLOUDClient _client = new JCW3CLOUDReference.JCW3CLOUDClient();

var page = _client.renderPage(url);
int request = getUniqueID();
StreamWriter sw = new StreamWriter(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html");
sw.Write(page.HTML);
sw.Close();
wbMain.Navigate(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html");
It simply makes the WCF request based on the URL, takes the returned HTML and writes it to a temporary HTML file for the WebBrowser control to read from. Nothing special; you'd probably want to add handling for specific pages, images and caching, but this was as far as I wanted to take it. Hopefully it'll help someone get started on something cool. Note that it does not handle requests made from within the WebBrowser control, so you would need to override those as well; otherwise only the initial request would be returned from the "Cloud", and subsequent requests would be made normally. This project would be way too much for me to handle alone, but it did bring up some interesting thoughts:
  1. Handling Cloud based rendering, would keeping images/css/etc stored locally and doing modified date checks on every request be faster than simply pulling down each request fully?
  2. Would the extra costs incurred to the 3G/4G providers make it worthwhile?
  3. Would zipping content and unzipping it outweigh the processing time on both ends (especially if there was very limited space on the client)?
  4. Is there really a need/want for such a product? Who would fund such a project, would it be open source?
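On the point about intercepting the WebBrowser control's own requests, a sketch of that override might look like the following. wbMain, _client and getUniqueID are the names from the snippet above; the cancel-then-reroute approach via the WPF Navigating event is just one way to do it:

```csharp
using System;
using System.IO;
using System.Windows.Navigation;

// Decide which navigations to reroute through the cloud renderer:
// only real web requests; our own local temp files pass through,
// otherwise we would loop forever re-rendering our own output.
static bool ShouldIntercept(Uri uri)
{
    return uri != null && !uri.IsFile;
}

// Hook the control's Navigating event so every link click is cancelled
// and re-issued through the WCF "cloud" renderer instead.
void wbMain_Navigating(object sender, NavigatingCancelEventArgs e)
{
    if (!ShouldIntercept(e.Uri))
    {
        return;
    }

    e.Cancel = true;
    var page = _client.renderPage(e.Uri.ToString());

    int request = getUniqueID();
    string tempFile = AppDomain.CurrentDomain.BaseDirectory + request + ".html";
    File.WriteAllText(tempFile, page.HTML);
    wbMain.Navigate(tempFile);
}
```

A real implementation would also need to rewrite relative links inside the returned HTML, since they would otherwise resolve against the local temp file rather than the original site.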
After playing around with Google Charts and doing some extensive C#/SQL integration with it for a dashboard last summer, I figured I'd give Telerik's Kendo a shot. If you're not familiar with Telerik, they produce very useful controls for WinForms, WPF, WP7 and ASP.NET (in addition to many others). If you do .NET programming, their products will save you time and money, guaranteed. That being said, I started work on the first module for jcDAL last night and wanted to add some cool bar graphs to the web interface for the analyzer. After about 15 minutes of reading through one of their examples, I had data coming over a WCF service into the Kendo API to display this: [caption id="attachment_880" align="aligncenter" width="621" caption="jcDBAnalyzer Screengrab showcasing Kendo"][/caption] So far so good; I'll report back with any issues, but so far I am very pleased. A lot of the headaches I had with Google Charts I haven't had yet (+1 for Telerik).
Found a bug in either Mono or WCF, not sure which. If you add a Web Reference in MonoDevelop (2.8.5) and have an Entity Model object as a parameter, some of the properties will arrive null. Debugging had me scratching my head, as the object was populated prior to sending it to the WCF Service. I hadn't tried it as a native WCF Service in MonoDevelop because I've had prior problems with that route. The solution I came up with was to create a serializable object in your WCF Service that wraps the object, like so:

[Serializable]
public class BenchResult {
     public string CPUName;
     ....
}

Then in your application your reference will contain the object you defined (in my case a BenchResult object), and upon populating it and passing it to your WCF Service all of the data will come through. Nearly 2 hours went down the drain on that issue; hopefully it helps someone else.
It's been a very long time since I released something, so without further ado, I present jcBENCH. It's a floating point CPU benchmark. Down the road I hope to add further tests; this was just something I wrote on the airplane coming back from San Francisco. You'll need .NET 4 installed; if you don't have it, click here, or you can get it from Windows Update. Click here to download the latest version. I'll make an installer in a few days; for now, just unzip it somewhere on your machine and run it. Upon running it, the results get automatically uploaded to my server; a results page will be created shortly.
Just getting started on a WCF Service that is going to handle all of the business logic for the company I work for. With the amount of data involved, caching was a necessity. I was playing around with Membase last week but couldn't quite get it to work properly, so yesterday afternoon I started with AppFabric, Microsoft's own answer to caching (among other things). It installs pretty easily and the built-in IIS extensions are very cool. However, the setup isn't for the faint of heart. To sum it up:
  1. Install AppFabric on your SQL Server with all of the options
  2. Then install AppFabric on your Web Server and create your WCF and ASP.NET sites
  3. From the PowerShell Cache console type: New-Cache <CacheName> (where <CacheName> is the name you want to call it; for example: New-Cache RandomTexels)
  4. Verify it got created with: Get-Cache. If you don't see it or get an error, make sure the AppFabric Caching Service is running: open the Run window (Windows Key + R) and type services.msc. It should be one of the top items depending on your setup.
  5. After configuring your other server for .NET 4 and the usual web site permissions and settings in IIS, run this command: Grant-CacheAllowedClientAccount DOMAIN\WEBSERVER$. Replace DOMAIN with your domain name (e.g. MOJO) and WEBSERVER with the physical name of your web server. So for instance: Grant-CacheAllowedClientAccount MOJO\BIGWS$ for a domain called MOJO and a web server called BIGWS.
There's plenty of code examples of it out there, but as I create the architecture for this WCF I'll discuss my findings.
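As a starting point, the client-side usage is small once the cache is provisioned. A rough sketch, assuming the cache created above was named RandomTexels and that the dataCacheClient config section points at your cache host; the CacheKey helper is my own convention, not part of the AppFabric API:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheExample
{
    // A consistent key scheme keeps entries for different entity
    // types from colliding within the same named cache.
    public static string CacheKey(string entityType, int id)
    {
        return entityType + ":" + id;
    }

    public static void Run()
    {
        // Reads host/port settings from the dataCacheClient config section.
        // DataCacheFactory instances are expensive; create one and reuse it.
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetCache("RandomTexels");

        string key = CacheKey("Ticket", 42);

        // Put/Get round trip; Get returns null on a cache miss.
        cache.Put(key, "some expensive-to-build result");
        var value = (string)cache.Get(key);
        Console.WriteLine(value);
    }
}
```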