
Another day, another updated port of jcBENCH, this time for x86/Linux. There are no dependencies; just download and run.

You can download the x86-linux-0.8.755.0507 release here.

[bash]
jcBENCH 0.8.755.0507 (x86/Linux Edition)
(C) 2012-2014 Jarred Capellman

CPU Information
---------------------
Manufacturer: AuthenticAMD
Model: AMD Phenom(tm) II X2 545 Processor
Count: 2x2999.977mhz
Architecture: x86
---------------------
Running Benchmark....
Integer: 15.8167 seconds
Floating Point: 22.9469 seconds
[/bash]

To recap, the following ports still need to be updated:
-x86/MacOS X
-arm/Windows Phone 8
-Windows Store

Hoping to get more of these ports knocked out in the coming days, so as always, check back here. In addition, I'm hoping to get an official page up and running with all of the releases conveniently located in one place, instead of the current situation. No ETA on that project, however.

Brief Introduction

The Silicon Graphics Indy workstation was originally released in late 1993, starting at $4,995. For that price you received a diskless 100 MHz R4000 Indy with 32 MB of RAM, the base 8-bit graphics card and a 15” monitor. A more reasonable configuration: 64 MB of RAM, a 1 GB hard drive, a floptical drive, the 24-bit XL graphics card and an external CD-ROM, was $23,695 at launch. I should note, standard features on the Indy included an ISDN modem, 10BaseT Ethernet, a four-channel stereo sound card and composite & S-Video input, pretty advanced for the time, especially compared to the Apple Mac Quadra.

Silicon Graphics Indy - Front
Silicon Graphics Indy - Back

My Story - Initial Hardware Setup

I actually received my Indy way back in May 2012 for a whopping $37 shipped, with no hard drive and no memory, but with the R4400SC 150 MHz CPU and 8-bit graphics. The SC in R4400SC stands for Secondary Cache; the R4000PC and R4600PC models you will commonly find on eBay lack the L2 cache.

Silicon Graphics Indy - Boot Menu
Silicon Graphics Indy - hinv
Luckily the Indy takes 72-pin FPM memory that was pretty standard back in 1993 when it was first released, which made finding compatible working RAM on eBay much easier. The Indy has 8 slots and supports up to 256 MB of memory (8x32 MB), which I was able to find for under $10.

Knowing I would be using this at some point for at least some vintage SGI Doom, I also picked up the 24-bit XL Graphics option for $20 later in May 2012, hoping I would eventually get the R5000 180 MHz CPU (more on this later).

Fast forward to January 23rd, 2014: I was cleaning up my office after work, noticed the Indy sitting on top of my Prism and decided to invest the time in getting it up and running.

Little did I know, all of the spare Ultra 160 and Ultra 320 SCSI drives I had laying around were either dead or not backwards compatible with the SCSI-2 bus the Indy utilizes (I didn't realize some manufacturers dropped SCSI-2 support in the U160/U320 era). Luckily, I had just purchased several Maxtor ATLAS 15K II 73 GB U320 drives (Model #8E073L0) for use in my Fuel, Tezro, Origin 300s and Origin 350.
Maxtor ATLAS 15k II Ultra 320 Drive
Realizing it was a long shot, I put one of them in the Indy (with an SCA->50-pin adapter I got off eBay for $2) and the Indy recognized it without any problems. Granted, the SCSI-2 bus's 10 MB/sec ceiling limits the drive's inbound and outbound bandwidth (I had previously benchmarked the drive at around 95 MB/sec of actual transfer speed), but the fluid dynamic bearing motor (virtually silent), 3.5 ms access time and fast internal transfers far outweigh trying to find an "original" SCSI-2 drive, which I might add often goes for $40+ for a Seagate 5400 RPM 2 GB drive. I should note the Seagate Cheetah 15k.3 18 GB U320 (Model #ST318453LC) and Fujitsu 18 GB U160 (Model #MAJ3182MC) drives did not downgrade to SCSI-2.

I should note, my Indy randomly refused to boot (no power to even the power supply fan). Apparently this was a common problem with the initial power supplies from Nidec. The later Sony models didn't have this problem, but instead didn't run the fan 100% of the time, only spinning it up when temperatures hit a high point. Some folks have modified their Sony power supplies to keep the fan on all the time; I did not, as I only really use the Indy in the basement, where the hottest it gets is about 69°F.

Silicon Graphics Indy - Nidec
A solution I found was to disconnect the power cable for a good 20-30 minutes and then try again. Nine times out of ten this worked and the system had zero issues booting into IRIX and staying up. So before you run out to buy a "new" power supply off eBay, try this solution out. These are 20-21 year old machines, after all.

My Story – Getting IRIX Installed

Having installed IRIX now on a Fuel, Tezro and an Origin 300, I am well versed in the process. For those who are not, check out this how-to guide; it is very detailed and should be all you need to get going. This assumes you have IRIX 6.5.x media. Depending on which workstation/server you have, you might need a higher version. 6.5.30 is the latest version of IRIX released; however, those discs typically go for $300+ on eBay. I highly suggest simply getting some version of 6.5 and downloading the 6.5.22m tar files from SGI via their Supportfolio (free after registration).

In my case, the Indy is so old that any version of 6.5 is acceptable, though I wanted to get it to 6.5.22m so I could utilize nekoware. Nekoware is a community project where several contributors compile typical open source software, like bash, Apache, MySQL, PHP etc., for MIPS/IRIX. You can download the tardist files here (a tardist is similar to an rpm, if you're coming from a Linux background).

I should note, if you are installing from an older IRIX 6.5.x release (prior to 6.5.22m), you need to install Patch 5086 (available via the Supportfolio for free) prior to the upgrade.
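The 6.5.22m downloads from the Supportfolio arrive as plain tar files, so before inst can see them they need to be unpacked into a directory you can point it at. A minimal sketch; the paths and file names here are hypothetical, substitute wherever you saved the downloads:

```shell
# Extract each downloaded overlay tarball into one dist tree that
# inst (or swmgr) can be pointed at. Paths/file names are hypothetical.
mkdir -p /tmp/dist/6.5.22m
cd /tmp/dist/6.5.22m
for f in /tmp/downloads/6.5.22m-part*.tar; do
    [ -f "$f" ] || continue    # glob matched nothing; skip
    tar -xf "$f"
done
```

From there you can open the extracted tree as a distribution source in inst alongside Patch 5086 and your 6.5 media.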

Another question that might arise, especially for those installing to an Indy: what do you install from? I pulled the DVD-ROM drive (SCSI-2 50-pin) out of my Silicon Graphics Fuel to install IRIX. For newer systems that utilize SCA, like an Octane or Origin 3x0, you could use a 50-pin -> SCA adapter with the drive, or do as I did with my Origin 300 a while back and set up a VM of DINA, which allows you to install IRIX to your hardware over the network.

After installation, before continuing, I highly recommend you clone your drive and keep the clone locked away in case you ever accidentally mess up your installation or your hard drive dies; depending on the speed of your system, you could have just invested several hours of time. Cloning a disk is very easy in IRIX, simply follow this guide. I had done it myself previously on my Origin 300 and it only took about 10 minutes; for my Indy, for those curious, it took about 20 minutes.
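The clone itself boils down to a raw block copy between two identically sized disks. A minimal sketch, assuming the system disk is SCSI ID 1 and the freshly partitioned target is ID 2; the IRIX-style dks0d?vol device names are illustrative, so verify yours with hinv before copying anything:

```shell
# Raw whole-volume copy from the system disk (SCSI ID 1) to the
# clone (SCSI ID 2). Device names are illustrative -- check hinv.
SRC=/dev/rdsk/dks0d1vol
DST=/dev/rdsk/dks0d2vol
if [ -e "$SRC" ]; then
    # a larger block size cuts down on per-call overhead
    dd if="$SRC" of="$DST" bs=64k
else
    echo "no $SRC here -- run this on the IRIX box itself"
fi
```

Once dd finishes, the clone can go straight on the shelf as a bootable spare.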

My Story – Post IRIX Installation

After installation, my first step was to get nekoware installed. I typically install the following (including their dependencies):
  • BASH
  • OpenSSH
  • wget
  • SDL
  • Firefox
  • GCC
  • Nedit
  • rdesktop
  • Samba
  • Subversion
There are countless others, but those are the essentials that I utilize very frequently. Depending on your system, the installation (especially of GCC) could take some time, so be patient. Something to note: if you have an R4x00 CPU you need to utilize the MIPS3 tardists, while if you have an R5000 Indy you can use the MIPS4 variations. At some point contributions to nekoware for MIPS3 seem to have trickled off, so you'll more than likely be compiling most things from source. As I compile things I'll start contributing to my local repository as well.
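Installing a tardist is straightforward once you know it is an ordinary tar archive of inst image files. A sketch with an illustrative package name and paths (not an exact nekoware file):

```shell
# A tardist is an ordinary tar archive of inst image files, so you
# can unpack it and point inst at the result. Package name and
# paths are illustrative.
PKG=/tmp/downloads/neko_bash.tardist
mkdir -p /tmp/neko_bash
cd /tmp/neko_bash
if [ -f "$PKG" ]; then
    tar -xf "$PKG"
    # then, as root:  inst -f /tmp/neko_bash
    # ...and within inst: list, install, go
else
    echo "download a tardist to $PKG first"
fi
```

Software Manager (swmgr) will also open a tardist directly if you prefer the GUI route.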

What’s Next?

I’ve been on the hunt for an R5000 180 MHz CPU, or at the least the 150 MHz variant, so I can be close to the highest-end Indy available.

As for its use, I plan to start on the MODEXngine project now that I have a pretty clear multiplatform architecture.

In addition, I want to use it as a test bed for writing efficient C++. In a future blog post I will focus on the laziness of programmers (not all, but many); I feel the commodity PC hardware of today has gotten so fast and cheap that programmers don't consider efficiency or write with performance in mind.
With the ever-changing software development world I (and every other developer) live in, I've found it increasingly harder to keep up with every toolset and every framework, let alone language. In the last few years I have attempted to buckle down and focus on what I enjoy the most: C#. Over the last 6+ years of C# development I've inherited or developed ASP.NET WebForms 1.1 to 4.5 web applications, MVC4, 3.5 to 4.5 WinForms desktop applications, Windows Workflow, custom SharePoint 2010 parts, WPF, iOS, Android, Windows Phone 7/8, web services in WCF and Web API, and most recently dove into Windows 8 development. Suffice it to say, I've been around the block in regards to C#.

About 6 months ago, I started doing some early research in preparation for a very large project at work that relied more on mathematical/statistical operations than the traditional "make stuff happen" work I am used to. Keeping an open, out-of-the-box mentality, I happened to be in the Book Buyers Inc. bookstore in downtown Mountain View, California on vacation and picked up Professional F# 2.0 used for a few dollars. Knowing F# was already on version 3, I figured it would provide a great introduction to the language, and then I would advance my skills through MSDN and future books. I pored over the book on the overly long flight from San Francisco International to Baltimore-Washington International, using my laptop the entire flight back to write quick snippets that I could easily port back and forth between C# and F# to see the benefits and best use cases for F#. When I returned home, I found myself wanting more, and as fate would have it, shortly afterwards SyncFusion offered the F# Succinctly e-book by Robert Pickering for free.

Eager to read the e-book after my introduction to F#, I ended up finishing it over a short weekend. The e-book, while much shorter than the paperback I purchased, provided a great introduction and solidified many of the concepts I was still cementing in my mind. Like other developers, I am sure, when investing time in a new technology or language you want some guarantee of its success and worthiness of your time (especially if it is coming out of your precious off hours). Be happy to know the author chose to include real-world quotes and links to successes with F# over traditional C# implementations. I should note that while the author does not assume Visual Basic or C# experience, having it definitely helps; still, I feel the book provides in-depth enough explanations and easy-to-follow examples for anyone with some higher-level programming experience to grasp the main concepts and build a solid foundation to grow from.

Another element of the e-book I personally enjoyed was the intuitive and easy-to-follow progression the author chose. Early on he offers an introduction to F#, then dives into the fundamentals before providing real use cases that a professional software developer would appreciate. Several books provide an introductory chapter only to spend the next half of the book on reference-manual text or snippets without real-world applicability, or even a component of one.

If there was one thing I wished for in the e-book, it would be for it to be longer, or for a part 2 to be written. This "sequel" would build on the concepts provided, assume a solid foundation in F#, and dive into more real-world scenarios where F# is beneficial over C# or other higher-level programming languages: essentially a "best practices" guide for the C#/F# programmer.

On a related note, during my own investigations into F# I found the Microsoft Try F# site to be of great assistance.

In conclusion, definitely check out the F# Succinctly e-book (and the others) in SyncFusion's ever-growing library of free e-books.
As months and years go by and devices come and go, I've seen (as most have) an increasing demand to provide a universal experience no matter what device you are on: mobile, desktop, laptop, tablet, website etc. This has driven a lot of companies to pursue ways to deliver that functionality efficiently, both from a monetary standpoint and a performance perspective. A common practice is to provide a web service, SOAP or WCF for instance, and then consume the functionality on the device/website. This provides a good layer between your NAS & database server(s) and your clients. However, you don't want to provide the exact same view on every device. For instance, you're not going to want to edit 500 text fields on a 3.5" mobile screen, nor do you have the ability to upload non-isolated-storage documents on mobile devices (at least currently). This brings up a possible problem: do you have the same OperationContract with a DataContract class object and then, based on the device that sent it, know server side what to expect? Or do you handle the translation on the most likely slower client-side CPU? For me, there are two possible solutions:
  1. Create another layer between the OperationContract and the server side classes to handle device translations
  2. Come up with something outside the box
Option #1 has pros and cons. It leaves the client-side programming relatively the same across all platforms and leaves the work to the server side, so pushing out fixes would be relatively easy and, if written to use as much common code as possible, would affect all clients. However, it does leave room for unintended consequences: forgetting to update all of the device-specific code, for instance, and then having certain clients not get the functionality expected. Furthermore, devices evolve; the iPhone 1-4S had a 3.5" screen while the iPhone 5 has a much larger 4" screen. Would this open the door to a closer-to-iPad/tablet experience? This of course depends on the application and customer base, but it's something to consider. And if it makes sense to pass differing functionality to iPhone 5 users versus iPhone 4 users, there is more complexity in coding to specific platforms. A good route to solve those complexities, in my opinion, would be to create a Device Profile-like class based on the global functionality; then, when a request to push or get data arrives, the Factory classes in your web service would know what to do without tons of if (Device == "IPHONE") conditionals. As new devices arrive, you create a new profile server side and you're ready to go. Depending on your application, this could be a very manageable path.

Option #2, thinking outside the box, is always interesting to me. I feel like many developers (I am guilty of this too) approach things based on previous experience and take an iterative approach with each project. While this is the safer approach and I agree with it in most cases, I don't think developers can afford to think this way too much longer. Software, being as interconnected as it is with external APIs, web services, integrations (Facebook, Twitter etc.) and countless devices, is vastly different from the 90s class-library solution.
Building a robust, future-proof system is, to me, much more important than the client itself. That being said, what could you do? Working with Windows Workflow Foundation last month and really breaking apart what software does at the most basic level, it really is simply:
  1. Client makes a request
  2. Server processes request (possibly making multiple requests of its own to databases or file storage)
  3. Return the data the Client expects (hopefully incorporating error handling in the return)
So how does this affect my thinking on architecting web services and client applications? I am leaning towards creating a generic interface for certain requests to get/set data between the server and client. This creates a single funnel to process and return data, eliminating duplicate code and greatly improving manageability. However, you're probably thinking about the overhead of translating a generic request like "GetObject" into what the client is actually expecting. I definitely agree, and I don't think it should be taken literally, especially when considering both server-side resources and the amount of data transferring back and forth. What I am implying is doing something like this with your OperationContract definition:

[csharp]
[OperationContract]
Factory.Objects.Ticket GetTicket(ClientToken clientToken, int ticketID);
[/csharp]

Your implementation:

[csharp]
public Factory.Objects.Ticket GetTicket(ClientToken clientToken, int ticketID)
{
    return new Factory.TicketFactory(token: clientToken).GetObject(id: ticketID);
}
[/csharp]

Then in your Factory class (note that a C# interface cannot declare a constructor, so the ClientToken is passed in through the implementing class's constructor instead):

[csharp]
public interface IFactory<T> where T : FactoryObject
{
    T GetObject(int id);
    FactoryResult AddObject(T item);
}
[/csharp]

Then implement that factory pattern for each object class. I should note implementing a Device Profile layer could be done at the factory constructor level: simply pass the device type inside the ClientToken object, then check the token, for instance:

[csharp]
public FactoryResult AddObject(Ticket item)
{
    if (Token.Device.HasTouchScreen)
    {
        // do touch screen specific stuff
    }

    return new FactoryResult();
}
[/csharp]

You could also store Device Profile data in a database, text file or XML file and then cache it server side. Obviously this is not a solution for all applications, but it has been successful in my implementations.
Comments, suggestions, improvements, please let me know below.