
In working on my Silicon Graphics Indy (as detailed in these posts: the R5000 upgrade and the initial post), I realized I had never built a jcBENCH MIPS3/IRIX release and hadn't released a revamped MIPS4/IRIX build from the cross-platform code I worked on a few weeks ago.

Without further ado:

MIPS3 Release:
jcBench-0.8.751.0208.irix_mips3.cmdl.gz

MIPS4 Release:
jcBench-0.8.751.0208.irix_mips4.cmdl.gz

More to come on jcBENCH in the coming days.

Brief Introduction

The Silicon Graphics Indy workstation was originally released in late 1993 starting at $4,995. For that price you received a diskless 100 MHz R4000 Indy with 32 MB of RAM, the base 8-bit graphics card and a 15" monitor. A more reasonable configuration (64 MB of RAM, a 1 GB hard drive, a floptical drive, the 24-bit XL graphics card and an external CD-ROM) was $23,695 at launch. I should note that standard features on the Indy included an ISDN modem, 10BaseT Ethernet, a four-channel stereo sound card and composite and S-Video input: pretty advanced for the time, especially compared to the Apple Mac Quadra.

Silicon Graphics Indy - Front
Silicon Graphics Indy - Back

My Story - Initial Hardware Setup

I actually received my Indy way back in May 2012 for a whopping $37 shipped, with no hard drive and no memory, but with the 150 MHz R4400SC CPU and 8-bit graphics. The SC in R4400SC stands for Secondary Cache; the R4000PC and R4600PC CPUs commonly found on eBay lack the L2 cache.

Silicon Graphics Indy - Boot Menu
Silicon Graphics Indy - hinv
Luckily the Indy takes 72-pin FPM memory, which was pretty standard back in 1993 when the machine was first released, so finding compatible working RAM on eBay was much easier. The Indy has 8 slots and supports up to 256 MB of memory (8x 32 MB), which I was able to find for under $10.

Knowing I would eventually use this for at least some vintage SGI Doom, I also picked up the 24-bit XL Graphics Option for $20 later in May 2012, hoping I would one day get the 180 MHz R5000 CPU (more on this later).

Fast forward to January 23rd, 2014: I was cleaning up my office after work, noticed the Indy sitting on top of my Prism and decided to invest the time in getting it up and running.

Little did I know that all of the spare Ultra 160 and Ultra 320 SCSI drives I had lying around were either dead or lacked backwards compatibility with the SCSI-2 bus the Indy utilizes (I hadn't realized some manufacturers dropped SCSI-2 support in the U160/U320 era). Luckily, I had just purchased several Maxtor ATLAS 15K II 73 GB U320 drives (Model #8E073L0) for use in my Fuel, Tezro, Origin 300s and Origin 350.
Maxtor ATLAS 15k II Ultra 320 Drive
Realizing it was a long shot, I put one of those in the Indy (with an SCA-to-50-pin adapter I got off eBay for $2) and the Indy recognized it without any problems. Granted, the SCSI-2 bus's 10 MB/sec limit caps the drive's inbound and outbound bandwidth (I had previously benchmarked the drive at around 95 MB/sec of actual transfer speed), but the fluid dynamic bearing motor (virtually silent), the 3.5 ms access time and the fast internal transfers far outweigh hunting down an "original" SCSI-2 drive, which I might add often goes for $40+ for a 5400 RPM 2 GB Seagate. I should note that the Seagate Cheetah 15K.3 18 GB U320 (Model #ST318453LC) and the Fujitsu 18 GB U160 (Model #MAJ3182MC) drives did not fall back to SCSI-2.

I should note that my Indy randomly refused to boot (no power to even the power supply fan). Apparently this was a common problem with the initial power supplies from Nidec. The later Sony models didn't have this problem, but instead didn't run the fan 100% of the time, only when temperatures hit a high point. Some folks have modified their Sony power supplies to keep the fan on 100% of the time; I did not, as I only really use the Indy in the basement, where the hottest it gets is about 69°F.

Silicon Graphics Indy - Nidec
A solution I found was to disconnect the power cable for a good 20-30 minutes and then try again. Nine times out of ten this worked, and the system had zero issues booting into IRIX and staying up. So before you run out to buy a "new" power supply off eBay, try this solution. These are 20-21 year old machines, after all.

My Story – Getting IRIX Installed

Having now installed IRIX on a Fuel, a Tezro and an Origin 300, I am well versed in the process. For those who are not, check out this how-to guide (http://www.futuretech.blinkenlights.nl/6.5inst.html); it is very detailed and should be all you need to get going. This assumes you have IRIX 6.5.x; depending on which workstation/server you have, you might need a higher point release. 6.5.30 is the latest version of IRIX released, but those sets typically go for $300+ on eBay. I highly suggest simply getting some version of 6.5 and downloading the 6.5.22m tar files from SGI via their Supportfolio (free after registration).

In my case, the Indy is so old that any version of 6.5 is acceptable, though I wanted to get to 6.5.22m so I could utilize nekoware. Nekoware is a community project in which contributors compile typical open source software like bash, Apache, MySQL, PHP, etc. for MIPS/IRIX. You can download the tardist files here (http://nekoware.dustytech.net/index.php?path=current/) (a tardist is similar to an RPM, if you're coming from a Linux background).

I should note that if you install from an older IRIX 6.5.x release (prior to 6.5.22m), you need to install Patch 5086 (available free via the Supportfolio) prior to the upgrade.

Another question that might arise, especially for those installing to an Indy: what do you install from? I pulled the DVD-ROM drive (SCSI-2, 50-pin) from my Silicon Graphics Fuel to install IRIX. For newer systems that utilize SCA, like an Octane or Origin 3x0, you could use a 50-pin-to-SCA adapter with the drive, or do as I did with my Origin 300 a while back and set up a VM of DINA, which basically allows you to install IRIX over the network to your hardware. After installation, and before continuing, I highly recommend you clone your drive and keep the clone locked away in case you ever accidentally mess up your installation or your hard drive dies; depending on the speed of your system, you could have just invested several hours of time. Cloning a disk is very easy in IRIX: simply follow this guide (http://www.sgidepot.co.uk/disksfiles.html#CLONE). I had done it previously on my Origin 300 and it only took about 10 minutes; on my Indy, for those curious, it took about 20 minutes.

My Story – Post IRIX Installation

After installation my first step was to get nekoware installed. I typically install the following (including their dependencies):
  • BASH
  • OpenSSH
  • wget
  • SDL
  • Firefox
  • GCC
  • Nedit
  • rdesktop
  • Samba
  • Subversion
There are countless others, but those are the essentials I use very frequently. Depending on your system, the installation (especially of GCC) could take some time, so be patient. Something to note: if you have an R4x00 CPU you need to use the MIPS3 tardists; if you have an R5000 Indy you can use the MIPS4 variants. At some point contributions to nekoware for MIPS3 seem to have trickled off, so you'll more than likely be compiling most things from source. As I compile things, I'll start contributing from my local repository as well.

What’s Next?

I've been on the hunt for a 180 MHz R5000 CPU, or at the least a 150 MHz variant, so I can be close to the highest-end Indy available.

As for its use, I plan to start on the MODEXngine project now that I have a pretty clear multiplatform architecture.

In addition, I want to use it as a test bed for writing efficient C++. In a future blog post I will focus on the laziness of programmers (not all, but many): I feel the commodity PC hardware of today has gotten so fast and cheap that programmers no longer consider efficiency or write with performance in mind.
For quite some time now I've been trying to hunt down a newer release of either the full set of installation CDs or the 6.5.30 Overlay CD set (the last version of IRIX). Hunting on eBay as usual, I came across an IRIX 6.5 full set, but there was no information as to which revision it was, only "SC4-AWE-6.5 REV ZC". Last year I bought a full set labeled SC4-AWE-6.5 REV S; that turned out to be 6.5.14 and, as it happened, looks to have been sold with an SGI O2 machine, as there were O2 demo CDs inside. So which version is "SC4-AWE-6.5 REV ZC"? It turns out to be 6.5.21, which for many will be very important as it added support for the Origin 350 and Onyx 4 systems, the last in Silicon Graphics' lineup based on MIPS CPUs.

[caption id="attachment_1771" align="aligncenter" width="550"]Silicon Graphics SC4-AWE-6.5 REV ZC Full CD Set[/caption]

[caption id="attachment_1772" align="aligncenter" width="550"]Silicon Graphics SC4-AWE-6.5 REV ZC Full CD Set - Back[/caption]
While not completely done with the IRIX client's 0.3 to 0.6 upgrade, I figured it would be good to get the updated client out. New in this version over the previous is the ability to parse the available hardware and, the big news: all C++ code in the IRIX, Linux and Windows clients now shares a common code base.

[bash]
jcBENCH 0.6.522.1025 (IRIX Edition)
(C) 2012 Jarred Capellman

CPU Information
---------------------
Manufacturer: MIPS
Model: R14000
Count: 4x600mhz
Architecture: MIPS
---------------------
Running Benchmark....
Integer: 1.61358 seconds
Floating Point: 1.57094 seconds
[/bash]

While yes, there are a few:

[cpp]
#if defined(WIN32)
[/cpp]

it is nice now to have a common code base. Going forward, I still have to create the Qt client and figure out the finer points of using SOAP/WCF services from a C++ client in a portable manner. You can download this new release here.
About a year ago I started work on a new 3D game engine; while originally it was just a foray into WPF, it turned into OpenGL and C# using OpenTK. Eventually this turned into a dead end, as my ambitions were too high. A year later (really more like 15 years later), I finally have a more reasonable game plan: an iterative approach, following id Software's technology jumps, starting at the Wolfenstein 3D level and hopefully one day hitting Quake III-level tech. So the initial features will be:
  1. 90° Walls, Floors and Ceilings
  2. Clipping
  3. Texture Mapping with MIP-Mapping Support
  4. OpenGL Rendering with SDL for Window Management
  5. IRIX and Windows Ports
I started work on the base level editor tonight: [caption id="attachment_1213" align="aligncenter" width="300" caption="jcGEDitor WIP"][/caption]
Just got the initial C port of jcBench completed. Right now there are IRIX 6.5 MIPS IV and Win32 x86 binaries working; I'm hoping to add additional functionality and then merge the changes back into the original four platforms. I should note the performance numbers between the two will not be comparable: I rewrote the benchmarking algorithm to be solely integer based. That's not to say I won't add a floating point test, but it made sense after porting the C# code to C. That said, after finding out a while back how the Task Parallel Library (TPL) really works, my POSIX multi-threading implementation does things a little differently.

Where the TPL starts off with one thread and dynamically increases the thread count as processing continues, my implementation simply takes the number of threads specified on the command line, divides the work (in my case the number of objects) by the number of threads, and kicks off all the threads from the start. TPL's approach is great when you don't know whether the work will efficiently use all available CPUs/cores, but in my case it actually hinders performance. I'm now wondering if you can specify from the start how many threads to kick off. If not, Microsoft, maybe add support for that? I've got a couple of scenarios I know would benefit from at least 4-8 threads from the outset, especially data migrations that I prefer to do in C# versus SSIS (call me a control freak).

Back to jcBench: at least with the current algorithm, it appears that a 600 MHz MIPS R14000A with 4 MB of L2 cache is roughly equivalent in integer performance to a 1200 MHz Phenom II with 512 KB of L2 cache and 6 MB of L3 cache, based on a couple of runs of the new version of jcBench. It'll be interesting to see with NUMAlink whether this 1:2 ratio continues. I'm hoping to see how different generations of AMD CPUs compare to the now 10-year-old MIPS CPU.
Started on a new (well, old) project I've wanted to do since 2002-2003. Back then there was no real way to handle and manage network renderings with 3ds Max 4.x/5.x aside from the very basic Backburner application, so I came up with netMAX, a Perl-based manager that would email me when renders were done, along with an ETA, so I could be notified and check on renderings while I was at high school. Fast forward 9-10 years: I've been out of 3D animation/rendering for several years, but after acquiring several rackmount servers I had been looking for a way to really utilize them. I had purchased Maya 2011 nearly two years ago but hadn't really used it, as I had always been a 3ds Max guy. Looking through my old boxes of stuff, I uncovered my old 3ds Max 4.2 and Maya 6.5 boxes. Happy I found Maya 6.5, as it was the last version supported on IRIX, I immediately looked into whether Maya 2011 could export to 6.5. Fortunately it can, by saving the Maya scene files in ASCII format and then adjusting several lines (or hundreds, depending on the scene); I wrote a C# script to parse out elements that 6.5 couldn't understand. Copying that Maya ASCII scene file (.ma) to my Origin 300 and running, from the /usr/aw/maya6.5/bin folder:

[bash]
Render test.ma
[/bash]

produced test.tiff:

[caption id="attachment_1006" align="aligncenter" width="300" caption="jcPIDR First Test Image"][/caption]

Still got a good bit of work left:
  1. Web Service to handle all of the traffic
  2. SQL Database Schema to store the job history
  3. Web Interface to submit/cancel/manage jobs
  4. Handle Textures properly
And if I get fancy, a built in way to export to jcPIDR from Maya.
It having been years since I messed with MySQL on a non-Windows platform, I had forgotten two simple commands after setup on my Origin 300:

[sql]
CREATE USER 'dbuser'@'%' IDENTIFIED BY 'sqlisawesome';
[/sql]

[sql]
GRANT ALL PRIVILEGES ON *.* TO 'dbuser'@'%' WITH GRANT OPTION;
[/sql]

The wildcard (%) after the username is key if you want access from other machines (in my case, from MySQL Workbench on my Windows 7 machine).
An idea came up over at Nekochan a few weeks ago to try and see if an RV280 (Radeon 9250) was capable of being installed into an Origin 300, much like the V10/V12 VPro hacks for an Origin 350. [Edit on 4/19/2013: I chose my words improperly; VNC is just one of the ways you can display X over the network from your Origin 300 on your PC or another Silicon Graphics machine.] As it stands currently, the Origin 300 does not have a graphics option, so you're limited to VNC to run X on it. It just so happens I have a Radeon 9250 that's been sitting around since last summer, when I was going to try to hack it into a Cobalt Qube 3:

[caption id="attachment_982" align="aligncenter" width="225" caption="256mb Radeon 9250 PCI"][/caption]

Installed it into my Origin 300, and immediately did a hinv -vm when it booted:

[shell]
Location: /hw/module/001c16/node
  IP45_4CPU Board: barcode MJP842 part 030-1728-002 rev -D
Location: /hw/module/001c16/Ibrick/xtalk/14
  IO8 Board: barcode MJX813 part 030-1673-003 rev -F
Location: /hw/module/001c16/Ibrick/xtalk/15
  IO8 Board: barcode MJX813 part 030-1673-003 rev -F
4 500 MHZ IP35 Processors
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
CPU 0 at Module 001c16/Slot 0/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 2 MB Speed 250 Mhz Tap 0xa
CPU 1 at Module 001c16/Slot 0/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 2 MB Speed 250 Mhz Tap 0xa
CPU 2 at Module 001c16/Slot 0/Slice C: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 2 MB Speed 250 Mhz Tap 0xa
CPU 3 at Module 001c16/Slot 0/Slice D: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 2 MB Speed 250 Mhz Tap 0xa
Main memory size: 2048 Mbytes
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
Secondary unified instruction/data cache size: 2 Mbytes
Memory at Module 001c16/Slot 0: 2048 MB (enabled)
Bank 0 contains 1024 MB (Premium) DIMMS (enabled)
Bank 1 contains 1024 MB (Premium) DIMMS (enabled)
Integral SCSI controller 0: Version QL12160, low voltage differential
Disk drive: unit 1 on SCSI controller 0 (unit 1)
Integral SCSI controller 1: Version QL12160, low voltage differential
IOC3/IOC4 serial port: tty3
IOC3/IOC4 serial port: tty4
Integral Fast Ethernet: ef0, version 1, module 001c16, pci 4
<strong>PCI Adapter ID (vendor 0x1002, device 0x5960) PCI slot 1</strong>
<strong>PCI Adapter ID (vendor 0x1002, device 0x5940) PCI slot 1</strong>
<strong>PCI Adapter ID (vendor 0x1077, device 0x1216) PCI slot 1</strong>
PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 4
PCI Adapter ID (vendor 0x11c1, device 0x5802) PCI slot 5
IOC3/IOC4 external interrupts: 1
HUB in Module 001c16/Slot 0: Revision 2 Speed 200.00 Mhz (enabled)
IP35prom in Module 001c16/Slot n0: Revision 6.210
USB controller: type OHCI
[/shell]

It picked the card up, as noted by the bolded PCI entries, but nothing more than that. It was suggested to do an inst -m GFXBOARD=VOYAGER, but I couldn't get my O2 running as a BOOTP server. Maybe another day...
Last Sunday I got curious about IRIX root disk cloning. I had done it previously on an SGI Octane with a non-root drive, but never with the root drive itself. Since I have two SGI O2s and two identical Maxtor Atlas II 15K 73 GB Ultra 320 drives, it made sense to clone rather than reinstall everything. Sure enough, about 20 minutes later I had a duplicate of my SGI O2 IRIX 6.5.30 install with all of my Nekoware packages (BASH, PHP, MySQL, etc.). Pulled it out, put it into my other SGI O2, and it worked like a charm. After doing some work on my SGI Origin 300 this morning, I figured I'd replace the failing Maxtor ATLAS 15K 36 GB Ultra 320 drive with a brand new Fujitsu MAU3147 15K 36 GB Ultra 320 drive. Not to mention the Maxtor had been making a terrible high-pitched noise for some time before it ended up in the Origin :) Fujitsu in the Origin 300 sled:

[caption id="attachment_975" align="aligncenter" width="225" caption="Fujitsu MAU ready to go for my Origin 300"][/caption]

After sliding it into my already-powered Origin 300, I realized the SCSI ports weren't auto-scanning for changes. Some googling later, I found that the IRIX equivalent of Solaris's probe-scsi-all is scsiha -p 0;.
Immediately following that, running a hinv command:

[shell]
4 500 MHZ IP35 Processors
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Main memory size: 2048 Mbytes
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
Secondary unified instruction/data cache size: 2 Mbytes
Integral SCSI controller 3: Version Fibre Channel QL2200A
Integral SCSI controller 0: Version QL12160, low voltage differential
Disk drive: unit 1 on SCSI controller 0
Disk drive: unit 2 on SCSI controller 0
Integral SCSI controller 1: Version QL12160, low voltage differential
IOC3/IOC4 serial port: tty3
IOC3/IOC4 serial port: tty4
Integral Fast Ethernet: ef0, version 1, module 001c16, pci 4
IOC3/IOC4 external interrupts: 1
USB controller: type OHCI
[/shell]

IRIX found the drive on controller 0, so I was ready to begin the cloning procedure. Following these steps, 10 minutes later I had my data copied to the Fujitsu drive, ready to become the new root drive in my Origin 300. Pretty cool that Silicon Graphics built that in; cloning a Windows system drive is a huge pain natively, although there is a sweet tool by Acronis, free on Western Digital's site if you own a Western Digital drive. I used it when I moved from a RAID 0 stripe of two Western Digital Black 500 GB drives to the 1 TB versions and didn't feel like reinstalling all of my applications, including ones with limited activations that would have required calling the company for a reset. Maxtor, you've been good from 2004-2008 and from 2011-2012:

[caption id="attachment_976" align="aligncenter" width="225" caption="Maxtor ATLAS 15k 36gb drive, finally being retired"][/caption]
Finally got around to replacing the four 80mm, 40-decibel fans in my Origin 300 this morning. The noise from this one server was enough to travel from inside a rack in the basement all the way to the 3rd floor master bedroom; suffice it to say, I definitely couldn't run the server 24/7. Hunting around on Amazon, I found these 80mm Cooler Master fans, not too bad price-wise and still putting out decent airflow.

[caption id="attachment_963" align="aligncenter" width="225" caption="New Cooler Master 80mm replacement fans for my SGI Origin 300"][/caption]

Prep for the swap:

[caption id="attachment_964" align="aligncenter" width="225" caption="My Quad R14k SGI Origin 300"][/caption]

The original fan, in case someone needs a part number:

[caption id="attachment_965" align="aligncenter" width="225" caption="SGI Origin 300 Stock Fan"][/caption]

As I was swapping in the new fans, I realized the fan connector was not the standard ATX style. Stock connector:

[caption id="attachment_966" align="aligncenter" width="300" caption="Stock SGI Origin 300 Fan Connector"][/caption]

versus the standard ATX connector:

[caption id="attachment_967" align="aligncenter" width="300" caption="Replacement Cooler Master Fan ATX Connector"][/caption]

Luckily the standard 4-pin Molex power connector for the two Ultra 160 drives is right next to the fans, so a little wiring job and voila:

[caption id="attachment_968" align="aligncenter" width="300" caption="SGI Origin 300 with replacement fans installed"][/caption]

Note: doing it this way will throw an error in the L1 console and will shut your machine down. A way around it is to connect to the Origin 300 over a console connection and type env off. This is dangerous, though, as the server will not shut down automatically if a fan fails or the server overheats. Having said that, it came to my attention that IRIX does not install a serial/terminal client by default. The common cu is on the IRIX 6.5 Foundation CD 1 disk.
Turn on "Subsystems Only" in the IRIX Software Manager and scroll down until you see it. Chances are you're not running a base 6.5 install, so you'll also need the first disk of your overlays (6.5.30 Overlay CD 1 in my case) in order to resolve the package conflicts. After installing you may receive a "CONNECT FAILURE: NO DEVICES AVAILABLE". Open /etc/uucp/Devices in vi or your favorite text editor and add the line:

Direct ttyd2 - 38400 direct

Make sure the spaces are there. You can also try setting it up via the Serial Manager under the System Manager application. Afterwards, simply running:

cu -l /dev/ttyd2 -s38400

allowed me into my L1 console to turn off environment monitoring. Then hit Ctrl+D to get back into the PROM Monitor and hit "1" to start IRIX.
Just benchmarked my 180 MHz R5000SC CPU with only 128 MB of memory in Quake 2; it's marginally worse than when I benchmarked the same CPU with 1 GB of memory. This makes sense: back in 1997 when Quake 2 came out, I think I had only just upgraded to 32 MB of RAM in my Pentium 200 MMX running Windows NT 4, so all the extra RAM wouldn't help this scenario. The one major caveat I noticed was that when running in a 320x240 window at 640x480, my O2 ran out of memory and started paging to virtual memory (thank you, gmemusage). I tested it twice to make sure, but a word to the wise: do not run Quake 2 in a window if you only have 128 MB of memory. I'll test it with 192 MB shortly. In case anyone is interested, here is an updated list (fps / demo time):

320x240
128mb - 180mhz R5000SC - 4.1 / 169.6 seconds
1024mb - 180mhz R5000SC - 14.9 / 46.3 seconds
1024mb - 300mhz R5200SC - 18.8 / 36.7 seconds

640x480
128mb - 180mhz R5000SC - 12.2 / 56.3 seconds
1024mb - 180mhz R5000SC - 12.4 / 55.4 seconds
1024mb - 300mhz R5200SC - 14.5 / 47.4 seconds

800x600
128mb - 180mhz R5000SC - 10.2 / 67.3 seconds
1024mb - 180mhz R5000SC - 10.3 / 67.2 seconds
1024mb - 300mhz R5200SC - 11.7 / 58.7 seconds
I got a 300 MHz R5200SC CPU (the highest R5k-series CPU for the O2) for my main Silicon Graphics O2. I ran before-and-after Quake 2 benchmarks, curious whether nearly 2x the MHz would make a big difference given that the CRIME graphics core is tightly integrated with the CPU. Using the following commands in the console:

timedemo 1
map demo1.dm2

Here are the results (fps / demo time):

320x240
180mhz - 14.9 / 46.3 seconds
300mhz - 18.8 / 36.7 seconds

640x480
180mhz - 12.4 / 55.4 seconds
300mhz - 14.5 / 47.4 seconds

800x600
180mhz - 10.3 / 67.2 seconds
300mhz - 11.7 / 58.7 seconds

Kind of surprised at the results; I'll be testing my 180 MHz R5000PC versus the 180 MHz R5000SC shortly to see what difference, if any, the Level 2 cache makes.
After attempting to compile SDLQuake and running into x86 assembly, UDP and linking issues, I decided to try the "official" IRIX port from SGI. Oddly enough, Quake II runs at 640x480x16 with full textures on my Silicon Graphics O2. Feeling adventurous, I wanted to see if it would play against the x86 Windows 3.20 version. Sure enough, I was able to join an IRIX-hosted game (it would probably work the other way as well) from my Windows 7 workstation.

[caption id="attachment_927" align="aligncenter" width="300" caption="Quake II on my O2"][/caption]

On a side note, I got SDLQuake all the way to the linking stage before giving up on it. For whatever reason, -lSDL was not pulling in what looks like the X11 or OpenAL libraries; I included both of those libraries explicitly and was still getting the errors.
Finally got Quake working on my Silicon Graphics O2. Using the "official" SGI port from September 1997, it runs fairly smoothly at 1024x768x16.

[caption id="attachment_924" align="aligncenter" width="300" caption="Quake on my SGI O2"][/caption]

It brought back old memories of my first LAN party in the summer of 1998 playing Quake, particularly E1M7, as seen in the picture. Definitely makes me want to have a retrogaming LAN with Quake or Descent.
Been working on my Silicon Graphics O2 a lot, and surprisingly it's pretty responsive for directory browsing, C/C++ source code editing/compiling and software installation. For a 16-year-old 180 MHz machine with only 192 MB of memory, I am really shocked at what an optimized and lightweight operating system can do. For web browsing, FLAC music playing, etc. it won't stand up against my Phenom II X6, but for everything else I'm doing it is neck and neck. This got me thinking: every day new CPUs/APUs, GPUs, etc. come out, and the last generation is put into bargain bins or rebadged as a lower-end part of the next generation. I'm voting we rethink "computer performance", or at least its definition. If someone running Microsoft Word, Zune and Chrome on a Windows 7 machine with 6 cores and 16 GB of RAM notices no difference on a 2-core, 4 GB system, why badge that machine as lower-end? To me it is all relative to what you're actually doing. I can browse to a C++ file from a terminal window, open it in vi and start editing on my SGI O2 faster than I could open Visual Studio 2010, browse to the project and start coding on my Phenom II system. It saddens me that so many people frown on anything but top-of-the-line specs even if they will only use 50% of the power available. Gluttony, it seems, has taken hold of consumers...
Well, actually it arrived on Tuesday, but I just finally got it into a working state last night with SSH, BASH and Samba. I had a spare Maxtor Atlas II 15K 73 GB Ultra 320 SCA drive, with which I replaced the existing Quantum 10K 9 GB; it's now both quiet and pretty fast, considering it's one CPU away from being the lowest-end O2 and it's 14-15 years old.

[caption id="attachment_907" align="aligncenter" width="225" caption="Silicon Graphics O2"][/caption]

[caption id="attachment_908" align="aligncenter" width="300" caption="Silicon Graphics O2 Startup Menu"][/caption]

Next on the to-do list is to get GCC set up along with NEdit so I can start programming on it.
Last night, while working on my Silicon Graphics Origin 300 and suffering with an old version of Mozilla circa 2005, as seen below:

[caption id="attachment_896" align="aligncenter" width="300" caption="Mozilla 1.7.12 on IRIX"][/caption]

I started wondering: these machines can function as a web server, MySQL server, firewall, etc., especially my Quad R14k Origin 300, yet web browsing is seriously lacking on them. Firefox 2 is available over at nekoware, but it is painfully slow. Granted, I don't use my Origin for web browsing, but when I was using an R12k 400 MHz Octane as my primary machine a few years ago, as I am sure others around the world still are, it was painful. This problem isn't only for those on EOL'd Silicon Graphics machines, but for any older piece of hardware that does everything decently except web browsing. Thinking back to the Amazon Silk platform: using less powerful hardware but a brilliant software platform, Amazon is able to deliver more with less. The problem for the rest of the market is the diversity of the PC/workstation landscape. The way I see it, you've got two approaches to a "universal" cloud web renderer. You could either:
  1. Write a custom lightweight browser tied to an external WCF/Soap Web Service
  2. Write a packet filter inspector for each platform to intercept requests and return them from a WCF/Soap service either through Firefox Extensions or a lower level implementation, almost like a mini-proxy
Plan A has major problems because you've got various incarnations of Linux, IRIX, Solaris, VMS, Windows, etc., all with various levels of Java and .NET/Mono support (if any), so a Java or .NET/Mono implementation is probably not the right choice. Thus you're left trying to make a portable C/C++ application. To cut down on work, I'd probably use a platform-independent library like gSOAP to handle the web service calls; either way, the amount of work would be considerable. Plan B I've never done anything like before, but I imagine it would be a lot less work than Plan A. I spent two hours this morning playing around with a WCF service and a WPF application doing something like Plan A.

[caption id="attachment_897" align="aligncenter" width="300" caption="jcW3CLOUD in action"][/caption]

But instead of writing my own browser, I simply used the WebBrowser control, which is just Internet Explorer. The web service itself is simply:

[csharp]
public JCW3CLOUDPage renderPage(string URL)
{
    using (WebClient wc = new WebClient())
    {
        JCW3CLOUDPage page = new JCW3CLOUDPage();

        if (!URL.StartsWith("http://"))
        {
            URL = "http://" + URL;
        }

        page.HTML = wc.DownloadString(URL);

        return page;
    }
}
[/csharp]

It simply makes a web request based on the URL from the client, downloads the HTML page into a string and passes it into a JCW3CLOUDPage object (which would also contain images, although I did not implement image support).
Client side (ignoring the WPF UI code):

[csharp]
private JCW3CLOUDReference.JCW3CLOUDClient _client = new JCW3CLOUDReference.JCW3CLOUDClient();

var page = _client.renderPage(url);
int request = getUniqueID();

StreamWriter sw = new StreamWriter(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html");
sw.Write(page.HTML);
sw.Close();

wbMain.Navigate(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html");
[/csharp]

It makes the WCF request, then writes the returned HTML to a temporary HTML file for the WebBrowser control to read from. Nothing special; you'd probably want to add handling for specific pages, images and caching, but this was as far as I wanted to take the experiment. Hopefully it'll help someone get started on something cool. It does not intercept requests made from within the WebBrowser control, so you would need to override that as well; otherwise only the initial request would be returned from the "cloud", and subsequent requests would be made normally. This project would be way too much for me to handle alone, but it did bring up some interesting thoughts:
  1. Handling Cloud based rendering, would keeping images/css/etc stored locally and doing modified date checks on every request be faster than simply pulling down each request fully?
  2. Would the extra costs incurred to the 3G/4G providers make it worthwhile?
  3. Would the bandwidth saved by zipping and unzipping content outweigh the extra processing time on both ends (especially if there was very limited space on the client)?
  4. Is there really a need/want for such a product? Who would fund such a project, would it be open source?
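On question 3, a quick way to get a feel for the trade-off would be to gzip a rendered page and compare sizes; .NET's built-in GZipStream makes that a few lines. This is just a rough sketch for illustration (the PageCompression class is hypothetical, not part of the jcW3CLOUD prototype above):

[csharp]
using System.IO;
using System.IO.Compression;
using System.Text;

// Hypothetical helper: gzip a rendered HTML string so you can compare
// the compressed size against the raw size before deciding whether
// compression is worth the CPU time on both ends.
public static class PageCompression
{
    public static byte[] Compress(string html)
    {
        byte[] raw = Encoding.UTF8.GetBytes(html);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            // gzip must be closed before ToArray() so the stream is flushed
            return output.ToArray();
        }
    }

    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }
}
[/csharp]

HTML is highly repetitive, so it tends to compress well; whether that pays off over 3G/4G would still come down to measuring both the transfer savings and the compress/decompress time on real devices.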
After some more thought about jcBENCH and what its real purpose is, I am going to drop the Solaris and IRIX ports. Solaris has a Mono port, but I only have a Sun Blade 100, which has a single CPU, so I'm not expecting a ton of performance from that. IRIX on the other hand: I have a quad R14k 500 Origin 300, but no port of Mono exists for it. I could port jcBENCH to Java, but then you really couldn't compare benchmarks between the Java and Mono/.NET versions. I am about 50% done with the Android port and am just waiting for the OpenSuse 12.1-compatible MonoDevelop release so I can get started on the Linux port. After those two ports are completed, I am thinking of starting something entirely new that I have been mulling over for the last couple of years. Those who deal with a SQL database and write a data layer for their .NET projects know the shortcomings of doing either:
  1. Using an ADO.NET Entity Model, adding your Tables, Views and Stored Procedures, and then using that as-is or extending it with some business logic
  2. Using an all-custom data layer built on the base DataTable, DataRow, etc., wrapping your objects with partial classes and creating a "factory"
Both approaches have their pros and cons: the first takes a lot less time, but you also have a lot less control, and all of the overhead could be costly. Both, however, will eventually fall apart down the road. The reason: they were built for one audience and one production server (or set of servers). How many times have you gone to your IT Manager and asked for a new database server because that was quicker than really going back to the architecture of your data layer? As time goes on, this can happen over and over again. I have personally witnessed such an event: a system was designed and built for around 50 internal users, on a single-CPU web server and a dual-Xeon database server. Over 5 years later, the code has remained the same, yet it has been moved to 6 different servers of ever-increasing speed. Times have changed and will continue to change; workloads vary from day to day and servers are swapped in and out. So my solution: an adaptive, dynamic data layer. One that profiles itself and uses that data to decide whether to run single-threaded LINQ queries or PLINQ queries, depending on whether the added overhead of PLINQ would outweigh the time it would take using only one CPU. In addition, it would use Microsoft's AppFabric to cache the commonly used, intensive queries that may only get run once an hour and whose data doesn't change for 24 hours. This doesn't come without a price of course; having only architected this model in my head, I can't say for certain how much overhead the profiling will add. Over the next couple of months I'll be developing this, so stay tuned. jcBENCH, as you might have guessed, was kind of an early test scenario for seeing how various platforms handled multi-threaded tasks of varying intensity.
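To make the "profiles itself" idea concrete, here is a minimal sketch of one possible approach: time the sequential LINQ query against its PLINQ equivalent once, remember the winner, and route subsequent calls accordingly. The class name, the even-number filter and the one-shot profiling strategy are all hypothetical stand-ins for illustration, not the actual design described above:

[csharp]
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

// Hypothetical sketch: profile sequential LINQ vs. PLINQ once,
// then use whichever was faster for all later calls.
public static class AdaptiveQuery
{
    private static bool? _useParallel; // null until the first call profiles both paths

    public static List<int> FilterEven(IEnumerable<int> source)
    {
        var data = source.ToList();

        if (_useParallel == null)
        {
            var sw = Stopwatch.StartNew();
            var sequential = data.Where(n => n % 2 == 0).ToList();
            long seqTicks = sw.ElapsedTicks;

            sw.Restart();
            data.AsParallel().Where(n => n % 2 == 0).ToList();
            long parTicks = sw.ElapsedTicks;

            _useParallel = parTicks < seqTicks; // remember the winner
            return sequential;
        }

        return _useParallel == true
            ? data.AsParallel().Where(n => n % 2 == 0).ToList()
            : data.Where(n => n % 2 == 0).ToList();
    }
}
[/csharp]

A real implementation would need to re-profile as data volumes and hardware change (a one-time measurement is exactly the "built for one server" trap), and note that PLINQ does not preserve ordering unless you add AsOrdered().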
A picture is worth a thousand words: [caption id="attachment_48" align="aligncenter" width="550" caption="IRIX-6.5.30-running"][/caption] Now to actually get MySQL, PHP, etc. on it like I wanted to last summer :)
Spent virtually every evening this week working on the Origin 300 I got last summer and finally got to this point: [caption id="attachment_41" align="aligncenter" width="761" caption="IRIX 6.5.30 Launching"][/caption] Called it a night at around 12:30 this morning after getting BASH, Samba, SSH and TightVNC on it, figuring I would wake up and get it running pretty quickly. And I did, to some extent: [caption id="attachment_42" align="aligncenter" width="643" caption="IRIX 6.5.30 over VNC"][/caption] Not exactly exciting to see, but an improvement over a console/ssh/telnet session. After some more configuring, I started to get the following when running 4Dwm (IRIX's window manager, think of it like GNOME or KDE): Media Error: Unrecovered data block read errors. Googling the error (and just intuition) told me the hard drive I had spent all week prepping, installing and getting frustrated with is on its way out. Luckily I remembered from years back, when upgrading a drive in my original Silicon Graphics Octane, that you can pretty easily clone a root drive. Pulled up this link over at the SGI Depot and followed the steps; tons of errors followed when copying. Crossing my fingers upon pulling out the original installation drive...
Years ago when I was a kid I was enamored with Silicon Graphics. My father spoke highly of them and I saw what they were able to produce in the movies I loved. In high school I wanted a then-new Octane2; they were around $14,000 back then. A few years later, in 2004/2005, I broke down and spent around $300 on an original Octane. I finally got an Octane2 in 2007, but needed cash, so I sold off all of my Silicon Graphics equipment. Fast forward to last summer: Origin 300s finally came down to a reasonable price point (~$120), so I picked up a quad R14k, 2GB Origin 300 with a Fibre Channel PCI-X card. The only kicker was that it didn't have any drives, and being an Origin 300, no CD/DVD drive and no video card either. Busy with other stuff, I left it sitting in my "stack". Last night, UPS delivered: IRIX at last! [caption id="attachment_23" align="aligncenter" width="550" caption="IRIX 6.5.4 Installation CDs"][/caption] Still waiting on the drives I bought; found a guy on eBay who had 3 Maxtor Atlas 15K IIs for dirt cheap. One step closer to getting it finally set up :)