Untitled

The MSI P4N SLI motherboard has a built-in nVidia nForce4 (MCP04) NIC. OpenSolaris doesn’t have a driver for it; however, one can be downloaded from Masayuki Murayama’s Free NIC drivers for Solaris page (the drivers there are SPARC/x86 capable; one might need a functional 64 bit compiler to recompile them for a given platform).

His driver will work out of the box, as long as the PCI device ID matches one of the IDs in the adddrv.sh script. To verify that, run /usr/X11/bin/scanpci -v and check the PCI ID. In my case, the PCI ID was pci10de,38, which was not in the adddrv.sh script, but is in fact an nForce4 ethernet controller.
After I added the ID to the script, the driver worked right away.
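
For reference, here is roughly what the change amounts to, assuming the script registers the driver with add_drv and a quoted list of device IDs (the exact adddrv.sh line may differ between driver versions; treat this as a sketch, not the script's actual contents):

# hypothetical excerpt from adddrv.sh, with pci10de,38 appended to the ID list
/usr/sbin/add_drv -n -v -i '"pci10de,38"' nfo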

root@dara:/[07:49 PM]# cd ; /usr/X11/bin/scanpci -v
[...]
pci bus 0x0000 cardnum 0x0e function 0x00: vendor 0x10de device 0x0038
 nVidia Corporation MCP04 Ethernet Controller
 CardVendor 0x3462 card 0x7160 (Card unknown)
  STATUS    0x00a0  COMMAND 0x0007
  CLASS     0x06 0x80 0x00  REVISION 0xa2
  BIST      0x00  HEADER 0x00  LATENCY 0x00  CACHE 0x00
  BASE0     0xfe9fc000  addr 0xfe9fc000  MEM
  BASE1     0x0000c481  addr 0x0000c480  I/O
  MAX_LAT   0x14  MIN_GNT 0x01  INT_PIN 0x01  INT_LINE 0x05
  BYTE_0    0x62  BYTE_1  0x34  BYTE_2  0x60  BYTE_3  0x71

[...]
root@dara:/[07:50 PM]# modinfo | grep nfo
 Id Loadaddr   Size Info Rev Module Name
 44 feabbbc4   1e50  15   1  mntfs (mount information file system)
141 febc78d4   4768  88   1  devinfo (DEVINFO Driver 1.73)
219 f946c000   fc40 207   1  nfo (nVIDIA nForce nic driver v1.1.2)
root@dara:/[07:50 PM]# dmesg | grep -v UltraDMA

Sat Nov 25 19:50:28 EST 2006
Nov 25 19:38:58 dara.NotBSD.org nfo: [ID 306776 kern.info] nfo0: doesn't have pci power management capability
Nov 25 19:38:58 dara.NotBSD.org nfo: [ID 130221 kern.info] nfo0: nForce mac type 11 (MCP04) (vid: 0x10de, did: 0x0038, revid: 0xa2)
Nov 25 19:38:58 dara.NotBSD.org nfo: [ID 451511 kern.info] nfo0: MII PHY (0x01410cc2) found at 1
Nov 25 19:38:58 dara.NotBSD.org nfo: [ID 426109 kern.info] nfo0: PHY control:0, status:7949<100_BASEX_FD,100_BASEX,10_BASE_FD,10_BASE,XSTATUS,MFPRMBLSUPR,CANAUTONEG,EXTENDED>, advert:de1, lpar:0
Nov 25 19:38:58 dara.NotBSD.org nfo: [ID 119377 kern.info] nfo0: xstatus:3000<1000BASET_FD,1000BASET>
Nov 25 19:38:58 dara.NotBSD.org nfo: [ID 716252 kern.info] nfo0: resetting PHY
Nov 25 19:38:58 dara.NotBSD.org gld: [ID 944156 kern.info] nfo0: nVIDIA nForce nic driver v1.1.2: type "ether" mac address 00:13:d3:5f:53:2f
Nov 25 19:38:58 dara.NotBSD.org npe: [ID 236367 kern.notice] PCI Express-device: pci1462,7160@e, nfo0
Nov 25 19:38:58 dara.NotBSD.org genunix: [ID 936769 kern.notice] nfo0 is /pci@0,0/pci1462,7160@e
Nov 25 19:38:58 dara.NotBSD.org unix: [ID 954099 kern.info] NOTICE: IRQ21 is being shared by drivers with different interrupt levels.
Nov 25 19:38:58 dara.NotBSD.org This may result in reduced system performance.
Nov 25 19:38:58 dara.NotBSD.org last message repeated 1 time
Nov 25 19:38:58 dara.NotBSD.org last message repeated 1 time
Nov 25 19:38:59 dara.NotBSD.org nfo: [ID 831844 kern.info] nfo0: auto-negotiation started
Nov 25 19:39:04 dara.NotBSD.org nfo: [ID 503627 kern.warning] WARNING: nfo0: auto-negotiation failed: timeout
root@dara:/[07:50 PM]# 

ZFS (Part 1)

Over the last year I was getting more and more curious/excited about OpenSolaris. Specifically I got interested in ZFS – Sun’s new filesystem/volume manager.

So I finally got my act together and gave it a whirl.

Test system: Pentium 4, 3.0GHz in an MSI P4N SLI motherboard. Three ATA Seagate ST3300831A hard drives and one Maxtor 6L300R0 ATA drive (all are nominally 300 gigs – see the previous post on slight capacity differences), plus one Western Digital WDC WD800JD-60LU SATA 80 gig hard drive. Solaris Express Community Release (SXCR) build 51.

Originally I started this project running SXCR 41, but back then I only had three 300 gig drives, which was interfering with my plans for RAID 5 greatness. In the end the wait was worth it, as ZFS has been revved since.

A bit about the MSI motherboard. I like it. For a PC system I like it a lot. It has two PCI slots, two full length PCIe slots (16x), and one PCIe 1x slot. Technically it supports SLI with two ATI CrossFire or NVIDIA SLI capable cards, however in that case both full length slots will run at 8x; a single card will run at 16x. Two dual channel IDE connectors, four SATA connectors, built-in high end audio with SPDIF, built-in GigE NIC based on a Marvell chipset/PHY, serial, parallel, and built-in IEEE1394 (iLink/FireWire) with 3 ports (one on the back of the board, two more can be brought out). Plenty of USB 2.0 connectors (4 brought out on the back of the board, 6 more can be brought out from connector banks on the motherboard). Overall, pretty shiny.

My setup consists of four IDE hard drives on the IDE bus, and the 80 gig WD on the SATA bus for the OS. The motherboard BIOS allowed me to specify that I want to boot from the SATA drive first, so I took advantage of the offer.

Installation of SXCR was from an IDE DVD drive (a pair of hard drives was unplugged for the duration).
SXCR recognized pretty much everything in the system, except the built-in Marvell GigE NIC. Shit happens; I tossed in a PCI 3Com 3c905C NIC that I had kicking around, and restarted. There was a bit of a hold up with the SATA drive – Solaris didn’t recognize it, and wanted the geometry (number of heads, cylinders and sectors) so that it could create an appropriate volume label. Luckily WD makes an identical drive in an IDE configuration, for which it actually publishes the heads/cylinders/sectors information, so I plugged those numbers in, and format and fdisk cheered up.

Other than that, a normal Solaris install. I did a console/text install just because I am a lot more familiar with them; however, the Radeon Sapphire X550 PCIE video card was recognized, and the system happily boots into OpenWindows/CDE if you want it to.

So I proceeded to create a ZFS pool.
The first thing I wanted to check is how portable ZFS is. Specifically, Sun claims that it’s endianness neutral (i.e. I can connect the same drives to a little endian PC or a big endian SPARC system, and as long as both run an OS that recognizes ZFS, things will work). I wondered how it deals with device numbers. Traditionally Solaris is very picky about device IDs, and changing things like controllers or SCSI IDs on a system can be tricky.

Here I wanted to know if I could just create, say, a “travelling zfs pool”: an external enclosure with a few SATA drives and an internal PCI SATA controller card, so that if things went wrong in a particular system, I could always unplug the drives, move them to a different system, and things would work. In other words, I wanted to find out if ZFS can deal with changes in device IDs.

For ZFS to work reliably, it has to use a whole drive. ZFS, in turn, writes an EFI disk label to the drive, with a unique identifier. Note that certain PC motherboards choke on EFI disk labels, and refuse to boot. Luckily most of the time this is fixable with a BIOS update.

root@dara:/[03:00 AM]# uname -a
SunOS dara.NotBSD.org 5.11 snv_51 i86pc i386 i86pc
root@dara:/[03:00 AM]# zpool create raid1 raidz c0d0 c0d1 c1d0 c1d1
root@dara:/[03:01 AM]# zpool status
  pool: raid1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raid1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors
root@dara:/[03:02 AM]# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
raid1                  1.09T    238K   1.09T     0%  ONLINE     -
root@dara:/[03:02 AM]# df -h /raid1 
Filesystem             size   used  avail capacity  Mounted on
raid1                  822G    37K   822G     1%    /raid1
root@dara:/[03:02 AM]# 

Here I created a raidz1 pool (the zfs equivalent of RAID 5 with one parity disk, giving me (N−1) × [capacity of a drive]; raidz1 can survive the death of one hard drive. A pool can also be created with the raidz2 keyword, giving the equivalent of RAID 5 with two parity disks; such a configuration can survive the death of 2 disks, as sketched below).
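
For comparison, a double-parity pool only changes the vdev keyword. A sketch with hypothetical device names (raidz2 wants at least four disks, two of which go to parity):

zpool create tank raidz2 c0d0 c0d1 c1d0 c1d1 c2d0
zpool status tank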

Note the difference in capacity reported by zpool list and df. zpool list shows capacity without counting parity; df shows the more traditional available disk space. Using df will likely cause less confusion in normal operation.
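
Rough numbers for this pool, assuming each nominal 300 gig drive yields about 279 GiB:

4 × 279 GiB ≈ 1.09 TiB   raw capacity (what zpool list shows)
3 × 279 GiB ≈  838 GiB   data capacity after one disk of parity

which, minus filesystem overhead and reservations, lines up with the 822G that df reports.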

So far so good.

Then I proceeded to create a large file on the ZFS pool:

root@dara:/raid1[03:04 AM]# time mkfile 10g reely_beeg_file

real    2m8.943s
user    0m0.062s
sys     0m5.460s
root@dara:/raid1[03:06 AM]# ls -la /raid1/reely_beeg_file 
-rw------T   1 root     root     10737418240 Nov 10 03:06 /raid1/reely_beeg_file
root@dara:/raid1[03:06 AM]#

While this was running, I had zpool iostat -v raid1 10 going in a different window.

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1        211M  1.09T      0    187      0  18.7M
  raidz1     211M  1.09T      0    187      0  18.7M
    c1d0        -      -      0    110      0  6.26M
    c1d1        -      -      0    110      0  6.27M
    c0d0        -      -      0    110      0  6.25M
    c0d1        -      -      0     94      0  6.23M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       1014M  1.09T      0    601      0  59.5M
  raidz1    1014M  1.09T      0    601      0  59.5M
    c1d0        -      -      0    364      0  20.0M
    c1d1        -      -      0    363      0  20.0M
    c0d0        -      -      0    355      0  19.9M
    c0d1        -      -      0    301      0  19.9M
----------  -----  -----  -----  -----  -----  -----

[...]
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       8.78G  1.08T      0    778    363  91.1M
  raidz1    8.78G  1.08T      0    778    363  91.1M
    c1d0        -      -      0    412      0  30.4M
    c1d1        -      -      0    411  5.68K  30.4M
    c0d0        -      -      0    411  5.68K  30.4M
    c0d1        -      -      0    383  5.68K  30.4M
----------  -----  -----  -----  -----  -----  -----

10 gigabytes written in about 129 seconds – about 80 megabytes a second of continuous writes. I think I can live with that.

Next I wanted to run md5 digests of some files on /raid1, then export the pool, shut the system down, switch the IDE cables around, boot the system back up, reimport the pool, and re-run the md5 digests. This would simulate moving a disk pool to a different system, screwing up disk ordering in the process.

root@dara:/[12:20 PM]# digest -a md5 /raid1/*
(/raid1/reely_beeg_file) = 2dd26c4d4799ebd29fa31e48d49e8e53
(/raid1/sunstudio11-ii-20060829-sol-x86.tar.gz) = e7585f12317f95caecf8cfcf93d71b3e
root@dara:/[12:23 PM]# zpool status
  pool: raid1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raid1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors
root@dara:/[12:23 PM]# zpool export raid1
root@dara:/[12:23 PM]# zpool status
no pools available
root@dara:/[12:23 PM]#

The system was shut down, the IDE cables were switched around, and the system was rebooted.

root@dara:/[02:09 PM]# zpool status
no pools available
root@dara:/[02:09 PM]# zpool import raid1
root@dara:/[02:11 PM]# zpool status
  pool: raid1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raid1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0

errors: No known data errors
root@dara:/[02:11 PM]# 

Notice that the order of the drives changed. It was c0d0 c0d1 c1d0 c1d1, and now it’s c1d0 c1d1 c0d0 c0d1.

root@dara:/[02:22 PM]# digest -a md5 /raid1/*
(/raid1/reely_beeg_file) = 2dd26c4d4799ebd29fa31e48d49e8e53
(/raid1/sunstudio11-ii-20060829-sol-x86.tar.gz) = e7585f12317f95caecf8cfcf93d71b3e
root@dara:/[02:25 PM]#

Same digests.

Oh, and a very neat feature…. You want to know what was happening with your disk pools?

root@dara:/[02:12 PM]# zpool history raid1
History for 'raid1':
2006-11-10.03:01:56 zpool create raid1 raidz c0d0 c0d1 c1d0 c1d1
2006-11-10.12:19:47 zpool export raid1
2006-11-10.12:20:07 zpool import raid1
2006-11-10.12:39:49 zpool export raid1
2006-11-10.12:46:14 zpool import raid1
2006-11-10.14:09:54 zpool export raid1
2006-11-10.14:11:00 zpool import raid1

Yes, zfs logs the last bunch of commands onto the zpool devices themselves. So even if you move the pool to a different system, the command history will still be with you.

Lastly, some version history for ZFS:

root@dara:/[02:19 PM]# zpool upgrade raid1 
This system is currently running ZFS version 3.

Pool 'raid1' is already formatted using the current version.
root@dara:/[02:19 PM]# zpool upgrade -v
This system is currently running ZFS version 3.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z

For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
root@dara:/[02:19 PM]# 

Storage and power consumption costs

Lately I’ve been thinking more and more about storage. Specifically, at one point I used a Promise 8 disk IDE-to-SCSI hardware RAID enclosure, attached to a Sun system, and formatted with UFS.

Hardware RAID 5 eliminated problems with losing data due to a disk dying a fiery death. I bought 10 Maxtor 120 gig drives at the time, and dropped two on the shelf. Over the course of about two and a half years I used both spare drives to replace the ones inside: once it was a bad block, and the other time a drive had issues spinning up. Solaris 8 had support for only one filesystem snapshot at a time, which was better than no snapshots at all, but not great. I had a script in cron, running once a week, that would snapshot whatever was there, and re-cycle each week. Not optimal, but it saved me some stress a couple of times, when it was late, I was tired, and put a space between the wildcard and the pattern in an rm command.

For the last little while I’ve been trying to migrate to Mac OS X. Part of the reason was the cost of operation. I like big iron, but paying for an E4000 and an external storage array operating 24/7 gets costly when one is a student, as opposed to a productive member of the workforce. I figured it would be cheaper to leave an old G3 iBook running 24/7 – after all, the iBook itself only “eats” 65 watts, right? Generally I turned off most of the other hardware – the Cisco 3640 got replaced by a Linksys WRT54GS running OpenWrt, three other Sun systems got powered down, etc. At that point I only had the iBook and an Ultra 2 running 24/7.

This is when I hit the storage crunch: I was rapidly running out of disk space again, and I still needed occasional access to the data on the old Promise storage array.

The easy solution was to buy more external disk drives, place them in MacAlly USB2/FW external enclosures, and daisy chain them off the iBook. Somehow the iBook ended up with over a TB of disk space daisy chained off it.

fiona:~ stany$ df -h
Filesystem                Size   Used  Avail Capacity  Mounted on
/dev/disk0s10              56G    55G   490M    99%    /
devfs                     102K   102K     0B   100%    /dev
fdesc                     1.0K   1.0K     0B   100%    /dev
<volfs>                   512K   512K     0B   100%    /.vol
automount -nsl [330]        0B     0B     0B   100%    /Network
automount -fstab [356]      0B     0B     0B   100%    /automount/Servers
automount -static [356]     0B     0B     0B   100%    /automount/static
/dev/disk4s2              183G   180G  -6.3G   104%    /Volumes/Foo
/dev/disk2s1              183G   182G  -2.3G   101%    /Volumes/Bar
/dev/disk1s1              183G   183G  -1.0G   101%    /Volumes/Baz
/dev/disk3s1              183G   174G -260.8M   100%    /Volumes/Quux
/dev/disk5s1              183G   183G  -1.2G   101%    /Volumes/Shmoo
fiona:~ stany$ 

In the process I discovered how badly HFS+ sucks at a bunch of things – it will happily create filenames with UTF-8 characters, but it insists on its own normalization, so filenames with things like accent grave or accent aigu coming from other systems don’t carry over cleanly. Migrating files with such filenames from UFS under Solaris ended up being anything but simple – direct copying over NFS or SMB was failing, and untarring archives with such files resulted in errors.

Eventually I resorted to the sick workaround of ext2fsx, and formatted a couple of external 200 gig drives ext2. Ext2 under Mac OS blows chunks too – for starters it was not available for 10.4 for ages, and thus Fiona is still running 10.3.9 (yes, I know that a very preliminary read-only version of ext2fsx for 10.4 is now available; no, I don’t want to betatest it and lose my data). ext2fsx does not support ext3, so one doesn’t get any journalling. So if I accidentally pull on the firewire cable, and unplug the daisy chain of FW drives, I have to fsck them all.
fscking ext2 under Mac OS is a dubious proposition at best, and most of the time fsck_ext2 will not produce an auto-mountable filesystem again. The solution was to keep a CD with Rock Linux PPC in the drive, and boot into Linux to fsck.

I cursed and set all the external drives to automount read-only, and manually re-mount them read-write when I need to. Pain in the back side.

Lately I’ve been eyeing Solaris ZFS with some interest. The big stopping point for me was migration of a volume to a different system (be that the same OS and architecture, or a different OS and architecture altogether). It turns out that migrating between Solaris systems is as simple as zpool export poolname, move the disks to the different system, zpool import poolname, which is a big win. Recently there were rumors that the Linux folks and the Apple folks are porting, or investigating porting, ZFS to Linux and Mac OS X (10.5?), which gives hope of being able to migrate to a different platform if need be.
All of that made ZFS (and by extension Solaris 10) a big contender.

It didn’t help that each of the five external drive power supplies is rated at 0.7 amps, so that’s 3.5 amps right there. The 65 watts that the Apple iBook power adapter is rated for works out to another 0.7 amps or so at the wall (the adapter also generates heat, so the draw is a bit more than a simple watts-over-volts division suggests). Oh, and there is the old Ultra 2, which, according to Sun, consumes another 350 W and generates 683 BTU/hr. So, assuming that Sun actually means that it consumes 350 W, and not that the power supply is rated for 350 W, that’s another 3.2 amps of load.

This adds up to roughly 7.5 amps of continuous draw, 24/7.
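
Adding it up (my numbers from above, rounded, at 110 volts):

5 external drive supplies × 0.7 A  = 3.5 A
iBook power adapter                ≈ 0.7 A
Ultra 2 (350 W / 110 V)            ≈ 3.2 A
                                     -----
total                              ≈ 7.4 A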

This is where I get really confused while reading Ottawa hydro bills.

Looking at Ottawa hydro rates page, I read:

Residential Customer Rates

Electricity*
• Consumption up to 600 kWh per month                $0.0580/kWh
• Consumption above the 600 kWh per month threshold  $0.0670/kWh

Delivery
• Transmission               $0.0095/kWh
• Hydro Ottawa Delivery      $0.0195/kWh
• Hydro Ottawa Fixed Charge  $7.88 per month

Regulatory       $0.0062/kWh**
Debt Retirement  $0.00694/kWh***

Thus, some basic math shows that:

7.5 amps × 110 volts = 825 watts

The 600 kWh per month that Ottawa Hydro is oh so generously offering me at the cheap rate works out to 600,000 Wh / 31 days / 24 hours ≈ 806 watts of average draw.

In other words, I am using up the “cheap” allowance by just keeping two computers and 5 hard drives running.

825 watts × 24 hours × 31 days = 613.8 kWh per month

Reading all the Ottawa Hydro debt retirement (read: mismanagement) bullshit, I get:
6.7 cents + 0.694 cents + 0.62 cents = 8.014 cents/kWh

613.8 kWh × 8.014 cents/kWh = 4919 cents = 49.19 CAD/month
Now, even assuming I was paying 5.8 as opposed to 6.7 cents/kWh, it would still be 613.8 kWh × 7.114 cents/kWh = 43.66 CAD/month.

Not a perty number, right?

So I am asking myself a question now…. What should I do?

I have two large sources of energy consumption – the external drives (I didn’t realize how much power they draw) and the Ultra 2. The iBook on its own consumes minimal power, and thus costs at most about 10$/month to operate.

Option number one – turn off everything, save 50 bucks a month.

Option number two – leave everything running as is, and swallow the “costs of doing business”.

Option number three – Turn off the Ultra 2, for average savings of 22$/month, and lose my e-mail archives (or migrate pine + e-mail to the iBook). Continue living with the frustrations of HFS+.

Option number four – Migrate mail from the Ultra 2 to the iBook. Turn the Ultra 2 off. Migrate all of the drives into the Promise enclosure (how much power it consumes I honestly don’t know, until I borrow a power meter from somewhere – Promise is not listing any information, and neither is there any on the back of the thing), and hook it up to the iBook over a RATOC SCSI-to-FireWire dongle. This would give me somewhere between 1.5 and 2 TB of storage, HFS+ or ext2 based. If I decide to install Linux or FreeBSD on the iBook, well, the more the merrier.

Option number five – Migrate all of the drives into the Promise enclosure, hook it up to the Ultra 2, and turn off (or don’t – on its own it’s fairly cheap to operate) the remaining iBook. Power consumption would remain reasonably stable (I hope. I still have no idea how much power the Promise thing consumes. It might be rated for 6.5 amps on its own). I could install the latest OpenSolaris on the Ultra 2, and format the array using ZFS. No cost savings, lots of work shuffling data around, but also tons of fringe benefits, such as getting back up to date on the latest Solaris tricks.

I’ve just looked at the specs for all the Sun system models that I own (Ultra 2, Ultra 10, Ultra 60 and E4K), and it seems the U2 consumes the least power of the bunch. The Ultra 10 is rated for the same, but generates twice as much heat. Adapting an Ultra 10 for SCSI operation is not that hard, but would force me to scrounge around for bits and pieces, and a dual 300MHz US-II is arguably better than a single 440MHz US-IIi.

I guess there is also an option number 6 – Replace the Ultra 2 with some sort of low power semi-embedded x86 system with a PCI slot for a SCSI controller, and hook the Promise array up to it. Install OpenSolaris, format ZFS, migrate the data over. Same benefits as option 5, with additional hardware costs, and having to use an annoying computer architecture.

I guess I will have to decide soon.

Update: the Promise UltraTrak100 TX8 is rated for 8 amps at 110 volts (4 amps at 220 volts).

CTEact

Just in case I ever need to analyze kernel crash dumps under Solaris SPARC again, this is CTEact, the infamous act tool.

Dave doesn’t let me upload anything but pictures and movies (heh!), so these will have to be renamed appropriately.
CTEact 7.17 SPARC, covers Solaris 2.5 through 8
CTEact 8.2 SPARC, covers Solaris 2.8 through 10.

Care and feeding of a Sun Ultra 5/10

Introduction

I gave away another Sun Ultra 10 today.

As I invariably get questions about Solaris, Sun systems in general, etc, I figure I’ll document some things about caring and feeding for a Sun system.

My experience with Sun systems is somewhat dated – I started with a Sun 3 (3/260), and progressed through sun4, sun4c, sun4m, sun4d (SS1000), onwards to the sun4u architecture. However, the “biggest” sun4u box I’ve played with would be an Enterprise 6500, and the biggest I own is an E4K, so asking me about domains on an E10K or bigger/newer would not get one far. As I’ve been out of the workforce and in school for the last 3 years, my knowledge of Solaris 8 is very solid, and I can get by in Solaris 9, but I know next to nothing about the Solaris 10 changes – Containers, iSCSI, NFSv4, clusters, and other shiny new things that Sun introduced.

But basics are basics, and most of this is either OS independent, or can be transferred over to current versions of Solaris.

So keeping Ultra 5/10 in mind….

Ultra 5/10 hardware

Why am I calling it a 5/10? Because the Ultra 5 and the Ultra 10 share the same motherboard. The Ultra 5 came in a pizza box case, while the Ultra 10 is a mini tower.

Modern Sun systems are very similar to PCs. Ultra 5/10 was one of the first mainstream Sun systems to support IDE (there were others at around the same time – SPARCengine2 comes to mind for some reason). So talking about IDE….

Both the Ultra 5 and 10 were designed to operate with a smart card reader. Personally I’ve never seen one with a card reader installed (maybe the University of Ottawa has some), so all U5s and U10s I’ve encountered have a small “trap-door” in the front, with nothing behind it. On a U5 (which is small, cramped, and not very upgradable), you can install a second internal IDE hard drive in the space designed for the smart card. I had to do that once for an outfit called ResponseLogic around 2000, and from what I remember, it was doable; however, you might need longer IDE cables than the ones the system ships with (or maybe ones that have 3 IDE headers, instead of just 2), and only two screw holes would match the hard drive. The solution to that is either a dremel tool and a drill to make the necessary holes, or just general contentment with being able to install a second hard drive. 🙂 Inside the U10 there is space to mount an additional hard drive, so space is less of a concern.

The IDE bus on a U5/10 is seriously broken from a performance point of view. I remember benchmarking an Ultra 2 with a 300 MHz UltraSPARC II CPU against a U10 with a 440MHz UltraSPARC IIi (?) CPU and a Symbios UW SCSI controller, both driving a multipack of 36 gig SCSI drives in software RAID under Solaris 8. Both had half a gig of RAM. The U2 would generally perform ~10% better in IO operations, because the U10 was booting from IDE, and IDE interrupts were killing the system performance.

With that in mind, if you have a SCSI drive and a PCI SCSI controller with FCode (one that a U10 can boot off of), it would make sense to convert the system to SCSI entirely. Follow this link for good instructions. Plextor SCSI CD-ROM drives and burners are cheap used, and make really good CD-ROM drives for Sun systems in general.

The IDE controller in the U5/10 doesn’t support LBA addresses wider than 28 bits. In practice that means that IDE hard drives larger than about 128 gig are not recognized as such. I’ve never tried to put such a large hard drive into a U10, but I’d speculate that one can’t access the space beyond the 28 bit boundary, while otherwise the drive works.

Sun systems use OpenBoot (or Open Firmware) in place of a BIOS.

The primary language of Open Firmware is Forth, a stack-based language (think PostScript, not Lisp).
Some people are obsessed with Forth, and write crypto or play Tower of Hanoi in it.

OpenBoot was standardized as IEEE 1275, but AFAIK the standard wasn’t re-affirmed by the Open Firmware Working Group (politics, I guess), lapsed, and nowadays Sun, Apple, IBM and whoever else are just doing their own thing. Wikipedia has more, so I’ll just throw a bunch of links at the curious:

OpenFirmware Working Group site
OpenFirmware Working Group site (mirror, sometimes more up to date then the main site)
FirmWorks generic Open Firmware Quick Reference
Sun OpenBoot Collection – contains reference books for OpenFirmware 2.x (Book P/N 806-2907), 3.x (P/N 806-1377) and 4.x (P/N 816-1177), plus Writing FCode (P/N 806-1379)
The following are Apple’s Technotes on the fundamentals of Open Firmware:
TN1061: Part I: User Interface
TN1062: Part II: The Device Tree
TN1044: Part III: PCI Rom Expansion Choices for Mac OS
There are many more Apple specific bits on Open Firmware (such as setting up kernel debugging over ethernet) at the above link

Eclectic List of OpenFirmware commands

After playing with OpenBoot on Sun workstations/servers, on modern PPC Apple systems, and on a NetApp filer (the F760, at least, had firmware written for NetApp by FirmWorks), I can say that Sun’s implementation is the nicest, not least because it includes online help.

Nothing substitutes for reading the docs above, and while Open Firmware is the “same” everywhere, each vendor defines their own commands, etc. Some commands that return pretty pictures on a Sun (banner, for example) return nothing on a Mac.

There are a bunch of hidden settings that can sometimes be found by typing words at the OpenBoot prompt. words just dumps all the known words – i.e. commands that have been defined.

Here are a couple of suggestions for investigation at the OK prompt:
probe-ide and probe-scsi-all – will list IDE and SCSI devices (will return nothing or an error if you don’t have IDE or SCSI, or the words are undefined).
.speed – returns the speed of the processor(s), e.g. on a dual CPU 300MHz Ultra 2 (the {1} prompt refers to the second CPU):

{1} ok .speed
CPU  Speed : 296.00 MHz
UPA  Speed : 098.66 MHz
SBus Speed : 025.00 MHz
{1} ok

test-all – tests all hardware that has diagnostics. Might take a while. Can be used in conjunction with setenv diag-switch? true to troubleshoot hardware. Hardware or trouble might or might not shoot back.

show-devs to list available devices (another option is cd / followed by ls to look at the device tree natively). If you end up cd’ing to a device in the device tree, you can try .properties, if it’s listed by ls, to see the properties that particular device publishes. *shrug*. Sun has an example of use.

printenv to look at all the variable settings
setenv foo bar – to set environment variable foo to bar.
Most common settings that I use for debugging are:

setenv diag-switch? true
setenv auto-boot? false

This enables firmware diagnostics output on a Sun, and in conjunction with a serial console logs lots and lots of interesting information about the state of the hardware. Note that on big iron, such as an E4K coming from cold to warm state, a full diag can take a good chunk of an hour (5×400 MHz CPUs and 6.5 gigs of RAM in my E4K take ~15 minutes to test; this is when you start playing with setenv diag-level min (or max) to balance between more hardware tests taking longer, and minimal hardware tests taking less time). The auto-boot? variable tells the system whether it should try to boot the OS right away, or drop to OpenFirmware after power-on and wait for a boot command.

Undoing the damage above is done thusly:

setenv diag-switch? false
setenv auto-boot? true
reset

and you probably want to do the above before removing that serial cable from console, and rebooting the system unattended.

Note: the boot command can take arguments that get passed to the kernel. The most common Solaris ones are:
-v Verbose boot – the kernel tells you what it does.
-r Reconfiguration boot – the kernel instructs drivers to look for new devices added/removed since the last boot, and a bunch of scripts get triggered on boot-up to re-populate the device tree. I’ll refer you to /etc/init.d/drvconfig and /etc/init.d/devalias on a Solaris system for more info. Oh, and drvconfig has a man page.
-s Boot into single user mode
-a Ask. For when you really, really screwed up your system by editing /etc/path_to_inst, /etc/system, etc, BUT made a backup beforehand. If you are lucky, you might be able to get the system back to a bootable state and undo whatever you did. However, if you need the -a option, you might be better off booting off CD into single user mode, mounting the drive, and undoing the damage that way (see the sketch below).
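
A minimal sketch of that CD recovery dance, assuming a hypothetical root slice of c0t0d0s0 and a backup copy named /etc/system.bak:

ok boot cdrom -s
[...]
# mount /dev/dsk/c0t0d0s0 /a
# cp /a/etc/system.bak /a/etc/system
# umount /a
# reboot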

Folks at Princeton have some notes on troubleshooting the Solaris boot sequence.

Oh, and from inside Solaris there is access to the NVRAM variables using the eeprom utility (eeprom variable=value), and you can trigger a reconfiguration boot by touch /reconfigure followed by init 6 or reboot.
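
For example (a quick sketch; quote the variable name so the shell doesn’t try to glob the question mark):

# eeprom "auto-boot?=false"
# eeprom "auto-boot?"
auto-boot?=false
# touch /reconfigure && init 6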

OpenBoot Firmware Updates

I guess I should mention that firmware on Sun systems is flashable.

If you have Solaris installed, you should consider updating the firmware to the latest version, by going to Sunsolve, and in patchfinder, finding the right patch for your system.

A patch generally includes an install.info file that documents the installation procedure, and a README file that documents the list of bugs fixed by the patch. OBP patches generally require one to reboot, and boot from a particular file included in the patch.

Prior to doing this, one might be required to open the system up, and move a jumper on the motherboard from the write-protect to the write-enable position.

The locations of the jumpers, etc. can be looked up either in the print version of the Sun Field Engineer Handbook, or at the Sun Systems Handbook online.

Here are some systems, and their corresponding patchIDs for OpenBoot updates (Search term is “Standalone Flash PROM Update”)

Ultra 1 (not Enterprise, 10bt) – patch# 104881
Ultra 1E (Enterprise, 100bt) – patch# 104288
Ultra 2 – patch# 104169
Ultra 5/Ultra 10 – patch# 106121
Ultra 60 / E220 – patch# 106455
Ultra 80 /E420R – patch # 109082
Ultra 450/E450 – patch # 106122
E250 – patch # 106503
E3x00, E4x00, E6x00 – patch# 103346

Breaking your Sun box, at OBP

And, to close off this section…. two quick “hacks”

Changing the MAC/hostid of your Sun box for fun and profit.
If for some reason you need to change the hostid or MAC of your Sun system, please refer to the great Sun NVRAM/hostid FAQ by Mark Henderson. I don’t want to fall into the trap of discussing why you’d want to do it, but if your OBP has the mkp command (i.e. AFAIK anything older than a SunBlade should work, and I’ve tested this on SS10, SS20, U1, U2, U10, U60 and E4K myself)….

01 0 mkp
80 1 mkp  <= System type. For sun4u arch - 80. For sun4m arch - 72. Anything else - read the FAQ
08 2 mkp  <= Sun's OUI is always 08:00:20, which makes up the next three settings of the MAC
0  3 mkp
20 4 mkp
c0 5 mkp  <= c0:ff:ee to generate 08:00:20:c0:ff:ee as the MAC
ff 6 mkp
ee 7 mkp
0 8 mkp
0 9 mkp
0 a mkp
0 b mkp
c0 c mkp
ff d mkp
ee e mkp
0 f 0 do i idprom@ xor loop f mkp  <= Calculates the checksum of what you did, and stores it

The above should generate a hostid of 80c0ffee and MAC of 08:00:20:c0:ff:ee.
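
Once Solaris is back up, the hostid command should print what you just programmed, and ifconfig -a should show the new MAC:

$ hostid
80c0ffee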

Oh, and if you have a dead battery in your NVRAM chip, and the system comes up with a corrupt settings error on bootup and refuses to boot, this will at least get it bootable.... until you yank the power and the NVRAM loses its settings again. It helped me a couple of times while I was waiting for a new clockchip to arrive.

Note: for the sun4m and sun4d arch, if the above doesn't work, there is a second way (c!) to do it, documented in the FAQ.

Note to self: if playing with multi-board big iron, one might need to follow up with copy-clock-tod-to-io-boards to synchronise NVRAM contents between the clock board (that you just edited) and the I/O boards that still have the old data. The reverse (if you replaced the clock board, and are pushing settings from I/O board boardnum to the clock board) is boardnum copy-io-board-tod-to-clock-tod. tod is, of course, Time Of Day 😛


The kind folks at PCI Alternatives mention that there is a way to overclock US-II chips, at least from OBP. Their example is a U5/10, and I've never done this myself, but....

also hidden followed by nnn at-speed will change the clock speed to nnn MHz
.speed to verify, of course

Sun SPARCengine CP1500-440 Thermal Considerations (page 6) states that d# must be in front of the CPU speed; however, as this is an undocumented setting, YMMV. Sun's documentation also has instructions on saving the command to NVRAM to be executed at each boot-up.

ok setenv auto-boot? false
ok reset
ok also hidden
ok d# 297 at-speed
ok .speed 

The PCI Alternatives folks claim that a 270MHz U10 can be pushed to 297MHz (+10%), and a 333MHz U10 can be pushed to 370MHz (+11%).

Can 440MHz be pushed up to 480? I’ll test it some time, and follow up, I guess.

A “safe” approach for something like this would be to run it without saving to NVRAM, starting at +10% clock speed, and run SunVTS to check that the system is stable. If it is, either increase the speed by another couple of ticks and run SunVTS again, or just be happy, and save it in NVRAM.


Oh, and as a bonus to the patient reader….

Entering obdiag, the extended diagnostic mode present in the U5/10 and newer, is done by setting the following environment variables:

ok setenv diag-switch? true
diag-switch? =        true
ok setenv auto-boot? false
auto-boot? =          false
ok setenv mfg-mode on
mfg-mode =            on
ok reset-all

[system resets at this point]

ok obdiag

obdiag should return a bunch of loading messages followed by:

    OBDiag Menu

  0 ..... PCI/Cheerio
  1 ..... EBUS DMA/TCR Registers
  2 ..... Ethernet
  3 ..... Keyboard
  4 ..... Mouse
  5 ..... Floppy
  6 ..... Parallel Port
  7 ..... Serial Port A
  8 ..... Serial Port B
  9 ..... NVRAM
 10 ..... Audio
 11 ..... EIDE
 12 ..... Video
 13 ..... All Above
 14 ..... Quit
 15 ..... Display this Menu
 16 ..... Toggle script-debug
 17 ..... Enable External Loopback Tests
 18 ..... Disable External Loopback Tests

 Enter (0-13 tests, 14 -Quit, 15 -Menu) ===>

14 bails one out (setenv mfg-mode off might be a good idea at that point). 16 enables verbose mode. 13 tests everything.

For more information, refer to Sun Ultra 5 Service Manual (P/N 805-7763) Section 4: Troubleshooting procedures (Page 4-12 in rev 12 of the above manual, page 84 of the PDF)

Expansion options

I’ve had great luck with Symbios-made PCI SCSI controllers based around the NCR chipset. In one case a PCI controller (not Sun branded and without OBP FCode in the PROM) was not recognized by the OBP in an Ultra 60, however it was recognized by Solaris 8 once the OS booted. It turned out that updating the OBP to the latest version made the OBP recognize the SCSI controller.

According to http://pci.unsupported.info/, NCR53c875 based cards are generally recognized by the OBP, and the NCR53c810 is recognized by the glm driver in Solaris. Their experience is with Compaq branded cards.

Now that the Solaris source code is freely available, and a driver development kit is available, it should be reasonably simple to port drivers from Solaris Intel to Solaris SPARC. I toyed with this in Solaris 7 (when Sun first released a stripped down version of the source code to the great unwashed under a general NDA), but it is probably even easier now.

Note that if a PCI card doesn’t have its own FCode in ROM, and is not amongst the devices supported by the OBP out of the box (built-in drivers), you won’t be able to use it before the system boots and the driver loads. This means no netbooting on cheap network controllers, and no booting from a cheap SCSI controller. Or, I guess, no video on that Matrox or ATI video card before Solaris loads and X starts.

Installing Solaris

The oldest version of Solaris that will install on an Ultra 5/10 is 2.6 HW 3/98; the newest is whatever is current as of this writing. Personally, I’d recommend 8 for now, as it’s solid, still supported, and well understood (at least by me), although that depends on the purpose – if one wants to learn the latest and greatest, of course Solaris 10 is the way to go. If one wants to be nostalgic, Solaris 2.6 was a very solid release.

The latest version of Solaris is downloadable from Sun. In addition, Solaris Express, which is arguably more “bleeding edge”, is also downloadable. Lastly, there exists Solaris Express: Community Release. Confused yet? Solaris Express is the basis for Solaris 11, and the Community Release is as bleeding edge as it gets. Older versions of Solaris used to be downloadable, but are no longer. If you don’t have a friend with a CD (or a CD image), your Solaris choices might be limited.

Depending on the version of Solaris you run, and the disk type you use, you might run into problems with the disk size and the size of the root partition. Solaris 2.6 and 7 SPARC on IDE devices have some interesting features that prevent them from booting, or even from accessing the disk. Certain versions of Solaris (2.6 SPARC on a Tadpole SPARCbook comes to mind) had issues with IDE disks larger than 8 gigs. Certain versions of Solaris (7 SPARC comes to mind) had issues with the root partition on an IDE disk being set too large. Thus the root partition on an IDE disk should probably be less than 2 gigs, just to be on the safe side. Please refer to questions 5.63 and 5.64 of the Solaris FAQ for more information.

Normally with Solaris 8 I don’t bother with the graphical “Web Start” installation method. Booting from the CD labeled 1 of 2 (not the WebStart one), I get dropped into the old style installation process.

Partitioning

The following is by no means exhaustive or “correct”, but will arguably cause you less grief than the auto-layout that Sun recommends.

Sun partitioning supports “slices”, which used to refer to partitions on SCSI drives. While SCSI drives support up to 8 partitions, IDE drives physically support only 4, so on an IDE drive Solaris writes a single physical partition, and then inside it creates 8 logical ones (even if you don’t use a slice, that doesn’t mean it’s not there). But this is all boring hardware stuff that the OS abstracts away anyway, and chances are the only time you’ll encounter it is if you are trying to multi-boot a Sun box between Solaris and Linux, or install Solaris on an x86 box. But this is not a paragraph about multi-booting, it’s a paragraph about partitioning, so….

The following partitioning works for me (format output of a 9 gig SCA drive):

partition> p
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -  584        1.00GB    (585/0/0)   2100735
  1       swap    wu     585 - 1169        1.00GB    (585/0/0)   2100735
  2     backup    wm       0 - 4923        8.43GB    (4924/0/0) 17682084
  3 unassigned    wm    1170 - 1171        3.51MB    (2/0/0)        7182
  4        usr    wm    1172 - 1756        1.00GB    (585/0/0)   2100735
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm    1757 - 4923        5.42GB    (3167/0/0) 11372697

partition> 

slice 0 – root partition. Mounts as /, and I usually go for between 1 and 2 gigs in size.
slice 1 – swap partition. The rule of thumb is 2x the RAM in a system, although this is flexible, and if the system has gigs and gigs of RAM, maybe 1x RAM + 200 megs is good enough. The rationale is that in the event that you end up with a kernel panic, or force a system dump at the OBP, swap is where the dump gets written. Swap is also used by the system as it boots up, before it recovers the dump and writes it to a file. Yes, in theory the dump is compressed as it’s written. But if your system died, and nothing is going right, do you think that compression will be effective?
slice 2 – whole disk. Used by things like fsck, format, mount, etc to address the entire drive, and is never accessed directly by a user – well, not by a user who knows what he’s doing. Sun sets slice 2 up by default, so just leave it alone.
slice 3 – unmounted, unformatted partition of 5 – 10 megs in size, used to store metadb replicas. What are metadb replicas, I hear you ask. metadb replicas are small databases of metadevice information, used by the software RAID and mirroring tools that used to be called Solstice DiskSuite and are part of the OS as of Solaris 8. Even if you think you’ll never use DiskSuite, do create the slice: it’s a small investment of disk space, and it saves you lots and lots of hairpulling later. Each replica is ~2 megs in size, so 5 megs is a good number, as you’ll want a couple of replicas per disk.
slice 4 – usr. Sun mounts it as /usr, and that’s fine. Under Solaris 2.6 – 8, one gig might be enough, but 2 gigs is probably better if you have the disk space, just to be on the safe side.
slice 5 and slice 6 – You can create a slice holding /var here. In fact, I recommend either creating a /var slice, or running dumpadm and changing the default savecore directory, into which the kernel crash dump gets placed, from /var to /opt (or wherever you have lots of disk space).

root@llewella:/usr/exim[02:09pm]# dumpadm 
      Dump content: kernel pages
       Dump device: /dev/md/dsk/d20 (swap)
Savecore directory: /var/crash/llewella
  Savecore enabled: yes
root@llewella:/usr/exim[02:10pm]# 

dumpadm has a man page.
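
Switching the savecore directory is a one-liner; a sketch, with /opt/crash as an assumed destination:

# mkdir -p /opt/crash
# dumpadm -s /opt/crash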

slice 7 – This is the rest of the disk that you still haven’t allocated. I mount it as /opt, symlink /home to /opt/home, and kill the automounter (which tries to automount /home by default; see the sketch below).
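
A sketch of that automounter change, assuming the stock Solaris 8 /etc/auto_master (the exact map line varies between releases). Comment out the /home map:

#/home          auto_home       -nobrowse

and then restart autofs so it lets go of /home:

# /etc/init.d/autofs stop
# /etc/init.d/autofs start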

/opt is where things live in my world:

root@llewella:/opt[02:14pm]# ls
SUNWapcy         bind             gpg              ncftp
SUNWconn         db-4.2.52.2      home             ncftp-3.1.3
SUNWits          exim             ipf              patchdiag-1.0.4
SUNWppro         fetchmail        lost+found       perl
SUNWsdb          gcc              lsof             perl-5.8.4
apache           gcc-2.95.3       maker            soma
archive          gdb              mc
audioctl-1.1     gnu              mp3
root@llewella:/opt[02:14pm]# 

My world is not perfect, but it works 😛

Patches

For the longest time, patching Suns was simple. Every once in a while (once a month was the norm where I worked), the sysadmin would schedule downtime for a reboot, and a day or so beforehand ftp over to sunsolve.sun.com/patchroot/clusters, grab the jumbo patch cluster that corresponds to the release of the OS he runs, and uncompress it. If the sysadmin was worth his salt, and had time, he’d read the READMEs for each patch, and check for incompatibilities. If the sysadmin was optimistic, he’d just run install_cluster, and hope that Sun QAed the jumbo cluster properly (hint: Sun doesn’t QA jumbo clusters, only individual patches, so there are times when one patch breaks another. Bad sysadmin. Bad!). This all worked until Solaris 9. By Solaris 10, the patch clusters are no longer there:

ncftp /patchroot/clusters > dir 9*
-rw-r--r--   1 130        14540   Mar 31 23:45   9_Recommended.README
-rw-r--r--   1 130    186986848   Mar 31 23:46   9_Recommended.zip
-rw-r--r--   1 130        17253   Sep 27  2005   9_SunAlert_Patch_Cluster.README
-rw-r--r--   1 130    168473046   Sep 27  2005   9_SunAlert_Patch_Cluster.zip
-rw-r--r--   1 130        13279   Mar 30 20:59   9_x86_Recommended.README
-rw-r--r--   1 130    116337317   Mar 30 20:59   9_x86_Recommended.zip
-rw-r--r--   1 130        15596   Oct  7  2005   9_x86_SunAlert_Patch_Cluster.README
-rw-r--r--   1 130    105728719   Oct  7  2005   9_x86_SunAlert_Patch_Cluster.zip
ncftp /patchroot/clusters > dir 10*
-rw-r--r--   1 130        10594   Apr  3 22:51   10_Recommended.README
-rw-r--r--   1 130         9860   Oct 12 17:24   10_SunAlert_Patch_Cluster.README
-rw-r--r--   1 130        11426   Mar 31 23:53   10_x86_Recommended.README
-rw-r--r--   1 130        10110   Oct 14 19:51   10_x86_SunAlert_Patch_Cluster.README
ncftp /patchroot/clusters > 
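
For the releases where a cluster is still downloadable, the dance is short. A sketch, assuming the Solaris 8 SPARC cluster and the install_cluster script name that recent clusters ship with (read the cluster README for incompatibilities first):

# unzip 8_Recommended.zip
# cd 8_Recommended
# more CLUSTER_README
# ./install_cluster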

So off one goes to http://sunsolve.sun.com/, logs in, accepts a long license agreement, and selects patch finder.

There used to be a patchdiag tool to compare the patches on the current system against the latest and greatest. patchdiag requires one to download the latest patch cross-reference database, patchdiag.xref, from Sun each time you want to run it (required in the sense that you want to compare against the latest patches, right?). The latest database is at http://patches.sun.com/reports/patchdiag.xref
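
Fetching a fresh database before each run is easily scripted; a sketch, assuming patchdiag lives in /opt/patchdiag-1.0.4 and wget is installed:

# cd /opt/patchdiag-1.0.4
# wget -O patchdiag.xref http://patches.sun.com/reports/patchdiag.xref
# ./patchdiag -l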

Alternatives to patchdiag are Patch Check Advanced, vxpref, or patchfetch2. All use the patchdiag.xref file; some are prettier than others. I use patchdiag, but maybe I am a traditionalist.

Some of the patches patchdiag reports are “free”, while most are paid. So the solution is either to pay for a support contract, or sigh and stay out of date.

So for Solaris 10 the way to stay up to date is to start by downloading the latest free jumbo cluster from patch finder, and then use the paid SunUpdate service.

root@llewella:/opt/patchdiag-1.0.4[02:39pm]# ./patchdiag -l 
======================================================================================
System Name: llewella.NotBSD.org         SunOS Vers: 5.8         Arch: sparc
Cross Reference File Date: Apr/05/06

PatchDiag Version: 1.0.4
======================================================================================
Report Note:

Recommended patches are considered the most important and highly
recommended patches that avoid the most critical system, user, or
security related bugs which have been reported and fixed to date.
A patch not listed on the recommended list does not imply that it
should not be used if needed.  Some patches listed in this report
may have certain platform specific or application specific dependencies
and thus may not be applicable to your system.  It is important to
carefully review the README file of each patch to fully determine
the applicability of any patch with your system.
======================================================================================
INSTALLED PATCHES
Patch  Installed Latest   Synopsis
  ID   Revision  Revision
------ --------- -------- ------------------------------------------------------------
108434    17        21    SunOS 5.8: 32-Bit Shared library patch for C++
108435    17        21    SunOS 5.8: 64-Bit Shared library patch for C++
108528    29     CURRENT  SunOS 5.8: kernel update  and Apache patch
108569    06        08    X11 6.4.1: platform support for new hardware
108605    22        37    SunOS 5.8: Creator 8 FFB Graphics Patch
108606    18        39    SunOS 5.8: M64 Graphics Patch
108652    83        97    X11 6.4.1: Xsun patch
108693    24        26    Solstice DiskSuite 4.2.1: Product patch
108714    05        08    CDE 1.4: libDtWidget patch
108723    01     CURRENT  SunOS 5.8: /kernel/fs/lofs and /kernel/fs/sparcv9/lofs patch
108725    16        24    SunOS 5.8: st driver patch
108727    26     CURRENT  Obsoleted by: 116959-05 SunOS 5.8: /kernel/fs/nfs and /kernel/fs/s
108773    12        23    SunOS 5.8: IIIM and X Input & Output Method patch
108806    18        20    SunOS 5.8: Sun Quad FastEthernet qfe driver
108808    42        44    SunOS 5.8: Manual Page updates for Solaris 8
108813    17     CURRENT  Obsoleted by: 117000-05 SunOS 5.8: Sun Gigabit Ethernet 3.0
108820    01        03    SunOS 5.8: nss_compat.so.1 patch
108823    01        02    SunOS 5.8: compress/uncompress/zcat patch
[...]

Oh my. I guess I’ve been slacking in patching.

Sun3

My first Sun system was a Sun 3/260, surplus from DND. Based on what I understand, it was a screw-up on DND’s part: while the system lacked hard drives, it was still Tempest shielded (and the general public is not supposed to gain access to Tempest shielded gear). It wasn’t a “normal” Sun 3 – the chassis was made out of 2.5mm thick steel, each edge had a copper ribbon glued to it, and parts that weren’t openable in day to day maintenance were held together by screws every inch or so.

Oh, and it was protected by 3 Medeco locks.

I’ve had lots of fun with it, back in 1993.

But why am I bringing it up?

If you are morbidly fascinated by Sun3 arch and have a bunch of Sun3/60s kicking around, you can get them going.

Some people still maintain an archive of SunOS software for Sun3 arch, and a bunch of pre-built packages. Perl 5.8.8 and OpenSSH 4.2 are some of the things pre-packaged and available for download (which makes the people who run the Sun3 archive officially obsessed). Install instructions are available.

So grab that 12 slot VME chassis from a Sun 3/260, drop in a bunch of Sun 3/60 boards, netboot them all, and turn them into a SunOS cluster. You know you want to, and your local electrical utility wants you to too 🙂

P.S. On the subject of Tempest shielded gear – Sun made at least the SPARCstation T2, which was an SS2 in a Tempest shielded chassis, with a weird round connector on the back for the serial port. It was explained to me at the time (many years ago) that a shielded serial cable could be attached to it, to provide a serial console.

Sun Studio 11 (Compilers/Developer Suite) is now free

Alan Coopersmith (whom I’ve never met, yet whom I respect about as much as I respect Casper Dik) mentions that Sun Studio 11 is now free.

Download link is here.
Specifications – basically Solaris 8 or newer on SPARC or x86. There is mention of Linux (RH 4 or SuSE 9) on the sysreq page too, but I am kind of both disinterested in Leenuks, and somewhat puzzled, as RH 4 is circa 1997 and the Sun folks probably mean Fedora Core 4. Or something.

Here Alan is doing some comparisons between Sun cc and gcc for compiling the X subsystem for Solaris.

I own a single license for Sun Studio 6, which I picked up at a dot.com bankruptcy auction (it was a box with never-registered license codes) for 100 CAD. As part of the deal I got about half a cubic meter of Cisco propaganda for Cisco 25xx routers, but it was worth it.

Owning a license for Sun’s C compiler for a while made me the coolest kid on the block, as I could compile 64 bit versions of IPF (gcc at the time stood in the corner and nervously smoked whenever it had to compile 64 bit kernel modules).

DYLD_LIBRARY_PATH

Does anyone have any clue why the vast majority of the dynamic linkers out there (Solaris, Linux, BSD, etc) use the LD_LIBRARY_PATH variable to specify where to load dynamic libraries from, yet Darwin/MacOS X uses DYLD_LIBRARY_PATH?

*grumble*

Compiling Aladdin Ghostscript 8.51 from source: it’s not hard, just quirky. Oh, and jpegsrc-6 and zlib-1.2.2 both need a config.sub from a recent package for configure to recognize Darwin/MacOS X.

Rendering a manpage

This is more of a general Unix hint that is not really MacOS X specific.

If you have a manpage that you want to look at that is not in $MANPATH (something that got installed by hand into a custom directory, for example something that was built and installed using
./configure --prefix=/opt/packagename && make install), yet you know where it is (for example because you ran /usr/libexec/locate.updatedb as root at least once since, and can now use locate), you can use nroff to render the man page into text:

stany@gilva:~[12:06 AM]$ ls -la /opt/gnu/man/man6/figlet.6 
-r--r--r--   1 root  501  21054 Sep  3 17:41 /opt/gnu/man/man6/figlet.6
stany@gilva:~[12:06 AM]$ nroff -man /opt/gnu/man/man6/figlet.6 | head -20
FIGLET(6)                                                            FIGLET(6)



NAME
       FIGlet - display large characters made up of ordinary screen characters


SYNOPSIS
       figlet [ -cklnoprstvxDELNRSWX ] [ -d fontdirectory ]
              [ -f fontfile ] [ -m layoutmode ]
              [ -w outputwidth ] [ -C controlfile ]
              [ -I infocode ] [ message ]


DESCRIPTION
       FIGlet prints its input using  large  characters  (called  ``FIGcharac-
       ters'')made  up  of  ordinary  screen  characters (called ``sub-charac-
       ters'').  FIGlet output is generally reminiscent of the sort of  ``sig-
       natures''  many people like to put at the end of e-mail and UseNet mes-
stany@gilva:~[12:06 AM]$
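
nroff’s output still contains backspace overstrikes for bold and underlined text; if you want clean plain text (to grep, or to dump into a file), pipe it through col -b, which strips them. Something like:

stany@gilva:~[12:07 AM]$ nroff -man /opt/gnu/man/man6/figlet.6 | col -b > figlet.txt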