Using Adobe Acrobat to view PDFs in Safari 5.1.x and Mac OS X 10.6.8

Safari no longer displays PDF files, and hasn’t done so on my machine for months. It does not bother me much, as I prefer to download them anyway: click in the URL bar, hold Option, and hit Return, and the PDF downloads instead of displaying.

But a client called and complained that they needed to be able to fill in online PDF forms, and when they clicked the link all they got was a black screen. So I went and figured it out:

On Mac OS X 10.6.8 with up-to-date versions of Safari, you need to make sure that Safari is running in 64-bit mode for the Acrobat Reader plugin to work.

To get it to do so, quit Safari, find it in your Applications folder, right-click on Safari and choose Get Info. Uncheck the box that says “Open in 32-bit mode”. Launch Safari, and viewing PDFs in Safari with Acrobat Reader will now work.
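If you want to test 64-bit mode before flipping the Get Info checkbox, the arch command can force a one-off launch from Terminal. This is just a quick check, not a permanent fix:

bash$ arch -x86_64 /Applications/Safari.app/Contents/MacOS/Safari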

You can also run into problems if you have both Acrobat Reader and Acrobat Pro installed; any updates to the Pro version may mess up your browser plugins. To fix this you need to delete the plugins and reinstall Acrobat Reader.

The AdobePDFViewer plug-in is used to display PDF files in Safari using Acrobat and Reader. This plug-in is installed as part of the Acrobat X or Reader X installation. The location of this plug-in is:

Macintosh HD/Library/Internet Plug-ins/AdobePDFViewer.plugin

Details are from Adobe’s Help page: Troubleshoot Safari Plug-in

To remove the plugin: quit Safari, then go and delete the plugin. Yes, there is a second one, called AdobePDFViewerNPAPI.plugin; you can ignore it.
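If Terminal is more your thing, the same thing looks something like this (the plugins are bundles, i.e. directories, hence rm -rf; the second line is only for the delete-both-plugins case mentioned at the end of this post):

bash$ sudo rm -rf "/Library/Internet Plug-ins/AdobePDFViewer.plugin"
bash$ sudo rm -rf "/Library/Internet Plug-ins/AdobePDFViewerNPAPI.plugin"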

Then reinstall the latest version of Acrobat Reader. You can find the various installers on Adobe’s Acrobat Reader download page.

I should also mention: if you don’t need Acrobat Reader to view PDFs in Safari and would prefer to use the built-in viewer, just delete both of the plugins mentioned above and then restart Safari.

How to buy a used Mac

A client asked about the best approach to buying a few used iMacs for her family for Christmas, to which I replied:

I would not use eBay at all; I would go to Kijiji and Craigslist. That way you can actually go and see the computer before buying it. Yes, you will probably pay a bit more for it, but it will be fewer headaches in the end!

As for which models to buy, download the Mactracker app for iPhone or Mac, and use it to look up and compare against what is for sale. The models you want to avoid are the ones that do not meet the requirements for running Lion:

http://www.apple.com/macosx/specs.html

Don’t worry if the iMac you are looking at does not already have 10.6 or 10.7 on it; just look at the CPU speed and RAM. The OS itself you can update later, since you already own copies.

Once you find a likely iMac, ask the seller for the serial number. If they are unable or unwilling to provide it, move on. Once you have it, put it into this page:

https://selfsolve.apple.com/agreementWarrantyDynamic.do

which will return the warranty and service details on the iMac, and this page:

http://support.apple.com/specs/

which will return the specs on the iMac. Oh, I also ask sellers about pets and smokers, since my kids have allergies and I can’t stand the smell of smoke. Those are my biases; it’s up to you whether to ask in advance or decide when you get on site. Once you have those details you can decide if it’s worth looking into further. I’m assuming you’re going to read all about how to avoid scams, so I will not go into any of those details.

Next step? You’ve contacted the seller, brought a friend along, and are meeting the seller and looking at the iMac. Ignore the iMac for a minute and look around; that will tell you a lot more than looking at the iMac itself. Once you are back at the iMac, boot it up and make sure it is the same one you were told about: check under “About This Mac” for the serial number, CPU and RAM details. If you have a USB key you can check each USB port to make sure it works, and if you have a DVD you can make sure the drive works. That’s about it.

Once you have it back home, use your handy OS X 10.6 install DVD to erase the hard drive and reinstall the OS from scratch. Unless you have a 10.7 installer, at which point you should use that instead!

Have fun, and feel free to get in touch with me if you have any questions!

P.S. RAM is cheap and easy to upgrade in an iMac, so you might take that into consideration as well: low RAM in the iMac might be a benefit, as you can get a good deal on it and then add RAM yourself. Not sure what kind of RAM you need, and what the costs are? Take a look at http://canadaram.com for details.

Cisco hardware emulator

dynamips is an emulator of various Cisco platforms that is licensed under the GNU GPL and runs under Windows, Linux, Solaris, Mac OS, etc.

Dynamips started off as a MIPS emulator for the Cisco 7200, and gradually became capable of emulating the Cisco 7200 family, the 3600 family, the 2600 family (with some exceptions), and the Cisco 3725 and 3745. Since it is a hardware emulator, it is bug-for-bug compatible with the real iron, and IOS on it has the same bugs as on the physical hardware. Since it supports a hypervisor mode, it is possible to run more than one router emulation on a single system, all connected through a virtual network. The latest release candidates support packet capture on the virtual interfaces between the routers.

Performance of the emulator is not that great (1 or 2K packets per second, compared to the 100s of kpps that actual hardware supports), but it is useful for testing configurations, preparing for Cisco certifications, debugging IOS, etc. I found it while reading up on IOS security, but people in Cisco TAC, as well as people preparing for (and passing) CCIE exams, have indicated in the 7200emu forums that they use dynamips.

A current PC with a gig or two of RAM can support a dozen or so router instances.

Based on information from the developer, we should not expect switch emulation support in the foreseeable future. Switches use custom ASICs, so while the main CPUs (MIPS or PPC) that the switches use are supported, it is very tricky to emulate the power-on self-tests of the ASICs (sending packets over loopback, etc.) that switches attempt before declaring themselves functional. However, the 7200 is a bitchin’ platform for pretty much anything, capable of running the latest and greatest IOS.

Blog of the author, where the newest release candidates of the software are announced. Best place to check to see what bugs got fixed, and what line cards got supported in the latest release.

Forums/discussion board for c7200emu, moderated by the software’s author.

c7200emu – the dynamips project page, with a more or less up-to-date list of supported platforms.

Dynagen – a dynamips configuration front-end that allows one to easily configure and manage dynamips instances. Currently considered a must-have companion to dynamips (see the sketch after this list).

dynamips TODO list, which allows you to see what the developer is thinking about improving.
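To give a flavour of what Dynagen does, here is a minimal network file in its INI-style format, along the lines of the sample labs it ships with. The image path is a placeholder and I have not run this exact file, so treat it as a sketch:

[localhost]
    [[7200]]
        # point this at your own IOS image
        image = /opt/ios/c7200-ios.image
        npe = npe-400
        ram = 160

    [[ROUTER R1]]
        s1/0 = R2 s1/0

    [[ROUTER R2]]
        # the s1/0 link defined on R1 takes care of this side too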

P.S. If you lack elf.h, try libelf. In order to build it, you might need GNU sed.

ZFS (Part 1)

Over the last year I was getting more and more curious/excited about OpenSolaris. Specifically, I got interested in ZFS – Sun’s new filesystem/volume manager.

So I finally got my act together and gave it a whirl.

Test system: Pentium 4, 3.0GHz in an MSI P4N SLI motherboard. Three ATA Seagate ST3300831A hard drives, one Maxtor 6L300R0 ATA drive (all nominally 300 gigs – see previous post on slight capacity differences). One Western Digital WDC WD800JD-60LU SATA 80 gig hard drive. Solaris Express Community Release (SXCR) build 51.

Originally I started this project running SXCR 41, but back then I only had three 300 gig drives, and that was interfering with my plans for RAID 5 greatness. In the end the wait was worth it, as ZFS got revved since.

A bit about the MSI motherboard. I like it. For a PC system I like it a lot. It has two PCI slots, two full-length PCI-E slots (16x), and one PCI-E 1x slot. Technically it supports SLI with two ATI CrossFire or Nvidia SLI capable cards, however in that case both full-length slots will run at 8x; a single card will run at 16x. Two dual-channel IDE connectors, four SATA connectors, built-in high-end audio with SPDIF, built-in GigE NIC based on a Marvell chipset/PHY, serial, parallel, built-in IEEE 1394 (iLink/FireWire) with 3 ports (one on the back of the board, two more can be brought out). Plenty of USB 2.0 connectors (4 brought out on the back of the board, 6 more can be brought out from connector banks on the motherboard). Overall, pretty shiny.

My setup consists of four IDE hard drives on the IDE bus, and an 80 gig WD on SATA bus for the OS. Motherboard BIOS allowed me to specify that I want to boot from the SATA drive first, so I took advantage of the offer.

Installation of SXCR was from an IDE DVD (a pair of hard drives was unplugged for the duration).
SXCR recognized pretty much everything in the system, except the built-in Marvell GigE NIC. Shit happens. I tossed in a PCI 3Com 3c905C NIC that I had kicking around, and restarted. There was a bit of a hold-up with the SATA drive – Solaris didn’t recognize it, and wanted the geometry (numbers of heads, cylinders and sectors) so that it could create an appropriate volume label. Luckily WD made an identical drive in IDE configuration, for which it actually provided the cylinders/heads/sectors information, so I plugged those numbers in, and format and fdisk cheered up.

Other than that, a normal Solaris install. I did the console/text install just because I am a lot more familiar with it, however the Radeon Sapphire X550 PCIE video card was recognized, and the system happily boots into OpenWindows/CDE if you want it to.

So I proceeded to create a ZFS pool.
First thing I wanted to check is how portable ZFS is. Specifically, Sun claims that it’s endianness-neutral (i.e. I can connect the same drives to a little-endian PC or a big-endian SPARC system, and as long as both run an OS that recognizes ZFS, things will work). I wondered how it deals with device numbers. Traditionally Solaris is very picky about device IDs, and changing things like controllers or SCSI IDs on a system can be tricky.

Here I wanted to know if I could just create, say, a “travelling ZFS pool”: an external enclosure with a few SATA drives, an internal PCI SATA controller card, and if things went wrong in a particular system, I could always unplug the drives, move them to a different system, and things would work. So I wanted to find out if ZFS can deal with changes in device IDs.

In order for ZFS to work reliably, it has to use a whole drive. It, in turn, writes an EFI disk label on the drive, with a unique identifier. Note that certain PC motherboards choke on EFI disk labels and refuse to boot. Luckily, most of the time this is fixable with a BIOS update.

root@dara:/[03:00 AM]# uname -a
SunOS dara.NotBSD.org 5.11 snv_51 i86pc i386 i86pc
root@dara:/[03:00 AM]# zpool create raid1 raidz c0d0 c0d1 c1d0 c1d1
root@dara:/[03:01 AM]# zpool status
  pool: raid1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raid1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors
root@dara:/[03:02 AM]# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
raid1                  1.09T    238K   1.09T     0%  ONLINE     -
root@dara:/[03:02 AM]# df -h /raid1 
Filesystem             size   used  avail capacity  Mounted on
raid1                  822G    37K   822G     1%    /raid1
root@dara:/[03:02 AM]# 

Here I created a raidz1 pool. raidz1 is the ZFS equivalent of RAID 5 with one parity disk, giving me (N-1) × [capacity of the drives], and it can survive the death of one hard drive. A pool can also be created with the raidz2 keyword, giving an equivalent of RAID 5 with two parity disks; such a configuration can survive the death of 2 disks.
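For the record, had I wanted double parity instead of capacity, the same four drives would go into a raidz2 pool like this (the pool name is arbitrary, and I did not keep this one around):

zpool create raid2 raidz2 c0d0 c0d1 c1d0 c1d1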

Note the difference in capacity that zpool list and df report. zpool list shows the raw pool capacity, including the space that will go to parity; df shows the more traditional usable disk space. Using df will likely cause less confusion in normal operation.

So far so good.

Then I proceeded to create a large file on the ZFS pool:

root@dara:/raid1[03:04 AM]# time mkfile 10g reely_beeg_file

real    2m8.943s
user    0m0.062s
sys     0m5.460s
root@dara:/raid1[03:06 AM]# ls -la /raid1/reely_beeg_file 
-rw------T   1 root     root     10737418240 Nov 10 03:06 /raid1/reely_beeg_file
root@dara:/raid1[03:06 AM]#

While this was running, I was running zpool iostat -v raid1 10 in a different window.

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1        211M  1.09T      0    187      0  18.7M
  raidz1     211M  1.09T      0    187      0  18.7M
    c1d0        -      -      0    110      0  6.26M
    c1d1        -      -      0    110      0  6.27M
    c0d0        -      -      0    110      0  6.25M
    c0d1        -      -      0     94      0  6.23M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       1014M  1.09T      0    601      0  59.5M
  raidz1    1014M  1.09T      0    601      0  59.5M
    c1d0        -      -      0    364      0  20.0M
    c1d1        -      -      0    363      0  20.0M
    c0d0        -      -      0    355      0  19.9M
    c0d1        -      -      0    301      0  19.9M
----------  -----  -----  -----  -----  -----  -----

[...]
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       8.78G  1.08T      0    778    363  91.1M
  raidz1    8.78G  1.08T      0    778    363  91.1M
    c1d0        -      -      0    412      0  30.4M
    c1d1        -      -      0    411  5.68K  30.4M
    c0d0        -      -      0    411  5.68K  30.4M
    c0d1        -      -      0    383  5.68K  30.4M
----------  -----  -----  -----  -----  -----  -----

10 gigabytes written over 128 seconds. About 80 megabytes a second on continuous writes. I think I can live with that.

Next I wanted to generate md5 digests of some files on /raid1, then export the pool, shut the system down, switch around the IDE cables, boot the system back up, reimport the pool, and re-run the md5 digests. This would simulate moving a disk pool to a different system, screwing up disk ordering in the process.

root@dara:/[12:20 PM]# digest -a md5 /raid1/*
(/raid1/reely_beeg_file) = 2dd26c4d4799ebd29fa31e48d49e8e53
(/raid1/sunstudio11-ii-20060829-sol-x86.tar.gz) = e7585f12317f95caecf8cfcf93d71b3e
root@dara:/[12:23 PM]# zpool status
  pool: raid1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raid1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors
root@dara:/[12:23 PM]# zpool export raid1
root@dara:/[12:23 PM]# zpool status
no pools available
root@dara:/[12:23 PM]#

The system was shut down, the IDE cables were switched around, and the system was rebooted.

root@dara:/[02:09 PM]# zpool status
no pools available
root@dara:/[02:09 PM]# zpool import raid1
root@dara:/[02:11 PM]# zpool status
  pool: raid1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raid1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0

errors: No known data errors
root@dara:/[02:11 PM]# 

Notice that the order of the drives changed. It was c0d0 c0d1 c1d0 c1d1, and now it’s c1d0 c1d1 c0d0 c0d1.

root@dara:/[02:22 PM]# digest -a md5 /raid1/*
(/raid1/reely_beeg_file) = 2dd26c4d4799ebd29fa31e48d49e8e53
(/raid1/sunstudio11-ii-20060829-sol-x86.tar.gz) = e7585f12317f95caecf8cfcf93d71b3e
root@dara:/[02:25 PM]#

Same digests.

Oh, and a very neat feature… you want to know what has been happening with your disk pools?

root@dara:/[02:12 PM]# zpool history raid1
History for 'raid1':
2006-11-10.03:01:56 zpool create raid1 raidz c0d0 c0d1 c1d0 c1d1
2006-11-10.12:19:47 zpool export raid1
2006-11-10.12:20:07 zpool import raid1
2006-11-10.12:39:49 zpool export raid1
2006-11-10.12:46:14 zpool import raid1
2006-11-10.14:09:54 zpool export raid1
2006-11-10.14:11:00 zpool import raid1

Yes, ZFS logs the last bunch of commands onto the zpool devices. So even if you move the pool to a different system, the command history will still be with you.

Lastly, some version history for ZFS:

root@dara:/[02:19 PM]# zpool upgrade raid1 
This system is currently running ZFS version 3.

Pool 'raid1' is already formatted using the current version.
root@dara:/[02:19 PM]# zpool upgrade -v
This system is currently running ZFS version 3.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z

For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
root@dara:/[02:19 PM]# 

Mac OS X/mach: Identifying architecture and CPU type

Platform-independent endianness check:

#include <stdio.h>
union foo
{
  char p[4];
  int k;
};

int main()
{
  int j;
  union foo bar;
  printf("$Id: endianness.c,v 1.1 2006/07/09 17:48:14 stany Exp stany $\nChecks endianness of your platform\n");
  printf("Bigendian platform (ie Mac OS X PPC) would return \"abcd\"\n");
  printf("Littleendian platform (ie Linux x86) would return \"dcba\"\n");
  printf("Your platform returned ");
  bar.k = 0x61626364;
  for(j=0; j<4 ; j++)
  {
    printf("%c",bar.p[j]);
  }

  printf("\n");
  return 0;
}
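Compiling and running it with gcc on a little-endian box gives something like:

bash$ gcc -o endianness endianness.c
bash$ ./endianness
$Id: endianness.c,v 1.1 2006/07/09 17:48:14 stany Exp stany $
Checks endianness of your platform
Bigendian platform (ie Mac OS X PPC) would return "abcd"
Littleendian platform (ie Linux x86) would return "dcba"
Your platform returned dcba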

Platform-dependent “tell me everything” check:

/*
 * $Id: cpuid.c,v 1.2 2002/08/03 23:38:39 stany Exp stany $
 */

#include <mach-o/arch.h>
#include <stdio.h>

const char *byte_order_strings[]  = {
        "Unknown",
        "Little Endian",
        "Big Endian",
};

int main() {

  const NXArchInfo *p=NXGetLocalArchInfo();
  printf("$Id: cpuid.c,v 1.2 2002/08/03 23:38:39 stany Exp stany $ \n");
  printf("Identifies Darwin CPU type\n");
  printf("Name: %s\n", p->name);
  printf("Description: %s\n", p->description);
  printf("ByteOrder: %s\n", byte_order_strings[p->byteorder]);
  printf("CPUtype: %d\n", p->cputype);
  printf("CPUSubtype: %d\n\n", p->cpusubtype);
  printf("\nFor scary explanation of what CPUSubtype and CPUtype stand for, \nlook into /usr/include/mach/machine.h\n\n"
         "ppc750\t-\tG3\nppc7400\t-\tslower G4\nppc7450\t-\tfaster G4\nppc970\t-\tG5\n");

  return 0;
}
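It builds with a plain gcc cpuid.c; no extra libraries are needed. On a G4 the output would be along these lines (the name and description strings come from NXGetLocalArchInfo, so treat the exact values as illustrative; 18 and 11 are CPU_TYPE_POWERPC and CPU_SUBTYPE_POWERPC_7450 in machine.h):

bash$ gcc -o cpuid cpuid.c
bash$ ./cpuid
$Id: cpuid.c,v 1.2 2002/08/03 23:38:39 stany Exp stany $
Identifies Darwin CPU type
Name: ppc7450
Description: PowerPC 7450
ByteOrder: Big Endian
CPUtype: 18
CPUSubtype: 11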

Mac OS X: Getting things to run on platforms that are not supported

Purposefully oblique description, I know.

Basically there are two ways of not supporting a platform.

One way is to not support the architecture. If I compile something as ppc64, no one on a G3 or G4 CPU will be able to run it natively, nor will x86 folks be able to run it under Rosetta. I can try to be cute and compile something for the x86 arch, cutting off all PPC folks. I can compile something optimized for the PPC7400 CPU (G4); G5 and G4 systems will run it and G3s will not (this is exactly what Apple did with iMovie and iDVD in iLife ’06). Lastly, I can compile something in one of the “deprecated” formats, potentially for Classic, and cut off x86 folks, and annoy all PPC folks who would now have to start Classic to run my creation. Oh, the choices.

The other way is to restrict things by the configuration, and check during runtime.

Procedure for checking that the architecture you are using is supported by the application. Step 1) Check the binary format with file:

bash$ cd Example_App.app/Contents/MacOS
bash$ file Example_App
Example_App: Mach-O fat file with 2 architectures
Example_App (for architecture ppc):  Mach-O executable ppc
Example_App (for architecture i386): Mach-O executable i386

or

bash$ cd Other_Example/Contents/MacOS
bash$ file Other_Example
Other_Example: header for PowerPC PEF executable

Step 2a) If the application is Mach-O, then you can use lipo to see whether it’s compiled as a generic (fat) binary or as a platform-specific one:

bash$ lipo -detailed_info Example_App
Fat header in: Example_App
fat_magic 0xcafebabe
nfat_arch 2
architecture ppc
    cputype CPU_TYPE_POWERPC
    cpusubtype CPU_SUBTYPE_POWERPC_ALL
    offset 4096
    size 23388
    align 2^12 (4096)
architecture i386
    cputype CPU_TYPE_I386
    cpusubtype CPU_SUBTYPE_I386_ALL
    offset 28672
    size 26976
    align 2^12 (4096)

If you see CPU_SUBTYPE_POWERPC_ALL, the application is compiled for all PowerPC platforms, from G3 to G5.

What you do not want to see on a G3 or G4 system is:

bash$ lipo -detailed_info Example_App
Fat header in: Example_App
fat_magic 0xcafebabe
nfat_arch 1
architecture ppc64
    cputype CPU_TYPE_POWERPC64
    cpusubtype CPU_SUBTYPE_POWERPC_ALL
    offset 28672
    size 8488
    align 2^12 (4096)

Then you need a 64-bit platform, which amounts to a G5 of various speeds.

It is possible that the application is in Mach-O format, but not in fat format.
otool -h -v will decode the Mach header and tell you what CPU is required:
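On a single-architecture PPC binary that would look something like this (exact columns vary between otool versions, so treat this as illustrative):

bash$ otool -h -v Example_App
Example_App:
Mach header
      magic cputype cpusubtype filetype ncmds sizeofcmds      flags
   MH_MAGIC     PPC        ALL  EXECUTE    11       1880  NOUNDEFS DYLDLINK TWOLEVEL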


Step 2b) If the application is PEF (Preferred Executable Format) or CFM (Code Fragment Manager), things might be harder. I’ve not yet encountered a CFM or PEF app that would not run on a PPC platform in one way or another, so this section needs further expansion.



In the case of a runtime check, it is most commonly the platform architecture that is checked.

Some Apple professional software has something like this in AppleSampleProApp.app/Contents/Info.plist:

        <key>AELMinimumOSVersion</key>
        <string>10.4.4</string>
        <key>AELMinimumProKitVersion</key>
        <string>576</string>
        <key>AELMinimumQuickTimeVersion</key>
        <string>7.0.4</string>
        <key>ALEPlatform_PPC</key>
        <dict>
                <key>AELRequiredCPUType</key>
                <string>G4</string>
        </dict>
        <key>CFBundleDevelopmentRegion</key>
        <string>English</string>

Getting rid of

        <key>ALEPlatform_PPC</key>
        <dict>
                <key>AELRequiredCPUType</key>
                <string>G4</string>
        </dict>

tends to get the app to run under G3.

Lastly, if the application says something similar to “Platform POWERBOOK4,1 Unsupported”, maybe running strings on SampleApplication/Contents/MacOS/SampleApplication combined with grep -i powerbook can reveal something.

bash$ strings SampleApplication | grep POWER
POWERBOOK5
POWERBOOK6
-
 POWERBOOK6,3
POWERMAC7
POWERMAC9,1
POWERMAC3,6
           POWERMAC11,2
-
 POWERMAC11,1

So if you want to run this application on a 500MHz iBook G3 for some reason (hi, dAVE), it might make sense to fire up a hex editor and change one of the “allowed” arches to match yours.

For example, to this:

bash$ strings SampleApplication | grep POWER
POWERBOOK4
POWERBOOK6
-
 POWERBOOK6,3
POWERMAC7
POWERMAC9,1
POWERMAC3,6
           POWERMAC11,2
-
 POWERMAC11,1

But don’t mind me. I am just rambling.

Farewell, National Capital Freenet

I’ve been a member of National Capital Freenet for about 11 years. For about a year prior to getting an account on Freenet, I was using vt320 terminals at the Ottawa Public Library and logging in as guest to read the newsfroops. At some point I got the parents and guardians involved, they co-signed on my behalf, and I got my own Freenet userid: cn119.

Freenet changed over the years. Originally it was a primarily text-based service, with the ability to dial in for up to half an hour a day using PPP on a 9600 modem and connect to the internet. FreePort software was the primary means of interaction with the system. It had a god-awful e-mail interface, with pico to compose e-mails (nothing like introducing new users to bad unix habits, right?) until Mark Mielke finally hacked together elm and duct-taped it onto FreePort. FreePort had a bunch of holes – San Mehat at one point showed me that one could drop to a real shell. I seem to recall that it was possible to trick early versions of lynx into executing a real shell on NCF as well, but this was 10 years ago, so my memory is hazy.

In any event, before I started at iStar and had a real newsfeed (thank you, John Henders@bogon/whimsey, wherever you are. Heck, thank you to all the folks at iStar NOC and DIAL. Oh, and Jason “froggy” Blackey, for sure. And Tim Welsh. Jeff Libby. Steven Gallagher, whom I must have given plenty of white hairs. Most definitely the GJ/Jennifer tag team. And Mike Storm. And farewell, Chris Portman), NCF was my primary way of reading alt.sysadmin.recovery. For a while, many moons ago, cn119@freenet.carleton.seeay was my primary e-mail address.

I guess this is over by now.

Parting was somewhat bittersweet.

A few years ago NCF got swamped by spam. I remember having to delete ~700 spam messages a day. Some sort of mail filtering solution was implemented – I never really cared, as by that point I wasn’t using NCF e-mail for anything, but the spam stopped. A couple of years later I noticed that all mail had kind of stopped too.

(In retrospect, to the best of my current understanding (and I really don’t care), a dedicated procmail system was implemented, which in turn was filtering to a POP3-accessible mail queue on a dedicated system, or somesuch. And, of course, I never checked the POP3 mail queue, and for years wasn’t aware of its existence. Telnet, baby, telnet.)

By ~2002 the only thing I was using NCF for was newsfroops, as I could read ott.forsale.computing a lot more efficiently through a real newsreader than through a web interface.

I guess I should mention that NCF operates on donations. Every year one is expected to donate some money to keep NCF running. To encourage that, NCF accounts expire every year, and one has to go to Dunton Tower at Carleton to renew them and, hopefully, give them some money. Every year I’d donate between 20 and 50$, and even after e-mail renewal notices stopped coming, for a couple of years I’d be on the Carleton campus writing a final, and would remember to stop by and remind the folks at NCF that I am still around and still care. One day I asked if one has to donate in order to get one’s account renewed; it turned out that no, one can donate nil and still keep on using Freenet. Hrm. Then why expire accounts at all? It’s so you would remember to donate.

This year I forgot to renew my account.

So last Thursday I logged in, only to see

cn119@134.117.136.48's password:
Last login: some date from some ip address.
Sun Microsystems Inc.   SunOS 5.8       Generic Patch   October 2001

-------------------------------------------------------------------------
This National Capital FreeNet       |          Le compte de cet usager du
user account has been archived.     |   Libertel de la Capitale Nationale
                                    |                      a ete archive.

NCF Office / Bureau LCN : (613) 520-9001   office@freenet.carleton.ca
-------------------------------------------------------------------------

Connection to 134.117.136.48 closed.

That morning I was at work, in a dark basement with no cell phone coverage, with someone on the phone convinced that I cared about the fact that his blueberry iMac’s power supply had failed, and that I would find him a replacement power supply for cheap. Why do I always get cheap customers?

So the plan was to look for something similar on ott.forsale.computing and, failing that, order a replacement G4 chassis from CPUsed in Toronto. This is where the plan didn’t go as planned, as I couldn’t log in to NCF.

So, logically, one should ask someone at NCF to “unarchive” my account (in practice, change my login shell back to FreePort; I know that my home directory is still there, as I can access http://freenet.carleton.ca/~cn119/ and see the same old junk that has been there for the last 7 or 8 years), and promise to stop by in person and give NCF more money.

I called the above number, only to hear it ring and be told that no one is around to answer my call, and that I should call 520-5777. Oh, and I could leave voice mail.

So I called 520-5777. It rang once, and then told me that no one was available and that I should leave voice mail. I hung up, and called again in 5 minutes. Same result. After calling over and over at 5–10 minute intervals seven times, I left voice mail. I identified myself and the problem I was experiencing. In it I pointed out that I was unimpressed by the lack of warning regarding account expiry, and unimpressed that I couldn’t talk to a human being about it. I mentioned that I was not sure that anyone would call me back, and that’s why I am not really happy with voice mail. I pointed out that I don’t have coverage where I am, so they would have to leave voice mail when they called back. If they called back. I guess I was overly snarky in my message.

Around 4 pm I got out of the basement for a breath of fresh air. My cell phone chirped with a “new message” alert, and I learned that I had new voice mail. The voice mail was from Brian at NCF, and ran for over 7 minutes. In it Brian (or Ryan) was telling me how busy he is, how Freenet has over Eight Thousand members and by talking to me he is not talking to someone else on the phone, and how upset he is with me, etc. The main idea of the message was that I should come in person to Dunton Tower.

Frankly, I wasn’t impressed by this point. I was expecting either “Your account is renewed, do stop by and remember to donate” or “Stop by Dunton Tower, we will renew your account then”. Instead I got 7 minutes of being told how busy someone is, how bad I am for taking Ryan (or Brian) away from answering phones, and how ungrateful I am for not donating so that Brian could be hired full time.

While listening to it, I had a WTF moment. Admittedly it wasn’t the first one of the day, as I get WTF moments at work all the time, but still…

So the next day I stopped by the NCF offices at Carleton’s Dunton Tower in person. I got to observe Brian in his natural habitat. Frankly, he reminded me of someone… of myself, about 10 years ago. Back when my ego was bigger, and more easily bruised. Back when I thought that I was really hardcore, and everyone else was less so.

For about 10 minutes I listened as Brian talked to someone who sounded like a shut-in in search of human interaction, and tried to explain to him where to click. I hear conversations like this at work all the time – they are the bank breakers, as a technician spends a good hour or two hand-holding someone with no financial remuneration at all. Talking for an hour to someone who has a limited grasp of computing, and at the end telling him to see if he has a friend with some other ISP dial-up account who will let him try his phone number and user id to see if the software will recognize a modem and successfully negotiate PPP? Why not cut one’s losses, talk to any of the other people in the call queue, and maybe actually help them?

Eventually Brian and I had a conversation. It didn’t go over too well. Brian was reluctant to do anything, but he repeatedly pointed out that he was not answering the phones while talking to me.

He pointed out that NCF serves over 8000 members. I mentioned that I don’t find that all that impressive, because around 1997 they had 30000 active members, and seem to have just been hemorrhaging users over the last 10 years. I remember when ‘w’ on FreePort would list pages and pages of logged-in users, not the 20 or so users (10 of which would be xxnnn accounts, the accounts of Freenet volunteers) that it shows now. In other words, FreeNet’s 8000 users is nothing. Cyberus has many more. iStar had about eighty thousand users when I worked for them.

Brian mentioned that the reason NCF doesn’t have a hold queue is that there is voice mail. He expanded upon it by saying that he is not the kind of person who would call the department of transportation; he would go there in person. I wondered if he realized that the time they spent talking to him in person they could have spent answering someone on the phone.

Brian also pointed out that NCF was the first ISP in Ottawa, if not in Canada. I am not too sure, and pointed out that resudox.net (I happened to know Steve Birnbaum, in another life) started in 1993 too, maybe even earlier. Brian snorted, and asked where Resudox is now. Heck, if I had rent-free space, donated bandwidth, servers, phone lines, modem racks, etc., I’d also be around for years. Somehow no one else has such an advantage, and thus they actually have to make money somehow.

It was obvious that we weren’t seeing things eye to eye.

At that point I asked him directly if he could renew my account, and he told me that the matter would be referred to the executive director of NCF, John Selwyn, for review. Only he can renew my account.

I called John, and left him voice mail (note a common theme in my dealings with NCF?). Yesterday I stopped by offices 2018 and 2019 in Dunton Tower, to see if he might be in and I could talk to him.

So far no answer.

I am not holding my breath.

Frankly, if all the complaints about lack of funding made by Brian are true, NCF loses more by losing yet another member who was donating. I can use groups.google.com.

Farewell, NCF. It was a long ride, but I guess it’s over now.

In any event, I want to thank all the folks who at one point made NCF great, and whom I more or less knew.

Paul Tomblin, NCF’s newsadmin. I forget by now what it was that Paul helped me out with many, many years ago, but the feeling of gratitude remains. And the NCF news server works, and I am not upset that it doesn’t carry alt.binaries 😛

Ian! D. Allen, formerly the technical director of NCF. I’ve interacted with him many a time at OCLUG meetings.

Mark Mielke, who, besides hacking on NCF, also hacked on LCInet at Lisgar. Lisgar was a melting pot of folks. I met Sierra Bellows at Lisgar too. She is a step-daughter of Ian! D. Allen. Somewhere I have a CD of her singing from 1995 or so. Small world.

Roy Hooper, who gave up on running NCF, and instead ended up running Cyberus (and hiring me to run Cyberus instead), and now, I’ve heard, runs CIRA. Roy used to be NCF’s sysadmin.

GJ Hagenaars, who also gave up on running NCF, and instead ended up running DIAL at iStar (and hired me “to write technical documentation. Part time.”). GJ was NCF’s postmaster, and, coincidentally, is responsible for my hate of sendmail and love of exim.

Jennifer Witham, who was right, and in her “tough love” way very supportive. Jennifer, you were right, you hear? I was wrong. Oh, and Jennifer was a volunteer of the month, back in 1997.

Pat Drummond, for always being helpful, and Chris Hawley, I guess also for being helpful. By now I forget what it was that Chris Hawley did, and it might have been minor, like changing permissions on something in my ~, but it was a huge deal back then, and the feeling of gratitude remains.

Thank you, folks.

Spamcop lists gmail SMTP servers as spam servers

A while ago I ranted about automated spam filtering.

Here is yet another example of the utter idiocy of some people.

The SpamCop report for 64.233.182.188, aka nproxy.gmail.com, currently states:


64.233.182.188 listed in bl.spamcop.net (127.0.0.2)

If there are no reports of ongoing objectionable email from this system it will be delisted automatically in approximately 2 hours.

Same thing for 64.233.182.184, 64.233.182.185, 64.233.182.186, 64.233.182.187, 64.233.182.189, 64.233.182.190 and 64.233.182.191 (all resolve to nproxy.gmail.com, and all are addresses in gmail.com used to send email, as listed by Ironport). I am sure the rest of gmail is also reported as a source of spam by SpamCop, I just can’t be arsed to keep on checking.

*sigh* Does anyone need any more convenient arguments for not using SpamCop? I am really, really tempted to write a log parser that would automatically submit the IP addresses of folks who use SpamCop back to SpamCop.

Oh, and at this point, when I talk about the “utter idiocy of some people”, I am not even sure whom I am referring to: the SpamCop folks, for listening to anyone reporting gmail (or hotmail, or yahoo mail, or any other “free” mail server) as a source of spam instead of just whitelisting them; the idiots who get a spam through a free gmail account and report it to SpamCop as spam; or the idiots who configure SpamCop checks as a default reject reason in their MTA.

Care and feeding of a Sun Ultra 5/10

Introduction

I gave away another Sun Ultra 10 today.

As I invariably get questions about Solaris, Sun systems in general, etc., I figured I’d document some things about the care and feeding of a Sun system.

My experience with Sun systems is somewhat dated – I started with a Sun 3 (3/260), and progressed through Sun4, sun4c, sun4m, sun4d (SS1000), onwards to the sun4u architecture. However, the “biggest” sun4u box I’ve played with would be an Enterprise 6500, and the biggest I own is an E4K. Thus asking me about domains on an E10K or bigger/newer would not get one far. As I’ve been out of the workforce and in school for the last 3 years, my knowledge of Solaris 8 is very solid, and I can get by in Solaris 9, but I know next to nothing about the Solaris 10 changes – Containers, iSCSI, NFSv4, clusters, and other shiny new things that Sun introduced.

But basics are basics, and most of this is either OS-independent or can be transferred over to current versions of Solaris.

So, keeping the Ultra 5/10 in mind…

Ultra 5/10 hardware

Why am I calling it a 5/10? Because the Ultra 5 and Ultra 10 share the same motherboard. The Ultra 5 came in a pizza-box case, while the Ultra 10 is a mini-tower.

Modern Sun systems are very similar to PCs. The Ultra 5/10 was one of the first mainstream Sun systems to support IDE (there were others at around the same time – the SPARCengine2 comes to mind for some reason). So, talking about IDE…

Both the Ultra 5 and 10 were designed to operate with a smart card reader. Personally, I’ve never seen one with a card reader installed (maybe the University of Ottawa has some), so all U5s and U10s I’ve encountered have a small “trap door” in the front, with nothing behind it. On a U5 (which is small, cramped, and not very upgradable), you can install a second internal IDE hard drive in the space designed for the smart card reader. I had to do that once for an outfit called ResponseLogic around 2000, and from what I remember, it was doable, however you might need longer IDE cables to replace the ones the system ships with (or maybe ones that have 3 IDE headers, instead of just 2), and only two screw holes would match the hard drive. The solution to that is either a Dremel tool and a drill to make the necessary holes, or just general contentment with being able to install a second hard drive. 🙂 Inside the U10 there is space to mount an additional hard drive, so space is less of a concern.

The IDE bus on a U5/10 is seriously broken from a performance point of view. I remember benchmarking an Ultra 2 with a 300 MHz UltraSPARC II CPU against a U10 with a 440MHz UltraSPARC IIi (?) CPU and a Symbios UW SCSI controller, both driving a multipack of 36 gig SCSI drives in software RAID under Solaris 8. Both had half a gig of RAM. The U2 would generally perform ~10% better in IO operations, because the U10 was booting from IDE, and IDE interrupts were killing the system performance.

With that in mind, if you have a SCSI drive and a PCI SCSI controller with FCode (one that the U10 can boot off of), it would make sense to convert the system to a whole SCSI system. Follow this link for good instructions. Plextor SCSI CD-ROM drives and burners are cheap used, and make really good CD-ROM drives in Sun systems in general.

The IDE bus in the U5/10 doesn’t support addresses wider than 40 bits. In practice that means that IDE hard drives larger than about 128 gigs are not recognized as such. I’ve never tried to put such a large hard drive into a U10, but I’d speculate that one can’t access the space beyond the boundary, but that otherwise the drive works.

Sun systems use OpenBoot (or Open Firmware) firmware as their BIOS.

The primary language of OpenFirmware is Forth, a stack-based language (PostScript is a close relative).
Some people are obsessed with Forth, and write crypto or play Tower of Hanoi using it.
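For a quick taste of Forth, you can define and run a new word right at the OK prompt; it is harmless and disappears on reset:

ok : hello ." Hello from Forth" cr ;
ok hello
Hello from Forth
ok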

OpenBoot used to be on track to become the IEEE 1275 standard, but AFAIK the standard wasn’t re-affirmed by the Open Firmware Working Group (politics, I guess), lapsed, and nowadays Sun, Apple, IBM and whoever else are just doing their own thing. Wikipedia has more, so I’ll just throw a bunch of links at the curious:

OpenFirmware Working Group site
OpenFirmware Working Group site (mirror, sometimes more up to date than the main site)
FirmWorks generic Open Firmware Quick Reference
Sun OpenBoot Collection – contains reference books for OpenFirmware 2.x (Book P/N 806-2907), 3.x (P/N 806-1377) and 4.x (P/N 816-1177), plus Writing FCode (P/N 806-1379)
The following are Apple’s Technotes on the fundamentals of OpenFirmware:
TN1061: Part I: User Interface
TN1062: Part II: The Device Tree
TN1044: Part III: PCI Rom Expansion Choices for Mac OS
More Apple-specific bits on OpenFirmware (such as setting up kernel debugging over ethernet) can be found at the above link.

Eclectic List of OpenFirmware commands

After playing with OpenBoot on Sun workstations/servers, on modern PPC Apple systems, and on a NetApp filer (the F760, at least, had firmware written for NetApp by FirmWorks), I can say that Sun’s implementation is the nicest, not least because it includes online help.

Nothing substitutes for reading the docs above, and while OpenFirmware is the “same” everywhere, each vendor defines their own commands, etc. Some commands that return pretty pictures on a Sun (banner, for example) return nothing on a Mac.

There are a bunch of hidden settings that can sometimes be found by typing words at the OpenBoot prompt. words just dumps all the known words – i.e. all the commands that have been defined.

Here are a couple of suggestions for investigation at the OK prompt:
probe-ide and probe-scsi-all – will list IDE and SCSI devices (will return nothing or an error if you don’t have IDE or SCSI, or if the words are undefined)
.speed – returns the speed of the processor(s) and buses, e.g. on a dual-CPU 300MHz Ultra 2 (the {1} prompt refers to the second CPU):

{1} ok .speed
CPU  Speed : 296.00 MHz
UPA  Speed : 098.66 MHz
SBus Speed : 025.00 MHz
{1} ok

test-all – tests all hardware that has diagnostics. Might take a while. Can be used in conjunction with setenv diag-switch? true to troubleshoot hardware. Hardware or trouble might or might not shoot back.

show-devs – lists available devices (another option is cd / followed by ls to look at the device tree natively; if you end up cd’ing to a device in the device tree, you can try .properties to see the properties that particular device publishes). *shrug*. Sun has an example of use.

printenv to look at all the variable settings
setenv foo bar – to set environment variable foo to bar.
Most common settings that I use for debugging are:

setenv diag-switch? true
setenv auto-boot? false

This enables firmware diagnostics output on a Sun, and in conjunction with a serial console logs lots and lots of interesting information about the state of the hardware. Note that on big iron, such as an E4K coming from cold to warm state, a full diag might take a good chunk of an hour (the 5×400 MHz CPUs and 6.5 gigs of RAM in my E4K take ~15 minutes to test; this is when you start playing with setenv diag-level min (or max) to balance between more hardware tests taking longer, or minimal hardware tests taking less time). The auto-boot? variable tells the system whether it should try to boot the OS right away, or drop to OpenFirmware after power-on and wait for a boot command.

Undoing the damage above is done thusly:

setenv diag-switch? false
setenv auto-boot? true
reset

and you probably want to do the above before removing that serial cable from the console, and rebooting the system unattended.

Note: the boot command can take arguments that get passed to the kernel. The most common Solaris ones are:
-v Verbose boot – Kernel tells you what it does.
-r Reconfiguration boot – the kernel instructs drivers to look for new devices added/removed since the last boot, and a bunch of scripts get triggered on boot-up to re-populate the device tree. I’ll refer you to /etc/init.d/drvconfig and /etc/init.d/devalias on a Solaris system for more info. Oh, and drvconfig has a man page.
-s Boot into single user mode
-a Ask. For when you really, really screwed up your system by editing /etc/path_to_inst, /etc/system, etc., BUT made a backup beforehand. If you are lucky, you might be able to get the system back to a bootable state at this point, and undo whatever you did. However, if you need to use the -a option, you might be better off booting off a CD into single user mode, mounting the drive, and undoing the damage that way. A couple of boot examples follow this list.
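These flags combine with a boot device as usual. For example, at the OK prompt (device aliases like disk and cdrom vary per machine, so check devalias first):

ok boot disk -rv
ok boot cdrom -s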

Folks at Princeton have some notes on troubleshooting the Solaris boot sequence.

Oh, and from inside Solaris there is access to the NVRAM variables using the eeprom utility (eeprom variable=value), and you can trigger a reconfiguration boot with touch /reconfigure followed by init 6 or reboot.
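For example, as root (quoting the variable name keeps the shell from treating the question mark as a glob):

# eeprom "auto-boot?"
auto-boot?=true
# eeprom "auto-boot?=false"
# touch /reconfigure
# init 6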

OpenBoot Firmware Updates

I guess I should mention that firmware on Sun systems is flashable.

If you have Solaris installed, you should consider updating the firmware to the latest version by going to Sunsolve and, in the patchfinder, locating the right patch for your system.

A patch generally includes an install.info file, which documents the installation procedure, and a README file, which documents the list of bugs fixed by the patch. OBP patches generally require one to reboot and boot from a particular file included in the patch.

Prior to doing this, one might be required to open the system up and move a jumper on the motherboard from the write-protect to the write-enable position.

The locations of the jumpers, etc. can be looked up either in the print version of the Sun Field Engineer Handbook, or in the Sun Systems Handbook online.

Here are some systems and their corresponding patch IDs for OpenBoot updates (the search term is “Standalone Flash PROM Update”):

Ultra 1 (not Enterprise, 10bt) – patch# 104881
Ultra 1E (Enterprise, 100bt) – patch# 104288
Ultra 2 – patch# 104169
Ultra 5/Ultra 10 – patch# 106121
Ultra 60 / E220 – patch# 106455
Ultra 80 /E420R – patch # 109082
Ultra 450/E450 – patch # 106122
E250 – patch # 106503
E3x00, E4x00, E6x00 – patch# 103346

Breaking your Sun box, at OBP

And, to close off this section… two quick “hacks”.

Changing the MAC/hostid of your Sun box for fun and profit.
If for some reason you need to change the hostid or MAC of your Sun system, please refer to the great Sun NVRAM/hostid FAQ by Mark Henderson. I don’t want to fall into the trap of discussing why you’d want to do it, but if your OBP has the mkp command (i.e. AFAIK anything older than a SunBlade should work, and I’ve tested this on SS10, SS20, U1, U2, U10, U60 and E4K myself)…

01 0 mkp
80 1 mkp  <= System type.  For sun4u arch - 80.  For sun4m arch - 72.  Anything else - read the FAQ
08 2 mkp  <= Sun OUI is always 08:00:20, which supplies the next three settings for the MAC
0  3 mkp
20 4 mkp
c0 5 mkp <= c0:ff:ee to generate 08:00:20:c0:ff:ee as MAC
ff 6 mkp
ee 7 mkp
0 8 mkp
0 9 mkp
0 a mkp
0 b mkp
c0 c mkp
ff d mkp
ee e mkp
0 f 0 do i idprom@ xor loop f mkp  <= Calculates the checksum of what you did, and stores it

The above should generate a hostid of 80c0ffee and MAC of 08:00:20:c0:ff:ee.

Oh, and if you have a dead battery in your NVRAM chip, and the system comes up with a corrupt settings error on boot-up and refuses to boot, this will at least get it bootable… until you yank the power and the NVRAM loses its settings again. It helped me a couple of times while I was waiting for a new clock chip to arrive.

Note: for the sun4m and sun4d arches, if the above doesn't work, there is a second way (c!) to do it, documented in the FAQ.

Note to self: if playing with multi-board big iron, one might need to follow up with copy-clock-tod-to-io-boards to synchronise NVRAM contents between the clock board (which you just edited) and the I/O boards that still have the old data. The reverse (if you replaced the clock board and are pushing settings from I/O board boardnum to the clock board) is boardnum copy-io-board-tod-to-clock-tod. tod is, of course, Time Of Day 😛


Kind folks at PCI Alternatives mention that there is a way to overclock US-II chips, at least from OBP. Their example is a U5/10, and I've never done this myself, but…

also hidden followed by nnn at-speed will change the clock speed to nnn MHz;
.speed to verify, of course.

Sun SPARCengine CP1500-440 Thermal Considerations (page 6) states that d# must be in front of the CPU speed; however, as this is an undocumented setting, YMMV. Sun's documentation also has instructions on saving the command to NVRAM to be executed at each boot-up.

ok setenv auto-boot? false
ok reset
ok also hidden
ok d# 297 at-speed
ok .speed 

PCI Alternatives folks claim that a 270MHz U10 can be pushed to 297MHz (+10%), and a 333MHz U10 can be pushed to 370MHz (+11%).

Can 440MHz be pushed up to 480? I’ll test it some time, and follow up, I guess.

The “safe” approach for something like this would be to run it without saving to NVRAM, starting at +10% clock speed, and run SunVTS on the system to check that it’s stable. If it is, either increase the speed by another couple of ticks and run SunVTS again, or just be happy and save it in NVRAM.


Oh, and as a bonus to the patient reader….

Entering obdiag, the extended diagnostic mode present in the U5/10 and newer, is done by setting the following environment variables:

ok setenv diag-switch? true
diag-switch? =        true
ok setenv auto-boot? false
auto-boot? =          false
ok setenv mfg-mode on
mfg-mode =            on
ok reset-all

[system resets at this point]

ok obdiag

obdiag should return a bunch of loading messages, followed by:

    OBDiag Menu

  0 ..... PCI/Cheerio
  1 ..... EBUS DMA/TCR Registers
  2 ..... Ethernet
  3 ..... Keyboard
  4 ..... Mouse
  5 ..... Floppy
  6 ..... Parallel Port
  7 ..... Serial Port A
  8 ..... Serial Port B
  9 ..... NVRAM
 10 ..... Audio
 11 ..... EIDE
 12 ..... Video
 13 ..... All Above
 14 ..... Quit
 15 ..... Display this Menu
 16 ..... Toggle script-debug
 17 ..... Enable External Loopback Tests
 18 ..... Disable External Loopback Tests

 Enter (0-13 tests, 14 -Quit, 15 -Menu) ===>

14 bails one out (setenv mfg-mode off might be a good idea at that point); 16 enables verbose mode; 13 tests everything.

For more information, refer to the Sun Ultra 5 Service Manual (P/N 805-7763), Section 4: Troubleshooting Procedures (page 4-12 in rev 12 of the above manual, page 84 of the PDF).

Expansion options

I’ve had great luck with Symbios-made PCI SCSI controllers based around the NCR chipset. In one case a PCI controller (not Sun-branded and without OBP FCode in the PROM) was not recognized by the OBP in an Ultra 60, however it was recognized by Solaris 8 once the OS booted. It turned out that updating the OBP to the latest version made the OBP recognize the SCSI controller.

According to http://pci.unsupported.info/, the NCR53c875 chipsets are generally recognized by the OBP, and the NCR53c810 is recognized by the glm driver in Solaris. Their experience is with Compaq-branded cards.

Now that the Solaris source code is freely available, and a driver development kit is available, it should be reasonably simple to port Intel drivers from Solaris Intel to Solaris SPARC. I toyed with this in Solaris 7 (when Sun first released a stripped-down version of the source code to the great unwashed under a general NDA), but it is probably even easier now.

Note that if a PCI card doesn’t have its own FCode in ROM, and is not amongst the devices supported by the OBP out of the box (built-in drivers), you won’t be able to use it before the system boots and the driver loads. This means no netbooting on cheap network controllers, and no booting from a cheap SCSI controller. Or, I guess, no video on that Matrox or ATI video card before Solaris loads and X starts.

Installing Solaris

The oldest version of Solaris that will install on an Ultra 5/10 is 2.6 HW 3/98. The newest is whatever is current as of this writing. Personally, I’d recommend 8 for now, as it’s solid, still supported and well understood (at least by me), although that depends on the purpose – if one wants to learn the latest and greatest, of course Solaris 10 is the way to go. If one wants to be nostalgic, Solaris 2.6 was a very solid release.

The latest version of Solaris is downloadable from Sun. In addition, Solaris Express, which is arguably more “bleeding-edge”, is also downloadable. Lastly, there exists Solaris Express: Community Release. Confused yet? Solaris Express is the basis for Solaris 11, and the Community Release is as bleeding-edge as it gets. Older versions of Solaris used to be downloadable, but are no longer. If you don’t have a friend with a CD (or CD image), your Solaris choices might be limited.

Depending on the version of Solaris you run and the disk type you use, you might run into problems with the disk size and the size of the root partition. Solaris 2.6 and 7 SPARC on IDE devices have some interesting features that prevent them from booting or even accessing the disk. Certain versions of Solaris (2.6 SPARC on a Tadpole SPARCbook comes to mind) had issues with IDE disks larger than 8 gigs. Certain versions of Solaris (7 SPARC comes to mind) had issues with the root partition on an IDE disk being set too large. Thus the root partition on an IDE disk should probably be less than 2 gigs, just to be on the safe side. Please refer to questions 5.63 and 5.64 of the Solaris FAQ for more information.

Normally with Solaris 8 I don’t bother with the graphical “Web Start” installation method. Booting from the 2nd CD (the one that is labeled 1 of 2, not WebStart) drops me into the old-style installation process.

Partitioning

The following is by no means exhaustive or “correct”, but it will arguably cause you less grief than the auto-layout that Sun recommends.

Sun partitioning supports “slices”, which used to refer to partitions on SCSI drives. While SCSI drives support up to 8 partitions, IDE drives physically support only 4, so on IDE drives Solaris writes a single physical partition and then inside it creates 8 logical ones (even if you don’t use a slice, that doesn’t mean it’s not there). But this is all boring hardware stuff that the OS abstracts away anyway, and chances are the only time you’ll encounter it is if you are trying to multi-boot a Sun box between Solaris and Linux, or install Solaris on an x86 box. But this is not a paragraph about multi-booting, it’s a paragraph about partitioning, so…

The following partitioning works for me (format output of a 9 gig SCA drive):

partition> p
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -  584        1.00GB    (585/0/0)   2100735
  1       swap    wu     585 - 1169        1.00GB    (585/0/0)   2100735
  2     backup    wm       0 - 4923        8.43GB    (4924/0/0) 17682084
  3 unassigned    wm    1170 - 1171        3.51MB    (2/0/0)        7182
  4        usr    wm    1172 - 1756        1.00GB    (585/0/0)   2100735
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm    1757 - 4923        5.42GB    (3167/0/0) 11372697

partition> 

slice 0 – root partition. Mounts as /, and I usually go for between 1 and 2 gigs in size.
slice 1 – swap partition. The rule of thumb is 2x the RAM in the system, although this is flexible, and if the system has gigs and gigs of RAM, maybe 1x RAM + 200 megs is good enough. The rationale is that in the event that you end up with a kernel panic, or force a system dump at the OBP, swap is where the dump gets written. Swap is also used by the system as it boots up, before it recovers the dump and writes it to a file. Yes, in theory there is compression of the dump as it’s written. But if your system died, and nothing is going right, do you think that compression will be effective?
slice 2 – whole disk. Used by things like fsck, format, mount, etc. to address the entire drive, and is never accessed directly by a user. Well, by a user who doesn’t know what he’s doing. Sun sets slice 2 up by default, so just leave it alone.
slice 3 – unmounted, unformatted partition of 5 – 10 megs in size, used to store the metadb replicas. What are metadb replicas, I hear you ask. metadb replicas are small databases of metadevice state information, used by the software RAID and mirroring tools that used to be called Solstice DiskSuite and are part of the OS as of Solaris 8. Even if you think you’ll never use DiskSuite, do create the slice; it’s a small investment in disk space, and it saves you lots and lots of hair-pulling later. Each replica is ~2 megs in size, so 5 megs is a good number, as you’ll want a couple of replicas per disk (a sketch of creating them follows after this list).
slice 4 – usr. Sun mounts it as /usr, and that’s fine. Under Solaris 2.6 – 8, one gig might be enough, but 2 gigs is probably better if you have the disk space, just to be on the safe side.
slice 5 and slice 6 – You can create a slice holding /var here. In fact, I recommend either creating a /var slice, or running dumpadm and changing the default savecore directory, into which the kernel crash dump gets placed, from /var to /opt (or wherever you have lots of disk space); a sketch of doing just that follows below.

root@llewella:/usr/exim[02:09pm]# dumpadm 
      Dump content: kernel pages
       Dump device: /dev/md/dsk/d20 (swap)
Savecore directory: /var/crash/llewella
  Savecore enabled: yes
root@llewella:/usr/exim[02:10pm]# 

dumpadm has a man page.
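
Changing the savecore directory is a one-liner with dumpadm -s. A sketch, assuming you want the dumps under /opt/crash and are keeping the per-host subdirectory convention shown above; the directory has to exist first:

mkdir -p /opt/crash/llewella
dumpadm -s /opt/crash/llewella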

slice 7 – This is the rest of the disk that you still haven’t allocated. I mount it on /opt, symlink /home to /opt/home, and kill the automounter (which tries to automount /home by default).
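
As promised above, here is what populating slice 3 with metadb replicas looks like. A sketch, not gospel: the disk name c0t0d0 is an assumption, -f is only needed when creating the very first replicas on a system, and -c 2 puts two replicas into the slice:

metadb -a -f -c 2 c0t0d0s3
metadb -i

metadb -i then lists the replicas and their status.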

/opt is where things live in my world:

root@llewella:/opt[02:14pm]# ls
SUNWapcy         bind             gpg              ncftp
SUNWconn         db-4.2.52.2      home             ncftp-3.1.3
SUNWits          exim             ipf              patchdiag-1.0.4
SUNWppro         fetchmail        lost+found       perl
SUNWsdb          gcc              lsof             perl-5.8.4
apache           gcc-2.95.3       maker            soma
archive          gdb              mc
audioctl-1.1     gnu              mp3
root@llewella:/opt[02:14pm]# 

My world is not perfect, but it works 😛

Patches

For the longest time, patching Suns was simple. Every once in a while (once a month was the norm where I worked), the sysadmin would schedule downtime for the reboot, etc., and a day or so beforehand ftp over to sunsolve.sun.com/patchroot/clusters, grab the jumbo patch cluster there that corresponds to the release of the OS he runs, and uncompress it. If the sysadmin is worth his salt, and has time, he’d read the READMEs for each patch and check for incompatibilities. If the sysadmin was optimistic, he’d just run install_cluster and hope that Sun QAed the jumbo cluster properly (hint: Sun doesn’t QA jumbo clusters, only individual patches, so there are times when one patch breaks another. Bad sysadmin. Bad!). This all worked through Solaris 9. By Solaris 10, the patch clusters are no longer there:

ncftp /patchroot/clusters > dir 9*
-rw-r--r--   1 130        14540   Mar 31 23:45   9_Recommended.README
-rw-r--r--   1 130    186986848   Mar 31 23:46   9_Recommended.zip
-rw-r--r--   1 130        17253   Sep 27  2005   9_SunAlert_Patch_Cluster.README
-rw-r--r--   1 130    168473046   Sep 27  2005   9_SunAlert_Patch_Cluster.zip
-rw-r--r--   1 130        13279   Mar 30 20:59   9_x86_Recommended.README
-rw-r--r--   1 130    116337317   Mar 30 20:59   9_x86_Recommended.zip
-rw-r--r--   1 130        15596   Oct  7  2005   9_x86_SunAlert_Patch_Cluster.README
-rw-r--r--   1 130    105728719   Oct  7  2005   9_x86_SunAlert_Patch_Cluster.zip
ncftp /patchroot/clusters > dir 10*
-rw-r--r--   1 130        10594   Apr  3 22:51   10_Recommended.README
-rw-r--r--   1 130         9860   Oct 12 17:24   10_SunAlert_Patch_Cluster.README
-rw-r--r--   1 130        11426   Mar 31 23:53   10_x86_Recommended.README
-rw-r--r--   1 130        10110   Oct 14 19:51   10_x86_SunAlert_Patch_Cluster.README
ncftp /patchroot/clusters > 
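
For the record, the old monthly ritual boiled down to something like the following. A sketch, assuming Solaris 8 SPARC, and assuming you have ncftpget handy (it is not part of a stock install):

ncftpget ftp://sunsolve.sun.com/patchroot/clusters/8_Recommended.README
ncftpget ftp://sunsolve.sun.com/patchroot/clusters/8_Recommended.zip
more 8_Recommended.README        # the step worth not skipping
unzip 8_Recommended.zip
cd 8_Recommended
./install_cluster                # then reboot during the scheduled window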

So off one goes to http://sunsolve.sun.com/, logs in, accepts a long license agreement, and selects patch finder.

There used to be a patchdiag tool to analyze the patches on the current system versus the latest and greatest. patchdiag requires you to download the latest patch cross-reference database, patchdiag.xref, from Sun each time you want to run it (required in the sense that you want to compare against the latest patches, right?). The latest database is at http://patches.sun.com/reports/patchdiag.xref
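
Grabbing a fresh cross-reference before a run is simple enough. A sketch, assuming you have wget installed somewhere (it is not part of stock Solaris) and that patchdiag lives in /opt/patchdiag-1.0.4, where it looks for patchdiag.xref:

cd /opt/patchdiag-1.0.4
wget http://patches.sun.com/reports/patchdiag.xref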

Alternatives to patchdiag are Patch Check Advanced, vxpref, or patchfetch2. All use the patchdiag.xref file; some are prettier than others. I use patchdiag, but maybe I am a traditionalist.

Some of the patches patchdiag reports are “free”, while most are paid. So the solution is either to pay for a support contract, or to sigh and stay out of date.

So for Solaris 10, the way to stay up to date is to start by downloading the latest free jumbo cluster from patch finder, and then use the paid SunUpdate service for the rest.

root@llewella:/opt/patchdiag-1.0.4[02:39pm]# ./patchdiag -l 
======================================================================================
System Name: llewella.NotBSD.org         SunOS Vers: 5.8         Arch: sparc
Cross Reference File Date: Apr/05/06

PatchDiag Version: 1.0.4
======================================================================================
Report Note:

Recommended patches are considered the most important and highly
recommended patches that avoid the most critical system, user, or
security related bugs which have been reported and fixed to date.
A patch not listed on the recommended list does not imply that it
should not be used if needed.  Some patches listed in this report
may have certain platform specific or application specific dependencies
and thus may not be applicable to your system.  It is important to
carefully review the README file of each patch to fully determine
the applicability of any patch with your system.
======================================================================================
INSTALLED PATCHES
Patch  Installed Latest   Synopsis
  ID   Revision  Revision
------ --------- -------- ------------------------------------------------------------
108434    17        21    SunOS 5.8: 32-Bit Shared library patch for C++
108435    17        21    SunOS 5.8: 64-Bit Shared library patch for C++
108528    29     CURRENT  SunOS 5.8: kernel update  and Apache patch
108569    06        08    X11 6.4.1: platform support for new hardware
108605    22        37    SunOS 5.8: Creator 8 FFB Graphics Patch
108606    18        39    SunOS 5.8: M64 Graphics Patch
108652    83        97    X11 6.4.1: Xsun patch
108693    24        26    Solstice DiskSuite 4.2.1: Product patch
108714    05        08    CDE 1.4: libDtWidget patch
108723    01     CURRENT  SunOS 5.8: /kernel/fs/lofs and /kernel/fs/sparcv9/lofs patch
108725    16        24    SunOS 5.8: st driver patch
108727    26     CURRENT  Obsoleted by: 116959-05 SunOS 5.8: /kernel/fs/nfs and /kernel/fs/s
108773    12        23    SunOS 5.8: IIIM and X Input & Output Method patch
108806    18        20    SunOS 5.8: Sun Quad FastEthernet qfe driver
108808    42        44    SunOS 5.8: Manual Page updates for Solaris 8
108813    17     CURRENT  Obsoleted by: 117000-05 SunOS 5.8: Sun Gigabit Ethernet 3.0
108820    01        03    SunOS 5.8: nss_compat.so.1 patch
108823    01        02    SunOS 5.8: compress/uncompress/zcat patch
[...]

Oh my. I guess I’ve been slacking in patching.
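
Once you have downloaded an individual patch from patch finder, applying it is straightforward with patchadd. A sketch, using patch 108434-21 from the report above as the example:

unzip 108434-21.zip
patchadd ./108434-21

Read the patch README first, of course; some patches want single-user mode or a reboot afterwards.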