Latest Solaris 10 patch bundles

I don’t know if it’s just my own ignorance or Oracle purposely obfuscating the latest patch bundles for Solaris, but I recently had a hell of a time finding the January 2017 patch bundle for Solaris 10. Oracle kept pointing me at sunsolve.sun.com, which of course no longer resolves.

Finally I found the following links to the current patch bundles for Solaris 10. An OTN login and a valid support contract are required, of course; once you’re logged in to your Oracle account, the links below will fetch the latest bundles.

https://updates.oracle.com/patch_cluster/10_Recommended.zip

https://updates.oracle.com/patch_cluster/10_Recommended.README

https://updates.oracle.com/patch_cluster/10_x86_Recommended.zip

https://updates.oracle.com/patch_cluster/10_x86_Recommended.README

Repurposing NetApp disk trays with FreeBSD and ZFS

I recently received a couple of NetApp DS14-MKII disk shelves and a pile of spare parts for free. I have no use for a full-blown NetApp (and no desire to pay for its excessive power draw), but I thought I might reuse the disks for something. A little searching suggested it was possible to repurpose these shelves as generic JBOD disk trays; Ben Rockwood has an old article on his site describing the use of older DS8 and DS9 shelves with Linux and LVM. Among the pile of parts I rescued were the NetApp’s fibre-channel and Ethernet adapters, which appeared to be generic PCI cards.

Once home, I looked over the pile of disks and hardware and realized I had about twenty 300GB disks and at least thirty 144GB disks. I picked one of the trays, filled it with identical 300GB NetApp disks, and set the tray ID to 0. Once powered on it beeped and roared to life, but as soon as it had spun up the disks and initialized, the fans dropped to a nice quiet level, so it actually wasn’t too loud. For my own information I took a power reading of the tray: it draws around 260 watts. Not great, since the whole tray holds only about 3.8TB, but for experimentation it will do.

Since my server at home is a low-profile Foxconn machine based on the Intel Atom D510, I would need a low-profile fibre-channel card. After browsing the online classifieds I realized that a 2Gb low-profile fibre card wasn’t going to come cheap. I then examined the card pulled from the NetApp filer head, which turned out to be a QLogic 2312 dual-port 2Gbit adapter. It appeared to be low-profile capable; I was just missing the low-profile bracket. I removed the existing full-height PCI bracket and slid the card into the slot with no bracket at all. It fit perfectly, and the optical connectors kept the card from wiggling too much from side to side. It’s a 64-bit PCI card, but my research suggested it would work in a 32-bit slot; I would just have to be careful not to jostle it.

I now focused my attention on getting the QLogic card working in FreeBSD. The hardware compatibility list showed the card as supported by the isp(4) driver. I booted the system and the isp driver appeared to load automatically, but no disks showed up. Examining the logs, I noticed the following errors.

Jan 20 22:57:45 helix kernel: isp0: port 0xe800-0xe8ff mem 0xfebff000-0xfebfffff irq 21 at device 0.0 on pci3
Jan 20 22:57:45 helix kernel: isp0: [ITHREAD]
Jan 20 22:57:45 helix kernel: isp0: Polled Mailbox Command (0x2) Timeout (1000000us) (started @ isp_reset:953)
Jan 20 22:57:45 helix kernel: isp0: Polled Mailbox Command (0x8) Timeout (100000us) (started @ isp_reset:1017)
Jan 20 22:57:45 helix kernel: isp0: Mailbox Command 'ABOUT FIRMWARE' failed (TIMEOUT)
Jan 20 22:57:45 helix kernel: device_attach: isp0 attach returned 6
Jan 20 22:57:45 helix kernel: isp1: port 0xe400-0xe4ff mem 0xfebfe000-0xfebfefff irq 22 at device 0.1 on pci3
Jan 20 22:57:45 helix kernel: isp1: [ITHREAD]
Jan 20 22:57:45 helix kernel: isp1: Polled Mailbox Command (0x2) Timeout (1000000us) (started @ isp_reset:953)
Jan 20 22:57:45 helix kernel: isp1: Polled Mailbox Command (0x8) Timeout (100000us) (started @ isp_reset:1017)
Jan 20 22:57:45 helix kernel: isp1: Mailbox Command 'ABOUT FIRMWARE' failed (TIMEOUT)
Jan 20 22:57:45 helix kernel: device_attach: isp1 attach returned 6

A quick search showed that I had to load the ispfw module, which uploads compatible QLogic firmware to the card before the kernel driver initializes it. I added the following two lines to /boot/loader.conf and rebooted the system.

ispfw_load="YES"
isp_load="YES"

Success!

Jan 20 23:20:54 helix kernel: ispfw: registered firmware
Jan 20 23:20:54 helix kernel: ispfw: registered firmware
...
Jan 20 23:20:54 helix kernel: isp0: port 0xe800-0xe8ff mem 0xfebff000-0xfebfffff irq 21 at device 0.0 on pci3
Jan 20 23:20:54 helix kernel: isp0: [ITHREAD]
Jan 20 23:20:54 helix kernel: isp1: port 0xe400-0xe4ff mem 0xfebfe000-0xfebfefff irq 22 at device 0.1 on pci3
Jan 20 23:20:54 helix kernel: isp1: [ITHREAD]
...
da18: Fixed Direct Access SCSI-3 device
da18: 200.000MB/s transfers WWNN 0x2000001862e6b359 WWPN 0x2200001862e6b359 PortID 0x1b
da18: Command Queueing enabled
da18: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da19 at isp0 bus 0 scbus0 target 6 lun 0
da19: Fixed Direct Access SCSI-3 device
da19: 200.000MB/s transfers WWNN 0x2000001862eebf72 WWPN 0x2200001862eebf72 PortID 0x1e
da19: Command Queueing enabled
da19: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da21 at isp0 bus 0 scbus0 target 9 lun 0
da21: Fixed Direct Access SCSI-3 device
da21: 200.000MB/s transfers WWNN 0x2000001862d03be1 WWPN 0x2200001862d03be1 PortID 0x18
da21: Command Queueing enabled
da21: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da20 at isp0 bus 0 scbus0 target 7 lun 0
da20: Fixed Direct Access SCSI-3 device
da20: 200.000MB/s transfers WWNN 0x2000001862cdf7f2 WWPN 0x2200001862cdf7f2 PortID 0x1d
da20: Command Queueing enabled
da20: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da22 at isp0 bus 0 scbus0 target 13 lun 0
da22: Fixed Direct Access SCSI-3 device
da22: 200.000MB/s transfers WWNN 0x20000018627f3052 WWPN 0x22000018627f3052 PortID 0x1
da22: Command Queueing enabled
da22: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da26 at isp0 bus 0 scbus0 target 5 lun 0
da26: Fixed Direct Access SCSI-3 device
da26: 200.000MB/s transfers WWNN 0x2000001862d03fc2 WWPN 0x2200001862d03fc2 PortID 0x1f
da26: Command Queueing enabled
da26: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da23 at isp0 bus 0 scbus0 target 12 lun 0
da23: Fixed Direct Access SCSI-3 device
da23: 200.000MB/s transfers WWNN 0x20000018627f2f42 WWPN 0x22000018627f2f42 PortID 0x4
da23: Command Queueing enabled
da23: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da24 at isp0 bus 0 scbus0 target 11 lun 0
da24: Fixed Direct Access SCSI-3 device
da24: 200.000MB/s transfers WWNN 0x200000186285c3ec WWPN 0x220000186285c3ec PortID 0x8
da24: Command Queueing enabled
da24: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da25 at isp0 bus 0 scbus0 target 10 lun 0
da25: Fixed Direct Access SCSI-3 device
da25: 200.000MB/s transfers WWNN 0x200000186285c982 WWPN 0x220000186285c982 PortID 0x10
da25: Command Queueing enabled
da25: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da27 at isp0 bus 0 scbus0 target 3 lun 0
da27: Fixed Direct Access SCSI-3 device
da27: 200.000MB/s transfers WWNN 0x2000001862d04077 WWPN 0x2200001862d04077 PortID 0x25
da27: Command Queueing enabled
da27: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da4 at isp1 bus 0 scbus1 target 5 lun 0
da4: Fixed Direct Access SCSI-3 device
da4: 200.000MB/s transfers WWNN 0x2000001862e6b359 WWPN 0x2100001862e6b359 PortID 0x1b
da4: Command Queueing enabled
da4: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da6 at isp1 bus 0 scbus1 target 3 lun 0
da6: Fixed Direct Access SCSI-3 device
da6: 200.000MB/s transfers WWNN 0x2000001862eebf72 WWPN 0x2100001862eebf72 PortID 0x1e
da6: Command Queueing enabled
da6: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da28 at isp0 bus 0 scbus0 target 4 lun 0
da28: Fixed Direct Access SCSI-3 device
da28: 200.000MB/s transfers WWNN 0x2000001862d0349d WWPN 0x2200001862d0349d PortID 0x23
da28: Command Queueing enabled
da28: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da5 at isp1 bus 0 scbus1 target 4 lun 0
da5: Fixed Direct Access SCSI-3 device
da5: 200.000MB/s transfers WWNN 0x2000001862cdf7f2 WWPN 0x2100001862cdf7f2 PortID 0x1d
da5: Command Queueing enabled
da5: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da16 at isp0 bus 0 scbus0 target 1 lun 0
da16: Fixed Direct Access SCSI-3 device
da16: 200.000MB/s transfers WWNN 0x20000018627f3241 WWPN 0x22000018627f3241 PortID 0xf
da16: Command Queueing enabled
da16: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da12 at isp1 bus 0 scbus1 target 9 lun 0
da12: Fixed Direct Access SCSI-3 device
da12: 200.000MB/s transfers WWNN 0x20000018627f3241 WWPN 0x21000018627f3241 PortID 0xf
da12: Command Queueing enabled
da12: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da13 at isp1 bus 0 scbus1 target 7 lun 0
da13: Fixed Direct Access SCSI-3 device
da13: 200.000MB/s transfers WWNN 0x20000018627bec13 WWPN 0x21000018627bec13 PortID 0x17
da13: Command Queueing enabled
da13: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da3 at isp1 bus 0 scbus1 target 6 lun 0
da3: Fixed Direct Access SCSI-3 device
da3: 200.000MB/s transfers WWNN 0x2000001862d03be1 WWPN 0x2100001862d03be1 PortID 0x18
da3: Command Queueing enabled
da3: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da15 at isp0 bus 0 scbus0 target 2 lun 0
da15: Fixed Direct Access SCSI-3 device
da15: 200.000MB/s transfers WWNN 0x20000018627f2b3b WWPN 0x22000018627f2b3b PortID 0x2
da15: Command Queueing enabled
da15: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da1 at isp1 bus 0 scbus1 target 8 lun 0
da1: Fixed Direct Access SCSI-3 device
da1: 200.000MB/s transfers WWNN 0x200000186285c982 WWPN 0x210000186285c982 PortID 0x10
da1: Command Queueing enabled
da1: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da9 at isp1 bus 0 scbus1 target 13 lun 0
da9: Fixed Direct Access SCSI-3 device
da9: 200.000MB/s transfers WWNN 0x20000018627f3052 WWPN 0x21000018627f3052 PortID 0x1
da9: Command Queueing enabled
da9: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da10 at isp1 bus 0 scbus1 target 12 lun 0
da10: Fixed Direct Access SCSI-3 device
da10: 200.000MB/s transfers WWNN 0x20000018627f2b3b WWPN 0x21000018627f2b3b PortID 0x2
da10: Command Queueing enabled
da10: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da17 at isp0 bus 0 scbus0 target 0 lun 0
da17: Fixed Direct Access SCSI-3 device
da17: 200.000MB/s transfers WWNN 0x20000018627bec13 WWPN 0x22000018627bec13 PortID 0x17
da17: Command Queueing enabled
da17: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da11 at isp1 bus 0 scbus1 target 11 lun 0
da11: Fixed Direct Access SCSI-3 device
da11: 200.000MB/s transfers WWNN 0x20000018627f2f42 WWPN 0x21000018627f2f42 PortID 0x4
da11: Command Queueing enabled
da11: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da7 at isp1 bus 0 scbus1 target 2 lun 0
da7: Fixed Direct Access SCSI-3 device
da7: 200.000MB/s transfers WWNN 0x2000001862d03fc2 WWPN 0x2100001862d03fc2 PortID 0x1f
da7: Command Queueing enabled
da7: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da2 at isp1 bus 0 scbus1 target 10 lun 0
da2: Fixed Direct Access SCSI-3 device
da2: 200.000MB/s transfers WWNN 0x200000186285c3ec WWPN 0x210000186285c3ec PortID 0x8
da2: Command Queueing enabled
da2: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da8 at isp1 bus 0 scbus1 target 0 lun 0
da8: Fixed Direct Access SCSI-3 device
da8: 200.000MB/s transfers WWNN 0x2000001862d04077 WWPN 0x2100001862d04077 PortID 0x25
da8: Command Queueing enabled
da8: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)
da14 at isp1 bus 0 scbus1 target 1 lun 0
da14: Fixed Direct Access SCSI-3 device
da14: 200.000MB/s transfers WWNN 0x2000001862d0349d WWPN 0x2100001862d0349d PortID 0x23
da14: Command Queueing enabled
da14: 284481MB (573653847 520 byte sectors: 255H 63S/T 35708C)

Huh? Fourteen disks appearing as 28 devices? I had hooked both ports of the FC adapter to the shelf, one to each controller, so I guess the controllers are running active-active. For now we’ll concern ourselves with the first fourteen disks and look at multipath later. If you look at the entries you’ll notice something else that’s odd: 520-byte sectors. NetApp stores checksum information in each block of the disk for data-protection purposes; I guess that’s what the extra eight bytes per sector are for. Unfortunately, FreeBSD 8-STABLE doesn’t appear to have native support for 520-byte sectors: I could see the disks, but any attempt to actually work with them returned a permission-denied or I/O error.

# camcontrol devlist
at scbus0 target 0 lun 0 (pass15,da15)
at scbus0 target 1 lun 0 (pass16,da16)
at scbus0 target 2 lun 0 (pass20,da20)
at scbus0 target 3 lun 0 (pass19,da19)
at scbus0 target 4 lun 0 (pass18,da18)
at scbus0 target 5 lun 0 (pass17,da17)
at scbus0 target 6 lun 0 (pass27,da27)
at scbus0 target 7 lun 0 (pass26,da26)
at scbus0 target 8 lun 0 (pass28,da28)
at scbus0 target 9 lun 0 (pass24,da24)
at scbus0 target 10 lun 0 (pass23,da23)
at scbus0 target 11 lun 0 (pass25,da25)
at scbus0 target 12 lun 0 (pass22,da22)
at scbus0 target 13 lun 0 (pass21,da21)
at scbus1 target 0 lun 0 (pass4,da4)
at scbus1 target 1 lun 0 (pass9,da9)
at scbus1 target 2 lun 0 (pass8,da8)
at scbus1 target 3 lun 0 (pass3,da3)
at scbus1 target 4 lun 0 (pass7,da7)
at scbus1 target 5 lun 0 (pass13,da13)
at scbus1 target 6 lun 0 (pass2,da2)
at scbus1 target 7 lun 0 (pass12,da12)
at scbus1 target 8 lun 0 (pass14,da14)
at scbus1 target 9 lun 0 (pass11,da11)
at scbus1 target 10 lun 0 (pass1,da1)
at scbus1 target 11 lun 0 (pass10,da10)
at scbus1 target 12 lun 0 (pass6,da6)
at scbus1 target 13 lun 0 (pass5,da5)
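As for the duplicate devices: the two paths to each spindle share a WWNN, so they could later be folded together with gmultipath(8). A dry-run sketch of deriving a pairing from the probe log; the two sample lines are taken from the dmesg output above, the "mp00" label is made up, and the command is only printed, never executed:

```shell
# Match paths by WWNN (field 5 of the probe lines) and print the gmultipath
# command that would join them. Feed it your real dmesg output in practice.
cmd=$(awk '/WWNN/ {
        dev = $1; sub(/:$/, "", dev)
        paths[$5] = paths[$5] " /dev/" dev
      }
      END { for (w in paths) print "gmultipath label -v mp00" paths[w] }' <<'EOF'
da18: 200.000MB/s transfers WWNN 0x2000001862e6b359 WWPN 0x2200001862e6b359 PortID 0x1b
da4: 200.000MB/s transfers WWNN 0x2000001862e6b359 WWPN 0x2100001862e6b359 PortID 0x1b
EOF
)
echo "$cmd"
```

Here da18 (on isp0) and da4 (on isp1) share WWNN 0x2000001862e6b359, so they are two paths to the same disk.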

A little more searching suggested the disks could be reformatted with 512-byte sectors using a low-level format utility. Unfortunately, most of the information I found at first was Windows- or Linux-specific. Eventually I located a forum thread that helped me greatly. Essentially, you first set the sector size with the camcontrol mode-page command below, then issue a low-level format; the disk handles the formatting itself and reports back when done. Warning: this destroys ALL data on the disk, so please make sure you’re formatting the right one!

# camcontrol cmd da1 -v -c "15 10 0 0 v:i1 0" 12 -o 12 "0 0 0 8 0 0:i3 0 v:i3" 512
# camcontrol format da1 -q -y
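Rather than typing that pair of commands fourteen times, they can be generated with a loop. A dry-run sketch that only prints the commands (device names da1–da14 assumed; since the format is destructive, review the output before feeding it to sh):

```shell
# Print, but do not run, the reformat commands for all 14 disks.
cmds=$(for i in $(seq 1 14); do
    printf 'camcontrol cmd da%d -v -c "15 10 0 0 v:i1 0" 12 -o 12 "0 0 0 8 0 0:i3 0 v:i3" 512\n' "$i"
    printf 'camcontrol format da%d -q -y\n' "$i"
done)
echo "$cmds"
```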

I ran this for every disk in the shelf and went to bed for the night. By morning all of the low-level formats had finished, and the disks now reported 512-byte sectors!

# diskinfo -v da1
da1
        512                  # sectorsize
        300351537152         # mediasize in bytes (280G)
        586624096            # mediasize in sectors
        0                    # stripesize
        0                    # stripeoffset
        36515                # Cylinders according to firmware.
        255                  # Heads according to firmware.
        63                   # Sectors according to firmware.
        3KR22JF400009731E2K6 # Disk ident.

Awesome! All the disks were now formatted properly. I created a partition 200 MB smaller than the disk to account for any size difference between the 300GB NA07 disks and the older 300GB NA04 disks I had for spares.
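As a quick sanity check on that reservation: 200 MB at 512 bytes per sector works out to:

```shell
# 200 MB expressed in 512-byte sectors -- the tail left unused on each disk.
reserve=$((200 * 1024 * 1024 / 512))
echo "$reserve sectors"
```

The free tail shown by gpart ends up slightly smaller (407586 sectors, about 199 MB) because the 2048-sector starting offset and the GPT metadata eat into the total.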

[root@helix ~]# for i in da{1,2,3,4,5,6,7,8,9,10,11,12,13,14} ; do gpart create -s GPT $i; done
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk00 da1
[root@helix ~]# gpart show da1
=> 34 586624029 da1 GPT (280G)
34 2014 - free - (1.0M)
2048 586214429 1 freebsd-zfs (280G)
586216477 407586 - free - (199M)
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk01 da2
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk02 da3
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk03 da4
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk04 da5
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk05 da6
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk06 da7
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk07 da8
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk08 da9
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk09 da10
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk10 da11
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk11 da12
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk12 da13
[root@helix ~]# gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk13 da14
[root@helix ~]# zpool create array0 raidz2 gpt/disk00 gpt/disk01 gpt/disk02 gpt/disk03 gpt/disk04 gpt/disk05 gpt/disk06 gpt/disk07 gpt/disk08 gpt/disk09 gpt/disk10 gpt/disk11 gpt/disk12 gpt/disk13
[root@helix ~]# zpool status
  pool: array0
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE   READ WRITE CKSUM
        array0          ONLINE     0     0     0
          raidz2        ONLINE     0     0     0
            gpt/disk00  ONLINE     0     0     0
            gpt/disk01  ONLINE     0     0     0
            gpt/disk02  ONLINE     0     0     0
            gpt/disk03  ONLINE     0     0     0
            gpt/disk04  ONLINE     0     0     0
            gpt/disk05  ONLINE     0     0     0
            gpt/disk06  ONLINE     0     0     0
            gpt/disk07  ONLINE     0     0     0
            gpt/disk08  ONLINE     0     0     0
            gpt/disk09  ONLINE     0     0     0
            gpt/disk10  ONLINE     0     0     0
            gpt/disk11  ONLINE     0     0     0
            gpt/disk12  ONLINE     0     0     0
            gpt/disk13  ONLINE     0     0     0
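For the record, the fourteen near-identical gpart add invocations above can be generated rather than typed. A dry-run sketch that only prints them:

```shell
# Print the fourteen gpart commands (labels disk00..disk13 on da1..da14).
cmds=$(for i in $(seq 1 14); do
    printf 'gpart add -b 2048 -s 586214429 -t freebsd-zfs -l disk%02d da%d\n' "$((i-1))" "$i"
done)
echo "$cmds"
```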

Now I have a raidz2 pool assembled with ZFS on my little Intel Atom server!
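A rough capacity check: raidz2 spends two disks’ worth of space on parity, so fourteen disks of roughly 280GB each leave about:

```shell
# raidz2 keeps two disks' worth of parity, so 14 disks leave 12 for data.
usable_gb=$((12 * 280))   # each disk is roughly 280 GB after reformatting
echo "$usable_gb GB usable, before ZFS metadata overhead"
```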

View hardware detail in FreeBSD

If you’re running FreeBSD and need to find out about the physical hardware underneath, the following commands may be helpful.

atacontrol list # Show ATA devices
camcontrol devlist -v # Show SCSI devices
pciconf -l -cv # Show PCI devices
sysctl hw.model # CPU model
sysctl hw # Gives a lot of hardware information
sysctl hw.ncpu # number of active CPUs installed
sysctl vm # Memory usage
sysctl hw.realmem # Hardware memory
sysctl -a | grep mem # Kernel memory settings and info
sysctl dev # Configured devices
usbdevs -v # Show USB devices

There is also a handy script I use called memconf that reports detailed memory information on a variety of *NIX OSes; on FreeBSD you will need to install dmidecode, which can be found in the ports tree.

Solaris 8 containers with nested filesystems

I am working with Solaris 8 containers on Solaris 10 at work. Today I was working with a container that had some ZFS filesystems mounted inside it and wanted to move it to another machine. When I tried to detach the container I received the following error message.

T2000# zoneadm -z myzone detach
zoneadm: zone 'myzone': These file-systems are mounted on subdirectories of /zone_roots/myzone.
zoneadm: zone 'myzone': /zone_roots/myzone/vol0
zoneadm: zone 'myzone': /zone_roots/myzone/data

T2000# umount /zone_roots/myzone/vol0/
T2000# umount /zone_roots/myzone/data
T2000# zoneadm -z myzone detach
T2000#

I googled to see if there was an obvious answer but found none. It turns out I simply had to unmount the two filesystems before detaching the container. A simple fix, but it caused some serious head-scratching for a few minutes.
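The fix generalizes: anything mounted below the zonepath has to be unmounted before the detach. A sketch of finding those nested mounts, deepest path first (sample /etc/mnttab lines stand in for the real file here; on the host you would read /etc/mnttab itself, and the zonepath matches the session above):

```shell
# mnttab fields: special mountpoint fstype options time -- the mountpoint
# is field 2. Reverse sort puts child mounts before their parent prefixes.
nested=$(awk '$2 ~ "^/zone_roots/myzone/" { print $2 }' <<'EOF' | sort -r
pool/vol0 /zone_roots/myzone/vol0 zfs rw 0
pool/data /zone_roots/myzone/data zfs rw 0
/dev/dsk/c0t0d0s0 / ufs rw 0
EOF
)
echo "$nested"
```

Each listed path can then be passed to umount, after which `zoneadm -z myzone detach` succeeds.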

New life for my old Sun Ultra5

I acquired my little Sun Ultra 5 in 2003 for $100 at the asset auction of a dead dot-com. I didn’t realize it at the time, but I got a pretty good deal: the system was in mint shape and looked as if it had never been used.

Over the years I’ve played with Solaris and various *nix SPARC ports on it. It has served me as both a desktop and a server and has always worked well. Last January my group did its annual storage-room cleanout, where we go through our storage area and recycle all the equipment and parts that are just taking up space. One such item was a Sun Ultra 10 with a bad CPU module and no NVRAM.

I grabbed it from the recycling pile and took it home to see whether it held any memory that would work in my Ultra 5. Luckily for me, there was 1GB of RAM installed! The only problem was that it was full-height memory, which interfered with the floppy-drive mount in my Ultra 5.

So I stripped both systems down to bare components and put my Ultra 5 motherboard and CPU into the Ultra 10 chassis. Then I took the memory, SCSI card, and Elite3D framebuffer from the old Ultra 10, plus a 73GB 10K RPM SCSI disk and a DVD-ROM I had from an old PC, and installed them all in the Ultra 10. The leftover parts went back to the computer recyclers, where all electronics waste belongs (not in the garbage!).

So here are the specs of the new machine:

System Configuration: Sun Microsystems sun4u Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 400MHz)
System clock frequency: 100 MHz
Memory size: 1024 Megabytes

========================= CPUs =========================

                Run    Ecache   CPU    CPU
Brd  CPU  Module  MHz     MB    Impl.   Mask
---  ---  ------- -----  ------  ------  ----
 0    0      0     400    2.0       12    9.1

========================= IO Cards =========================

     Bus#  Freq
Brd  Type  MHz   Slot  Name                              Model
---  ----  ----  ----  --------------------------------  ----------------------
 0   PCI-1  33    1    ebus
 0   PCI-1  33    1    network-SUNW,hme
 0   PCI-1  33    2    SUNW,m64B                         ATY,GT-C
 0   PCI-1  33    3    ide-pci1095,646.1095.646.3
 0   PCI-2  33    4    scsi-glm                          Symbios,53C875

No failures found in System
===========================

========================= HW Revisions =========================

ASIC Revisions:
---------------
Cheerio: ebus Rev 1

System PROM revisions:
----------------------
OBP 3.31.0 2001/07/25 20:36 POST 3.1.0 2000/06/27 13:56

It’s like a new machine! Solaris 9 is pretty responsive on it; replacing the slow IDE disk with SCSI made a big difference. Now that’s true computer recycling!