Make a copy of your root disk on Solaris 10

If you need to make a copy of your boot disk, you don't have DiskSuite, your whole root is on a single slice ("/"), and this is Solaris 10, you can do:
#prtvtoc /dev/rdsk/c0t0d0s0 | fmthard -s - /dev/rdsk/c0t1d0s0
#newfs /dev/rdsk/c0t1d0s0
#mount /dev/dsk/c0t1d0s0 /a
#cd /
#find . -mount | cpio -pmdv /a
#installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
#vi /a/etc/vfstab
(change the root device references from c0t0d0s0 to c0t1d0s0; see the example vfstab line below)
#mkdir /a/tmp
#mkdir /a/dev
#mkdir /a/proc
#touch /a/etc/mnttab
#mkdir /a/etc/svc/volatile
#mkdir /a/system/object
#mkdir /a/system/contract
#umount /a
#init 0
ok boot otherdisk -r    ("otherdisk" here being the OBP device alias for the second disk)
And the system will boot fine from the other disk.
Remember it's better to do this in single-user mode, with the applications down.
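For reference, after the edit the root entry in /a/etc/vfstab should look something like this (a single UFS slice, using the device names from the example above):
/dev/dsk/c0t1d0s0 /dev/rdsk/c0t1d0s0 / ufs 1 no -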

s9y comment spam protection

I was being attacked by comment spam on this blog… it had me a bit down for a while.
Soon I found out there is an easy way to make comment spam difficult on a Serendipity (s9y) blog.
Go to “Configure Plugins” -> “Event Plugins” -> “Spam Protector”
And you’ll solve all your problems 🙂
– rdircio

get a vnc session forwarded

This is basic ssh port forwarding, but I always forget…
So, you have hosta, hostb and yourpc.
You're not on the same network as hostb, so you need to connect this way:
yourpc -> hosta -> hostb
There is a VNC session on hostb:1 (port 5901), and you want to point your client at hosta:11 (port 5911) and get that screen.
All you have to do is, from hosta:
rdircio@hosta $ ssh -g -C -L 5911:localhost:5901 hostb
Then point the VNC client on yourpc at hosta:11.
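If you'd rather not open the forwarded port to the whole network with -g, a second forward from yourpc works too (just a sketch, assuming you can ssh from yourpc to hosta):
yourpc $ ssh -C -L 5911:localhost:5911 hosta
and then point the VNC client at localhost:11 instead.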

Add a couple of disks to your zpool and grow some filesystems

First, check the status of your pool
bash-3.00# zpool status -v
  pool: amx002zpool1
 state: ONLINE
 scrub: none requested
config:
        NAME                                        STATE     READ WRITE CKSUM
        amx002zpool1                                ONLINE       0     0     0
          c6t60060480000190101353533030393543d0    ONLINE       0     0     0
          c6t60060480000190101353533030393536d0    ONLINE       0     0     0
          c6t60060480000190101353533030393530d0    ONLINE       0     0     0
          c6t60060480000190101353533030393441d0    ONLINE       0     0     0
          c6t60060480000190101353533030393434d0    ONLINE       0     0     0
          c6t60060480000190101353533030433435d0    ONLINE       0     0     0
errors: No known data errors
Then add a couple of disks to it:
bash-3.00# zpool add amx002zpool1 c6t60060480000190101353533031343236d0
bash-3.00# zpool add amx002zpool1 c6t60060480000190101353533031343243d0
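Both LUNs could also have been added with a single zpool add, which gives the same striped layout:
bash-3.00# zpool add amx002zpool1 c6t60060480000190101353533031343236d0 c6t60060480000190101353533031343243d0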
Now check if they’re added:
bash-3.00# zpool status -v
  pool: amx002zpool1
 state: ONLINE
 scrub: none requested
config:
        NAME                                        STATE     READ WRITE CKSUM
        amx002zpool1                                ONLINE       0     0     0
          c6t60060480000190101353533030393543d0    ONLINE       0     0     0
          c6t60060480000190101353533030393536d0    ONLINE       0     0     0
          c6t60060480000190101353533030393530d0    ONLINE       0     0     0
          c6t60060480000190101353533030393441d0    ONLINE       0     0     0
          c6t60060480000190101353533030393434d0    ONLINE       0     0     0
          c6t60060480000190101353533030433435d0    ONLINE       0     0     0
          c6t60060480000190101353533031343236d0    ONLINE       0     0     0
          c6t60060480000190101353533031343243d0    ONLINE       0     0     0
errors: No known data errors
List free space on the pool:
bash-3.00# zpool list
NAME             SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
gskhr009zpool1  69.5G  5.87G  63.6G    8%  ONLINE  -
gskhr009zpool2   139G  73.6G  65.4G   52%  ONLINE  -
gskhr009zpool3  1.70T  1.24T   468G   73%  ONLINE  -
List all the filesystems in the pool:
bash-3.00# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
amx002zpool1                  345G   157G  24.5K  /amx002zpool1
amx002zpool1/amx21           64.6G  6.37G  64.6G  /amx21
amx002zpool1/amx22           32.9G  4.12G  32.9G  /amx22
amx002zpool1/amx23           26.1G  4.95G  26.1G  /amx23
amx002zpool1/amx24           58.9G  6.15G  58.9G  /amx24
amx002zpool1/amx25           49.6G  7.39G  49.6G  /amx25
amx002zpool1/oracle          6.50G  1.50G  6.50G  /oracle
amx002zpool1/oracle-export   21.6G  23.4G  21.6G  /oracle/export
amx002zpool1/oracle2         24.5K  8.00G  24.5K  /opt/oracle
amx002zpool1/psreports       24.5K  3.00G  24.5K  /opt/reports
amx002zpool1/vendor           690K  20.0G   690K  /opt/vendor
Check the current quota and reservation of the filesystem you want to grow:
bash-3.00# zfs get quota amx002zpool1/amx21
NAME                PROPERTY  VALUE  SOURCE
amx002zpool1/amx21  quota     71G    local
bash-3.00# zfs get reservation amx002zpool1/amx21
NAME                PROPERTY     VALUE  SOURCE
amx002zpool1/amx21  reservation  71G    local
Grow the filesystem:
bash-3.00# zfs set quota=100G amx002zpool1/amx21
bash-3.00# zfs set reservation=100G amx002zpool1/amx21
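To confirm that the filesystem actually picked up the extra space, something like this should now show the bigger quota and more available space:
bash-3.00# zfs list amx002zpool1/amx21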
That’s it, have fun

last failed/successful “sulog” entries for the last N days

You want to know who was able/unable to su during the last N days, on Solaris. The log is /var/adm/sulog, and we don't
have the niceties of GNU date, so we use Perl inside a shell script… "suentries.ksh":
#!/bin/ksh
#--- n is how many days back we want
n=$1
today=`/usr/bin/perl -e 'printf "%d\n", time;'`
x=$n
while [ $x -gt -1 ]; do
    ago=$(($today-86400*${x}))
    export ago
    # build the MM/DD string that sulog uses for that day
    DAY=`perl -e '($sec,$min,$hour,$mday,$mon)=localtime($ENV{ago}); printf "%02d/%02d\n", $mon+1, $mday'`
    echo "---- $DAY"
    grep "$DAY" /var/adm/sulog
    x=$(($x-1))
done
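For example, ./suentries.ksh 2 prints today plus the two previous days. In sulog a "+" after the timestamp marks a successful su and a "-" marks a failed one, so something like
# ./suentries.ksh 2 | grep ' - '
leaves only the failed attempts.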

tar and gzip

This is first grade stuff, but in case you forget how to create (or unpack) a tar and gzip on the fly when you do not have GNU tar…
to unpack on the fly:
# gunzip < filename.tar.gz | tar xvf -
to pack on the fly:
# tar cvf - files_to_tar | gzip -c > filename.tar.gz
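If GNU tar happens to be installed (on Solaris 10 it usually lives in /usr/sfw/bin/gtar), its -z flag does the gzip part for you:
# gtar xzvf filename.tar.gz
# gtar czvf filename.tar.gz files_to_tar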

Sendmail starts using “-C” in Solaris 10

If sendmail starts only using local.cf and something like this appears:
# ps -ef | grep -i sendmail
root 22616 1 0 11:13:11 x y z 0:00 /usr/lib/sendmail -bd -q15m -C /etc/mail/local.cf
smmsp 22614 1 0 11:13:10 x y z 0:00 /usr/lib/sendmail -A
And you don't want it to run local-only, you can do:
# svccfg -s svc:/network/smtp:sendmail setprop config/local_only = false
# svcadm refresh svc:/network/smtp:sendmail
# /usr/bin/svcprop -p config/local_only svc:/network/smtp:sendmail
false
# svcadm disable sendmail
# svcadm enable sendmail
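(A plain # svcadm restart sendmail after the refresh should have the same effect as the disable/enable pair.)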
And now it runs without local.cf
# ps -ef | grep -i sendmail
root 24465 1 0 11:18:13 x y z 0:00 /usr/lib/sendmail -bd -q15m
smmsp 24463 1 0 11:18:13 x y z 0:00 /usr/lib/sendmail -Ac -q15m