Monday, June 23, 2008

xVM SXCE91 CentOS 5.1 pvm

The latest SXCE91 has a problem if you want to install a paravirtualized (PV) domain with CentOS 5.1.
The problem shows up when you give a CentOS disc image as the installation source: the Linux installer does not see the hard disk...

To work around this, you can point the installation source at an NFS share instead.

# virt-install -n centos -r 256 -f /dev/zvol/dsk/data/centos -p --nographics -l nfs:192.168.20.30:/export/centos


Beforehand, mount the CentOS disc image at /export/centos and export that directory over NFS.
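
Setting up the share is quick. A sketch, assuming the ISO sits at /mnt/new/centos51.iso (a hypothetical path); since build 91 an ISO can be mounted directly, without lofiadm, as described in the "lofi mount" post below, and anon=0 lets the installer read files owned by root:

# mount -F hsfs -o ro /mnt/new/centos51.iso /export/centos
# share -F nfs -o ro,anon=0 /export/centos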

Wednesday, June 18, 2008

DTrace IP Provider

# dtrace -l | awk '{if ($2 == "ip") print $0}'
1492  ip  ip  ip_wput_local_v6       receive
1493  ip  ip  ip_rput_v6             receive
1494  ip  ip  ip_wput_local          receive
1495  ip  ip  ip_input               receive
1514  ip  ip  ip_inject_impl         send
1515  ip  ip  udp_xmit               send
1516  ip  ip  tcp_lsosend_data       send
1517  ip  ip  tcp_multisend          send
1518  ip  ip  tcp_send_data          send
1519  ip  ip  ip_multicast_loopback  send
1520  ip  ip  ip_xmit_v6             send
1521  ip  ip  ip_wput_ire_v6         send
1522  ip  ip  ip_xmit_v4             send
1523  ip  ip  ip_wput_ipsec_out      send
1524  ip  ip  ip_wput_ipsec_out_v6   send
1525  ip  ip  ip_wput_frag           send
1526  ip  ip  ip_wput_frag_mdt       send
1527  ip  ip  ip_wput_ire            send
1528  ip  ip  ip_fast_forward        send



# dtrace -n 'ip:::send {@[execname]=count()}'
dtrace: description 'ip:::send ' matched 15 probes
^C

bonobo-activatio 1
esd 1
gconf-sanity-che 1
gdmprefetch 1
iiimx-settings-i 1
ksh 1
rm 1
run-mozilla.sh 1
nfsmapid 2
nscd 2
quota 2
gconftool-2 3
gnome-vfs-daemon 3
dbus-daemon 4
dtsearchpath 4
nfs4cbd 4
md5sum 6
dtappgather 7
firefox 7
sdt_shell 8
iiim-xbe 9
xmbind 9
mv 10
Xorg 11
bash 11
xscreensaver 13
xdg-user-dirs-up 15
dbus-launch 16
xsetroot 16
touch 18
echo 22
mkfontdir 22
iiimx 23
gdm-binary 38
gnome-volume-man 43
clock-applet 47
metacity 61
gnome-terminal 72
Xsession 82
gnome-settings-d 178
mixer_applet2 386
battstat-applet- 390
wnck-applet 405
gam_server 441
nautilus 620
gnome-panel 656
gnome-session 681
gconfd-2 1190
firefox-bin 2815
sched 21545


# dtrace -n 'ip::tcp_send_data: {@[execname]=count();}'
dtrace: description 'ip::tcp_send_data: ' matched 1 probe
^C

sshd 1
firefox-bin 31
sched 34
gam_server 68



# dtrace -n 'ip:::'
dtrace: description 'ip:::' matched 19 probes
CPU ID FUNCTION:NAME
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
^C
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
0 1495 ip_input:receive
0 1495 ip_input:receive
0 1518 tcp_send_data:send
0 1518 tcp_send_data:send
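
The provider also exposes stable translated arguments, so aggregations are not limited to execname. According to the ip provider documentation, args[2] is an ipinfo_t that carries the source and destination addresses as strings; for example, to count sent packets per destination address:

# dtrace -n 'ip:::send { @[args[2]->ip_daddr] = count(); }'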

Friday, June 13, 2008

OpenDS 1.0.0 Release Candidate

The release candidate of version 1.0.0 of the OpenDS open-source directory server has just come out :)
You can download OpenDS here.
The link leads to version 1.0.0-build016; that is exactly the build marked as the RC.

Tuesday, June 10, 2008

Finally, the complete set

Special thanks to cypromis and sob0l :)

Sunday, June 8, 2008

Polish keyboard layout in OpenSolaris

Many people have written about this already, but I'll add it one more time ...

How to set a Polish keyboard layout in Solaris/OpenSolaris.
That is, how to get the Polish programmer's layout, so that e.g. "z" is "z" and not "y"...

You need to edit /etc/X11/xorg.conf, which is not created by default.

To do this, first disable GDM or cde-login (depending on the system version).

To see which service is enabled:
# svcs gdm cde-login

Assuming GDM is in use, we disable it:
# svcadm disable -t gdm

Generate the xorg.conf file:
# /usr/X11/bin/Xorg -configure

The file has to end up at /etc/X11/xorg.conf.
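
Xorg -configure writes the new file as xorg.conf.new in root's home directory (it prints the exact path when it finishes). Assuming it landed in /, which is root's home on Solaris:

# mv /xorg.conf.new /etc/X11/xorg.conf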

In the keyboard section, add:
Option "XkbLayout" "pl_dev"

The section may look like this:

Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
Option "XkbLayout" "pl_dev"
EndSection

Finally, start GDM or cde-login again:
# svcadm enable gdm

The keyboard should now work correctly.

Saturday, June 7, 2008

lofi mount

Since build 91 you no longer need lofiadm to mount a CD/DVD image.

A plain mount is enough:

# mount -F hsfs -o ro /mnt/new/sxce90.iso /mnt/x
# ls /mnt/x/
autorun.inf     JDS-THIRDPARTYLICENSEREADME
autorun.sh      License
boot            README.txt
Copyright       sddtool
DeveloperTools  Solaris_11
installer       Sun_HPC_ClusterTools
#
# mount | grep /mnt/x
/mnt/x on /mnt/new/sxce90.iso read only/nosetuid/nodevices/noglobal/maplcase/rr/traildot/dev=2400001 on So cz 7 09:17:28 2008
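
For comparison, before build 91 the same mount needed an explicit lofi device first:

# lofiadm -a /mnt/new/sxce90.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt/x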

Thursday, June 5, 2008

Live Upgrade in 25 seconds!

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_snv90                  yes      no     no        yes    -
be_zfs_b90                 yes      yes    yes       no     -

# lucreate -n be_zfs_test
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_snv90                  yes      no     no        yes    -
be_zfs_b90                 yes      yes    yes       no     -
be_zfs_test                yes      no     no        yes    -

# time lucreate -n be_zfs_bfu
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

real 0m25.099s
user 0m3.195s
sys 0m4.602s
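
The 25 seconds are explained by the output above: on a ZFS root, lucreate copies nothing, it only snapshots and clones the root dataset and marks the clone canmount=noauto. Done by hand, the equivalent would be roughly (dataset names assumed):

# zfs snapshot rpool/ROOT/be_zfs_b90@be_zfs_bfu
# zfs clone rpool/ROOT/be_zfs_b90@be_zfs_bfu rpool/ROOT/be_zfs_bfu
# zfs set canmount=noauto rpool/ROOT/be_zfs_bfu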

Live Upgrade, BUGs, and ZFS

Solaris Express Community Edition b90 (SXCE90) added support
for ZFS as the root file system.
Interestingly, besides the installer (I only tried the text-mode one),
the Live Upgrade tools also handle / on ZFS.
On top of that, you can migrate an existing system installed on UFS
to a separate ZFS pool.
All nice and pretty, but unfortunately there is a BUG (6707013) that makes life harder...

For these tests I deleted one boot environment
and created a new ZFS pool in its place:

# zpool create rpool c0d0s4
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0d0s4 contains a ufs filesystem.
# zpool create -f rpool c0d0s4

# zpool list
NAME    SIZE   USED   AVAIL    CAP  HEALTH  ALTROOT
data   23,2G  4,78G   18,5G    20%  ONLINE  -
rpool  11,7G  95,5K   11,7G     0%  ONLINE  -

We create a new boot environment on the new ZFS pool:
# lucreate -n be_zfs_b90 -p rpool
Checking GRUB menu...

This system contains only a single GRUB menu for all boot environments. To
enhance reliability and improve the user experience, live upgrade requires
you to run a one time conversion script to migrate the system to multiple
redundant GRUB menus. This is a one time procedure and you will not be
required to run this script on subsequent invocations of Live Upgrade
commands. To run this script invoke:

/usr/lib/lu/lux86menu_propagate /path/to/new/Solaris/install/image

where /path/to/new/Solaris/install/image is an absolute
path to the Solaris media or netinstall image from which you installed the
Live Upgrade packages.


The system has to update /boot/grub/menu.lst ...
To do that, we mount the SXCE90 disc:
# lofiadm -a /mnt/new/sxce90.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt/x/

I don't know why, but I had to delete the other BE before Live Upgrade would work...

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_snv90                  yes      yes    yes       no     -
be1_snv89                  yes      no     no        yes    -

# ludelete -n be1_snv89

This system contains only a single GRUB menu for all boot environments. To
enhance reliability and improve the user experience, live upgrade requires
you to run a one time conversion script to migrate the system to multiple
redundant GRUB menus. This is a one time procedure and you will not be
required to run this script on subsequent invocations of Live Upgrade
commands. To run this script invoke:

/usr/lib/lu/lux86menu_propagate /path/to/new/Solaris/install/image

where /path/to/new/Solaris/install/image is an absolute
path to the Solaris media or netinstall image from which you installed the
Live Upgrade packages.

Unable to delete boot environment.

I removed it by hand:
# vi /etc/lutab

# cat /etc/lutab
# DO NOT EDIT THIS FILE BY HAND. This file is not a public interface.
# The format and contents of this file are subject to change.
# Any user modification to this file may result in the incorrect
# operation of Live Upgrade.
1:be0_snv90:C:0
1:/:/dev/dsk/c0d0s0:1
1:boot-device:/dev/dsk/c0d0s0:2

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_snv90                  yes      yes    yes       no     -


We continue with the bootloader entry migration:

# /usr/lib/lu/lux86menu_propagate /mnt/x/
Validating the contents of the media .
The media is a standard Solaris media.
The media contains a Solaris operating system image.
The media contains version <11>.
Installing latest Live Upgrade packages from media
Updating Live Upgrade packages on all BEs
Successfully updated Live Upgrade packages on all BEs
Successfully extracted GRUB from media
Extracted GRUB menu from GRUB slice
Installing GRUB bootloader to all GRUB based BEs
stage1 written to partition 1 sector 0 (abs 29398950)
stage2 written to partition 1, 264 sectors starting at 50 (abs 29399000)
System does not have an applicable x86 boot partition
install GRUB to all BEs successful
Converting root entries to findroot
Generated boot signature for BE
Converting GRUB menu entry for BE
Added findroot entry for BE to GRUB menu
No more bootadm entries. Deletion of bootadm entries is complete.
Changing GRUB menu default setting to <8>
Done eliding bootadm entries.
No x86 boot partition
File
propagation successful
Menu propagation successful
No x86 boot partition
File
deletion successful
Successfully deleted GRUB_slice file
No x86 boot partition
File
deletion successful
Successfully deleted GRUB_root file
Propagating findroot GRUB for menu conversion.
No x86 boot partition
File
propagation successful
No x86 boot partition
File
propagation successful
No x86 boot partition
File propagation successful
Deleting stale GRUB loader from all BEs.
No x86 boot partition
File deletion successful
No x86 boot partition
File deletion successful
No x86 boot partition
File deletion successful
Conversion was successful


We create the new environment on ZFS, giving the pool with the "-p" parameter:

# lucreate -n be_zfs_b90 -p rpool
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device
is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Creating file systems on boot environment .
Creating file system for in zone on .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point
.
Copying.
[...]

{on another console:
> df -h | egrep 'c0d0|rpool'
/dev/dsk/c0d0s0          12G   6,4G   5,0G    57%    /
rpool                    12G    19K   3,3G     1%    /rpool
rpool/ROOT               12G    18K   3,3G     1%    /rpool/ROOT
rpool/ROOT/be_zfs_b90    12G   6,3G   3,3G    66%    /.alt.tmp.b-Wzb.mnt
}

[...]
Creating shared file system mount points.
Segmentation Fault - core dumped
Segmentation Fault - core dumped
Creating compare databases for boot environment .
Creating compare database for file system
.
Updating compare databases on boot environment .
Making boot environment bootable.
Updating bootenv.rc on ABE .
ERROR: File
not found in top level dataset for BE
ERROR: Failed to copy file
from top level dataset to BE
ERROR: Unable to delete GRUB menu entry for boot environment .
ERROR: Cannot make file systems for boot environment .


And so it begins :) I have neither the time nor the inclination to dig into what went wrong,
but a quick look at the core shows that some file system was already mounted.

The system on ZFS will not boot yet; one more little trick is needed ...

# zfs set mountpoint=legacy rpool/ROOT/be_zfs_b90
# cd /etc/lu

(there should be an ICF.2, but LU did not create it...)
# cp ICF.1 ICF.2
# vi ICF.2
# cat ICF.2
be_zfs_b90:-:/dev/dsk/c0d0s1:swap:2104515
be_zfs_b90:/:rpool/ROOT/be_zfs_b90:zfs:0
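
Before mounting, it is worth checking that the clone really uses legacy mounting now; zfs get should report:

# zfs get mountpoint rpool/ROOT/be_zfs_b90
NAME                   PROPERTY    VALUE   SOURCE
rpool/ROOT/be_zfs_b90  mountpoint  legacy  local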

# lumount -n be_zfs_b90
/.alt.be_zfs_b90
# luumount -n be_zfs_b90

# luactivate -n be_zfs_b90
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE

Generating boot-sign for ABE
ERROR: File
not found in top level dataset for BE
Generating partition and slice information for ABE
Boot menu exists.
Generating direct boot menu entries for PBE.
Generating xVM menu entries for PBE.
Generating direct boot menu entries for ABE.
Generating xVM menu entries for ABE.
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fufs /dev/dsk/c0d0s0 /mnt

3. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment successful.



# cat /etc/bootsign
BE_be0_snv90

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_snv90                  yes      yes    no        no     -
be_zfs_b90                 yes      no     yes       no     -

# lumount -n be_zfs_b90
/.alt.be_zfs_b90
# cat /.alt.be_zfs_b90/etc/bootsign
cat: cannot open /.alt.be_zfs_b90/etc/bootsign: No such file or directory

# echo "be_zfs_b90" > /.alt.be_zfs_b90/etc/bootsign
# cat /.alt.be_zfs_b90/etc/bootsign
be_zfs_b90
# luumount -n be_zfs_b90

That's basically it; after 'init 6' the new entry appears in the GRUB menu.

# init 6

[...]

# uname -srv
SunOS 5.11 snv_90
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_snv90                  yes      no     no        yes    -
be_zfs_b90                 yes      yes    yes       no     -
# df -h | head -2
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/be_zfs_b90   12G   6,7G   2,8G    71%    /
# swap -l
swapfile                  dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap  182,2       8  2105336  2105336
#


Tuesday, June 3, 2008

pbackup on Solaris

A while ago I wrote a small /bin/sh program for making backups of files, directories, and raw partitions.
As a little exercise I ported it to Solaris; I haven't tried running it on other systems yet, though, so there may well still be BUGs lurking somewhere.
Here is a small example of how to back up a file (it could just as well be a disk, a partition, and so on).
In 'RAW' mode pbackup cuts the source file into 100 MB chunks, compresses them, computes checksums, and writes the details to log files.
We will back up the file 'source.raw'; in place of a file it could be a partition or a whole disk.
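
Under the hood, RAW mode boils down to a dd loop: read DD_RAW_BS * DD_RAW_COUNT bytes at a growing skip offset, compress the chunk, and checksum it. A minimal sketch of the idea (not the actual pbackup code; the names are made up, and Solaris' digest(1) stands in for whatever checksum tool pbackup uses):

#!/bin/sh
# Conceptual sketch of pbackup's RAW mode, not the real script.
SRC=source.raw
BS=1000000       # DD_RAW_BS:    dd block size
COUNT=100        # DD_RAW_COUNT: blocks per chunk, i.e. 100 MB chunks
i=1000           # chunk numbering as in the logs: .1000, .1001, ...
skip=0
while :; do
    chunk="raw_chunk.$i"
    # copy one chunk; skip is counted in BS-sized blocks
    dd if="$SRC" of="$chunk" bs="$BS" count="$COUNT" skip="$skip" 2>/dev/null
    if [ ! -s "$chunk" ]; then       # empty chunk means end of input
        rm -f "$chunk"
        break
    fi
    gzip "$chunk"                            # produces raw_chunk.N.gz
    digest -v -a md5 "$chunk.gz" >> raw.md5  # record the checksum
    i=`expr $i + 1`
    skip=`expr $skip + $COUNT`
done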

-bash-3.2$ ls -lh
total 66
-rw-r--r-- 1 pbackup-usr other 1,0K cz 3 20:30 local.cshrc
-rw-r--r-- 1 pbackup-usr other 1002 cz 3 20:30 local.login
-rw-r--r-- 1 pbackup-usr other 1019 cz 3 20:30 local.profile
-rwxr-xr-x 1 pbackup-usr root 23K cz 3 22:58 pbackup
-rwxr-xr-x 1 pbackup-usr root 1,5K cz 3 20:30 pbackup_cut
-rwxr-xr-x 1 pbackup-usr root 1,7K cz 3 23:12 pbackup_raw_restore
-rwxr-xr-x 1 pbackup-usr other 23K cz 3 22:29 pbackup_v3.4
-rw------- 1 pbackup-usr other 400M cz 3 22:44 source.raw
-bash-3.2$ ./pbackup -r -M -c /export/home/pbackup-usr/my_backup -T /export/home/pbackup-usr/source.raw
pbackup version current

new backup directory: /export/home/pbackup-usr/my_backup/2008_06_03__23-13_52-full

Date of backup: 2008_06_03__23-13_52
##################################################
Use suffix: *.tar.gz

##################################################
Raw partitions:
Using file /export/home/pbackup-usr/my_backup/2008_06_03__23-13_52-full/__tmp_raw
raw_bs=1000000 raw_count=100

*** partition: /export/home/pbackup-usr/source.raw => raw____export___home___pbackup-usr___source.raw
status: 1 - copying, 2 - compressing, 3 - checking
raw____export___home___pbackup-usr___source.raw
skip: 0 file: raw____export___home___pbackup-usr___source.raw.1000 1 2 3 md5 ... ok
skip: 100 file: raw____export___home___pbackup-usr___source.raw.1001 1 2 3 md5 ... ok
skip: 200 file: raw____export___home___pbackup-usr___source.raw.1002 1 2 3 md5 ... ok
skip: 300 file: raw____export___home___pbackup-usr___source.raw.1003 1 2 3 md5 ... ok
skip: 400 file: raw____export___home___pbackup-usr___source.raw.1004 1 2 3 md5 ... ok

Compressing /export/home/pbackup-usr/my_backup/2008_06_03__23-13_52-full/log ... ok
Done
-bash-3.2$



This is what the backup looks like:

-bash-3.2$ ls -alh my_backup/
total 25
drwxr-x--- 3 pbackup-usr other 3 cz 3 23:13 .
drwx------ 3 pbackup-usr other 13 cz 3 23:13 ..
drwxr-x--- 2 pbackup-usr other 13 cz 3 23:14 2008_06_03__23-13_52-full
-bash-3.2$ ls -alh my_backup/2008_06_03__23-13_52-full/
total 68
-rw-r----- 1 pbackup-usr other 36 cz 3 23:13 __tmp_raw
drwxr-x--- 2 pbackup-usr other 13 cz 3 23:14 .
drwxr-x--- 3 pbackup-usr other 3 cz 3 23:13 ..
-rw-r----- 1 pbackup-usr other 0 cz 3 23:14 .all_done
-rw-r----- 1 pbackup-usr other 9 cz 3 23:13 date
-rw-r----- 1 pbackup-usr other 371 cz 3 23:14 log.gz
-rw-r----- 1 pbackup-usr other 372 cz 3 23:14 log~.gz
-rw-r----- 1 pbackup-usr other 95K cz 3 23:13 raw____export___home___pbackup-usr___source.raw.1000.gz
-rw-r----- 1 pbackup-usr other 95K cz 3 23:13 raw____export___home___pbackup-usr___source.raw.1001.gz
-rw-r----- 1 pbackup-usr other 95K cz 3 23:14 raw____export___home___pbackup-usr___source.raw.1002.gz
-rw-r----- 1 pbackup-usr other 95K cz 3 23:14 raw____export___home___pbackup-usr___source.raw.1003.gz
-rw-r----- 1 pbackup-usr other 18K cz 3 23:14 raw____export___home___pbackup-usr___source.raw.1004.gz
-rw-r----- 1 pbackup-usr other 450 cz 3 23:14 raw.md5
-bash-3.2$


Now we restore the backup to a different location.
pbackup first checks the backup's checksums and only then restores the data:

-bash-3.2$ ./pbackup_raw_restore my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw my_restored
pbackup_raw_restore version 0.3

source=my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw
destination=my_restored
Checking source:
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1000.gz ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1001.gz ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1002.gz ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1003.gz ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1004.gz ... ok
DD_RAW_BS=1000000 DD_RAW_COUNT=100
Restoring my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.* to my_restored ...
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1000.gz seek: 0 ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1001.gz seek: 100 ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1002.gz seek: 200 ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1003.gz seek: 300 ... ok
my_backup/2008_06_03__23-13_52-full/raw____export___home___pbackup-usr___source.raw.1004.gz seek: 400 ... ok

ok
-bash-3.2$ ls -lh
total 70
-rw-r--r-- 1 pbackup-usr other 1,0K cz 3 20:30 local.cshrc
-rw-r--r-- 1 pbackup-usr other 1002 cz 3 20:30 local.login
-rw-r--r-- 1 pbackup-usr other 1019 cz 3 20:30 local.profile
drwxr-x--- 3 pbackup-usr other 3 cz 3 23:13 my_backup
-rw-r--r-- 1 pbackup-usr other 400M cz 3 23:18 my_restored
-rwxr-xr-x 1 pbackup-usr root 23K cz 3 22:58 pbackup
-rwxr-xr-x 1 pbackup-usr root 1,5K cz 3 20:30 pbackup_cut
-rwxr-xr-x 1 pbackup-usr root 1,7K cz 3 23:12 pbackup_raw_restore
-rwxr-xr-x 1 pbackup-usr other 23K cz 3 22:29 pbackup_v3.4
-rw------- 1 pbackup-usr other 400M cz 3 22:44 source.raw
-bash-3.2$
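
The restore is the mirror image: each chunk is decompressed and written back with dd, with seek advancing in DD_RAW_COUNT steps (hence the 'seek: 0 / 100 / 200' lines above). Roughly, with made-up names, not the actual pbackup_raw_restore code:

#!/bin/sh
# Conceptual sketch of the RAW restore, not the real script.
DEST=my_restored
BS=1000000       # must match DD_RAW_BS used for the backup
COUNT=100        # must match DD_RAW_COUNT
seek=0
for f in raw_chunk.*.gz; do    # globbing keeps the .1000, .1001, ... order
    # seek is counted in BS-sized output blocks; notrunc keeps earlier chunks
    gzip -dc "$f" | dd of="$DEST" bs="$BS" seek="$seek" conv=notrunc 2>/dev/null
    seek=`expr $seek + $COUNT`
done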


The files are identical:

-bash-3.2$ cmp source.raw my_restored
-bash-3.2$



pbackup can display its configured variables:

-bash-3.2$ ./pbackup -H
pbackup version current

Variables:
DIR_NEW_BACKUP=2008_06_03__23-19_24
DIR_BACKUP=/home/BACKUP
DIR_BR=
DIR_RAW=
DIR_DIRS=
FILE_MD5=md5
FILE_LOG=log
FILE_DIRS=/export/home/pbackup-usr/.pbackup_dirs
FILE_DIRS_EXCLUDE=/export/home/pbackup-usr/.pbackup_dirs_exclude
FILE_BR=/export/home/pbackup-usr/.pbackup_br
FILE_RAW=/export/home/pbackup-usr/.pbackup_raw
FILE_BACKUP_TYPE=dirs_type
FILE_TMP_LOG=__tmp_log
FILE_TMP_DIRS=__tmp_dirs
FILE_TMP_DIRS_EXCLUDE=__tmp_dirs_exclude
FILE_TMP_BR=__tmp_br
FILE_TMP_RAW=__tmp_raw
DEBUG=NO
VERBOSE=NO
BACKUP_DIRS=YES
BACKUP_BR=NO
BACKUP_RAW=NO
USE_BZIP2=NO
USE_DIRS_EXCLUDE=NO
SHOW_DIRS_EXCLUDE=YES
SHOW_FIND_LAST=YES
DD_BR_BS=1000
DD_BR_COUNT=64
DD_RAW_BS=1000000
DD_RAW_COUNT=100
UMASK=0027
INCR_LAST=NO
INCR_NEWER=NO
-bash-3.2$



And all the available options:

-bash-3.2$ ./pbackup -h
pbackup version current

Usage: ./pbackup [OPTIONS]

OPTIONS:
-a backup type = full
-b use bzip2
-B use gzip
-c /my_backup set DIR_BACKUP
-C new_backup set DIR_NEW_BACKUP
-d my_file_dirs.txt set FILE_DIRS
-D "~/bin /opt" list of directories
-e dirs_exclude.txt path to FILE_DIRS_EXCLUDE
-E "*.old" exclude from backup
-g DEBUG=YES
-G DEBUG=NO
-h show help
-H show variables
-m BACKUP_DIRS=YES
-M BACKUP_DIRS=NO
-n 20050720 incremental, newer than 2005-07-20
-N 5 incremental, last 5 days
With this option, exclude file doesn't work!!!

-p br.txt path to FILE_BR
-P "/dev/hda /dev/hdb" list of boot records
-q quiet
-r BACKUP_RAW=YES
-R BACKUP_RAW=NO
-s BACKUP_BR=YES
-S BACKUP_BR=NO
-t my_partitions.txt path to FILE_RAW
-T "/dev/hda4 /dev/hdb2" list of partitions
-v verbose
-V show version
-x USE_DIRS_EXCLUDE=YES
-X USE_DIRS_EXCLUDE=NO
-y SHOW_DIRS_EXCLUDE=YES
-Y SHOW_DIRS_EXCLUDE=NO
-z SHOW_FIND_LAST=YES
-Z SHOW_FIND_LAST=NO

license: CDDL
author: Piotr Jasiukajtis / estibi
-bash-3.2$

ZFS replication with est-repl

Some time ago I wrote a simple tool for remote ZFS replication between two hosts.
I see that tools of this kind still do not ship with the system,
so I decided to release 'est-repl' into the world :)

est-repl - the main tool; replicates a file system incrementally
est-repl.config - the configuration
est-repl_initial - creates a full replica - not incremental
est-repl_initial_CREATE - creates a full replica, destroying the remote file system; run it to initialize the remote file system
est-repl_initial_RECURSIVELY - creates a full replica, destroying the remote file system recursively!
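
Under the hood this is the standard zfs send/receive pattern. A minimal sketch of one incremental pass, with assumed snapshot names (est-repl itself stages the stream in a temporary file and copies it with scp, as the transcript further down shows, and adds logging and error handling):

# zfs snapshot data/zones_data/mail1_maildirs@backup_new
# zfs send -i @backup_old data/zones_data/mail1_maildirs@backup_new | \
    ssh my-remote-user@my-remote-host \
    zfs receive data/zones_data/mail1_maildirs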


The configuration is dead simple:
-bash-3.00$ cat est-repl_0.1/est-repl.config
#! /bin/sh
#
# est-rep.config
#
# AUTHOR: Piotr Jasiukajtis / estibi
# VERSION: 0.1

# destination host and user
DEST_HOST="my-remote-host"
DEST_USER="my-remote-user"

# what to replicate
REPL_FS="my/dataset/to/replicate"

#DEBUG="1"

# EOF


The user on both the local and the remote system must have the right privileges:
-bash-3.00$ grep my-remote-user /etc/user_attr
my-remote-user::::type=normal;profiles=ZFS File System Management
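
The profile can be granted either by editing user_attr directly, as above, or with usermod:

# usermod -P "ZFS File System Management" my-remote-user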


Additionally, communication between the hosts relies on passwordless SSH keys!
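
Setting the keys up is the usual routine; a sketch, assuming OpenSSH-style tools on both ends:

-bash-3.00$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
-bash-3.00$ cat ~/.ssh/id_rsa.pub | \
    ssh my-remote-user@my-remote-host 'cat >> ~/.ssh/authorized_keys'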


And the replication itself looks like this:

-bash-3.00$ ./est-repl ./est-repl.config
Using config file: ./est-repl.config
Used filesystem: data/zones_data/mail1_maildirs
NAME USED AVAIL REFER MOUNTPOINT
data/zones_data/mail1_maildirs 231M 4.77G 49.4M /zones_data/mail1_maildirs

Checking remote snapshots ...
Trying Latest remote snapshot: data/zones_data/mail1_maildirs@backup_080603_18-35-28
data/zones_data/mail1_maildirs@backup_080603_18-35-28
Restoring remote data/zones_data/mail1_maildirs ...
Latest remote snapshot: data/zones_data/mail1_maildirs@backup_080603_18-35-28
Creating snapshot ...
Creating incremental snapshot ...
Sending snapshot to host remote-host ...
inc_snap_080603_18-5 100% |*********************************************************************************************************************************| 2888 KB 00:00
Receiving snapshot ...
receiving incremental stream of data/zones_data/mail1_maildirs@backup_080603_18-55-47 into data/zones_data/mail1_maildirs@backup_080603_18-55-47
received 2.82Mb stream in 2 seconds (1.41Mb/sec)
Deleting temporary files ...
OK, done

You can add it to cron so that the file system is replicated every 5 minutes, for example.
Solaris cron does not understand the */5 shorthand, so the minutes have to be listed explicitly:

0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/est-repl /config.repl

This is an early version; there are still plenty of rough edges and the like.
Please use it at your own risk.