Friday, August 24, 2007

SXCE 70 - gnome-terminal bug fixed

The blurred-cursor bug in gnome-terminal has been fixed in the latest build (70).

I'm starting to use gnome-terminal again instead of mrxvt ...

Thursday, August 23, 2007

Live Upgrade continued... (time for SXCE70)

SXCE70 is already available, so it is worth getting to know it,
in a safe way of course, using Live Upgrade.

My current boot environments:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_xen66                  yes      no     no        yes    -
be1_snv69                  yes      yes    yes       no     -


We will rename the environment that we are going to upgrade,
to keep things tidy:
# lurename -e be0_xen66 -n be0_snv70
Renaming boot environment to .

Changing the name of BE in the BE definition file.
Changing the name of BE in configuration file.
Updating compare databases on boot environment .
Changing the name of BE in Internal Configuration Files.
Propagating the boot environment name change to all BEs.
Boot environment renamed to .


The name has been changed:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_snv70                  yes      no     no        yes    -
be1_snv69                  yes      yes    yes       no     -



In the next step we synchronize the environments.
We will copy the current environment (SXCE69)
over the old one (SXCE66-xen) ...

# time lumake -n be0_snv70

Creating configuration for boot environment .
Source boot environment is .
Determining the split file systems of
.
Determining the merge point of
.
Determining the size and inode count for the split filesystem of
.
Creating boot environment .
Checking for GRUB menu on boot environment .
Saving GRUB menu on boot environment .
Creating file systems on boot environment .
Creating file system for in zone on .
Mounting file systems for boot environment .
Calculating required sizes of file systems for boot environment .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point
.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment .
Creating compare database for file system
.
Updating compare databases on boot environment .
Making boot environment bootable.
Updating bootenv.rc on ABE .
Population of boot environment successful.


Now that we have an identical copy of the environment, we perform the upgrade:

# lofiadm -a /mnt/new/sol-nv-b70-x86-dvd.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt/x/

# time luupgrade -u -n be0_snv70 -s /mnt/x

Copying failsafe kernel from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is
Mounting miniroot at
Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Deleted empty GRUB menu on ABE .
Adding operating system patches to the BE .
The operating system patch installation is complete.
ABE boot partition backing deleted.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file on boot
environment contains a log of the upgrade operation.
INFORMATION: The file on boot
environment contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment is complete.
Installing failsafe
Failsafe install is complete.



We activate the new system:
# luactivate -n be0_snv70

Saving latest GRUB loader.
Generating partition and slice information for ABE
No boot menu exists. Creating new menu file
Generating direct boot menu entries for ABE.
Generating direct boot menu entries for PBE.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Do *not* change *hard* disk order in the BIOS.

2. Boot from the Solaris Install CD or Network and bring the system to
Single User mode.

3. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fufs /dev/dsk/c0d0s3 /mnt

4. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
GRUB menu is on device: .
Filesystem type for menu device: .
Activation of boot environment successful.


We boot into the new system:
# init 6
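
After the reboot it is worth making sure we are really running build 70; the standard checks are shown below (no output captured here, but both /etc/release and uname -v should mention snv_70):

# cat /etc/release
# uname -v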

Wednesday, August 15, 2007

Realtek 8139 GLDv3

Yesterday Garrett D'Amore announced on his blog http://gdamore.blogspot.com/2007/08/stuck-with-rtls-realtek-8139.html
that he has added GLDv3 support to the Realtek 8139 driver in
OpenSolaris (by the way, I asked him for this a while ago...).
GLDv3 brings, among other things, link aggregation and virtualization (IP Instances),
which I have been waiting for for some time ...
The Realtek 8139 is a weak chip, but these cards are in common use and are the cheapest.
You could say they are the "standard" in Poland :)
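
To illustrate why GLDv3 matters here, a rough sketch of what it enables, assuming the cards show up as rtls0 and rtls1 and that a zone called testzone already exists (the interface names, zone name and aggregation key "1" are examples only):

# link aggregation of two 8139 ports (pre-Crossbow dladm syntax)
dladm create-aggr -d rtls0 -d rtls1 1
dladm show-aggr

# IP Instances: give the zone its own exclusive IP stack on one of the ports
zonecfg -z testzone "set ip-type=exclusive; add net; set physical=rtls1; end"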

I am eagerly waiting for the binaries to be officially included in SXCE
(the 8139 driver is still closed source).

DTrace provider for /bin/sh

A few days ago a DTrace provider for /bin/sh appeared.
For now it is experimental and available for build 70.

It can be downloaded from: http://www.opensolaris.org/os/community/dtrace/shells/.

This is particularly important to me because I write most of my things in /bin/sh,
and a DTrace provider can help a lot :)

Anyone who has written larger shell scripts knows how hard debugging the code is, especially when we want to stay portable and write for /bin/sh ;)
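
For comparison, the classic way of tracing a /bin/sh script is set -x, which means editing the script and wading through a flood of output on stderr, while a DTrace provider can observe a running script from the outside. A minimal set -x sketch:

#!/bin/sh
# classic tracing: every expanded command is echoed to stderr
set -x
for f in /etc/*.conf
do
        grep -c "^#" "$f"
done
set +x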

We are waiting for the sources to be officially integrated into OpenSolaris and for a backport to Solaris 10!

Sunday, August 12, 2007

LU to a newer SXCE version

We are updating Solaris Express to a newer version :)

We mount the directory with the ISO images:
# mount /mnt/isos

We create a LOFI device:
# lofiadm -a /mnt/isos/snv69_dvd.iso
/dev/lofi/1

Now we mount this device at /mnt/x
(we are mounting the contents of the DVD with the latest Solaris):
# mount -F hsfs /dev/lofi/1 /mnt/x


We have the following boot environments (BE):
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_xen66                  yes      yes    yes       no     -
be1_snv69                  yes      no     no        yes    -


We upgrade our new BE (the one from the previous post):

# time luupgrade -u -n be1_snv69 -s /mnt/x

Copying failsafe kernel from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is
Mounting miniroot at

Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
...

Now we can take a break, read a book, watch a movie, go shopping, drink 10 coffees ...

...
Upgrading Solaris: 1% completed
...

Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Deleted empty GRUB menu on ABE .
Adding operating system patches to the BE .
The operating system patch installation is complete.
ABE boot partition backing deleted.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file on boot
environment contains a log of the upgrade operation.
INFORMATION: The file on boot
environment contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment is complete.
Installing failsafe
Failsafe install is complete.


The system update completed correctly.
Now it is enough to activate the new boot environment so that it is booted by default when the machine starts:

# luactivate -n be1_snv69

Saving latest GRUB loader.
Generating partition and slice information for ABE
Boot menu exists.
Generating direct boot menu entries for ABE.
Generating direct boot menu entries for PBE.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Do *not* change *hard* disk order in the BIOS.

2. Boot from the Solaris Install CD or Network and bring the system to
Single User mode.

3. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fufs /dev/dsk/c0d0s0 /mnt

4. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
GRUB menu is on device: .
Filesystem type for menu device: .
Activation of boot environment successful.


We reboot the system into the new Solaris Express :)
# init 6

Live Upgrade, XEN

I have been using Xen on Solaris for some time, but it is based on snv66. As of today we already have build 69, so I wanted to synchronize a second BE (boot environment).

I started by creating a new environment:
time lucreate -c be0_xen66 -n be1_snv69 -m /:c0d0s3:ufs
My root file system is on /dev/dsk/c0d0s0; the /dev/dsk/c0d0s3 partition was empty, created specifically for Live Upgrade (the -m option maps mountpoint:device:fs_type for the new BE).

Here is what I got:

Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name expands to device path
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
ERROR: The system must be rebooted after applying required patches.
Please reboot and try again.

Hmm, this looks like the following bugs:
6488829, 6501968

In general this is caused by LU not being able to see which disks I have:
/sbin/biosdev
biosdev: Could not match any!!

So I did a little trick:
cd /sbin/
mv biosdev biosdev_orig

To find out how the system refers to my disk, I used 'format':
# format

It showed me that the disk in this laptop is represented as follows:
/pci@0,0/pci-ide@14,1/ide@0/cmdk@0,0

Now I create the /sbin/biosdev file with the following contents:
cat /sbin/biosdev
#! /bin/sh
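# fake biosdev: always report BIOS disk 0x80 as this machine's device path,
# because the real biosdev cannot match any disk on this box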

echo "0x80 /pci@0,0/pci-ide@14,1/ide@0/cmdk@0,0"

# EOF


I change its permissions to make it executable:
chmod 755 /sbin/biosdev

And I try creating the new BE with Live Upgrade again...
The result is the same, even after a reboot ... (init 6)

So I will try to do it on a 'clean' SXCE, without Xen.

I reboot with 'reboot', because I do not feel like waiting :)

On the clean SXCE both /sbin/biosdev and /sbin/biosdev_orig give identical output:
> /sbin/biosdev
0x80 /pci@0,0/pci-ide@14,1/ide@0/cmdk@0,0
> /sbin/biosdev_orig
0x80 /pci@0,0/pci-ide@14,1/ide@0/cmdk@0,0

It should work now, so we run LU:
lucreate -c be0_xen66 -n be1_snv69 -m /:c0d0s3:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name expands to device path
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named .
Creating initial configuration for primary boot environment .
The device is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name PBE Boot Device .
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Checking for GRUB menu on boot environment .
The boot environment does not contain the GRUB menu.
Creating file systems on boot environment .
Creating file system for in zone on .
Mounting file systems for boot environment .
Calculating required sizes of file systems for boot environment .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point .
Copying.
...

OK, now the second boot environment, named 'be1_snv69', is being created.
The current boot environment we have named 'be0_xen66'.


We must remember that the system is now copying the entire contents of the root partition; this can take a while, especially on a laptop :)

The next messages:
Creating shared file system mount points.
Creating compare databases for boot environment .
Creating compare database for file system .
Updating compare databases on boot environment .
Making boot environment bootable.
Updating bootenv.rc on ABE .
Population of boot environment successful.
Creation of boot environment successful.


At this point we have two BEs created; to make sure, we type:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_xen66                  yes      yes    yes       no     -
be1_snv69                  yes      no     no        yes    -

When we reboot the system we will have additional entries in GRUB, but by default be0_xen66 will still boot.

To change this and boot the system from the new environment (the new partition), we have to run:
# luactivate -n be1_snv69

Saving latest GRUB loader.
Generating partition and slice information for ABE
Boot menu exists.
Generating direct boot menu entries for ABE.
Generating direct boot menu entries for PBE.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Do *not* change *hard* disk order in the BIOS.

2. Boot from the Solaris Install CD or Network and bring the system to
Single User mode.

3. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fufs /dev/dsk/c0d0s0 /mnt

4. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
GRUB menu is on device: .
Filesystem type for menu device: .
Activation of boot environment successful.


To see what has happened, we type lustatus:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0_xen66                  yes      yes    no        no     -
be1_snv69                  yes      no     yes       no     -

lustatus says that after the restart our system in the new BE will be the active one.

To finish this operation correctly we must restart the system with e.g. 'init 6', as the message says:
"You MUST USE either the init or the shutdown command when you reboot."


This way we have made a copy of the system; we can now test and break things as much as we like :)

It is only a copy of the running system; we have not yet upgraded to a newer SXCE version. More about that in the next post ...

Sunday, August 5, 2007

SXCE, ZFS, LU, problems on reboot ...

Those who often use Live Upgrade to update their Solaris Express and have local ZFS pools may sometimes run into a problem after booting the new system.

The problem can appear when we have several ZFS "partitions" (file systems) and one of them is mounted inside another, already mounted one. That sounds a bit confusing, but an example will clear everything up:

- We have the default mount points (we have not changed the ZFS mountpoint property).

- We create a ZFS file system:
zfs create data/export

- We change the mount point from /data/export to /export
zfs set mountpoint=/export data/export

- We create further ZFS file systems
zfs create data/export/home
zfs create data/export/home/antek
zfs create data/export/home/szymon
zfs create data/export/home/witold

- Now we synchronize the other boot environment (BE)
lumake -n be1

- We upgrade the new BE (I assume the DVD with the newer SXCE is mounted at /mnt/dvd)
luupgrade -u -n be1 -s /mnt/dvd

- We activate the new BE
luactivate -n be1

- Now, if we reboot with e.g. 'init 6', after the restart we may have a problem with the svc:/system/filesystem/local:default service.
If we have access to the server over ssh, or we simply do not feel like walking over and logging in locally, we can use a little trick here ..

- We mount the new BE
lumount -n be1

- We go into the directory where it is mounted and remove the directories where ZFS will be mounted. In our example, all the directories under /export (e.g. /.alt1.be1/export); see the sketch after this list.

- We make sure we have removed the empty directories from 'be1' where ZFS will be mounted

- We have to unmount the second BE and reboot into the new system
luumount -n be1
init 6
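
A minimal sketch of the cleanup step above, assuming lumount mounted be1 at /.alt.be1 (use whatever path lumount actually prints) and that the directories really are empty copies; if the reboot has already happened, 'svcs -xv' will show why filesystem/local failed:

lumount -n be1
# rmdir refuses to remove non-empty directories, so real data is safe
rmdir /.alt.be1/export/home/antek /.alt.be1/export/home/szymon \
      /.alt.be1/export/home/witold /.alt.be1/export/home /.alt.be1/export
luumount -n be1
init 6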

Now we should have the new SXCE without any unnecessary problems.