Test environment:
Linux host (SY480 Gen10) + 3PAR storage

1. Configure BIOS
1) Restore defaults using the BIOS menu
2) Set the workload profile to HPC or Virtualization - Max Performance

3) VSP Configuration ----------------
BIOS/Platform Configuration (RBSU) > System Options > Serial Port Options >
- Embedded Serial Port > COM1
- Virtual Serial Port > COM2

- BIOS Serial Console and EMS
-> Select BIOS Serial Console Port -> change "Auto" to "Virtual Serial Port"
-> Select BIOS Serial Console Emulation Mode -> VT100+ (default)
-> Select BIOS Serial Console Baud Rate -> 115200 (default)
-> EMS Console -> Disabled (default)
---------------------------------------

2. Connect via SSH to the iLO IP address, then start the virtual serial port
hpiLO-> vsp

3. Install RHEL 7.5 (Server with GUI + Compatibility Libraries)
- Enable kdump - 384 MB (set manually)

4. Configure Network
# systemctl stop NetworkManager
# systemctl disable NetworkManager

# vim /etc/sysconfig/network-scripts/ifcfg-ens3f0
cf. In ifcfg-ens3f0, assign the IP address via DHCP or statically (Ethernet port)
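A minimal static-IP ifcfg-ens3f0 might look like the sketch below. The IPADDR/GATEWAY/DNS1 values are placeholders, and the sketch writes to a temp file so it is side-effect free; on the host the target is /etc/sysconfig/network-scripts/ifcfg-ens3f0.

```shell
# Write a static-IP config for ens3f0 (address values are placeholders).
# CFG stands in for /etc/sysconfig/network-scripts/ifcfg-ens3f0 in this sketch.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
TYPE=Ethernet
DEVICE=ens3f0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.10
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
EOF
grep -q '^BOOTPROTO=static' "$CFG" && echo "ifcfg written"
```

After writing the real file, restart the network service (`systemctl restart network`) to apply it.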

5. Install SPP2019.03.1

6. Configure VSP from Host
# vim /etc/default/grub

Append "console=tty0 console=ttyS1,115200" to the end of the GRUB_CMDLINE_LINUX line:
GRUB_CMDLINE_LINUX="crashkernel=384M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet console=tty0 console=ttyS1,115200"
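If you prefer to script the edit instead of using vim, a sed sketch is shown below. It operates on a temp copy so it can be tried safely; on the real host the target is /etc/default/grub, and the sample GRUB_CMDLINE_LINUX content is illustrative only.

```shell
# Append serial-console parameters to GRUB_CMDLINE_LINUX.
# GRUB_FILE is a temp copy here; point it at /etc/default/grub on a real host.
GRUB_FILE=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="crashkernel=384M rhgb quiet"' > "$GRUB_FILE"
# Insert the console args just before the closing quote of that line.
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 console=tty0 console=ttyS1,115200"/' "$GRUB_FILE"
cat "$GRUB_FILE"
# On the host, follow with grub2-mkconfig to regenerate grub.cfg.
```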

These commands can be skipped here; they are run again in step 7 (duplicate) -----
# grub2-mkconfig -o /boot/grub2/grub.cfg
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

# reboot
------------------------------------

7. Configure Kdump
A. Mount DVD Media
# mkdir /media/odd
# mount -o loop /dev/cdrom /media/odd

B. Edit configuration
# vim /etc/yum.repos.d/RH7-DVD.repo

It may look like below ----------------------
[RHEL7-DVD]
name=RHEL7-DVD
baseurl=file:///media/odd
enabled=1
gpgcheck=0
------------------------------------------
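Steps A-B can also be done in one scripted pass; a sketch writing the repo definition (to a temp path here, so it runs anywhere; the real target is /etc/yum.repos.d/RH7-DVD.repo):

```shell
# Write the DVD repo definition. REPO points at a temp file in this sketch;
# on the actual host use /etc/yum.repos.d/RH7-DVD.repo.
REPO=$(mktemp)
cat > "$REPO" <<'EOF'
[RHEL7-DVD]
name=RHEL7-DVD
baseurl=file:///media/odd
enabled=1
gpgcheck=0
EOF
grep -q '^baseurl=file:///media/odd' "$REPO" && echo "repo written"
```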

C. Refresh the repository list
# yum repolist all

D. Check rpm
# rpm -qa | grep kexec-tools

If it is not installed:
# yum install kexec-tools

E. Add Kernel Parameters
# vim /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=384M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet console=tty0 console=ttyS1,115200 nmi_watchdog=1"

# grub2-mkconfig -o /boot/grub2/grub.cfg
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

F. Add sysctl Parameters
# vim /etc/sysctl.conf
kernel.unknown_nmi_panic = 1
kernel.panic_on_unrecovered_nmi = 1
kernel.panic_on_io_nmi = 1
kernel.panic_on_oops = 1
kernel.panic = 1
kernel.sysrq = 1
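The same settings can go in a sysctl drop-in file instead of editing /etc/sysctl.conf directly; a sketch (the file name 99-kdump-nmi.conf is an arbitrary choice, and a temp path stands in for /etc/sysctl.d):

```shell
# Write the panic/NMI settings to a sysctl drop-in. SYSCTL_FILE is a temp
# path here; on the host it would be e.g. /etc/sysctl.d/99-kdump-nmi.conf.
SYSCTL_FILE=$(mktemp)
cat > "$SYSCTL_FILE" <<'EOF'
kernel.unknown_nmi_panic = 1
kernel.panic_on_unrecovered_nmi = 1
kernel.panic_on_io_nmi = 1
kernel.panic_on_oops = 1
kernel.panic = 1
kernel.sysrq = 1
EOF
# On the host: sysctl -p "$SYSCTL_FILE" applies the values without a reboot.
echo "wrote $(grep -c '=' "$SYSCTL_FILE") settings"
```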

G. Check Service
# systemctl status kdump.service

# reboot

8. Trigger crash
A. Via the magic SysRq key
# echo c > /proc/sysrq-trigger

B. via Virtual NMI Button from iLO

C. Check that the crash dump was created
# ll /var/crash
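After a successful crash, kdump writes a vmcore into a timestamped directory under /var/crash. A sketch that picks the newest directory and checks for the vmcore (a temp directory with a fabricated entry stands in for /var/crash here):

```shell
# Locate the newest crash directory and confirm a vmcore is present.
# CRASH_DIR stands in for /var/crash in this sketch; the entry is fabricated.
CRASH_DIR=$(mktemp -d)
mkdir -p "$CRASH_DIR/127.0.0.1-2019-06-01-12:00:00"
touch "$CRASH_DIR/127.0.0.1-2019-06-01-12:00:00/vmcore"
latest=$(ls -t "$CRASH_DIR" | head -1)
[ -e "$CRASH_DIR/$latest/vmcore" ] && echo "vmcore found in $latest"
```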

^^^ for Local volume
-----------------------------------------------------------
v v v for Remote volume (Boot from SAN)

Synergy 480 Gen10 - FCoE port of Server Profile

1. Configure/Check BIOS for BFS
---------------------------------
Configure Network Boot
System Utilities > System Configuration > BIOS/Platform Configuration (RBSU) > Network Options > Network Boot Options > PCIe Slot Network Boot
set Network Boot on the desired port(s)

Configure FC/FCoE Scan Policy
System Utilities > System Configuration > BIOS/Platform Configuration (RBSU) > Storage Options > Fibre Channel/FCoE Scan Policy
set Scan Configured Targets Only

Configure UEFI POST Discovery Mode
System Utilities > System Configuration > BIOS/Platform Configuration (RBSU) > System Options > Boot Time Optimizations > UEFI Post Discovery Mode
set Auto or Force Full Discovery

Check
Device Hardware Configuration
MBA Configuration > Legacy Boot Protocol
set FCoE
FCoE Boot Configuration > FCoE General Parameters > Boot to FCoE Target
set Enabled
FCoE Boot Configuration > FCoE General Parameters > HBA Boot Mode
check Enabled

FCoE Boot Configuration > FCoE Target Parameters
set Connect # Enabled and Boot LUN #
---------------------------------

2. Install RHEL 7.5 to the BFS / remote volume (Server with GUI + Compatibility Libraries)
- Network & Hostname > ens3f2 (BFS port) > General > Automatically connect to this network when it is available

3. Check the FCoE port
# cat /etc/sysconfig/network-scripts/ifcfg-fcoe0
TYPE=Ethernet
DEVICE=ens3f2
ONBOOT=yes

All other items are the same as in the Local volume configuration.

 

Posted by 스쳐가는인연

On a Gen10 system running RHEL 6.x, the RESTful Interface Tool (ilorest) fails to run starting about a day after installation

Symptom
Starting about a day after installing the RESTful Interface Tool, every invocation fails with an error
(regardless of which ilorest arguments are used)

# ilorest -v
Cannot open self /usr/sbin/ilorest or archive /usr/sbin/ilorest.pkg


Cause
Caused by a RHEL 6 feature called prelink.

prelink can be thought of as a kind of cache that shortens/accelerates application startup:
it pre-computes the dynamic-link information between shared libraries and the programs that use them,
so that at launch a program spends less time resolving and loading its libraries.

As development environments diversified, unexpected errors surfaced (ilorest is one such case), and because systems have become sufficiently(?) fast, the benefit is now relatively small; recent releases no longer use it (per Red Hat).

The launch failure can be resolved by following the method Red Hat documents.

Questions about Prelinking in Red Hat Enterprise Linux
https://access.redhat.com/solutions/61691
Prelink
http://people.redhat.com/jakub/prelink.pdf

prelink is supported in RHEL5 and RHEL6. In RHEL7, prelink is not supported. prelink was deprecated and disabled by default because it no longer offers any significant benefits and is not stable enough across supported architectures.

What is prelinking
Prelinking is an operation in which prelink modifies binaries and shared libraries so that much of the dynamic-linking work is done ahead of time

Why is this default in Red Hat Enterprise Linux?
Prelinking significantly improves application start-up times (often on the order of a 5x speedup). It shortens the time the OS spends dynamically linking a program (since the work is done ahead of time).

How to prevent prelink from prelinking specific executables
To exclude a program from prelinking, open /etc/prelink.conf and add
-b /usr/bin/program
if you wish to keep prelink from operating on /usr/bin/program.


Environment
HPE ProLiant Gen10 system running RHEL 6 (the reported environment)

Solution
Action Item.
What: Keep prelink away from ilorest (add ilorest to the prelink blacklist)
Why: To resolve the ilorest launch failure
What if/Next: TBD
To do.
1) Break the link between prelink and ilorest (exclude it from prelinking)
# echo "-b /usr/sbin/ilorest" > /etc/prelink.conf.d/ilorest.conf
2) Remove ilorest, then reinstall it
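The to-do above can be sketched as a short script. PRELINK_DIR points at a temp directory here so the sketch is harmless to run; on the host the fragment goes in /etc/prelink.conf.d/ilorest.conf, and the reinstall is done with yum.

```shell
# 1) Blacklist ilorest from prelink by dropping a conf fragment.
#    PRELINK_DIR stands in for /etc/prelink.conf.d in this sketch.
PRELINK_DIR=$(mktemp -d)
echo "-b /usr/sbin/ilorest" > "$PRELINK_DIR/ilorest.conf"
# 2) On the host, reinstall so an unprelinked binary is restored:
#    yum -y reinstall ilorest
grep -q '^-b /usr/sbin/ilorest' "$PRELINK_DIR/ilorest.conf" && echo "blacklisted"
```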


serverinfo commands added in HPE RESTful Interface Tool 2.4.x

Check System Health -----------------------
serverinfo --fans
serverinfo --thermals
serverinfo --power

serverinfo --processors --memory
-------------------------------------------
ilorest -d serverinfo --fans
ilorest -d serverinfo --thermals
ilorest -d serverinfo --power

ilorest -d serverinfo --processors --memory
-------------------------------------------

Linux
ilorest list | egrep 'SmartStorageBattery|Status|State|Health'

Windows
ilorest list | findstr "SmartStorageBattery Status State Health"
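The same filter can be wrapped in a small function so it also works on saved output files (health_lines is a hypothetical helper name, not part of ilorest):

```shell
# Filter health-related lines from ilorest output (or any captured text).
# health_lines is a hypothetical helper, not part of ilorest itself.
health_lines() {
  grep -E 'SmartStorageBattery|Status|State|Health'
}
# Usage on a live system: ilorest list | health_lines
sample='Name=battery1
Health=OK
Capacity=96'
echo "$sample" | health_lines
```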


# Location VMware ESXi 6.5/6.7
/opt/smartstorageadmin/ssacli/bin/ssacli

Get Slot Information - Slot x
# ./ssacli ctrl all show status

Check Rebuild progress
./ssacli ctrl [all|slot=x] show config detail | grep -i recov -A10
./ssacli ctrl [all|slot=x] show config | grep -i recov -A10

./ssacli ctrl [all|slot=x] ld [all|n] show
./ssacli ctrl [all|slot=x] ld [all|n] show status

x is specific slot number
n is specific logical drive
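If you only want the percentage, the "show config detail" output can be reduced with a small helper (rebuild_pct is a hypothetical name; feed it captured ssacli text):

```shell
# Extract the rebuild percentage from ssacli "show config detail" output.
# rebuild_pct is a hypothetical helper; pipe ssacli output into it.
rebuild_pct() {
  grep -io 'Recovering, [0-9.]*% complete' | grep -o '[0-9.]*%'
}
# Usage on a host: ./ssacli ctrl all show config detail | rebuild_pct
sample='   Status: Recovering, 52.15% complete'
echo "$sample" | rebuild_pct
```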

e.g.) command outcome
--------------------------------------------------------------
[root@localhost:~] /opt/smartstorageadmin/ssacli/bin/ssacli version

SSACLI Version: 3.40.3.0 2018-12-06
SOULAPI Version: 3.40.3.0 2018-12-06

[root@localhost:~] /opt/smartstorageadmin/ssacli/bin/ssacli ctrl all show config detail | grep -i recov -A10

Status: Recovering, 52.15% complete
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Enabled
Unique Identifier: 600508B1001C8B23FDC6C5AC266B92D9
Logical Drive Label: 010F50FBPEYHB0ARH7503L 420D
Mirror Group 1:
physicaldrive 1I:3:1 (port 1I:box 3:bay 1, SAS HDD, 1 TB, OK)
Mirror Group 2:
physicaldrive 2I:2:1 (port 2I:box 2:bay 1, SAS HDD, 1 TB, Rebuilding)
Drive Type: Data
LD Acceleration Method: Controller Cache

[root@localhost:~] /opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 ld all show

HPE Smart Array P408i-a SR Gen10 in Slot 0 (Embedded)

Array A

logicaldrive 1 (931.48 GB, RAID 1, Recovering, 52.28% complete)

Array B

logicaldrive 2 (447.10 GB, RAID 1, OK)

[root@localhost:~] /opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 ld 1 show

HPE Smart Array P408i-a SR Gen10 in Slot 0 (Embedded)

Array A

Logical Drive: 1
Size: 931.48 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: Recovering, 52.76% complete
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Enabled
Unique Identifier: 600508B1001C8B23FDC6C5AC266B92D9
Logical Drive Label: 010F50FBPEYHB0ARH7503L 420D
Mirror Group 1:
physicaldrive 1I:3:1 (port 1I:box 3:bay 1, SAS HDD, 1 TB, OK)
Mirror Group 2:
physicaldrive 2I:2:1 (port 2I:box 2:bay 1, SAS HDD, 1 TB, Rebuilding)
Drive Type: Data
LD Acceleration Method: Controller Cache
--------------------------------------------------------------

This can also be observed via iLO.

 


---------------------------------------
# Location VMware ESXi 4.0/4.1/5.0
/opt/hp/hpacucli/bin/hpacucli

# Location VMware ESXi 5.1/5.5/6.0
/opt/hp/hpssacli/bin/hpssacli

# Location VMware ESXi 6.5/6.7
/opt/smartstorageadmin/ssacli/bin/ssacli
---------------------------------------

Get Slot Information - Slot x
# ./ssacli ctrl all show status

Check Rebuild progress
# ./ssacli ctrl slot=XX ld all show

or
# ./ssacli ctrl slot=XX show config
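To watch progress until the rebuild finishes, the status command can be polled in a loop. In this sketch the command is injected via STATUS_CMD (an assumption, so the loop can run without ssacli); a stub that reports "OK" stands in for the controller.

```shell
# Poll a status command until it no longer reports "Recovering".
# STATUS_CMD is injected so this sketch runs without ssacli; on a host set
# STATUS_CMD='./ssacli ctrl slot=0 ld all show' and a larger POLL_INTERVAL.
wait_for_rebuild() {
  while eval "$STATUS_CMD" | grep -qi 'Recovering'; do
    sleep "${POLL_INTERVAL:-60}"
  done
  echo "rebuild complete"
}
# Stub standing in for ssacli in this sketch: reports OK immediately.
STATUS_CMD='echo "logicaldrive 1 (931.48 GB, RAID 1, OK)"'
POLL_INTERVAL=0
wait_for_rebuild
```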

Source:
HOWTO: Monitor the rebuild status of a HPE SmartArray in ESXi 5.5
http://blog.jbgeek.net/2016/04/14/howto-monitor-the-rebuild-status-of-a-hpe-smartarray-in-esxi-5-5/

HPE Storage Controller Management (ssacli)
https://be-virtual.net/hpe-storage-controller-management-ssacli/
