Thursday, June 25, 2009

So it looks like the RAID is now running; the next step is an LVM2 layer on top of it.
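For reference, a rough sketch of what that step usually involves, assuming the array is /dev/md3 and using "data" as the volume group name (both match the pvdisplay output below); the logical volume name and size here are only placeholders:

# put an LVM physical volume on the RAID6 array (assuming it is /dev/md3)
pvcreate /dev/md3
# create the volume group "data" on top of it
vgcreate data /dev/md3
# carve out a logical volume -- the name "storage" and the size are placeholders
lvcreate -L 8T -n storage data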


I thought it would be easy, but...

/dev/sdc1: lseek 10448441413827256320 failed: Invalid argument
Incorrect metadata area header checksum
--- Physical volume ---
PV Name /dev/md3
VG Name data
PV Size 9.09 TB / not usable 3.25 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 2384157
Free PE 2384157
Allocated PE 0
PV UUID 2EQH1i-BIYU-BDvH-sRb3-yruB-V15f-ZCTywN
"/dev/sdc1" is a new physical volume of "8.98 EB"
--- NEW Physical volume ---
PV Name /dev/sdc1
VG Name
PV Size 9.06 EB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 2;??-ƈ+-?m-s?b3-?ruB-V15f-ZCTywN
"/dev/sdn1" is a new physical volume of "64.01 PB"
--- NEW Physical volume ---
PV Name /dev/sdn1
VG Name
PV Size 64.01 PB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 2EQI1i-BHYU-BEvH-sRb3-yruB-V15f-ZCTywN


It looks like I just got promoted to a 64 petabyte server on disk sdn1 and 9.06 exabytes on sdc1.

Guess I got more hard drive space than I ordered...

So it looks like LVM is adding my individual disks to the volume group scan instead of only my RAID6 device. The solution is to filter out the related disks in /etc/lvm/lvm.conf:

filter = [ "r|/dev/cdrom|","r|/dev/sd*|" ]
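The filter patterns are regular expressions, so a slightly tighter variant that anchors the patterns and explicitly accepts the md devices could look like the line below -- the accept rule is my own guess here, not part of the original setup:

filter = [ "a|^/dev/md.*|", "r|/dev/cdrom|", "r|^/dev/sd.*|" ]

After editing lvm.conf, running pvscan again should then list only /dev/md3.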

Time for the next issue...
