dmraid fails to read Promise RAID sector count larger than 32 bits
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Baltix | New | Undecided | Unassigned |
dmraid (Fedora) | New | Undecided | Unassigned |
dmraid (Ubuntu) | Triaged | Medium | Unassigned |
Bug Description
I have two AMD SB7xx motherboards and tried two cases.
I use RAID0 (1.5TB x 3 = 4.5TB) created by the BIOS (SB7x0).
I partitioned it into two arrays: 2.0TB (A) and 2.5TB (B).
-------
raid-A 2.0TB: Windows OK, Linux OK — full capacity OK
raid-B 2.5TB: Windows OK (all), Linux NG*1 — only 300GB
-------
*1 = Ubuntu sees only 300GB; Fedora too.
Ubuntu x64 / Fedora 13 x64, using dmraid.
description: updated
tags: added: 2tb dmraid
summary: fakeraid cannot use over 2TB raid0 → dmraid cannot use over 2TB raid0
Mitch Towner (kermiac) wrote : Re: dmraid cannot use over 2TB raid0 | #1 |
affects: ubuntu → dmraid (Ubuntu)
Danny Wood (danwood76) wrote : | #2 |
I think your issue may be caused by the fact that dmraid cannot handle two separate arrays on one disk set.
The best way to do it is to have the RAID0 set span the entire disk set (all 4.5TB) and then partition it into smaller chunks.
Please try this first!
Nishihama Kenkowo (hitobashira) wrote : | #3 |
I already did that. I tried a single RAID array last week.
In that case, Linux (fedora/
so I split it into two arrays.
Thanks.
Danny Wood (danwood76) wrote : | #4 |
Ok,
Could you please output the result of the following command from a live session:
sudo dmraid -ay -vvv -d
Can windows see the 2.5TB drive ok?
Danny Wood (danwood76) wrote : | #5 |
Also what RAID controller is it?
Danny Wood (danwood76) wrote : | #6 |
I've just had another thought.
Are you trying the 64-bit version of Ubuntu?
32-bit addressing will only go up to 2.2TB so it might be worth trying 64-bit Ubuntu instead.
2.5TB - 2.2TB = 300GB (Sound familiar?)
But dmraid will not handle two separate arrays on one disk.
If 64-bit sees the array just fine you should rearrange the array so it's a single complete 4.5TB set.
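Just to show where the 2.2TB figure comes from, here is a rough sketch of the arithmetic (nothing dmraid-specific, just a 32-bit sector count with 512-byte sectors):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 2^32 sectors of 512 bytes is the most a 32-bit sector count can address. */
    uint64_t sectors = 1ULL << 32;
    uint64_t bytes   = sectors * 512;
    printf("%llu bytes (~%.1f TB)\n", (unsigned long long)bytes, bytes / 1e12);
    return 0;
}

That comes out to about 2199023255552 bytes, i.e. roughly 2.2TB.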
Luke Yelavich (themuso) wrote : Re: [Bug 599255] Re: dmraid cannot use over 2TB raid0 | #7 |
I also believe for disks that size you need to use a GPT partition table.
Nishihama Kenkowo (hitobashira) wrote : Re: dmraid cannot use over 2TB raid0 | #8 |
Additional test:
I tried another distro yesterday.
The CentOS 5.5 DVD recognized the full capacity of the second array, 2.5TB.
The CentOS installer told me "initialize this 2.5TB so you can use it", but I did not,
because there is a 2.5TB NTFS volume made by Windows XP on it and I need a backup first.
I was glad to see this message.
The PC won't be touched for about 24 hours because all arrays are being backed up now for the test.
Nothing can be done right now.
Please wait for me (backing up my PC).
It is all for the test.
my fake raid: Asus M4A78-EM1394 onboard, SB7x0 (ATI/Promise)
distro: Ubuntu 10.04, Fedora 13, CentOS 5.5, all x64 editions.
http://
others: Windows XP 32-bit, Windows 7 64-bit Ultimate (both OSes can use the full capacity, 2.0TB & 2.5TB,
without any trouble.)
Danny Wood (danwood76) wrote : | #9 |
Well we need to determine which software the bug is in.
How are you determining that Ubuntu and Fedora can't see the full amount?
E.g. what program are you using for partitioning?
Could you please post the output of (in ubuntu and centos):
sudo dmraid -ay -vvv -d
This will help us!
Danny Wood (danwood76) wrote : | #10 |
I have just been looking at the sources for the centos dmraid package for the pdc. (Promise controller)
The only difference is a RAID10 patch, but that won't affect you as you are not using it.
I am inclined to believe the bug is in whatever program you are using for partitioning.
The dmraid debug output (previous post) will shed more light on this!
Nishihama Kenkowo (hitobashira) wrote : | #11 |
I backed up everything, almost 3TB.
I tried the test on CentOS again.
Oh, sorry, everyone.
On CentOS I had misread the size in the message. CentOS had initialized only about 0.3TB of the 2.5TB second array.
CentOS initialized only 286079MB of the 2.5TB.
Every Linux distro shows the same situation for me.
I paste the output of dmraid -ay -vvv -d:
pdc_cdfjcjhfhe: 1st array, 2.0TB
pdc_cdgjdcefic: 2nd array, 2.5TB
sudo dmraid -ay -vvv -d (ubuntu 10.4 x64)
WARN: locking /var/lock/
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
DEBUG: not isw at 1500301908992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 1500300827136
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: pdc metadata discovered
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
DEBUG: not isw at 1500301908992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 1500300827136
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
DEBUG: not isw at 1500301908992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 1500300827136
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: not found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: not found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: not found pdc_cdgjdcefic
DEBUG: _find_set: not found pdc_cdgjdcefic
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: not found pdc_cdgjdcefic
NOTICE: added /dev/sdc to RAID set "pdc_cdfjcjhfhe"
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: found pdc_cdgjdcefic
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: found pdc_cdgjdcefic
NOTICE: added /dev/sda to RAID set "pdc_cdfjcjhfhe"
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdgjd...
Nishihama Kenkowo (hitobashira) wrote : | #12 |
It is a screenshot of the AMD RAIDXpert software on XP.
http://
I'll be very happy if it can help you.
Danny Wood (danwood76) wrote : | #13 |
OK well the debug stuff all looks fine.
Could you also post the output of `dmraid -s` (this will list all disk sizes and status seen by dmraid)
What software are you trying to partition with?
And how are you working out how much disk they are seeing?
Also can you post the output of an fdisk list for both arrays.
So:
fdisk -l /dev/mapper/
fdisk -l /dev/mapper/
Danny Wood (danwood76) wrote : | #14 |
The AMD RAID control panel has nothing to do with linux.
To debug the issue I need the output of the commands I have asked for.
Nishihama Kenkowo (hitobashira) wrote : | #15 |
sudo fdisk -l /dev/mapper/
GNU Fdisk 1.2.4
Copyright (C) 1998 - 2006 Free Software Foundation, Inc.
This program is free software, covered by the GNU General Public License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
Error: ディスクの外側にパーティション (Partition outside the disk!)
fdisk -l /dev/mapper/
GNU Fdisk 1.2.4
Copyright (C) 1998 - 2006 Free Software Foundation, Inc.
This program is free software, covered by the GNU General Public License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
Disk /dev/mapper/
255 heads, 63 sectors/track, 243152 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
Warning: Partition 5 does not end on cylinder boundary.
/dev/mapper/
Warning: Partition 6 does not end on cylinder boundary.
/dev/mapper/
Warning: Partition 7 does not end on cylinder boundary.
/dev/mapper/
Warning: Partition 8 does not end on cylinder boundary.
$ sudo dmraid -s
*** Active Set
name : pdc_cdfjcjhfhe
size : 3906249984
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
*** Active Set
name : pdc_cdgjdcefic
size : 585891840
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
sudo dmraid -r
/dev/sdc: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sdb: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sda: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
There is no 2nd array (pdc_
sudo dmraid -ay
RAID set "pdc_cdfjcjhfhe" already active
RAID set "pdc_cdgjdcefic" already active
ERROR: dos: partition address past end of RAID device
ERROR: dos: partition address past end of RAID device
RAID set "pdc_cdfjcjhfhe1" already active
RAID set "pdc_cdfjcjhfhe2" already active
RAID set "pdc_cdfjcjhfhe5" already active
RAID set "pdc_cdfjcjhfhe6" already active
RAID set "pdc_cdfjcjhfhe7" already active
RAID set "pdc_cdfjcjhfhe8" already active
gparted results png
http://
http://
Nishihama Kenkowo (hitobashira) wrote : | #16 |
>What software are you trying to partition with?
On Linux, fundamentally I use "gparted" or the "ubuntu installer" for resizing.
>And how are you working out how much disk they are seeing?
This is how I look at my partitions:
on Fedora, just type
$ gparted
on Ubuntu
$ gparted /dev/mapper/
On Ubuntu, if I just type gparted, I see /dev/sda,b,c, not the RAID array.
Nishihama Kenkowo (hitobashira) wrote : | #17 |
I tested Acronis Disk Director 10.0.
Its Linux CD-ROM boots kernel 2.4.34.
DD does not support GPT.
So I partitioned the 2nd array as 2TB EXT2 with DD.
XP & Win7 can read/write this EXT2 partition,
but Ubuntu cannot see it.
Danny Wood (danwood76) wrote : | #18 |
Right.
The bug is definitely in dmraid. (The disk size reported by dmraid is wrong by 32 bits, probably due to truncation.)
Could you post a metadata dump please? This will allow me to explore the metadata and see if something is different from what dmraid expects.
To do a metadata dump run the following command:
dmraid -rD
In your current working directory (in the terminal) a directory will be created labelled dmraid.pdc; could you please tar (archive) this directory and attach it to this bug report.
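To illustrate the truncation I suspect, here is a rough sketch using the sizes from your dmraid -s output above (illustrative only, not the actual dmraid code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t real_sectors = 4880859136ULL;          /* ~2.5TB array, 512-byte sectors */
    uint32_t truncated    = (uint32_t)real_sectors; /* keeps only the low 32 bits */
    printf("real:      %llu sectors (~%.1f TB)\n",
           (unsigned long long)real_sectors, real_sectors * 512.0 / 1e12);
    printf("truncated: %u sectors (~%.0f GB)\n",
           truncated, truncated * 512.0 / 1e9);
    return 0;
}

The low 32 bits of the real count come out to exactly the 585891840 sectors (~300GB) that dmraid reported for the second array.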
Nishihama Kenkowo (hitobashira) wrote : | #19 |
thank you danny
root@phantom:~# sudo dmraid -rD
/dev/sdc: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sdb: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sda: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
root@phantom:~#
cannot dump?
Danny Wood (danwood76) wrote : | #20 |
It dumps to a directory in your current working directory without saying anything.
So for example if I run it in the terminal I get this output (I have jmicron and intel raids):
danny@danny-
/dev/sda: isw, "isw_bgafaifadd", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdc: jmicron, "jmicron_HD2", stripe, ok, 625082368 sectors, data@ 0
/dev/sdb: isw, "isw_bgafaifadd", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdd: jmicron, "jmicron_HD2", stripe, ok, 625082368 sectors, data@ 0
danny@danny-
if I run an ls there are two directories:
danny@danny-
dmraid.isw dmraid.jmicron
danny@danny-
These directories contain the metadata information (you will just have one pdc directory).
Nishihama Kenkowo (hitobashira) wrote : | #21 |
Danny Wood (danwood76) wrote : | #22 |
Annoyingly the metadata only contains the information for the first raid set (which is fine of course). The other set will be another metadata block.
We can dump this metadata manually but we need to know where it's located.
To do this I will patch a version of dmraid that will output these locations and upload it to my ppa a bit later on.
I will let you know when this is done.
Danny Wood (danwood76) wrote : | #23 |
Hi,
I have patched dmraid to show the metadata locations in the debug output (hopefully)
This can be found in my ppa https:/
Update the dmraid packages from my ppa and then run `dmraid -ay -d -vvv` again and post the output.
This should hopefully display the metadata locations that we can then dump from.
Nishihama Kenkowo (hitobashira) wrote : | #24 |
Nishihama Kenkowo (hitobashira) wrote : | #25 |
>patch < dmraid_
>patching file README.source
> ........
>patching file dmraid-activate
>patch: **** File dmraid is not a regular file -- can't patch
Where did I make a mistake?
dmraid -V
dmraid version: 1.0.0.rc16 (2009.09.16)
dmraid library version: 1.0.0.rc16 (2009.09.16)
device-mapper version: 4.15.0
Nishihama Kenkowo (hitobashira) wrote : | #26 |
I can not find dmraid_
Nishihama Kenkowo (hitobashira) wrote : | #27 |
root@phantom:/sbin# ls -l dmraid
-rwxr-xr-x 1 root root 26891 2010-07-03 03:15 dmraid
I do not understand this well because I rarely use patch. It is difficult for me to apply the patch.
If I could get a .deb package, or the whole patched source code, I would be glad.
Then I could just do ./configure, make, make install.
Danny Wood (danwood76) wrote : | #28 |
Hi,
Unfortunately the package is still waiting to be built by the Ubuntu servers.
It should be complete in 7 hours from now, it seems there is quite a queue for building.
You can check the progress by looking on the ppa page (https:/
To install my ppa just run `sudo add-apt-repository ppa:danwood76/
Then do a `sudo apt-get update` then a `sudo apt-get upgrade` (after the package has been built by Ubuntu)
Once the updates have been installed post the output of `dmraid -ay -d`
thanks!
Nishihama Kenkowo (hitobashira) wrote : | #29 |
Nishihama Kenkowo (hitobashira) wrote : | #30 |
Oh, I think I must wait for the build to finish.
dmraid -V
dmraid version: 1.0.0.rc16 (2009.09.16) shared
dmraid library version: 1.0.0.rc16 (2009.09.16)
device-mapper version: 4.15.0
Danny Wood (danwood76) wrote : | #31 |
Hi,
The 64-bit version has now been built.
So you should be able to upgrade.
To check the installed version you can use dpkg, the program version will always remain the same.
dpkg -p dmraid | grep Version
Once upgraded it should output:
danny@danny-
Version: 1.0.0.rc16-
Then please post the output of
`sudo dmraid -ay -d -vvv`
Nishihama Kenkowo (hitobashira) wrote : | #32 |
- dmraid-ay-d-vvv.txt Edit (7.3 KiB, text/plain)
# dpkg -p dmraid | grep Ver
Version: 1.0.0.rc16-
root@phantom:
Danny Wood (danwood76) wrote : | #33 |
Hmmm. It didn't quite output what I wanted, sorry about that.
I have made another patched version which is more verbose and should show each meta location it tries (whether it finds it or not).
Unfortunately there is a bit of a wait in the ppa build queue at the moment.
The new version is 1.0.0.rc16-
The status of the build can be found here:
https:/
I will be away for tomorrow but it would be good if you could post the "dmraid -ay -d -vvv" output again once the package has updated.
Thanks!
Nishihama Kenkowo (hitobashira) wrote : | #34 |
Nishihama Kenkowo (hitobashira) wrote : | #35 |
A note:
Currently, by just typing gparted, I can see /dev/mapper/
My current partitions:
1st array (2TB):
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
2nd array:
Yesterday I partitioned the 2nd array into 2 parts as GPT on Win7 x64.
I cannot see /dev/mapper/
I can see 279.37GiB (blank space) in gparted.
Another note:
This may not matter, but some of it might be useful, so I note it.
Sometimes there is a strange situation in /dev/mapper/
/dev/mapper/
/dev/mapper/
In the case where a "p" is added, the ubuntu and kubuntu installers fail for sure.
So, e.g. on /dev/mapper/
and I edited grub.cfg manually, changed the UUID, and did some other checks.......
Thus I could boot Primary 4 on the 1st array.
Danny Wood (danwood76) wrote : | #36 |
- Script which dumps pdc metadata using dd Edit (270 bytes, application/x-tar)
The version of gparted in my ppa doesn't rely on kpartx like the repository version. It should leave dev names alone, but the repository version seems to screw them up, sometimes adding a "p".
The next version of dmraid will leave the p in there; there is a discussion on this in this bug: https:/
In addition to this, ubiquity (the installer) doesn't seem to be able to repartition dmraid drives at all. It's best to create the partitions using gparted and install without modifying the partition table.
Back to the bug!
The new output gives me exactly what I want to know! (finally)
I've written a script which dumps the data at those locations and then compresses them.
Danny Wood (danwood76) wrote : | #37 |
To use the script, open a terminal, make a clean directory to work in and place the dump-pdc-
Make the script executable and then run it.
chmod a+x dump-pdc-
./dump-
It will ask you for your password as dd will require root permissions.
Once the script has finished you will be left with a metadata.tar.gz file in that directory.
Please upload this as this is the metadata I require.
Thanks!
Nishihama Kenkowo (hitobashira) wrote : | #38 |
Danny Wood (danwood76) wrote : | #39 |
Yep that's perfect.
The second metadata chunk is there for me to investigate.
I will let you know when I find a solution.
Thanks!
Nishihama Kenkowo (hitobashira) wrote : | #40 |
I'm relieved. I'll wait patiently.
Thank you,all.
summary: dmraid cannot use over 2TB raid0 → dmraid fails to read Promise RAID sector count larger than 32 bits
tags: added: patch
tags: removed: 2tb dmraid patch
Changed in dmraid (Ubuntu):
status: New → Triaged
importance: Undecided → Medium
Changed in dmraid (Ubuntu):
status: Triaged → Fix Released
Changed in dmraid (Ubuntu):
status: Fix Released → Triaged
(71 comments hidden)
HenryC (henryc) wrote : | #112 |
I have been doing some testing with Danny's patch, and it seems something is still missing... The patch works fine, but the sector counts in the metadata don't quite add up, and I still cannot get the array to work.
I did some calculations based on the disk size, and it seems with the 8TB array the sector count in the metadata is 1024 sectors less than what it should be. The disk size without a partition table is 7629395MB, which would be 15625000960 sectors, but according to the metadata the sector count is 15624999936...
I feel like there is some offset or rounding missing, but it seems odd that it would only be an issue with larger arrays.
Phillip Susi (psusi) wrote : | #113 |
How did you determine the disk size?
HenryC (henryc) wrote : | #114 |
Sorry about the sector counts, I did the calculations again, and it seems that the sector count in the metadata is probably correct. I got the disk size in megabytes from windows disk manager, and calculated the sector count from that, but since the disk size is rounded to megabytes and the sector size is 512B, the sector count can be off by about one megabyte, which is 2048 sectors.
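A quick sketch of that rounding window, using the numbers above (assuming 512-byte sectors):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t meta_sectors = 15624999936ULL;  /* sector count from the metadata */
    uint64_t reported_mib = 7629395ULL;      /* size shown by Windows, whole MiB */
    /* 1 MiB = 2048 sectors of 512 bytes, so a size rounded to whole MiB only
     * pins the true sector count down to within about 2048 sectors. */
    printf("metadata size: %.1f MiB\n", meta_sectors / 2048.0);   /* 7629394.5 */
    printf("%llu MiB exactly: %llu sectors\n",
           (unsigned long long)reported_mib,
           (unsigned long long)(reported_mib * 2048));            /* 15625000960 */
    return 0;
}

So the metadata value (7629394.5 MiB) is within rounding distance of the 7629395MB that Windows reports.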
Now I feel like I am doing something wrong when I try to read the disks, since the size seems to be correct, but I cannot access any partition on the array. I tried parted, but it only says "unrecognised disk label", and I tried manually running kpartx, but it doesn't detect any partitions.
Phillip Susi (psusi) wrote : | #115 |
What does dmsetup table show?
HenryC (henryc) wrote : | #116 |
# dmsetup table
pdc_bdfcfaebcj: 0 15624999936 striped 4 256 8:0 0 8:16 0 8:32 0 8:80 0
Phillip Susi (psusi) wrote : | #117 |
It appears that on smaller arrays, the pdc metadata is in a sector near the end of the drive, but on the larger ones it is at the beginning. Since the metadata is at the start of the drive, that should require adding some offset before the first raid stripe, which dmraid does not seem to have done.
Danny Wood (danwood76) wrote : | #118 |
Looking back I think this was the issue Nishihama Kenkowo had with the original patch.
Sorry if you are already working on this offset issue but I thought I would add some thoughts.
Looking through the dmraid code I cannot see where it would add an offset.
Would the offset simply be the metadata size of 4 sectors or 2kB?
Is it possible to simulate this offset with kpartx? I seem to remember an offset option when mounting disk images.
HenryC (henryc) wrote : | #119 |
I tried to look into calculating the offset, but if I understand the metadata detection code correctly, it seems that is not the problem I am having. The metadata for my array is found within the first loop in pdc_read_metadata, as an offset of end_sectors, so I assume it is at the end of the disk.
Danny Wood (danwood76) wrote : | #120 |
If you have created a correct GPT then kpartx should find them.
Does dmraid detect the correct RAID layout?
Ie stride size, count, etc.
You need to investigate the partitioning on the disk. Make sure your data is backed up, as you are likely to lose the partitioning here.
Dump the current GPT to a file (the first 17kB of the array in total, I think), then recreate the GPT using gparted or gdisk with the same partition layout and dump it again.
Take a look at the files and try to analyse the GPT, also post both files here.
Phillip Susi (psusi) wrote : | #121 |
According to the .offset files in your metadata it was found at offset 0, or the start of the disk. Are you sure this is not where it is at?
HenryC (henryc) wrote : | #122 |
- diskdumps.tar.gz Edit (10.0 KiB, application/x-tar)
Sorry for the late response, I haven't had access to my computer over the weekend.
I dumped the first 17kB of the array with the formatting from windows, and after formatting it with gparted. It would seem the partition table from windows is offset further into the disk than the one created by gparted. I am guessing the partition tables start at 0x200 for gparted, 0x800 for the table created in windows (I am not familiar with the GPT format). Both dumps are attached.
The metadata is on sectors 3907029105 to 3907029109.
Danny Wood (danwood76) wrote : | #123 |
Does the gparted version work in Ubuntu?
It doesn't appear to have a protective MBR as in the GPT spec but this may not be an issue.
It appears that Windows believes the LBA size of the drive is 2048 (0x800) bytes whereas Ubuntu thinks it is 512 bytes (0x200), as the GPT header is located at LBA 1.
I am unsure where the LBA size comes from.
Phillip is it read from the metadata?
Phillip Susi (psusi) wrote : | #124 |
That is really strange. I did not think Windows could handle non 512 byte sector devices. There does not appear to be any known field in the pdc header that specifies the sector size. It could be that it just uses 2k for anything over 2TB. Actually, I wonder if it uses whatever sector size would be required for MBR to address the whole thing? So maybe it goes to 1k for 2-4 TB, then 2k for 4-8 TB?
Henry, can you dump the first few sectors of the individual disks?
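As a purely speculative sketch of that idea: pick the smallest power-of-two sector size that keeps the sector count within 32 bits (the sector_size_for helper and the sizes here are just for illustration, not anything from the pdc metadata):

#include <stdint.h>
#include <stdio.h>

/* Guess: the Windows pdc driver picks the smallest power-of-two logical
 * sector size (starting at 512) that keeps the sector count within 32 bits. */
static unsigned sector_size_for(uint64_t bytes)
{
    uint64_t ss = 512;
    while (bytes / ss > 0xFFFFFFFFULL)
        ss *= 2;
    return (unsigned)ss;
}

int main(void)
{
    const uint64_t TB = 1000ULL * 1000 * 1000 * 1000;
    uint64_t sizes[] = { 2 * TB, 3 * TB, 5 * TB, 8 * TB };
    for (int i = 0; i < 4; i++)
        printf("%llu TB -> %u byte sectors\n",
               (unsigned long long)(sizes[i] / TB), sector_size_for(sizes[i]));
    return 0;
}

That rule would give 512 bytes up to ~2TB, 1k up to ~4TB and 2k up to ~8TB.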
HenryC (henryc) wrote : | #125 |
- diskdumps2.tar.gz Edit (10.0 KiB, application/x-tar)
I dumped the first 6 sectors of each individual disk, both with windows formatting and dmraid formatting. I can't make much out of the data, but hopefully it's helpful...
Phillip Susi (psusi) wrote : | #126 |
That confirms that the metadata is not at the start of the disk. It looks like the problem is just the sector size. Could you try recreating the array such that the total size is around 3 TB and see if that gives a sector size of 1k?
HenryC (henryc) wrote : | #127 |
I created a 3TB array, and it does indeed use a sector size of 1024 bytes. I also tried a 4TB and a 5TB array to verify your theory, and it seems to be correct. The 4TB array is still using a sector size of 1024 bytes, while the 5TB array used 2048.
Danny Wood (danwood76) wrote : | #128 |
That is interesting.
I have been doing various searches online and can't find any other references to windows doing this.
Are you using 64-bit windows?
I am just setting up a virtual machine with a rather large virtual drive to see if I can replicate.
HenryC (henryc) wrote : | #129 |
64-bit windows 7, yes.
Danny Wood (danwood76) wrote : | #130 |
Ok,
After some testing I think I can confirm that the sector size is coming from the pdc driver and not windows.
All the drives I created of various sizes with windows and gparted show up in both operating systems and always have a sector size of 512.
So we need to change the sector size advertised by dmraid to accommodate this; what is odd is that the metadata sector count is still in 512-byte sectors, just to confuse things.
Danny Wood (danwood76) wrote : | #131 |
I can't see where dmraid advertises its sector size!
Phillip do you have any idea?
I did find a thread where someone described the same symptoms of large arrays on the promise raid controller and the sector counts:
http://
(Phillip you commented on this thread and in the end they created 2 x 2TB arrays instead of 1 x 4TB)
Phillip Susi (psusi) wrote : | #132 |
You contradicted yourself there Danny. If they always have a sector size of 512 bytes then we wouldn't have anything to fix. You must have meant that the larger arrays have larger sector size.
And yea, I can't see where you set the sector size, so I posted a question to the ataraid mailing list yesterday about it.
Danny Wood (danwood76) wrote : | #133 |
Sorry Phillip if I wasn't clear, what I meant to say was that with virtual drives in both virtualbox and qemu windows 7 created a GPT with a 512 bytes per sector size no matter the drive size.
So I concluded that it must be the Promise RAID driver itself that creates the larger sector size which Windows uses, as opposed to Windows creating this itself. So whatever changes are made to dmraid would have to be specific to the pdc driver.
However, I do not have a Promise RAID chipset to test larger arrays with in real life, but the evidence from Henry and the other thread I found indicates that this is the Promise RAID driver's behaviour.
Phillip Susi (psusi) wrote : | #134 |
Oh yes, of course... I thought it was a given that this is pdc specific behavior.
Greg Turner (gmt) wrote : | #135 |
This bug is ancient, and perhaps nobody cares anymore, but I've figured out a bit more about where we are left with respect to this.
dmraid userland always assumes that the sector size is 512. It is a hard-coded constant value.
Meanwhile, in kernel land, dm devices always map their sector sizes, both logical and physical, to the logical sector size of their underlying devices.
Perhaps in order to deal with this discrepancy, there is code in dmraid userland to ignore any drive whose sector size is not 512. That code doesn't get triggered, as in this case the problem is that Promise wants to virtualize the sector size, as they do in their scsi miniport driver for windows.
Check out this:
https:/
If that's right, we might be able to work around this whole mess, having our dual-boot cake and eating it, too, by creating multiple volumes of size less than 2TB, keeping MBR on them (as linux does not grok GPT-partitioned dynamic disks) and using LDM to piece them together.
For my part, looking at the state the dmraid code and Promise metadata are in, I'm disinclined to rely on it at all; I'm just going to give up on fully functional dual-boot, use md-raid, and an emulated NAS if I need access to my other-system data from Windows.
That stated, I guess, to solve the problem fundamentally, in linux, we'd either need to extend dmraid to support emulated, metadata-based sector sizes, both in the kernel and the userland code-bases, or to implement some hack to change the logical geometry of the physical devices before setting up these arrays (but see https:/
It's hard to see anyone putting that kind of effort into the increasingly marginalized dm-raid framework so I wouldn't hold my breath...
Phillip Susi (psusi) wrote : | #136 |
Linux understands GPT just fine, but ldm *is* "dynamic disks", so if you tried to use that to glue them back together, then linux would not understand it.
(1 comment hidden)
Vertago1 (vertago1) wrote : | #138 |
I believe I am affected by this bug, but I wanted to check to see if I am having the same issue.
I have an amd 990X chipset which uses SB950, according to http://
I have two 2TB disks in RAID0 which windows was able to see and partition with GPT.
sudo dmraid -r:
/dev/sdb: pdc, "pdc_ejdejgej", stripe, ok, 1758766336 sectors, data@ 0
/dev/sda: pdc, "pdc_ejdejgej", stripe, ok, 1758766336 sectors, data@ 0
Ubuntu doesn't see the correct volume size.
sudo gdisk /dev/mapper/pdc_ejdejgej:
GPT fdisk (gdisk) version 0.8.8
Warning! Disk size is smaller than the main header indicates! Loading
secondary header from the last sector of the disk! You should use 'v' to
verify disk integrity, and perhaps options on the experts' menu to repair
the disk.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
Warning! One or more CRCs don't match. You should repair the disk!
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: damaged
*******
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
*******
Phillip Susi (psusi) wrote : | #139 |
If you have a pdc volume that is over 2TiB, then yes.
(1 comment hidden)
Vertago1 (vertago1) wrote : | #141 |
I have setup a build environment for dmraid and will start looking through it to get an idea of whether or not I could contribute a patch. Any advice on where to start or on what documentation would be useful would be appreciated.
Phillip Susi (psusi) wrote : | #142 |
I'm not sure why you can't build it, but the part of the source of most interest is pdc.c. The problem is that Promise has never provided specifications for the format, so it was reverse engineered. The other problem is that it looks like the Windows driver pretends the disk has a larger sector size when you go over 2 TiB, and the kernel device-mapper driver does not have a way to change the sector size, so the kernel would need to be patched.
Your best bet is to simply avoid using volumes over 2 TiB.
Vertago1 (vertago1) wrote : | #143 |
- M4A79XTD_EVO_1.7tb.hex Edit (4.6 KiB, text/plain)
Well, I figure it might be useful to start collecting samples of metadata from different arrays using the pdc part of dmraid. I have two machines with different chipsets; one has a 1.7TB striped volume, the other a 3.7TB striped volume.
I created these dumps by running:
sudo dmraid -rD /dev/sda
cd dmraid.pdc
sudo cat sda.dat | hexdump > /tmp/result.hex
Vertago1 (vertago1) wrote : | #144 |
Vertago1 (vertago1) wrote : | #145 |
I was able to build the dmraid packages with Danny's patch: https:/
After installing them I am able to see my ntfs volumes. I mounted the largest read only and I was able to read the files ok. The largest partition is under 2TB though.
Gparted gives an error saying invalid argument during seek on /dev/sda. If I tell it cancel it seems to work ok after that.
Is there a problem with this patch that prevents us from submitting it to upstream?
I am working on getting a grub2 entry to work for chainloading windows.
Danny Wood (danwood76) wrote : | #146 |
Hi Vertago1,
Yes, the patch appeared to work; we merged it into the Ubuntu dev packages and it worked for some people.
The sector size was still an issue in some setups, as Windows appeared to use both 512 and 1024 byte sector sizes.
However, once we hit the release we had quite a few people reporting non-functioning RAID setups, as the additional bytes I chose were obviously used for something else.
Upstream dmraid doesn't accept patches. It seems that most people who start off booting using dmraid eventually migrate to a pure Linux mdadm setup. Add to that mdadm being more feature-complete and also supporting Intel Matrix RAID metadata, and dmraid is not really required any more except for a few odd chipsets.
David Burrows (snadge) wrote : | #147 |
- fixed pdc large array support patch Edit (3.3 KiB, text/plain)
It's been 2 years, 8 months, 20 days since Danny Wood last posted in this thread. Just quickly, really appreciate your efforts attempting to fix this problem, without even having the hardware. That's dedicated.
I've just set up a 2x4TB RAID1 mirror in Windows, which of course leads me to this thread. Good news, with a patch to Danny's patch, my raid mirror detects and appears to be working. My pre-existing 1TB raid1, continues to function as it did before.
I will re-upload the patch (with a different patch index number to avoid confusion with the original), which includes my 1 line fix, that allows the 4TB mirror to detect, activate and work as expected.
- unsigned pdc_sectors_max = di->sectors - div_up(
+ uint64_t pdc_sectors_max = di->sectors - div_up(
pdc_sectors_max was 32bit, and overflowing, which caused the pdc_read_metadata function to fail to find the metadata offset from the end of the disk.
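A minimal sketch of the overflow (simplified; di_sectors and metadata_gap here are illustrative stand-ins, not the actual pdc.c variables):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative disk size: a 4TB member disk with 512-byte sectors. */
    uint64_t di_sectors   = 7814037168ULL;
    unsigned metadata_gap = 4;                            /* placeholder offset */
    unsigned bad  = (unsigned)di_sectors - metadata_gap;  /* 32-bit: high bits lost */
    uint64_t good = di_sectors - metadata_gap;            /* 64-bit: correct */
    printf("32-bit end-of-disk offset: %u\n", bad);
    printf("64-bit end-of-disk offset: %llu\n", (unsigned long long)good);
    return 0;
}

With the 32-bit variable the computed offset wraps around and the end-of-disk search looks in the wrong place; with uint64_t it lands where the metadata actually is.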
I thought I might also use the opportunity to clear up some confusion with regards to some people having difficulty finding a partition table or failing to mount their existing raid setups.
AMD RAIDXpert (pdc format) allows you to choose a logical sector size: 512, 1024, 2048 or 4096 bytes. In Windows, this configures the drive's logical sector size to match what you chose at the RAID's creation time. This is presumably contained within the metadata.
Page 106 of the user manual alludes to why you might want to choose a non default sector size, as it affects the maximum LD migration size. Linked for convenience:
https:/
dmraid seems to only support 512 byte logical sectors. If we could read the logical sector size from the metadata, couldn't we then just set the logical sector size at the device mapper node's creation time? This way the partition table should line up when you use f(g)disk/gparted etc.
In the meantime, just make sure you choose the default 512 byte logical sectors, if you want to share RAID arrays between Windows and Linux.
Phillip Susi (psusi) wrote : | #148 |
You should bear in mind that fakeraid puts your data at risk. In the event of a crash or power failure, some data can be written to one disk and not the other. When the system comes back up, a proper raid system will copy everything from the primary to the secondary disk, or at least the parts of the disk ( if you have a write intent bitmap ) that were dirty at the time of the crash, and only allow reads from the primary disk until that is complete. Fake raid does neither of these, so which disk services a read request is a toss up so the system might read the old data on one disk or the new data on the other disk, and this can flip flop back and forth on a sector by sector basis, causing all sorts of filesystem corruption.
Due to this issue being brought to IRC #ubuntu I did some background research to try to confirm Danny's theory about sector-size.
So far the best resource I've found in the Promise Knowledge base (kb.promise.com) is:
This page contains the following table:
For this logical drive size    Select this sector size
Up to 16 TB                    4096 bytes (4 KB)
Up to 8 TB                     2048 bytes (2 KB)
Up to 4 TB                     1024 bytes (1 KB)
Up to 2 TB                     512 bytes (512 B)
The page intro says:
"This application note deals with a specific application for VTrak M-Class and E-Class in a Windows 2000/WinXP 32-bit OS environment."
From fragments in other Promise KB articles I do think this is the formula the Promise FastTrak Windows drivers follow, so it might be a basis for a permanent and reliable fix.
Ghah! After pressing "Post Comment" also found this firmer confirmation of (part of) the algorithm:
"Solution: From 0-2 TB the sector size is 512k. From 2-4 TB the sector size is 1028k. Then from 4 + it changes the sector size to 2048k thats why the information is displayed in as unallocated. Following this parameters when expanding should make expanding the array work."
https:/
Martina N. (tyglik78) wrote : | #151 |
Can I help with solving this?
I have this problem now - I would like to create fake raid10 4TB (dual boot).
Thank you for taking the time to report this bug and helping to make Ubuntu better. This bug did not have a package associated with it, which is important for ensuring that it gets looked at by the proper developers. You can learn more about finding the right package at https://wiki.ubuntu.com/Bugs/FindRightPackage. I have classified this bug as a bug in dmraid.
When reporting bugs in the future please use apport, either via the appropriate application's "Help -> Report a Problem" menu or using 'ubuntu-bug' and the name of the package affected. You can learn more about this functionality at https://wiki.ubuntu.com/ReportingBugs.