Can RAID 1 have more than two drives?
Recently I had a discussion with a teacher of mine. He was claiming that you could set up RAID 1 with five drives and that the data would be mirrored over all of these drives.
I told him a RAID 1 with 5 drives wouldn't work like that. It would be a RAID 1 with two drives and would use the other three drives as hot spares.
He also said that RAID 6 is identical to RAID 5 but you can place all the parity checks on the same drive. I thought RAID 6 was a RAID 5-like solution where two drives were used for parity.
Who's right, then?
raid raid-1 raid6
asked Oct 19 '12 at 8:20
Mad_piggy
4 Answers
You can use as many drives as you want for RAID 1. They will all be mirrored, written to at the same time, and be exact copies of each other. The fact that there isn't a card that does more than x drives doesn't mean anything about the concept: RAID 1 is just mirroring your disks, and you can have as many mirrors as you want.
Also, your view of RAID 5/6 is erroneous. The parity is distributed across all the drives; there isn't a dedicated drive for it. Compared to RAID 5, RAID 6 adds an additional parity block, which is also distributed.
You can find more info on Wikipedia.
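With Linux software RAID, for instance, an n-way mirror is a single command. A minimal sketch, assuming five blank disks that happen to be named /dev/sdb through /dev/sdf (device names are hypothetical):
# Create a 5-way RAID 1: every member holds a complete copy of the data
mdadm --create /dev/md0 --level=1 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# Confirm that all five members are active mirrors
mdadm --detail /dev/md0
cat /proc/mdstat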
I never had a RAID card that could handle RAID 1 with more than 2 drives, so... And what is wrong with my RAID 6? I was trying to say that RAID 5 has one drive for its parity, and RAID 6 has 2 drives for parity. As Wikipedia says: RAID 5: block-level striping with distributed parity. RAID 6: block-level striping with double distributed parity.
– Mad_piggy
Oct 19 '12 at 8:56
I'll update my answer.
– m4573r
Oct 19 '12 at 9:03
I've seen an example of mdadm (Linux software RAID) using 8 drives in a RAID 1, or rather the first small partition on 8 drives as a RAID 1. This stored the system drive. The big partition on each drive was then grouped into a RAID 6 array. I've not seen a Linux distro that will boot from a software RAID 5 or 6.
– BeowulfNode42
Dec 16 '13 at 12:00
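A rough mdadm sketch of the layout described in that comment, assuming each of the 8 disks already has a small first partition and a large second partition (the sd[a-h] device and partition names are hypothetical):
# 8-way RAID 1 across the small first partitions, used for the system
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1
# RAID 6 across the large second partitions, used for data
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[a-h]2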
There are two possibilities:
- use all 5 drives for the RAID 1, with every drive an exact copy of the others
- mirror (for example) 3 drives and use the other two disks as spares (if one of the first 3 disks fails, the 4th will take its place)
I prefer the 2nd solution (with 2+1 drives or 3+1); see the sketch below.
Your assumption about RAID 6 is wrong :)
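A minimal mdadm sketch of the second option (a 3-way mirror plus hot spares), assuming hypothetical devices /dev/sdb through /dev/sdf:
# 3 active mirrors plus 2 hot spares; md rebuilds onto a spare automatically if a mirror fails
mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf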
I've worked with some LenovoEMC PX4-something NAS units which had 4 or 12 disks. The first 50 GB of each drive was used as a RAID 1 for the OS, and the rest of each disk was for user data.
So it had a 4- or 12-way RAID 1 for the root drive, and a small swap file on that drive. So yes, it's totally possible and workable, and used in production by commercial solutions.
As long as at least one disk still worked, it would boot and come up on the network. If you changed all the disks, the NAS needed to boot off a USB drive to reinstall the base OS.
Here's the 4-bay NAS rebuilding after a disk swap, so no sdd:
root@px4-300r-THYAQ42E9:/nfs/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid1 sde1[4] sdc1[1] sda1[3] sdb1[2]
20964480 blocks super 1.1 [4/3] [UUU_]
[===========>.........] recovery = 58.1% (12188416/20964480) finish=7.2min speed=21337K/sec
md1 : active raid5 sde2[4] sdc2[1] sda2[3] sdb2[2]
5797200384 blocks super 1.1 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
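For reference, the rebuild shown above kicks off once the replacement disk is handed back to md. A hedged sketch, assuming the new disk appeared as /dev/sdd and was partitioned with the same layout as the others:
# Give the replacement disk's partitions back to the two arrays; md then starts the recovery shown above
mdadm --manage /dev/md0 --add /dev/sdd1
mdadm --manage /dev/md1 --add /dev/sdd2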
The /proc/mdstat output was found in an old email - the devices are long gone to the hardware afterlife, so I can't run an hdparm or bonnie test easily, sorry.
– Criggie
Dec 16 at 23:15
There is a lot of misunderstanding of RAID levels.
JBOD is Just a Bunch of Drives: you simply see multiple independent drives in the same box. It is one of the most commonly confused non-RAID terms.
Years ago, some RAID manufacturers could not implement a true JBOD with their RAID engine, so they labelled SPAN (BIG) as JBOD.
RAID 1 is a mirror RAID and it needs TWO HDDs to mirror each other, whereas CLONE means multiple duplicate HDDs carrying the same volume, for example DAT Optic's eBOX and sBOX (hardware RAID). Hardware RAID boxes generally offer RAID 0, 1, 5, CLONE, Large, and hot spare.
As for RAID 5/6, the total parity space is equal to one drive's capacity for RAID 5 and two drives' capacity for RAID 6.
The most common misconception is that the parity data is located on a dedicated drive (or drives). That is incorrect: the parity space is divided equally among the RAID member HDDs.
Example: in a RAID 5 of five HDDs, each drive has 1/5 of its space allocated to parity, whereas in a RAID 6 of five HDDs, each drive has 2/5 of its space allocated to parity.
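As a quick back-of-the-envelope check (with hypothetical 4 TB drives), the usable capacity comes out the same regardless of which disks the parity blocks physically sit on:
# 5 drives of 4 TB each; parity totals one drive's worth (RAID 5) or two drives' worth (RAID 6)
echo $(( (5 - 1) * 4 ))   # RAID 5 usable space in TB -> 16
echo $(( (5 - 2) * 4 ))   # RAID 6 usable space in TB -> 12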
For those who want to argue that there is a dedicated parity drive (or drives): assume there is, and ask what happens to the RAID if that dedicated parity drive fails.
The RAID cannot be rebuilt, because the data needed to rebuild it is no longer there.
Note that your last comment saying that RAID5 with a dedicated parity drive could not recover from a drive failure is incorrect. Even if RAID5 was implemented with the parity information entirely on one drive, it would still be able to recover from the failure of any one drive. If your argument was true, then that would mean that with distributed parity, 1/5th of your data would be unrecoverable when any drive failed, because you lost the parity information that was on 1/5th of that drive. That argument is just wrong.
– Makyen
Dec 17 at 3:18
"RAID5 with a dedicated parity drive" is RAID 4. The difference between RAID 4 and RAID 5 is that RAID 4 has a dedicated parity drive and RAID 5 has parity distributed across all disks. If the dedicated parity drive fails on a RAID 4 configuration, the parity can be reconstructed from the data, just as would happens to all the parity lost on a failed drive of a RAID 5 array.
– David Schwartz
Dec 17 at 5:51