Travis Kriza
2004-04-06 00:08:56 UTC
Hello. I'm just getting DRBD set up on some boxes to do some NFS serving,
with a shared volume of about 40 GB. I've got it running on a pair of HP
boxes running Red Hat Enterprise 3 with Ultra Monkey.
First off, I completely missed the sync limits in the drbd.conf file and
was wondering what was taking so long; 40 GB at a 1 MB/s max would take a
while. So I bumped the max up to 100M (the two boxes share a gigabit
switch).
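(For reference, 40 GB at roughly 1 MB/s works out to somewhere around
eleven hours for a full resync, so the default limit was clearly the first
problem.)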
So I reset the sync and noticed it was going at a decent clip, in the
40 MB/s range. I got distracted by another chore, and when I came back to
the machine the rate had dropped to about 500 KB/s. (It seems to do this
not long after getting near the 10 GB mark.) I bumped sync-min up to 10M,
but it's still happening and I'm not quite sure what's going on.
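For reference, the resync rate can be watched in /proc/drbd with something
like the following (the exact output format may vary between DRBD
versions):

    # refresh the syncer progress/rate reported by the DRBD module every 2s
    watch -n 2 cat /proc/drbd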
Each box is also running IDE software RAID, which I know could have some
impact on performance, but I'd expect it to be able to sustain quite a bit
more than 500 KB/s.
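If it helps, a quick dd run along these lines should show what the md
array can sustain on its own (the file path is just an example; adjust it
to wherever the array is mounted):

    # rough sequential write test through the filesystem (~1 GB)
    dd if=/dev/zero of=/data/ddtest bs=1M count=1024
    # rough sequential read test straight from the backing device
    dd if=/dev/md2 of=/dev/null bs=1M count=1024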
Any ideas? Thanks,
Travis
PS... Here is a trimmed version of the drbd.conf file:
#
# drbd.conf example
#
resource drbd0 {
  protocol = C
  fsckcmd  = fsck -p -y

  disk {
    do-panic
    disk-size = 40313912k
  }

  net {
    sync-min    = 10M
    sync-max    = 100M   # maximal average syncer bandwidth
    tl-size     = 5000   # transfer log size; ensures strict write ordering
    timeout     = 60     # unit: 0.1 seconds
    connect-int = 10     # unit: seconds
    ping-int    = 10     # unit: seconds
    ko-count    = 4      # if a block send times out this many times,
                         # the peer is considered dead, even if it still
                         # answers ping requests
  }

  on server-1 {
    device  = /dev/nb0
    disk    = /dev/md2
    address = 192.168.0.1
    port    = 7788
  }

  on server-2 {
    device  = /dev/nb0
    disk    = /dev/md4
    address = 192.168.0.2
    port    = 7788
  }
}