Slow file transfers between home server and PC over ethernet
I have an HP ProLiant DL385 G5p server which I use to host a personal Debian 9 file server as a VM on a hypervisor. The VM has its own 1Gb/s Ethernet connection to a switch of the same speed, which my regular PC is also connected to. All three devices are capable of running at 1Gb/s.
I previously ran Debian installed directly on the server's disk, with the rest of the network unchanged, and could achieve transfer speeds close to the advertised 1Gb/s. However, since I started running the file server in a VM, transfer speeds are somewhere in the 5MB/s (40Mb/s) range on a good day.
The software I am using to transfer files from my PC (running Windows 10) is called "SFTP Net Drive", which lets me browse the contents of the file server from within Explorer (I didn't want to open a separate tool every time just because Windows doesn't support SFTP natively). When I was running the server directly on the disk, without a hypervisor, I used "WinSCP", which allows multiple (up to 9) simultaneous transfers over the same network. That would saturate the 1000Mb/s connection, and I saw no poor speeds except when transferring really small files (less than 1KB).
I have used iperf to test the connection from my PC to the server (and vice versa, to be sure) and the throughput is close to what it should be, ~1000Mb/s. I have also tested disk write speeds on the server and they also seem fine (I think around 6000MB/s, but I can't quite remember, nor which tool I used to test). There are four 72GB physical disks in RAID 5, which the hypervisor sees as one logical drive. The hypervisor then assigns the VM a logical partition of that drive, which can presumably be split up again, by Debian in my case, using LVM. (You may not need all of that, but it might be useful.)
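For reference, the network test was roughly the following (the exact invocation wasn't kept, so the iperf3 flags and the address below are assumptions rather than the commands actually run):

    # on the Debian VM (server side)
    iperf3 -s
    # on the Windows PC (client side); 192.168.1.10 stands in for the VM's address
    iperf3 -c 192.168.1.10 -t 30
    # repeat with -R to measure the reverse direction
    iperf3 -c 192.168.1.10 -t 30 -R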
Given that the server performed fine before, I believe it's safe to assume this is a software issue or misconfiguration, probably on the Windows side. One possible explanation for the slowness is that Windows seems to transfer only one thing at a time through SFTP Net Drive. Any help in figuring this out and rectifying it will be much appreciated.
Edit: I've found another strange behaviour when transferring files to the server using the software I used to use, WinSCP. When transferring some music files to the server (~50MB each, roughly 300 of them), after all 9 simultaneous connections had been established the transfer rate peaked at 110MB/s, where it stayed for about 20 seconds. It then promptly dropped back to 20-30MB/s and stayed there until the transfer was complete. This leads me to believe there is some kind of buffer that, once saturated, slows the transfer down to the rate at which the data can actually be written to disk. Not sure whether that makes sense, but it seems logical to me.
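If that buffer theory is right, the usual suspect on the Linux side is the guest's write-back page cache. A hedged way to watch it on the Debian VM (standard kernel interfaces, nothing specific to this setup):

    # thresholds (as a percentage of RAM) at which background and blocking write-back kick in
    sysctl vm.dirty_background_ratio vm.dirty_ratio
    # watch dirty and under-write-back data grow and drain while a transfer runs
    watch -n 1 'grep -E "Dirty|Writeback" /proc/meminfo'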
Edit 2: The transfer speeds are just as bad when moving files from the server to my PC, around 3-8MB/s according to Windows.
networking windows-10 sftp gigabit-ethernet file-server
asked Nov 9 '18 at 10:26, edited Nov 23 '18 at 11:32
– James Stone
Google "bufferbloat". It's difficult to find those buffers, they could be for example somewhere in your router.
– dirkt
Nov 23 '18 at 12:17
1 Answer
Did you benchmark the disk speed of your VM? It looks like that to me; especially if you are using QCOW2 images you can get bad speeds: https://serverfault.com/questions/407842/incredibly-slow-kvm-disk-performance-qcow2-disk-files-virtio or https://serverfault.com/questions/675704/extremely-slow-qemu-storage-performance-with-qcow2-images. Just google "slow qcow2" and see.
– Abdurrahim
answered Nov 11 '18 at 15:36
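A generic way to benchmark disk speed inside the guest without the page cache inflating the numbers is fio; this is a sketch rather than anything from the thread, and the file name and sizes are placeholders:

    # sequential write, 1 GiB, direct I/O
    fio --name=seqwrite --filename=/tmp/fio-test --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio
    # sequential read of the same file
    fio --name=seqread --filename=/tmp/fio-test --rw=read --bs=1M --size=1G --direct=1 --ioengine=libaio
    rm -f /tmp/fio-test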
Sorry for the slow response, I've been quite busy lately. Anyway, I have tested the read and write speeds of the VM and got some decent results. I tested read speeds using hdparm -Tt /dev/xvda5 and got 2000+MB/s cached reads and around 100MB/s buffered disk reads. I tested write speeds with dd using a few different data sizes and got over 100MB/s with small sizes (> 10KB) but pretty rubbish results with 1MB: under 50MB/s.
– James Stone
Nov 23 '18 at 11:26
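A rough reconstruction of the tests described in that comment (the device path comes from the comment; the dd block sizes, counts and output path are assumptions):

    # cached vs. buffered read throughput of the VM's disk
    sudo hdparm -Tt /dev/xvda5
    # write tests at a few block sizes, bypassing the page cache so the figures reflect the disk
    for bs in 10k 100k 1M; do
        dd if=/dev/zero of=/tmp/dd-test bs=$bs count=200 oflag=direct conv=fsync
        rm -f /tmp/dd-test
    done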
Looks like we have found one of the issues with your VM, but it does not explain everything else. The next step is to check two things: try creating a ramdisk inside your VM to use as the share, then check CPU usage when you hit the network limit with the ramdisk. I fear you also have poor virtualization of your network device routing.
– Abdurrahim
Nov 23 '18 at 15:58
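A minimal sketch of that ramdisk test, assuming the SFTP share can simply be pointed at a directory (mount point and size are placeholders):

    # create a 1 GiB RAM-backed filesystem inside the Debian guest
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk
    # copy files to /mnt/ramdisk over SFTP and watch CPU usage in the guest while it runs
    top
    # clean up afterwards
    sudo umount /mnt/ramdisk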
I am using a program on my PC called XCP-ng Centre with performance data readouts. I will include screenshots of this in an edit.
– James Stone
Nov 23 '18 at 17:46
Again, sorry for the slow response. I've not yet been able to do this, but a thought did cross my mind: I am currently using RAID 5 on the server, and according to [this site](www.raid-calculator.com/default.aspx), with four 0.072TB disks in RAID 1+0 I could get potentially higher disk performance. Would this be worth testing? I would probably have to wipe the server and start afresh, though.
– James Stone
Nov 29 '18 at 17:49
No; although it would improve speed, it won't resolve your actual problem. What you are struggling with is not host performance; it's a virtualization issue. Most likely I would find something wrong if I checked this XCP-ng (a preconfigured Xen hypervisor of sorts). If you have time, try something else, such as using libvirt directly, or, for the easiest route, VirtualBox or a VMware trial; you would get good speeds once configured correctly. Also be sure to check what XCP-ng itself offers: if you can, test different network virtualization options, raw (preallocated) disks, etc., and see.
– Abdurrahim
Nov 29 '18 at 19:18
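One quick, generic check in that direction (not something suggested in the thread) is whether the Debian guest is actually using Xen paravirtual drivers rather than emulated devices; the interface name below is an assumption:

    # Xen PV front-end modules loaded in the guest
    lsmod | grep xen
    # driver backing the guest NIC; xen_netfront indicates PV networking
    ethtool -i eth0
    # block devices named xvd* indicate the PV block front-end is in use
    lsblk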