Hi All
Has anyone got any suggestions as to where the problem may be in the following scenario?
We have a VFP application using DBCs/DBFs on a Windows server; it has been running fine and fast for years. Various Windows clients (all now Windows 10).
The server was a physical machine but has now been replaced with a virtual machine hosted on an on-premises Hyper-V host.
Most of the 20 clients are still running fine, but a handful (3 identified at the moment) are running really slowly. They are only slow in the VFP application; any other use of the server, like copying files, runs fine. They also seem to be the better-specced machines of the 20. Nothing has changed on the client PCs other than the UNC path where the application looks for the data.
Any suggestions?
TIA
Chris.
Hi Chris,
I had issues with systems where the number of TCP/IP packets that could be exchanged between client and server depended on the version of Windows on the server. The same client would exchange 250 packets per second with Windows Server 2012 R2 but 1,250 with Windows Server 2008 R2. Because VFP requests records one at a time and only reads the next one when the first has been processed, we ended up reading at most one record every 4 ms or every 0.8 ms, respectively. Both were slow, but the new Windows server was considerably slower.
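To put those numbers in perspective, the arithmetic works out like this (a quick sketch; the packet rates are the ones from my measurements above, and the 100,000-row table is just an illustrative figure):

```python
# Back-of-envelope arithmetic for the packet rates mentioned above.
# VFP reads records sequentially, one request at a time, so per-record
# time is dominated by round-trip latency, not bandwidth.

def ms_per_record(requests_per_second: float) -> float:
    """Time per record in milliseconds, assuming one round trip per record."""
    return 1000.0 / requests_per_second

def seconds_for_scan(records: int, requests_per_second: float) -> float:
    """Total time to scan `records` rows at the given request rate."""
    return records * ms_per_record(requests_per_second) / 1000.0

print(ms_per_record(250))               # 4.0 ms/record (250 packets/s case)
print(ms_per_record(1250))              # 0.8 ms/record (1250 packets/s case)
print(seconds_for_scan(100_000, 250))   # 400 s for an illustrative 100k-row scan
```

At 250 round trips per second, even a modest table scan turns into minutes of wall-clock time, regardless of how fast the client or server hardware is.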
My first step would be to run Process Monitor on the client, once on one of the faster machines and once on a slower system. Then look at the timing between ReadFile requests on some table, preferably one where multiple records are read in a row, such as a SCAN loop, a SELECT statement, or a grid control. Look for a significant difference in the duration between two records and check whether this number is roughly constant.
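As a rough sketch of that analysis: if you export the ProcMon trace via File > Save... > CSV, a short script can pull out the gaps between consecutive ReadFile events. The column names ("Time of Day", "Operation", "Path") match a typical ProcMon CSV export but may need adjusting, and the .DBF path filter is my assumption for this scenario:

```python
import csv
from statistics import median

def parse_time(t: str) -> float:
    """Convert a 24h 'Time of Day' string like '11:23:45.1234567' to seconds.
    (Strip any AM/PM suffix first if your export uses a 12-hour clock.)"""
    h, m, s = t.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def readfile_gaps_ms(rows, suffix=".DBF"):
    """Milliseconds between consecutive ReadFile events on matching paths."""
    times = [parse_time(r["Time of Day"]) for r in rows
             if r["Operation"] == "ReadFile"
             and r["Path"].upper().endswith(suffix)]
    return [(b - a) * 1000.0 for a, b in zip(times, times[1:])]

def analyze(csv_path):
    """Summarize ReadFile spacing in a ProcMon CSV export."""
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        gaps = readfile_gaps_ms(list(csv.DictReader(f)))
    print(f"{len(gaps)} gaps, median {median(gaps):.2f} ms, "
          f"max {max(gaps):.2f} ms")
```

Run it once against a trace from a fast client and once from a slow one; if the median gap on the slow clients sits at several milliseconds while the fast ones are well under a millisecond, the per-request round trip is your bottleneck.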
If this doesn't help, Wireshark is a great tool, but it does require some digging into the SMB protocol.
Thanks Christof. Assuming for the moment this was the issue, did you manage to speed it up? Was there a configuration setting on the client that determined how many packets could be exchanged?
-----Original Message----- From: ProfoxTech profoxtech-bounces@leafe.com On Behalf Of Christof Wollenhaupt Sent: 26 October 2021 11:23 To: profoxtech@leafe.com Subject: Re: Slow performance after moving to VM
-- Christof
Hi Chris,
Thanks Christof. Assuming for the moment this was the issue, did you manage to speed it up? Was there a configuration setting on the client that determined how many packets could be exchanged?
No, in this case it was handed over to the IT people. Setting up an extensive virtual network with various Windows servers was on my todo list for a long time, but I never found the time to actually install and configure all these machines.
There are just too many variables that could have an impact. For example, there is SMB signing, where packets are signed and validated; with signing enabled, the certificate infrastructure might have an impact. There are various ways to configure network throughput on the servers as well as on any router and switch in between. There is also a growing number of "smart" solutions that try to optimize network throughput but make it really hard to get repeatable results. On top of that, there might be extra services on the server, from obvious ones like virus scanners to less obvious ones such as filter drivers for distributed file systems.
Then there's the whole topic of oplocks and client-side caching, which impacts performance and stability depending on whether a file is in use, has recently been used, or is open on more than one client machine in the network.
Plus SMB 2.x/3.x is a protocol that dynamically adjusts its behavior based on network performance and quality.
Whatever the reason is, if request latency is the cause of your problem, then the fewer read/write requests you make, the better your application will perform, even when you can't fix the network.
Other reasons might include a larger number of users adding records, modifying memo fields, or updating indexed fields. All of these are operations that require exclusive access in FoxPro and can therefore only be performed by one client at a time; they lock the table header, index header, and memo file header, respectively. That would be an entirely different problem, one that you would notice in ProcMon as longer-than-usual times between the LockFile and the Read/WriteFile lines, as well as in the total number of LockFile requests for each period of using the application when monitored on the server.
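Continuing the ProcMon idea, a small helper can tally how many LockFile requests show up relative to ReadFile/WriteFile in an exported trace (again a sketch; the "Operation" column name follows a typical ProcMon CSV export and may need adjusting):

```python
import csv
from collections import Counter

def operation_counts(rows):
    """Tally ProcMon operations (ReadFile, WriteFile, LockFile, ...) so a
    lock-heavy workload stands out at a glance."""
    return Counter(row["Operation"] for row in rows)

# Typical use against an exported server-side trace:
# with open("trace.csv", newline="", encoding="utf-8-sig") as f:
#     print(operation_counts(csv.DictReader(f)).most_common())
```

If LockFile counts rival or exceed the read/write counts during normal use, header-lock contention rather than raw network latency is the more likely suspect.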
I ran into something like you're describing below back in 2017 (?), but thankfully my client just moved to my newer version of the software, which used a MariaDB (MySQL) backend instead of DBFs. The old DBF version was suffering network/corruption problems for them; interestingly enough, those problems went away when we made the switch. (Ever since Bob Lee's MySQL session at WhilFest in 2002/2003, I've never used DBFs on the backend for things I've built.)
On 10/26/2021 5:04 AM, Chris Davis wrote:
[excessive quoting removed by server]