If you have multiple repetitive queries to run against the same database, consider writing their inputs to a working table in that server's database and then running
exec ProcessMyBatch
Of course, you have to create the ProcessMyBatch sproc there as well. I have no idea what the initial process returns, so this may or may not be a good idea.
The point I'm making is to do the heaviest work where it runs fastest, and then query the results shortly afterward. We're doing exactly this right now in a rewrite of our pricing update that runs every month.
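To make that concrete, here's a minimal sketch of the pattern. The table name, columns, and the body of the sproc are placeholders; I don't know your actual schema or what the processing step does:

-- Hypothetical working table on the remote server to stage the batch input.
CREATE TABLE dbo.WorkBatch (
    BatchId   int         NOT NULL,
    ItemKey   varchar(50) NOT NULL,
    Processed bit         NOT NULL DEFAULT 0
);
GO

-- Hypothetical sproc that does the heavy lifting server-side in one set-based pass.
CREATE PROCEDURE dbo.ProcessMyBatch
AS
BEGIN
    SET NOCOUNT ON;
    -- Replace this with the real work; the point is one set-based statement
    -- on the server instead of many round trips from the client.
    UPDATE dbo.WorkBatch SET Processed = 1 WHERE Processed = 0;
END
GO

-- The client bulk-loads dbo.WorkBatch, then just runs:
EXEC dbo.ProcessMyBatch;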
On Thu, Mar 29, 2018 at 1:58 PM, Paul H. Tarver paul@tpcqpc.com wrote:
I'm working on a project that involves issuing multiple queries across several SQL databases and servers. At this point, I'm polling about 30 databases and retrieving data from four tables in each database.
I really have no reason to complain, as my procedure, including retrieving the data synchronously over a VPN, processing the results locally, and then exporting the data to local files, only takes about 6 minutes.
However, I see this as an opportunity to tune my SQL back-end connection, and I was wondering if anyone had any suggestions, say for setting the PacketSize or other changes that might optimize the retrieval of data from the server(s).
Thanks in advance!
Paul
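As for PacketSize: the packet size is normally requested per connection by the client (for example, via a Packet Size keyword in the connection string), but SQL Server also keeps a server-wide default you can inspect and change. A sketch below; the 8192 value is purely illustrative, not a recommendation:

-- Inspect the server-wide default network packet size.
-- (A packet size requested by the client connection overrides this.)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'network packet size (B)';

-- Illustrative only: raise the server default to 8192 bytes. Larger packets
-- can help large result sets at the cost of more memory per connection.
EXEC sp_configure 'network packet size (B)', 8192;
RECONFIGURE;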