Good Morning!
Have a very wide DBF with about 8 memo fields. It started acting strangely: if I tried to re-index or PACK MEMO, it would seem to get so far and then freeze!
Found a utility that seemed to repair the file. The only issue is that the DBF is 710 MB and the associated memo file is 1.2 GB! Yikes. Have run a PACK MEMO (with no issues) and there is no change in the size of the memo file, which I am assuming is bloated.
1.) Is the memo file really bloated?
2.) Going to look at the individual memo fields and see if any one of the records has an unusually large memo field.
3.) Open to any other suggestions that you wise people could recommend....
TIA/Regards, Desmond
--- StripMime Report -- processed MIME parts --- multipart/alternative text/plain (text body -- kept) text/html ---
When you SELECT * into a temp table, what is the size of the files? You'll know quickly if you have bloat.
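A minimal sketch of that check (the table name and temp path here are just examples):

```foxpro
* Copy everything to a scratch table, then compare the memo files on disk.
USE bigtable SHARED
SELECT * FROM bigtable INTO TABLE c:\temp\temptest
USE IN SELECT("temptest")

LOCAL laOld[1], laNew[1]
IF ADIR(laOld, "bigtable.fpt") > 0 AND ADIR(laNew, "c:\temp\temptest.fpt") > 0
   * Column 2 of the ADIR() result array is the file size in bytes
   ? "Original FPT:", laOld[1,2], "  Rewritten FPT:", laNew[1,2]
ENDIF
```

If the rewritten FPT comes out much smaller, the original is carrying dead memo blocks.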
HTH, Tracy
-----Original Message-----
From: ProfoxTech [mailto:profoxtech-bounces@leafe.com] On Behalf Of Desmond Lloyd
Sent: Monday, October 05, 2020 9:49 AM
To: profoxtech@leafe.com
Subject: My memo field(s) Are Bloated......
Fascinating. Original size of the DBF: 710448; original size of the FPT: 1276848.
After SELECTing * from the table: DBF = 710448, FPT = 923354.
A drop in the size of the FPT of 353494. Could I rename the "selected" FPT and use it? I tried a PACK MEMO and there was no change in the size of the FPT.
Regards, Desmond
On Mon, 5 Oct 2020 at 09:06, Tracy Pearson tracy@powerchurch.com wrote:
If it is a free table, you could. If it is in a DBC, you may have trouble. If I remember correctly, there is a backlink in the table's header connecting it to the DBC.
You may need to ZAP the original and APPEND FROM the free table. However, if you have an AutoInc integer field, you'll have problems.
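Something along these lines, assuming no AutoInc fields and a good backup of everything first (all names are placeholders):

```foxpro
* ZAP the original in place (so it keeps its DBC membership),
* then pull the rows back in from the rebuilt free table.
SET SAFETY OFF
USE mydbc!bigtable EXCLUSIVE
ZAP
APPEND FROM c:\temp\temptest
USE
```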
HTH, Tracy
-----Original Message-----
From: ProfoxTech [mailto:profoxtech-bounces@leafe.com] On Behalf Of Desmond Lloyd
Sent: Monday, October 05, 2020 10:37 AM
To: profoxtech@leafe.com
Subject: Re: My memo field(s) Are Bloated......
I "think" the way to do this is to copy the data to a new table and then delete the old table and rename the new one to the old name, but it's been a really long time since I worked with DBFs so caveat emptor!
Be careful if the table is part of a database and you have specific configuration set for the table in the dbc (long column names, autoinc, triggers).
If that is going to be a problem maybe copy it to a new table, then zap the old table and append from the new table but you'd still have to look out for autoinc.
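For a free table, the copy-and-rename route might look like this (file names are examples; verify the copies before touching the originals):

```foxpro
* Rewriting the table also rewrites the FPT, squeezing out dead blocks.
USE bigtable EXCLUSIVE
COPY TO c:\temp\rebuilt WITH PRODUCTION   && WITH PRODUCTION carries the structural CDX along
USE
* After checking rebuilt.dbf / rebuilt.fpt / rebuilt.cdx, rename them
* over the originals (keep the old files until you're sure).
```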
Frank.
Frank Cazabon
On 05/10/2020 10:37 am, Desmond Lloyd wrote:
Do not use an FPT from one table with another.
I am a little surprised that the PACK MEMO did not recover space. Are you sure you refreshed the disk folder where the .fpt was stored?
Also, since you had to have the table open exclusively to do a PACK MEMO, why not just do a complete PACK? If there are tons of deleted records in there, PACK MEMO may assume it cannot get rid of the associated memo data.
As a very tangential side point, you may want to specifically limit the VFP memory: SYS(3050, 1, xxx) and SYS(3050, 2, xxx). I usually use 384000000 as xxx (384 MB). Others have a lot more detail about SYS(3050, ...), but I've found better stability by specifically setting these. And don't let the 384 MB fool you: I've seen things get worse when I put a very large value in for xxx. VFP does fine in smaller memory spaces, just with more writes to temp files. Anyway, you could try that memory limit and see if the PACK goes through fine (PACK, not PACK MEMO). I've had many DBF files grow over 1 GB (close to the 2 GB limit) that I PACK with no problems.
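Setting those caps is just two calls early in startup, using the 384 MB figure above:

```foxpro
* Cap VFP's foreground and background memory buffers (values in bytes).
* Called with = so the return value is discarded.
= SYS(3050, 1, 384000000)   && foreground (active window) buffer
= SYS(3050, 2, 384000000)   && background (inactive window) buffer
```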
Oh, and if the application is not multi-user, I would recommend opening the table in EXCLUSIVE mode all the time. Doing that will allow memo field "space" to be reused (if the new data is not larger than the previous). In SHARED mode, when a record changes (and is saved), I think the memo data is duplicated into a new entry in the .fpt. It definitely is if the memo data itself changed, even if the new memo data is smaller than the original.
If your design really requires storing large amounts of data in the memo fields, and it's mainly text, consider compression: compress the data when storing and decompress when reading. I think DynaZip may still be out there somewhere, and there are others.
Or, create additional tables with one memo field each (assuming your 8 memo fields hold roughly equivalent amounts of data). Then link them back to the main table by the main table's primary key. Of course, this would impact your queries when pulling data. And if you really need all 8 memo fields every time you pull data, this approach will probably hurt performance (8 joins: very fast, but not as fast as pulling from one table).
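A sketch of that one-memo-per-child-table layout (all names here are hypothetical):

```foxpro
* Child table holding one of the memo fields, keyed on the parent's PK
CREATE TABLE notes1 (main_id I, note M)
INDEX ON main_id TAG main_id

* Pulling a memo back alongside the parent rows
SELECT m.*, n.note ;
   FROM maintable m ;
   LEFT JOIN notes1 n ON n.main_id = m.main_id ;
   INTO CURSOR result
```

The win is that rewriting or packing one child's FPT no longer means rewriting all eight memo streams at once.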
-Charlie
On 10/5/2020 11:08 AM, Frank Cazabon wrote:
I "think" the way to do this is to copy the data to a new table and then delete the old table and rename the new one to the old name, but it's been a really long time since I worked with DBFs so caveat emptor!
Be careful if the table is part of a database and you have specific configuration set for the table in the dbc (long column names, autoinc, triggers).
If that is going to be a problem maybe copy it to a new table, then zap the old table and append from the new table but you'd still have to look out for autoinc.
Frank.
Frank Cazabon
On 05/10/2020 10:37 am, Desmond Lloyd wrote:
Fascinating, original size of the DBF 710448 original size of fpt: 1276848
selecting * from the tabl3 DBF = 710448 FPT = 923354
A drop in the size of the fpt of 353492.. Could i rename the "selected" FPT and use it. I treid a pack memo and there no change in thesiz of the FPT?
Regards, Desmond
/
On Mon, 5 Oct 2020 at 09:06, Tracy Pearson tracy@powerchurch.com wrote:
When you select * to a temp table. What is the size of the files? You'll know quickly if you have bloat.
HTH, Tracy
-----Original Message----- From: ProfoxTech [mailto:profoxtech-bounces@leafe.com] On Behalf Of Desmond Lloyd Sent: Monday, October 05, 2020 9:49 AM To: profoxtech@leafe.com Subject: My memo field(s) Are Bloated......
Good Morning!
Have a very wide dbf with about 8 memo fields . Started acteing strange like if I tried to re-index or pack memo it would seem to go so far and then freeze!
Found a utility that seemed to repair the file. The only issue is that the DBF is 710mb and the associated memo file is 1.2gb! Yikes. Have ran a pack memo (with no issues) and there is no change in the size of the memo file, which I am assuming is boated.
1.) Is the memo file really bloated. 2.) Going to look at the individual memo fields and see if any one of the records has an unusually large memo field. 3.) Open to any other suggestion that you wise people could recommend....
TIA/Regards, Desmond
--- StripMime Report -- processed MIME parts --- multipart/alternative text/plain (text body -- kept) text/html
[excessive quoting removed by server]
On 10/5/2020 2:30 PM, Charlie wrote:
<snipped> Or, create additional tables, 1 memo field each (assuming your 8 memo fields hold roughly equivalent amounts of data). Then link them back to the main table - aka main tables primary key. Of course, this would impact your data queries when pulling data. And if you really need all 8 memo fields every time you pull data, this approach will probably hurt performance (8 joins - very fast, but not as fast as pulling from 1 table).
I sometimes think the "penalty" for such an approach is negligible. Does anyone have experience with that? I mean, if you get results in 1.5 seconds compared to 1.2 seconds... is that really a big deal? Even 8 seconds vs. 7 seconds. You get my drift.
Thanks everyone.... Had thought of a separate file for each key field/memo field. Surely this should help with performance. Have thought about doing something like that for years!
Regards,
Desmond
On Mon, 5 Oct 2020 at 15:59, MB Software Solutions, LLC <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
I like Charlie's idea.
But am I mistaken - wasn't there at some point a utility to kind of zip memos and unzip them on the fly?
-K-
On Mon, Oct 5, 2020, 1:59 PM MB Software Solutions, LLC <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote: