On 2016-11-03 17:31, Charlie wrote:
One thing I built into my design is working with temporary cursors or objects (e.g. from SCATTER MEMO NAME oDataRec...). I rarely have user interface objects directly accessing DBFs.
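Roughly like this (a minimal sketch; the table and field names are just for illustration, and in real code the "save" step would live in a framework method):

USE c:\crap\vendor SHARED
SCATTER MEMO NAME oDataRec          && pull the current record, memo included, into an object
* ...the UI binds to oDataRec.ven_note; the user edits the object, not the DBF...
oDataRec.ven_note = oDataRec.ven_note + " (edited)"
* On save: one GATHER, so the memo field is written exactly once
GATHER NAME oDataRec MEMO
USE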
I did this years ago with my MBSS framework, based upon your ideas from when you presented at the Central PA VFP Users Group back in what... 1998? It's been problem-free ever since. Great approach.
As I recall, the main cause of memo bloat was saving data. Specifically, if I was editing a memo and then saved it, and the contents were larger than my "blocksize", VFP would create a whole new "block" in the .fpt file. The original block is essentially wasted space at that point. I think there are some nuances to this: e.g. VFP doesn't see it as "bigger" unless the "blocksize" threshold is crossed, or maybe the text has to be "bigger" than the text as first edited. I can't remember.
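If you want to watch for the waste yourself, something like this will show whether the .fpt grows from a single rewrite (a quick sketch; FptSize() is a little helper I'm writing here, not a built-in, and I'm reusing the vendor table from my test further down):

LOCAL lnBefore, lnAfter
lnBefore = FptSize("c:\crap\vendor.fpt")
USE c:\crap\vendor SHARED
GO TOP
REPLACE ven_note WITH ven_note          && rewrite the memo, contents unchanged
USE
lnAfter = FptSize("c:\crap\vendor.fpt")
? "FPT grew by", lnAfter - lnBefore, "bytes"

FUNCTION FptSize(tcFile)
    LOCAL laInfo[1]
    IF ADIR(laInfo, tcFile) = 0
        RETURN -1                       && file not found
    ENDIF
    RETURN laInfo[1, 2]                 && ADIR() column 2 is the file size in bytes
ENDFUNC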
So, if you have a GUI screen element that is directly bound to the DBF memo field, every time you type a character (well, every few characters maybe), a new .fpt block gets assigned, wasting the previous one. So imagine users typing quite a bit of text in there, editing, yada yada. One editing session might waste dozens of "blocks" in the memo file. But if you take my approach, there is only 1 "write" operation (when data is saved) and therefore, at worst, 1 wasted .fpt "block".
Take a look at "SET BLOCKSIZE". If you can't really change a bunch of source code, simply "recreating" a table with a different memo blocksize can sometimes solve the bloat problem. E.g. if the memo field will almost always have more than 100 characters, but rarely exceed 512, do a SET BLOCKSIZE TO 1. Once the table is created I think the blocksize for its memo fields is "locked in". And I don't think a simple MODIFY STRUCTURE will adjust to the new blocksize, so you may have to do some "table-create-swap" thing (sketched below). The default is 64 bytes. And be advised, setting it to 1 through 32 is not in bytes, it is in 512-byte chunks (SET BLOCKSIZE TO 1 is 512 bytes, 2 is 1024, ... up to 32). After 32, the number is interpreted as straight bytes.
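The swap itself can be about as simple as this (back up first; this sketch assumes a free table, and you'll want to rebuild any indexes afterward since COPY STRUCTURE doesn't carry them over):

CLOSE DATABASES ALL
SET BLOCKSIZE TO 1                        && new memo files get 512-byte blocks
USE c:\crap\vendor EXCLUSIVE
COPY STRUCTURE TO c:\crap\vendor_new      && new table should pick up the current blocksize
USE c:\crap\vendor_new EXCLUSIVE
APPEND FROM c:\crap\vendor                && memo contents get rewritten compactly
USE
* Swap the files once you've verified the copy
RENAME c:\crap\vendor.dbf TO c:\crap\vendor_old.dbf
RENAME c:\crap\vendor.fpt TO c:\crap\vendor_old.fpt
RENAME c:\crap\vendor_new.dbf TO c:\crap\vendor.dbf
RENAME c:\crap\vendor_new.fpt TO c:\crap\vendor.fpt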
Of course, as always, test these things out, run some variations.
I created this simple test, without regard for blocksize, on a copy of the table on my own machine last weekend:
CLEAR
SET ESCAPE ON
LOCAL liCnt AS Integer
liCnt = 0
CLOSE DATABASES ALL
USE c:\crap\vendor SHARED
DO WHILE .T.
    liCnt = liCnt + 1
    ? liCnt
    WAIT WINDOW "Iteration " + ALLTRIM(STR(liCnt)) NOWAIT
    UPDATE vendor SET ven_note = ven_note   && rewrite every memo with identical contents
ENDDO
It hit the 2GB limit after enough iterations, even though the memo contents never changed. So I'm not sure the blocksize plays a part. The code below says the average note length is just under 130 characters:
SELECT LEN(ALLTRIM(ven_note)) AS iLen FROM vendor INTO CURSOR MyLens
SELECT AVG(iLen) FROM MyLens