The problem with that kind of approach is that, essentially, the algorithm counts words one at a time to get to the one you want: 1, 2, 3, ... Each call has to start counting from the beginning again, which is fine for 100 words but unworkable for a million.
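A faster route is to split the whole string into an array in one pass and then walk the array, so the string is only scanned once instead of being re-counted from the start for every word. A rough sketch, assuming VFP 8 or later where ALINES() accepts parse characters (the array name laWords and the space delimiter are only for illustration):

* Split mystring into an array in a single pass on spaces;
* flag 1 trims leading and trailing blanks from each element.
LOCAL lnWords, lnI
LOCAL ARRAY laWords[1]
lnWords = ALINES(laWords, mystring, 1, " ")
FOR lnI = 1 TO lnWords
    * do something with laWords[lnI]
ENDFOR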
On Sat, Sep 10, 2016 at 9:06 PM, Laurie Alvey trukker41@gmail.com wrote:
To split a string into words you can use something like this:
n = GETWORDCOUNT(mystring)
FOR i = 1 TO n
    ? GETWORDNUM(mystring, i)    && do something with the word
ENDFOR
Laurie
On 10 September 2016 at 16:02, Stephen Russell srussell705@gmail.com wrote:
Is there a split function in VFP to take every word of the string into an array? Then you could parse each array element till done. C# example here.
// Read your text file into a string first; FromYourTextfile here
// stands for however you load it (e.g. File.ReadAllText).
string s = FromYourTextfile;

// Split string on spaces.
// This will separate all the words.
string[] words = s.Split(' ');

foreach (string word in words)
{
    // parse your word here for what you need.
}
On Sat, Sep 10, 2016 at 12:32 AM, Joe Yoder joe@wheypower.com wrote:
I have a routine that processes each character in a file. The file I am working with is over 2 million characters long. I pull it into a memory variable with FILETOSTR() and then process each character with the SUBSTR() function. Apparently SUBSTR() has problems when dealing with a long string, as the process is painfully slow.
I suspect that I would be better off using the low-level file routines to read one character at a time, but thought maybe someone knows of a way to speed up the approach I am using now.
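Something along these lines is what I have in mind for the low-level version, reading the file in blocks rather than literally one character at a time (the file name and the 8K block size are just placeholders):

LOCAL lnHandle, lcBlock, lnI
lnHandle = FOPEN("myfile.txt")       && opens read-only by default
IF lnHandle < 0
    ? "Could not open the file"
    RETURN
ENDIF
DO WHILE !FEOF(lnHandle)
    lcBlock = FREAD(lnHandle, 8192)  && read the next 8K block
    FOR lnI = 1 TO LEN(lcBlock)
        * process SUBSTR(lcBlock, lnI, 1) here
    ENDFOR
ENDDO
= FCLOSE(lnHandle)

SUBSTR() on an 8K block should be quick even if it is slow on the full two-million-character string.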
Thanks in advance,
Joe