A parsing routine is usually the fastest. The low-level file functions (LLFFs), FileToStr(), or appending to a memo field and then parsing each line (don't forget to use _MLINE with MLINE()) all come out about even in processing time and code, unless the file is really extreme (10 GB, million-byte lines, funny characters...).
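For example, here's a minimal sketch of the FileToStr() route, assuming a pipe-delimited file with three columns where the third holds the long values (the file path, table, and field names are made up for illustration):

* Parse a pipe-delimited file into a table with a memo field.
LOCAL lcData, lnLines, lnI
LOCAL ARRAY laLines[1], laFields[1]
lcData  = FILETOSTR("c:\data\input.txt")     && hypothetical path
lnLines = ALINES(laLines, lcData)            && split the file on CR/LF
CREATE TABLE imported (col1 C(50), col2 C(50), bigcol M)
FOR lnI = 1 TO lnLines
    * Split each line on the pipe; flag 1 trims leading/trailing spaces.
    IF ALINES(laFields, laLines[lnI], 1, "|") < 3
        LOOP    && skip blank or short lines
    ENDIF
    INSERT INTO imported (col1, col2, bigcol) ;
        VALUES (laFields[1], laFields[2], laFields[3])
ENDFOR

The memo field takes the over-254-character value without truncation, which is exactly what APPEND FROM ... DELIMITED won't do for you.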
Whil ran into a situation where long fields and an impossible number of columns made that technique impractical, and he came up with an innovative solution: import the data into SQLite, then use ODBC from VFP to manipulate the data from there. He published his whitepaper here: http://hentzenwerke.com/catalog/sqlite2gb.htm
[Disclaimer: I tech-edited this, but make no money from promoting it.]
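For the curious, a minimal sketch of the VFP side of that approach, assuming the SQLite3 ODBC driver is installed and the file has already been imported into a SQLite database (the driver name as registered, database path, and table name are all assumptions here, not from the whitepaper):

LOCAL lnHandle
lnHandle = SQLSTRINGCONNECT("Driver=SQLite3 ODBC Driver;Database=c:\data\big.db")
IF lnHandle > 0
    * Pull a manageable slice of the big table into a VFP cursor.
    IF SQLEXEC(lnHandle, "SELECT * FROM bigtable LIMIT 1000", "crsBig") > 0
        SELECT crsBig
        BROWSE
    ENDIF
    SQLDISCONNECT(lnHandle)
ENDIF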
On Thu, Apr 20, 2017 at 10:58 AM, Matt Wiedeman Matt.Wiedeman@nahealth.com wrote:
Hello everyone,
I need to set up a job to import a pipe-delimited text file. This is easy enough, but one of the fields is longer than 254 characters. If I use a memo field, the import does not fill that field. I started to set up a routine to step through each character and store the fields manually, but I would rather not do it that way.
Does anyone have a function or tip they can share to resolve this situation?