Tech used: VFP9SP2 against MySQL (MariaDB) backend
I created the new incarnation of FabMate a year ago with 16-byte character keys. It works fine, but I wanted to entertain the idea of using integer keys again instead (because I love the idea of 4 bytes instead of 16). I did NOT want to use auto-increment keys in the tables because I wanted to have more control over them during data-entry: basically, I can create all kinds of records, and then at save time AND ONLY THEN do I want to store the actual data (that's why AUTOINC keys wouldn't work). Oh...and they also wouldn't work because I have several child-parent-grandparent relationships being created all during data-entry.
So I'm thinking "how do I get something like VFP's old NextKey generation" akin to this:
* this was pseudo-code, off the top of my head trying to remember my coding from 17+ years ago
FUNCTION GetNextKey(tcTable as String) as Integer
    LOCAL llGotIt as Logical, liNewKey as Integer
    IF SEEK(tcTable, "MyKeysTable", "TableName") THEN
        llGotIt = .F.
        DO WHILE NOT llGotIt
            IF RLOCK("MyKeysTable") THEN
                REPLACE NextVal WITH NextVal + 1 IN MyKeysTable
                liNewKey = MyKeysTable.NextVal
                UNLOCK IN MyKeysTable
                llGotIt = .T.
            ELSE
                WAIT WIND NOWAIT "Trying to get record lock on table counter...please wait"
            ENDIF
        ENDDO
    ELSE
        liNewKey = -1
    ENDIF
    RETURN liNewKey
ENDFUNC
My idea at this point was to simply have ONE table whose primary key *IS* an Auto-Increment integer field, where I insert something (probably the unique 16-byte key I'm generating now) and just grab the resulting new primary (integer) key created via this old routine:
FUNCTION GetNewID(tiHandle as Integer) as Integer
* Retrieves newly created ID from MYSQL backend.
*** mjb 11/09/2011 - added here from Scheduler
    LOCAL liKey as Integer
    IF SQLEXEC(tiHandle, 'SELECT @@IDENTITY as iNewKey', "curKey") = 1 THEN
        && retrieve PK from resulting cursor
        liKey = curKey.iNewKey
        IF VARTYPE(liKey) = "C" THEN
            liKey = VAL(liKey)
        ENDIF
    ELSE
        liKey = -1
    ENDIF
    RETURN liKey
ENDFUNC && GetNewID(tiHandle as Integer) as Integer
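[Editorial note: the one-key-table idea can be sketched in a few lines. This is a hedged illustration using Python's sqlite3 as a stand-in for the MySQL/MariaDB backend: `cursor.lastrowid` plays the role that `SELECT LAST_INSERT_ID()` / `SELECT @@IDENTITY` plays over ODBC, and the `keygen`/`src_key` names are invented for the example, not taken from the actual schema.]

```python
import sqlite3

# sqlite3 stands in for MySQL/MariaDB here; cursor.lastrowid plays the
# role of SELECT LAST_INSERT_ID() (or @@IDENTITY) on the same connection.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE keygen ("
    "  id INTEGER PRIMARY KEY AUTOINCREMENT,"  # the auto-increment surrogate key
    "  src_key TEXT"                           # e.g. the existing 16-byte key
    ")"
)

def get_new_id(conn, src_key):
    """Insert a row and return the integer key the engine just assigned."""
    cur = conn.execute("INSERT INTO keygen (src_key) VALUES (?)", (src_key,))
    return cur.lastrowid

first = get_new_id(conn, "ABCDEFGH12345678")
second = get_new_id(conn, "QRSTUVWX87654321")
print(first, second)  # 1 2
```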
Can anyone think of a better way?
tia, --Mike
why not this?
INSERT INTO table_name (col1, col2, ...) VALUES ('val1', 'val2', ...);
SELECT LAST_INSERT_ID();
Sometimes you have to use the tool as intended.
On Wed, Mar 9, 2016 at 5:17 PM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
On 2016-03-09 19:55, Stephen Russell wrote:
why not this?
INSERT INTO table_name (col1, col2, ...) VALUES ('val1', 'val2', ...);
SELECT LAST_INSERT_ID();
Sometimes you have to use the tool as intended.
Well that's basically what I'm doing, but it's in a framework so re-use is key. You don't put this after each task-specific SQL; that's why I had that subroutine. (Am I showing my age there? Does anybody say 'subroutine' anymore??? Reminds me of learning FORTRAN in the LATE 80s at Penn State!)
You may get the incorrect value back by doing this:
So I'm thinking "how do I get something like VFP's old NextKey generation" akin to this:
* this was pseudo-code, off the top of my head trying to remember my coding from 17+ years ago
FUNCTION GetNextKey(tcTable as String) as Integer
<snip>
In the same insert statement you append ; select last_insert_id() ; and it returns YOUR value no matter what type of data column the key is. If you do some super fancy secondary select, someone else may have gotten an insert in as well, and your code returns their key, not yours.
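[Editorial note: the per-connection behavior Stephen describes can be demonstrated with a small stand-in. SQLite's `last_insert_rowid()`, like MySQL's `LAST_INSERT_ID()`, is scoped to the connection, so an interleaved insert from another session does not change what your session reads back. The `sqlite3` database and table name `t` are illustrative only.]

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same database file simulate two sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)
a.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, who TEXT)")
a.commit()

a.execute("INSERT INTO t (who) VALUES ('session A')")
a.commit()
b.execute("INSERT INTO t (who) VALUES ('session B')")  # interleaved insert
b.commit()

# Each connection still sees the id of ITS OWN last insert,
# not the most recent insert into the table overall.
a_id = a.execute("SELECT last_insert_rowid()").fetchone()[0]
b_id = b.execute("SELECT last_insert_rowid()").fetchone()[0]
print(a_id, b_id)  # 1 2
```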
On Thu, Mar 10, 2016 at 9:49 AM, < mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
On 2016-03-10 10:51, Stephen Russell wrote:
You may get the incorrect value back by doing this: <snip>
In the same insert statement you append ; select last_insert_id() ; and it returns YOUR value no matter what type of data column the key is. If you do some super fancy secondary select someone else may have gotten an insert in as well and your code returns their key, not yours.
Ah, I see it now. Sorry, I thought you were in SQL Server land with your answer. Yes, that's MySQL. So that should work, right? Or can this fail somehow if others are inserting simultaneously?
You have been using mySQL for 15 years now and you don't know about this? hahahahha
Well, the documentation says that when you append it to your insert statement you get THAT PKEY value even when others are inserting into the same table at the same time. Think of eBay and the last seconds of an auction, with a few to a few hundred bids coming in.
If you are inserting and then doing a separate statement on the same spid you should get your insert's pkey, but if you have done multiple inserts, which one are you getting?
You should experiment with some of these methods. http://www.mysqltutorial.org/mysql-stored-procedure-tutorial.aspx
I had to laugh at the casual warnings about getting too fancy in your sprocs. Glad they don't look at mine. hahahahaha.
On Thu, Mar 10, 2016 at 9:59 AM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
It's *almost* 15 years, and I always used this function in my framework:
FUNCTION GetNewID(tiHandle as Integer) as Integer
* Retrieves newly created ID from MYSQL backend.
*** mjb 11/09/2011 - added here from Scheduler
    LOCAL liKey as Integer
    IF SQLEXEC(tiHandle, 'SELECT @@IDENTITY as iNewKey', "curKey") = 1 THEN
        && retrieve PK from resulting cursor
        liKey = curKey.iNewKey
        IF VARTYPE(liKey) = "C" THEN
            liKey = VAL(liKey)
        ENDIF
    ELSE
        liKey = -1
    ENDIF
    RETURN liKey
ENDFUNC && GetNewID(tiHandle as Integer) as Integer
Wondering about the difference between last_insert_id() and SELECT @@IDENTITY? Researching later...
Never saw this site: http://www.mysqltutorial.org/ HUGE THANKS FOR THAT! The MYSQL help online is usually decent but this might supplement that nicely.
--Mike
On 2016-03-10 11:05, Stephen Russell wrote:
I think there is confusion here. @@IDENTITY returns the table's identity setting, I thought, and in SQL Server it gives you your insert's PKey value.
I don't do mySQL but still think that @@Identity gives you how your table is set for PKey incrementation. It could be int or guid.
On Thu, Mar 10, 2016 at 10:22 AM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
On 2016-03-10 11:32, Stephen Russell wrote:
I think there is confusion here. @@Identity returns the tables identity setting I thought, and in SQL Server it gives you your insert's PKey value.
I don't do mySQL but still think that @@Identity gives you how your table is set for PKey incrementation. It could be int or guid.
That's been working since 2004 in my framework! I'm guessing it's reading the table's AutoInc value perhaps and returning that.
Wondering about the difference between last_insert_id() and SELECT @@IDENTITY? Researching later...
Got the answer (and see why it works): http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_i...
Identity: "This variable is a synonym for the last_insert_id variable. It exists for compatibility with other database systems. You can read its value with SELECT @@identity, and set it using SET identity."
At 07:49 2016-03-10, mbsoftwaresolutions@mbsoftwaresolutions.com wrote:
[snip]
Well that's basically what I'm doing, but it's in a framework so re-use is key. You don't put this after each task-specific SQL; that's why I had that subroutine. (Am I showing my age there? Does anybody say 'subroutine' anymore??? Reminds me of learning FORTRAN in the LATE 80s at Penn State!)
"subroutine" is a perfectly good word. So is "procedure".
I love explaining OOP as procedural programming where the procedure calls might not be obvious.
Sincerely,
Gene Wirchenko
Use autoinc but wrap your child-parent-grandparent inserts in a transaction - or better still an SP.
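[Editorial note: a sketch of this suggestion, again with sqlite3 standing in for MariaDB and the thread's Jobs/Items/Details names. The whole parent/child/grandchild save runs in one transaction, so a failure (or a Cancel before commit) leaves nothing behind. The schema and `save_job` helper are invented for illustration.]

```python
import sqlite3

# Auto-increment keys everywhere; the transaction makes the hierarchy atomic.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Jobs   (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE Items  (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         job_id INTEGER REFERENCES Jobs(id), name TEXT);
    CREATE TABLE Details(id INTEGER PRIMARY KEY AUTOINCREMENT,
                         item_id INTEGER REFERENCES Items(id), note TEXT);
""")

def save_job(conn, job, items):
    """items is a list of (item_name, [detail_notes]) pairs."""
    with conn:  # one transaction: commits on success, rolls back on error
        job_id = conn.execute(
            "INSERT INTO Jobs (name) VALUES (?)", (job,)).lastrowid
        for item_name, notes in items:
            item_id = conn.execute(
                "INSERT INTO Items (job_id, name) VALUES (?, ?)",
                (job_id, item_name)).lastrowid  # fkey for the grandchildren
            for note in notes:
                conn.execute(
                    "INSERT INTO Details (item_id, note) VALUES (?, ?)",
                    (item_id, note))
    return job_id

jid = save_job(conn, "Job 1001", [("Widget", ["cut", "weld"]), ("Bracket", [])])
```

The same shape moves into a stored procedure naturally: the SP takes the parent values, inserts parent-first, and carries each new key down to the child inserts.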
On 10/03/2016 04:47, mbsoftwaresolutions@mbsoftwaresolutions.com wrote:
Yeah what he said.
http://www.mysqltutorial.org/mysql-stored-procedure-tutorial.aspx
Unlock the power of the database and make your life easier.
On Thu, Mar 10, 2016 at 4:12 AM, AndyHC jarndice@gmail.com wrote:
I think you guys are missing the point of the original post. I just told him I didn't want to commit to disk immediately until I was absolutely sure I wanted to save the entire dataset.
The original post title/question was basically "how do I replicate the same key generation we used to do in VFP 15+ years ago?"
On 2016-03-10 09:23, Stephen Russell wrote:
"until I was absolutely sure I wanted to save the entire dataset."
That is exactly what we are talking about. When the user clicks the Save/Submit/OK button, they are in save mode. Then you commit the header row(s), retaining the fkey(s) necessary for your transactional details.
On Thu, Mar 10, 2016 at 9:54 AM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
On 2016-03-10 10:55, Stephen Russell wrote:
"until I was absolutely sure I wanted to save the entire dataset."
That is exactly what we are talking about. When user clicks save, submit, ok, button they are in save mode. Then you commit header row(s) retaining the fkey(s) necessary for your transactional details.
Yes but until the user does the Save, I have to keep the relationship hierarchy for primary keys and related foreign keys.
Example (where cID is the table's primary key):
1) Create Job (cID in Jobs cursor)
2) Create 1:M items (cID in Items cursor, with cJobID foreign key pointing back to Jobs table)
3) Create 1:M details about each item (cID in Details cursor, with cItemID foreign key pointing back to Items table)
4) Create some 1:M offshoots perhaps for each Detail (...you see the trend...)
Rather than add all those records immediately to the database and later abandon them because the dude hits "Cancel", I prefer to create my own keys rather than rely on AutoIncrement, so I have full control like this.
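[Editorial note: the flow above can be sketched as client-side key assignment: hand out integers while the user edits, so every row already carries its PK and FKs before anything is written, and Cancel just discards the in-memory rows. This is a Python stand-in; `next_key` and the list/dict structures are illustrative, not Mike's actual cursors.]

```python
import itertools

# Stand-in for GetNextKey(): a monotonically increasing integer source.
next_key = itertools.count(1).__next__

# In-memory "cursors" holding unsaved rows; cID is each row's primary key.
jobs, items, details = [], [], []

job_id = next_key()
jobs.append({"cID": job_id, "name": "Job 1001"})

item_id = next_key()  # child already knows its parent's key
items.append({"cID": item_id, "cJobID": job_id, "name": "Widget"})

detail_id = next_key()  # grandchild likewise
details.append({"cID": detail_id, "cItemID": item_id, "note": "cut to 24in"})

# At Save time the rows are INSERTed parent-first (jobs, items, details);
# on Cancel the lists are simply discarded and nothing ever hit the database.
print(jobs[0]["cID"], items[0]["cJobID"], details[0]["cItemID"])  # 1 1 2
```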
I understand how this could be a complex job and the first insert may only contain 30% of the total rows known at this time.
I would consider making sprocs for inserts into each unique table that returns when necessary the PKey of that insert.
jobInsert
itemInsert
detailsInsert
offshootsInsert

Also make:
jobSelect
itemSelect
detailsSelect
offshootsSelect
In some of my databases there are hundreds of sprocs, 400-500 in number.
jobAllAspects could have all of the joins needed to pull the entire beast into one dataset or all of the tables in independent returned datasets. We do a lot of the latter here at Ring.
On Thu, Mar 10, 2016 at 11:26 AM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
I realize it's done lots of places, but I never wanted explicit stored procedures for inserts/updates, as they require an update every time you change a structure. That's too fragile/rigid a system for my liking.
I'm thinking it'll be a stored procedure for the purpose of inserting something into a table and grabbing the @@IDENTITY value resulting from the insert. I realize that the numbers will grow quickly: for Table1's insert I get a value of 1, then for Table2's insert I get the next value (2), and so on. I don't mind that my entire collection of PKeys is one set of unique numbers. I don't see this system ever hitting the maximum integer value.
So thus, it's similar to the classic Fox GetNextKey routine but instead of a row for each table, the Keys table is just handing out the next integer key created...and if it's not used (i.e., the user hits Cancel and doesn't save his new data), no big deal.
Make sense?
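[Editorial note: a minimal sketch of that single key-dispenser table, with sqlite3 standing in for MariaDB; `keygen` and `next_key` are invented names. A key "burned" by a Cancel just leaves a gap, and every key stays unique across the whole database.]

```python
import sqlite3

# ONE auto-increment table hands out integer keys for every table in the app.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keygen (id INTEGER PRIMARY KEY AUTOINCREMENT)")

def next_key(conn):
    # Equivalent over ODBC: INSERT into the key table, then
    # SELECT LAST_INSERT_ID() (or its synonym @@identity) on the same handle.
    return conn.execute("INSERT INTO keygen DEFAULT VALUES").lastrowid

job_key = next_key(conn)       # 1 - used by the Jobs insert
item_key = next_key(conn)      # 2 - used by the Items insert
wasted = next_key(conn)        # 3 - user hits Cancel; this key is never stored
after_cancel = next_key(conn)  # 4 - the gap at 3 is harmless
```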
On 2016-03-10 12:42, Stephen Russell wrote:
I understand how this could be a complex job and the first insert may only contain 30% of the total rows known at this time.
I would consider making sprocs for inserts into each unique table that returns when necessary the PKey of that insert.
jobInsert itemInsert detailsInsert offshootsInsert
Also make: jobSelect itemSelect detailsSelect offshootsSelect
In some of my databases there are hundreds of sprocs, 400-500 in number.
jobAllAspects could have all of the joins needed to pull the entire beast into one dataset or all of the tables in independent returned datasets. We do a lot of the latter here at Ring.
On Thu, Mar 10, 2016 at 11:26 AM, < mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
On 2016-03-10 10:55, Stephen Russell wrote:
"until I was absolutely sure I wanted to save the entire dataset."
That is exactly what we are talking about. When user clicks save, submit, ok, button they are in save mode. Then you commit header row(s) retaining the fkey(s) necessary for your transactional details.
Yes but until the user does the Save, I have to keep the relationship hierarchy for primary keys and related foreign keys.
Example (where cID is the table's primary key):
- Create Job (cID in Jobs cursor)
- Create 1:M items (cID in Items cursor, with cJobID foreign key
pointing back to Jobs table) 3) Create 1:M details about each item (cID in Details cursor, with cItemID foreign key pointing back to Items table) 4) Create some 1:M offshoots perhaps for each Detail (...you see the trend...)
Rather than add all those records immediately to the database and later abandon because the dude hits "Cancel", I prefer to create my own keys rather than rely on AutoIncrement to have full control like this.
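That disconnected build-then-save flow can be sketched roughly like this (a Python stand-in with hypothetical names; the real app holds these in VFP cursors):

```python
class PendingSave:
    """Builds the Jobs -> Items -> Details hierarchy in memory with
    locally assigned keys; nothing touches the database until Save."""

    def __init__(self):
        self._next = 1
        self.rows = {"Jobs": [], "Items": [], "Details": []}

    def _new_key(self):
        key = self._next
        self._next += 1
        return key

    def add_job(self, name):
        cid = self._new_key()
        self.rows["Jobs"].append({"cID": cid, "name": name})
        return cid

    def add_item(self, job_id, descr):
        cid = self._new_key()
        self.rows["Items"].append({"cID": cid, "cJobID": job_id, "descr": descr})
        return cid

    def add_detail(self, item_id, note):
        cid = self._new_key()
        self.rows["Details"].append({"cID": cid, "cItemID": item_id, "note": note})
        return cid

pending = PendingSave()
job = pending.add_job("Window order")
item = pending.add_item(job, "Frame")
detail = pending.add_detail(item, "Anodized")
# On Cancel: throw `pending` away and no orphan rows ever existed.
# On Save: insert all rows in one backend transaction, parents first.
```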
[excessive quoting removed by server]
Change is great because that is why we have a job.
If you have to fix select statements in your system I would only want to do it on the db and then adjust the receivers as needed. It is simple to do it there, or at least that is how I have been doing this for the last 18+ years. For any maintenance questions you only go to one SINGLE point of failure.
YMMV
On Thu, Mar 10, 2016 at 12:10 PM, < mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
I realize it's done lots of places, but I never wanted explicit stored procedures for inserts/updates, as they require an update every time you change a structure. That's too fragile/rigid a system for my liking.
Now wait a second...you have to change the user interface that's calling that stored procedure, so there's TWO spots right there that need changing...the UI call and the actual SP itself. Or how am I reading that wrong?
You will always have to change the UI. How many different places are you asking for the same data? If that was in a sproc it is fixed all over the place now. If you have it as a statement you pass back, you have to go to every occurrence in your app and make the change.
Reporting as well as UI.
Most of us will make the sproc to pull the data for a specific report.
Sure, reporting is fine...use stored procs and you never need to change the UI caller; but I thought you were talking about passing each field as parms to a stored proc.
Yes you do that all the time with sprocs. No biggie adding another one at the end. This is the list of params for an insert.
ALTER PROCEDURE [dbo].[InvoiceLines_ins]
@InvoiceLineID numeric(18,0),
@InvoiceID numeric(18,0),
@PositionNumber varchar(50),
@SequenceNumber varchar(50),
@CustomerOrderPosition varchar(50),
@CustomerOrderSequence varchar(50),
@CustomerBaseUnitCode varchar(50),
@CustomerItem varchar(50),
@Qualifier1 varchar(50),
@Item varchar(50),
@ItemDescription varchar(50),
@ShipToBusinessPartner varchar(50),
@Qualifier2 varchar(50),
@ShipToBusinessPartnerCodeinMessage varchar(50),
@OrderedQnty varchar(50),
@DeliveredQnty varchar(50),
@AmountinInvoiceCurrency varchar(50),
@Shipment varchar(50),
@OrderDiscount varchar(50),
@LineDiscountAmount varchar(50),
@TaxExemptionReasonCode varchar(50),
@TaxExemptionCertificate varchar(50),
@DeliveryDate varchar(50),
@PriceUnit varchar(50),
@QuantityUnit varchar(50),
@Price varchar(50)
Oh hell no. I see that as torture and ripe for problems. What happens if I go and change the field widths? Then I have to visit every instance/reference and update. NO THANKS.
Field widths shouldn't make any difference; you do varchar(50) or nvarchar(50) or (100) already, don't you, for the data outside of State?
In all of my code the UI data is defined by the received data from the backend.
I'm not talking about the front end UI. I'm talking about when you guys make a field 25 characters larger (for example) than it is currently. Your stored procedure parameters have to be updated to reflect the length beyond the existing setting. That's where Oracle's parameters are better than SQL Server's: you can reference a field name (synonym) rather than hardcoding any width.
In reality that happens every once in a while, correct? If not you have another problem.
On 2016-03-10 05:12, AndyHC wrote:
Use autoinc but wrap your child-parent-grandparent inserts in a transaction - or better still an SP.
I don't like the idea of committing until I'm totally sure I want to add those records; hence the disconnected-until-ready-to-save approach.
I've played around with MariaDB a bit and this stored procedure works (newids table must have primary key index):
CREATE DEFINER=`root`@`localhost` PROCEDURE `newidX`(IN `TableName` CHAR(20))
    LANGUAGE SQL
    NOT DETERMINISTIC
    CONTAINS SQL
    SQL SECURITY DEFINER
    COMMENT ''
BEGIN
    select nextid from newids where newids.`table` = TableName for update;
    update newids set nextid = nextid + 1 where newids.`table` = TableName;
END
Fred
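A note on Fred's procedure: it returns the current nextid and then bumps it, and the FOR UPDATE row lock only spans both statements while a transaction is open, so it is worth calling inside one. The same read-then-bump pattern against a keys table, emulated here with SQLite purely for illustration (table and column names hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE newids (tbl TEXT PRIMARY KEY, nextid INTEGER)")
con.execute("INSERT INTO newids VALUES ('jobs', 1)")
con.commit()

def newid(con, table):
    """Read the current value, then bump it, inside one transaction
    (the stored-procedure version relies on FOR UPDATE for the same effect)."""
    with con:
        (nid,) = con.execute(
            "SELECT nextid FROM newids WHERE tbl = ?", (table,)).fetchone()
        con.execute(
            "UPDATE newids SET nextid = nextid + 1 WHERE tbl = ?", (table,))
    return nid

first = newid(con, "jobs")   # 1
second = newid(con, "jobs")  # 2
```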
You control freak. Let the backend do what it does so well.
https://mariadb.com/kb/en/mariadb/auto_increment-faq/
I gave up on that single table of last PKs 15+ years ago when I found that the backend already did it for us; all we had to do was ask for it.
From SNL: "Jane, you ignorant slut." lol
Steve -- you might recall the original post from this thread laid out that there were multiple levels (3+) needing keys at record-creation time, so AUTOINC on the backend was not an option: we were creating datasets locally, and only if the user saved did we commit to the database. So your "let the backend do it" doesn't apply here.
Jane you ignorant slut.
So in reality you have code to generate your first insert + catch new key value. More code to setup next insert(s) using that first Pkey value.
That's four trips to the db just for single inserts into two tables, plus updating the master last_id data as well. Now add two more trips for each additional detail row that needs to be added.
Bwahahahaha
Like GWB said, you must be using that "fuzzy math." lol
I count many more than 2 tables: Jobs, Items (child to Jobs), Subitems (child to Items), Materials (child to Items), Labor (child to Items), Installation (child to Items), etc., depending on how many levels deep you go.
Michael, please eliminate the steps I think are happening just dealing with the Jobs table:
1. insert job row
2. get pkey back
3. update masterKeyValues table
I see 3 separate SQL statements involving the first table. Every time you need a reference to a PKey you require 200% more code, and that is in need of a refactor.
All I was suggesting is that your insert statement returns the bloody key in step one and you move onto inserting items in the same way. Sure you can iterate through all of the sub family per item one at a time before you insert the second item.
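Stephen's pattern, where the insert itself hands back the new key for use in the child inserts, looks roughly like this (illustrated with SQLite's lastrowid; on MariaDB the equivalent is LAST_INSERT_ID() after inserting into an AUTO_INCREMENT table, and the table names here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE jobs  (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE items (id INTEGER PRIMARY KEY AUTOINCREMENT,
                        job_id INTEGER REFERENCES jobs(id), descr TEXT);
""")

with con:  # one transaction: parent and children commit or roll back together
    cur = con.execute("INSERT INTO jobs (name) VALUES (?)", ("Window order",))
    job_id = cur.lastrowid  # the new key comes back from the insert itself
    con.execute("INSERT INTO items (job_id, descr) VALUES (?, ?)", (job_id, "Frame"))
    con.execute("INSERT INTO items (job_id, descr) VALUES (?, ?)", (job_id, "Glass"))

item_count = con.execute(
    "SELECT COUNT(*) FROM items WHERE job_id = ?", (job_id,)).fetchone()[0]
```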
Ah, gotcha. Happy Easter my friend!
At 13:17 2016-03-24, mbsoftwaresolutions@mbsoftwaresolutions.com wrote:
On 2016-03-24 14:17, Stephen Russell wrote:
You control freak. Let the backend do what it does so well. https://mariadb.com/kb/en/mariadb/auto_increment-faq/ I gave up on that single table of last PKs 15 + years ago when I found that the backend already did it for us, all we had to was ask for it.
From SNL: "Jane, you ignorant slut." lol
I find this sort of language really offensive and unprofessional. Please do not use it. There are much more polite ways to express disagreement.
[snip]
Sincerely,
Gene Wirchenko
On 3/9/16 3:17 PM, mbsoftwaresolutions@mbsoftwaresolutions.com wrote:
I created the new incarnation of FabMate a year ago with 16-byte character keys. Works fine, but I wanted to entertain the idea of using integer keys again instead (because I love the idea of 4 bytes instead of 16)
You are concerned about using 16 bytes instead of 4? IMO you should be using 40-byte uuids for everything these days.
Paul
Paul,
If I were to say "that's overkill for my application's needs" you would say....?
Just wondering what I'm not understanding here. Is it that I'm thinking too small?
Mike Copeland
UUIDs are the simplest general solution. With integer IDs you must coordinate with a central authority (the db server) to avoid collisions.
Space is cheap.
Paul
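To make Paul's point concrete: any client can mint a UUID locally, with no round-trip to a central authority and negligible collision risk. A raw UUID is 16 bytes; the familiar hyphenated text form is 36 characters. In Python, for example:

```python
import uuid

pk = uuid.uuid4()   # random 128-bit UUID, generated with no server involved
raw = pk.bytes      # 16 bytes if stored in binary form
text = str(pk)      # 36-character hyphenated text form

# Two clients generating independently will not collide in practice:
a, b = uuid.uuid4(), uuid.uuid4()
```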
Satellite offices. With UUIDs, each app can create *unique* IDs that don't have to be changed when you merge IDs from multiple sources. Solves problems with roaming salesman, remote offices, off-line use, etc.
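[Editor's note: Ted's merge scenario, sketched as a toy Python illustration (the office names and data are invented): two offices that each start an AutoInc counter at 1 collide on merge, while UUIDs merge as-is.]

```python
import uuid

# Two satellite offices each assign keys while disconnected.
office_a_ints = [1, 2, 3]   # each office's AutoInc starts at 1...
office_b_ints = [1, 2, 3]   # ...so the integer keys collide when merged
collisions = set(office_a_ints) & set(office_b_ints)
print(sorted(collisions))   # [1, 2, 3] -- these rows must be renumbered

office_a_uuids = [str(uuid.uuid4()) for _ in range(3)]
office_b_uuids = [str(uuid.uuid4()) for _ in range(3)]
merged = set(office_a_uuids) | set(office_b_uuids)
print(len(merged))          # 6 -- no renumbering, child FKs stay valid
```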
Paul, Ted,
Got it! Thanks!
It's all about scope and being ready for the next level.
Mike
On 2016-03-25 19:56, Ted Roche wrote:
Satellite offices. With UUIDs, each app can create *unique* IDs that don't have to be changed when you merge IDs from multiple sources. Solves problems with roaming salesman, remote offices, off-line use, etc.
This is why I went with the app-generated character keys this time around, instead of the integer keys I'd used for the previous 13+ years. Still, the 16-byte keys should suffice for this.
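[Editor's note: a hypothetical Python sketch of an app-generated 16-character key; the thread doesn't show the actual routine, so the alphabet and helper name here are assumptions. A 62-symbol alphabet over 16 positions gives 62^16 ≈ 4.7×10^28 possible keys.]

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def new_char_key(length: int = 16) -> str:
    # secrets gives cryptographic randomness, so keys minted on different
    # machines are effectively collision-free.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(len(new_char_key()))  # 16
```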
On Sun, Mar 27, 2016 at 10:45 PM, < mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
This is why I went with the app-generated character keys this time around instead of the integer keys I'd used since 13+ years ago. Still, the 16-byte keys should suffice for this.
For me, it was about taking data that was prepped in test and easily moving it to production. The keys are no longer an issue.
The issue with indexes is the free space available in an index page before it has to split into two pages for the same data.
This isn't only a primary-key concern; it can just as easily be a problem in a secondary index. A generated GUID may fit into any one of several different B-tree index pages, but an int will only ever be placed in the last page.
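[Editor's note: Stephen's page-split point can be illustrated with a toy model in Python (not a real B-tree): sequential integer keys always append at the end of the sorted order, while random GUID-style keys land anywhere, which is what forces mid-index page splits.]

```python
import bisect
import uuid

def insertion_positions(keys):
    # Record where each new key lands in the sorted order so far.
    sorted_keys, positions = [], []
    for k in keys:
        pos = bisect.bisect_left(sorted_keys, k)
        positions.append(pos)
        sorted_keys.insert(pos, k)
    return positions

seq = insertion_positions(range(100))                             # sequential ints
rnd = insertion_positions(str(uuid.uuid4()) for _ in range(100))  # random GUIDs
print(all(p == i for i, p in enumerate(seq)))  # True: ints always go to the end
# Random keys scatter across the order, touching many "pages" instead of one.
```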
On 2016-03-28 11:01, Stephen Russell wrote:
The point you made about switching between systems (Test, Production, Local, somewhere else) was the bonus for me in using this approach. That, and as mentioned earlier, being able to combine data from different disconnected datasets (Database1, Database2, etc.). No more worrying about an AutoInc value either meaning something else in a different disconnected dataset, or turning up as a duplicate integer key (which leads back to the prior reason).
In the systems I own, I separate which tables are better served by an INT PKey vs. a GUID.
The Name table uses a GUID because it grew wild at first and then slowed to a crawl; with few new rows being added, they can go anywhere in the index pages, and with a 70% free-space setting that causes little problem.
Actual business transactions get an int because they grow quickly, and there's no need to force frequent index repagination since invoicing is run nightly. Trading Partners are GUID because they come from a test system, and with all the mess in contracts it is so much easier to just work with the GUID.
On 2016-03-25 18:23, Paul McNett wrote:
You are concerned about using 16 bytes instead of 4? IMO you should be using 40-byte uuids for everything these days.
Craig Boyd's VFPEncryption allows for the 40-byte one, but I preferred the 16-byte key instead. A colleague had said, however, that those large keys don't do well in the SQL Server index world (it's late, so I can't recall the specifics, but I thought it was something about balancing).
So why do you say 40-byte is the way to go? What's the advantage? Certainly the index sizes are larger (and that's not good).
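[Editor's note: the raw size difference is easy to put numbers on. A back-of-the-envelope sketch for a million rows, counting key bytes only and ignoring page headers and row pointers, which add more:]

```python
rows = 1_000_000
for name, key_bytes in [("4-byte int", 4), ("16-byte char", 16), ("36-char GUID text", 36)]:
    mib = rows * key_bytes / 1_048_576
    print(f"{name}: {mib:.1f} MiB of key data")
# 4-byte int: 3.8 MiB, 16-byte char: 15.3 MiB, 36-char GUID text: 34.3 MiB
```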
On Wed, Mar 9, 2016 at 6:17 PM, mbsoftwaresolutions@mbsoftwaresolutions.com wrote:
...I did NOT want to use auto-increment keys in the tables because I wanted to have more control over them during data-entry, so basically, I can create all kinds of records and then at save time AND ONLY THEN do I want to store the actual data (and that's why AUTOINC keys wouldn't work).
Fun Fact: If you were to create 1000 PKs a day and throw them away, and you worked 365 days/year, you would run out of 4-byte integers in 11,767 years.
Of course, if you were to create GUIDs, the universe would likely end first.
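[Editor's note: Ted's figure checks out for the full unsigned 32-bit range; with only the positive half of a signed int it's roughly half that.]

```python
keys_per_year = 1000 * 365
print(2**32 // keys_per_year)        # 11767 years for the full unsigned range
print((2**31 - 1) // keys_per_year)  # 5883 years if only positive signed ints
```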
Hilton Hotels has had to restart BigInt keys twice now, across the world, for rentals.
On 2016-04-08 16:16, Ted Roche wrote:
Fun Fact: If you were to create 1000 PKs a day and throw them away, and you worked 365 days/year, you would run out of 4-byte integers in 11,767 years.
Another bit of trivia: didn't Twitter have a problem with their system because they were using AutoInc keys?
Of course, if you were to create GUIDs, the universe would likely end first.
I know, I know... and it's currently set up that way... it's just the miser inside of me trying to optimize as much as I can. OCD.