You will always have to change the UI. How many different places are you asking for the same data? If that query lives in a sproc, one change there fixes it everywhere. If you have it as a statement you pass back from the app, you have to go to every occurrence in your app and make the change.
Reporting as well as the UI.
Most of us will make the sproc to pull the data for a specific report.
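For example, something like this is all I mean -- a rough sketch, with the sproc and table names (rptOpenJobs, Jobs, Customers) made up for illustration:

    -- One place to maintain the query; every report or screen just calls the sproc.
    CREATE PROCEDURE rptOpenJobs
        @CustomerID int = NULL          -- optional filter
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT j.JobID, j.JobName, c.CustomerName, j.CreatedOn
        FROM   Jobs j
        JOIN   Customers c ON c.CustomerID = j.CustomerID
        WHERE  (@CustomerID IS NULL OR j.CustomerID = @CustomerID)
        ORDER BY j.CreatedOn DESC;
    END

If the report needs another column or a different filter next month, that one sproc changes and the app code is only touched if it actually wants the new column.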
On Thu, Mar 10, 2016 at 4:16 PM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
Now wait a second...you have to change the user interface that's calling that stored procedure, so there are TWO spots right there that need changing...the UI call and the actual SP itself. Or how am I reading that wrong?
On 2016-03-10 13:53, Stephen Russell wrote:
Change is great because that is why we have a job.
If you have to fix select statements in your system, I would only want to do it on the db and then adjust the receivers as needed. It is simple to do it there, or at least that is how I have been doing it for the last 18+ years. For any maintenance question you go to one SINGLE point of change.
YMMV
On Thu, Mar 10, 2016 at 12:10 PM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
I realize it's done in lots of places, but I never wanted explicit stored procedures for inserts/updates since they require an update every time you change a structure. That's too fragile/rigid a system for my liking.
I'm thinking it'll be a stored procedure whose purpose is to insert a row into a table and grab the @@IDENTITY value resulting from the insert. I realize the numbers will grow quickly because the keys are shared: for Table1's insert I get a value of 1, then for Table2's insert I get the next value (2), and so on. I don't mind that my entire collection of PKeys is one pool of unique numbers, and I don't see this system ever hitting the maximum integer value.
So it's similar to the classic Fox GetNextKey routine, but instead of a row for each table, the Keys table is just handing out the next integer key created...and if it's not used (i.e., the user hits Cancel and doesn't save his new data), no big deal.
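A minimal sketch of what I'm picturing (the Keys table layout and sproc name are just placeholders; I've used SCOPE_IDENTITY() here instead of @@IDENTITY since it ignores identity values generated by triggers, but either would do in the simple case):

    -- Single key pool: every insert here hands back the next integer,
    -- no matter which business table the key will eventually live in.
    CREATE TABLE Keys
    (
        KeyID     int IDENTITY(1,1) PRIMARY KEY,
        CreatedOn datetime NOT NULL DEFAULT (GETDATE())
    );
    GO

    CREATE PROCEDURE GetNextKey
        @NewKey int OUTPUT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO Keys DEFAULT VALUES;
        SET @NewKey = SCOPE_IDENTITY();   -- the key just handed out
    END

The app calls it as EXEC GetNextKey @NewKey = @lnKey OUTPUT, stamps that value into the local cursor row, and if the user cancels, that number simply never shows up in a real table.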
Make sense?
On 2016-03-10 12:42, Stephen Russell wrote:
I understand how this could be a complex job and the first insert may
only contain 30% of the total rows known at this time.
I would consider making sprocs for inserts into each unique table that return, when necessary, the PKey of that insert (rough sketch below the list):
jobInsert, itemInsert, detailsInsert, offshootsInsert
Also make: jobSelect, itemSelect, detailsSelect, offshootsSelect
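Roughly what one of the insert sprocs would look like -- just a sketch, with invented column names:

    CREATE PROCEDURE jobInsert
        @JobName    varchar(100),
        @CustomerID int,
        @JobID      int OUTPUT          -- new PKey handed back to the caller
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO Jobs (JobName, CustomerID)
        VALUES (@JobName, @CustomerID);

        SET @JobID = SCOPE_IDENTITY();  -- identity value from this insert
    END

The caller holds on to @JobID and passes it as the fkey to itemInsert, which does the same for detailsInsert, and so on down the chain.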
In some of my databases there are hundreds of sprocs, 400-500 in number.
jobAllAspects could have all of the joins needed to pull the entire beast back as one dataset, or it could return each of the tables as its own dataset. We do a lot of the latter here at Ring.
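A sketch of the multiple-dataset flavor (table and key names are placeholders; the point is one call, several result sets):

    -- Returns one result set per table; the client picks them up as
    -- separate cursors/DataTables and relates them locally on the keys.
    CREATE PROCEDURE jobAllAspects
        @JobID int
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT * FROM Jobs    WHERE JobID = @JobID;
        SELECT * FROM Items   WHERE JobID = @JobID;
        SELECT * FROM Details WHERE ItemID IN
            (SELECT ItemID FROM Items WHERE JobID = @JobID);
        SELECT * FROM Offshoots WHERE DetailID IN
            (SELECT d.DetailID
             FROM Details d
             JOIN Items i ON i.ItemID = d.ItemID
             WHERE i.JobID = @JobID);
    END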
On Thu, Mar 10, 2016 at 11:26 AM, <mbsoftwaresolutions@mbsoftwaresolutions.com> wrote:
On 2016-03-10 10:55, Stephen Russell wrote:
"until I was absolutely sure I wanted to save the entire dataset."
That is exactly what we are talking about. When the user clicks the Save, Submit, or OK button, they are in save mode. Then you commit the header row(s), retaining the fkey(s) necessary for your transactional details.
Yes but until the user does the Save, I have to keep the relationship hierarchy for primary keys and related foreign keys.
Example (where cID is the table's primary key):
1) Create Job (cID in Jobs cursor)
2) Create 1:M items (cID in Items cursor, with cJobID foreign key pointing back to Jobs table)
3) Create 1:M details about each item (cID in Details cursor, with cItemID foreign key pointing back to Items table)
4) Create some 1:M offshoots perhaps for each Detail (...you see the trend...)
Rather than add all those records to the database immediately and then abandon them because the dude hits "Cancel", I prefer to create my own keys rather than rely on AutoIncrement, so I keep full control like this.
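If it helps to picture it, here is roughly what the eventual save looks like once the user does commit, using keys I handed out up front (the table/column names mirror my cursors above, and the literal key values are just made up):

    -- The cID values were already reserved from the Keys table, so the
    -- parent and child rows can be written parent-first in one transaction.
    BEGIN TRANSACTION;

    INSERT INTO Jobs    (cID, JobName)           VALUES (101, 'Big job');
    INSERT INTO Items   (cID, cJobID, ItemDesc)  VALUES (102, 101, 'First item');
    INSERT INTO Details (cID, cItemID, Note)     VALUES (103, 102, 'A detail');

    COMMIT TRANSACTION;

    -- If the user hits Cancel instead, nothing is ever sent to the server and
    -- keys 101-103 simply go unused.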
[excessive quoting removed by server]