Some sort of “different auto-increment indexes” per primary key value?

MySQL's MyISAM engine can do this. See their manual, in section Using AUTO_INCREMENT: For MyISAM tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix.

This is useful when you want to put data into ordered groups. The docs go on after that paragraph, showing an example (reproduced below). The InnoDB engine in MySQL does not support this feature, which is unfortunate because it's better to use InnoDB in almost all cases.
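For reference, the manual's example looks like this (note ENGINE=MyISAM; the per-group numbering does not happen under InnoDB):

    CREATE TABLE animals (
        grp ENUM('fish','mammal','bird') NOT NULL,
        id MEDIUMINT NOT NULL AUTO_INCREMENT,
        name CHAR(30) NOT NULL,
        PRIMARY KEY (grp, id)
    ) ENGINE=MyISAM;

    INSERT INTO animals (grp, name) VALUES
        ('mammal','dog'), ('mammal','cat'),
        ('bird','penguin'), ('fish','lax'), ('bird','ostrich');

    -- id starts over at 1 within each grp value:
    SELECT * FROM animals ORDER BY grp, id;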

You can't emulate this behavior using triggers (or any SQL statements limited to transaction scope) without locking tables on INSERT. Consider this sequence of actions:

1. Mario starts a transaction and inserts a new row for user 4.
2. Bill starts a transaction and inserts a new row for user 4.
3. Mario's session fires a trigger to compute MAX(id)+1 for user 4. He gets 3.
4. Bill's session fires a trigger to compute MAX(id)+1. He also gets 3.
5. Bill's session finishes his INSERT and commits.
6. Mario's session tries to finish his INSERT, but the row with (userid=4, id=3) now exists, so Mario gets a primary key conflict.

In general, you can't control the order of execution of these steps without some kind of synchronization.
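To make the race concrete, here is a minimal sketch of the kind of trigger that runs into it (MySQL syntax; the comments table and its columns are hypothetical names, not from the question):

    DELIMITER //
    CREATE TRIGGER comments_bi BEFORE INSERT ON comments
    FOR EACH ROW
    BEGIN
      -- Both sessions can read the same MAX here, because neither sees
      -- the other's uncommitted row; they then collide on the
      -- (userid, id) primary key when the second one commits.
      SET NEW.id = (SELECT COALESCE(MAX(id), 0) + 1
                      FROM comments
                     WHERE userid = NEW.userid);
    END//
    DELIMITER ;

Run two overlapping transactions that insert rows for the same userid and one of them gets a duplicate-key error, exactly as in the sequence above.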

The solutions to this are either:

1. Get an exclusive table lock. Before trying an INSERT, lock the table. This prevents concurrent INSERTs from creating a race condition like the one in the example above. It's necessary to lock the whole table: since you're trying to restrict INSERT, there's no specific row to lock (if you were trying to govern access to a given row with UPDATE, you could lock just that row).
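A minimal sketch of the table-lock approach in MySQL, again with hypothetical names (note that LOCK TABLES requires a separate lock for each alias under which the table is referenced, and it implicitly commits any open transaction):

    LOCK TABLES comments WRITE, comments AS c READ;

    -- No other session can insert until UNLOCK TABLES, so MAX()+1 is safe
    INSERT INTO comments (userid, id, body)
    SELECT 4, COALESCE(MAX(c.id), 0) + 1, 'hello'
      FROM comments AS c
     WHERE c.userid = 4;

    UNLOCK TABLES;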

But locking the table causes access to the table to become serial, which limits your throughput.

2. Do it outside transaction scope. Generate the id number in a way that won't be hidden from two concurrent transactions. By the way, this is what AUTO_INCREMENT does.

Two concurrent sessions will each get a unique id value, regardless of their order of execution or order of commit. But tracking the last generated id per userid requires access to the database, or a duplicate data store. For example, a memcached key per userid, which can be incremented atomically.
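If you'd rather keep the counter in the database than in memcached, the same "outside transaction scope" idea can be sketched with a per-user counter table and the documented LAST_INSERT_ID(expr) idiom (hypothetical names; run it with autocommit, not inside the inserting transaction):

    CREATE TABLE user_seq (
      userid  INT PRIMARY KEY,
      last_id INT NOT NULL DEFAULT 0
    );
    -- one row per userid, created when the user is created

    -- The UPDATE is atomic, so two sessions can never read the same value:
    UPDATE user_seq
       SET last_id = LAST_INSERT_ID(last_id + 1)
     WHERE userid = 4;

    SELECT LAST_INSERT_ID();  -- the id to use for the new row

Like AUTO_INCREMENT, an id handed out this way is not returned if the insert later fails or rolls back, so gaps are still possible.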

It's relatively easy to ensure that inserts get unique values. But it's hard to ensure they will get consecutive ordinal values. Also consider:

- What happens if you INSERT in a transaction but then roll back? You've allocated id value 3 in that transaction, and then I allocated value 4, so if you roll back and I commit, now there's a gap.
- What happens if an INSERT fails because of other constraints on the table (e.g. another column is NOT NULL)? You could get gaps this way too.
- If you ever DELETE a row, do you need to renumber all the following rows for the same userid? What does that do to your memcached entries if you use that solution?

SQL Server should allow you to do this. If you can't implement it using a computed column (probably not; there are some restrictions), you can surely implement it in a trigger. MySQL would also allow you to implement this via triggers.

In a comment you ask about efficiency. Unless you are dealing with extreme volumes, storing an 8-byte DATETIME isn't much of an overhead compared to using, for example, a 4-byte INT. It also massively simplifies your inserts, and it copes with records being deleted without creating 'holes' in your sequence.

If you DO need this, be careful with the field names. If you have uid and id in a table, I'd expect id to be unique in that table, and uid to refer to something else. Perhaps, instead, use the field names property_id and amendment_id.
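As a sketch of that DATETIME approach, using property_id from the naming suggestion (all names are hypothetical; DATETIME(6) avoids primary-key collisions if two amendments land in the same second):

    CREATE TABLE amendment (
      property_id INT NOT NULL,
      amended_at  DATETIME(6) NOT NULL,
      details     TEXT,
      PRIMARY KEY (property_id, amended_at)
    );

    -- No MAX()+1 bookkeeping, and deletes leave no 'holes' to repair:
    INSERT INTO amendment (property_id, amended_at, details)
    VALUES (42, NOW(6), 'first amendment');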

In terms of implementation, there are generally two options.

1) A trigger

Implementations vary, but the logic remains the same. As you don't specify an RDBMS (other than NOT MS/Oracle), the general logic is simple:

- Start a transaction (often this is implicitly already started inside triggers)
- Find the MAX(amendment_id) for the property_id being inserted
- Update the newly inserted value with MAX(amendment_id) + 1
- Commit the transaction

Things to be aware of are:

- multiple records being inserted at the same time
- records being inserted with amendment_id already populated
- updates altering existing records

2) A Stored Procedure

If you use a stored procedure to control writes to the table, you gain a lot more control. Implicitly, you know you're only dealing with one record. You simply don't provide a parameter for DEFAULT fields.

You know what updates / deletes can and can't happen. You can implement all the business logic you like without hidden triggers. I personally recommend the Stored Procedure route, but triggers do work.
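For concreteness, a minimal sketch of the stored-procedure route in MySQL syntax. All names here are hypothetical, and locking the parent property row is just one way to serialize concurrent inserts for the same property (an InnoDB row lock):

    DELIMITER //
    CREATE PROCEDURE add_amendment(IN p_property_id INT, IN p_details TEXT)
    BEGIN
      DECLARE dummy INT;
      START TRANSACTION;

      -- Concurrent calls for the same property queue up on this row lock
      -- instead of both reading the same MAX(amendment_id)
      SELECT property_id INTO dummy
        FROM property
       WHERE property_id = p_property_id
       FOR UPDATE;

      INSERT INTO amendment (property_id, amendment_id, details)
      SELECT p_property_id, COALESCE(MAX(amendment_id), 0) + 1, p_details
        FROM amendment
       WHERE property_id = p_property_id;

      COMMIT;
    END//
    DELIMITER ;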

It is important to get your data types right. What you are describing is a multi-part key. So use a multi-part key.

Don't try to encode everything into a magic integer; you will poison the rest of your code. If a record is identified by (entity_id, version_number) then embrace that description and use it directly instead of mangling the meaning of your keys. You will have to write queries which constrain the version number, but that's OK.
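For example, fetching the current version of one entity is just a query that constrains the version number (a hypothetical document table keyed by (entity_id, version_number)):

    SELECT d.*
      FROM document AS d
     WHERE d.entity_id = 42
       AND d.version_number = (SELECT MAX(version_number)
                                 FROM document
                                WHERE entity_id = 42);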

Databases are good at this sort of thing. The version_number could be a timestamp, as a_horse_with_no_name suggests. This is quite a good idea.

There is no meaningful performance disadvantage to using timestamps instead of plain integers. What you gain is meaning, which is more important. You could maintain a "latest version" table which contains, for each entity_id, only the record with the most-recent version_number.

This will be more work for you, so only do it if you really need the performance.
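For illustration, one way to sketch that optional table is to maintain a pointer to the newest version with an upsert on every insert (hypothetical names):

    CREATE TABLE document_latest (
      entity_id      INT PRIMARY KEY,
      version_number INT NOT NULL
    );

    -- After inserting version 7 of entity 42, record it as the latest:
    INSERT INTO document_latest (entity_id, version_number)
    VALUES (42, 7)
    ON DUPLICATE KEY UPDATE version_number = VALUES(version_number);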
