The RDBMS conceptual model is not very good at maintaining temporal versions of data, so it is not just Oracle that is lacking in this regard. I am unclear why you think keeping version information at the record level will be too slow.
Too slow in creating a new version? Or too slow when it comes to data retrieval during regular operations? Here is how you could do it.
Given a table CUSTOMERS with a business key of CUSTOMER_REF, I might normally build it like this (I am using abbreviated syntax rather than best practice, for reasons of space):

    create table customers
      ( id           number not null primary key
      , customer_ref number not null unique
      , name         varchar2(30) not null )
    /

The versioned equivalent would look like this:

    create table customers
      ( id              number not null primary key
      , customer_ref    number not null
      , version_number  number
      , name            varchar2(30) not null
      , constraint whatever unique (customer_ref, version_number) )
    /

This works by keeping VERSION_NUMBER null for the current version, and only populating it at archival time. Any lookup is going to have to include "and version_number is null". This will be a bit of a pain, and you may need to include the column in any additional indexes you build.
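For instance, here is a hypothetical sketch of the lookups and the year-end job (CUSTOMERS_SEQ and the :archive_year bind variable are invented for illustration):

    -- everyday lookups must filter on the null version marker
    select name
    from   customers
    where  customer_ref = :ref
    and    version_number is null;

    -- at archival time, stamp the current rows ...
    update customers
    set    version_number = :archive_year
    where  version_number is null;

    -- ... then re-insert fresh current rows with newly generated IDs
    insert into customers (id, customer_ref, version_number, name)
    select customers_seq.nextval, customer_ref, null, name
    from   customers
    where  version_number = :archive_year;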
Obviously maintaining all versions of the records in the same table will increase the size of your tables, which might have an effect on performance. Oracle's Partitioning option can definitely help here. It also would give you a neat way of creating next year's set of data.
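A minimal sketch of how that could look, assuming the versioned table above (the partition names are invented, and you would add a partition for each archived year):

    create table customers
      ( id              number not null primary key
      , customer_ref    number not null
      , version_number  number
      , name            varchar2(30) not null
      , constraint whatever unique (customer_ref, version_number) )
    partition by list (version_number)
      ( partition p_current values (null)   -- the live data
      , partition p_2009    values (2009)   -- one partition per archived year
      , partition p_2010    values (2010) );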
However, it is a chargeable extra on top of the Enterprise Edition licence, so it is an expensive option. Find out more in the Oracle documentation. The most time-consuming aspect of this will be managing the foreign key relationships in the new version of the table. Presuming you choose to use synthetic primary keys, the archival process will have to generate new IDs and then painstakingly cascade them to the dependent records in the new versions of the referencing tables.
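To make that cascading concrete, here is a hypothetical sketch for a dependent ORDERS table (the table, its columns and ORDERS_SEQ are all invented for illustration). It assumes the old rows have already been stamped and the new current CUSTOMERS rows inserted, as in the earlier sketch:

    -- rebuild the current ORDERS rows so their foreign keys
    -- point at the newly generated CUSTOMERS ids
    insert into orders (id, order_ref, version_number, customer_id)
    select orders_seq.nextval
         , old_ord.order_ref
         , null                -- null marks the current version
         , new_cust.id         -- remapped to the new current customer row
    from   orders    old_ord
    join   customers old_cust on old_cust.id = old_ord.customer_id
    join   customers new_cust on new_cust.customer_ref = old_cust.customer_ref
                             and new_cust.version_number is null
    where  old_ord.version_number = :archive_year;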
Thinking about this makes discrete tables for each version seem very attractive. For ease of use I would keep the current version un-prefixed, so that archiving becomes simply a process of:

    create table customers_n as select * from customers;

You might want to avoid downtime while creating the versioned tables. In that case you could use materialized views to capture the tables' state during the run-up to the archival switchover.
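Something along these lines might work, although (as per the caveat below) it is untested; the view name and refresh interval are assumptions:

    -- capture the current table's state in the run-up to the switchover,
    -- re-snapshotting hourly until the cutover
    create materialized view customers_2010_snapshot
      refresh complete
      start with sysdate
      next sysdate + 1/24
    as
    select * from customers;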
When the clock strikes twelve you can switch off the refresh. (Caveat: this is thinking on the fly; I have never done anything like this, so try before you buy.) One pertinent advantage of multiple tables (and Partitioning) is that you can move the archived records to a READ ONLY tablespace.
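Moving a year's table into its own read-only tablespace might look like this (the tablespace and table names are invented for illustration):

    -- relocate the archived table, then freeze the tablespace;
    -- note any indexes on the moved table will need rebuilding afterwards
    alter table customers_2009 move tablespace archive_2009;
    alter tablespace archive_2009 read only;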
Making them read only not only preserves them from unwanted change, it also means you can exclude them from subsequent backups. Edit: I notice you have commented that the archived data can occasionally be amended. In that case, moving it to READ ONLY tablespaces is not a goer.
>> I am unclear why you think keeping version information at the record level will be too slow. Too slow in creating a new version? Or too slow when it comes to data retrieval during regular operations?
Too slow during regular retrieval operations, but I must say that my perception of it being too slow is based on no facts at all and is entirely a gut feel. We are probing the client to establish whether they have the Partitioning option; failing that, we are going for the table versioning strategy. Thank you for your input, it was greatly appreciated!
– twiga Jan 22 '10 at 10:03.
The only thing I will add to what APC said is regarding your asking for "namespaces". A namespace in Oracle is a schema, whereby you can have the same object name(s) in each schema. Of course this all depends on how your app must access multiple versions, but I would lean towards a different schema for each year before I would use some sort of naming convention to maintain versions of tables in the same schema.
The reason is, eventually you will have nightmares. At least with different schemas, all DDL can be the same, all references to objects will be the same, and tools like ER modellers and query tools will work within the context of that schema. Data models change, so at some point you may need to run some compare tools, and if all your tables have funky names with some sort of version suffix, that won't work well.
Also, a schema can be copied/moved quickly with export or Data Pump using the fromuser/touser or remap_schema options, so you won't need much code, except to clean last year's data out of the new version. I find schemas very useful as "containers", and most apps I host only have schema-level privileges, so I'm guaranteed the app can be easily and quickly moved from instance to instance, or multiple copies of the app can be hosted side by side on the same instance.
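As a hedged illustration (the schema names, directory and dump file are invented), the yearly copy with Data Pump could be as simple as:

    -- export this year's schema ...
    expdp system schemas=app_2010 directory=DATA_PUMP_DIR dumpfile=app_2010.dmp

    -- ... then re-import it under next year's name
    impdp system remap_schema=app_2010:app_2011 directory=DATA_PUMP_DIR dumpfile=app_2010.dmp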
Might the schema change between years? For example, in 2010 you have fifteen columns but in 2011 you add a sixteenth. If so, will the same application work on both the 2010 and 2011 data?
If the schema is static, I'd go for a single table with a YEAR column and use VPD/RLS/FGAC to apply a YEAR = '2010' predicate. I'd only worry about partitioning if performance were a problem.
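A minimal sketch of the VPD route (the function, policy and schema names are all invented, and the predicate is pinned to a literal year purely for illustration; in practice you would read it from an application context):

    -- policy function: returns the predicate VPD appends to every query
    create or replace function year_predicate
      ( p_schema in varchar2
      , p_object in varchar2 )
      return varchar2
    as
    begin
      return 'year = ''2010''';
    end;
    /

    -- attach the policy to the table
    begin
      dbms_rls.add_policy
        ( object_schema   => 'APP'
        , object_name     => 'CUSTOMERS'
        , policy_name     => 'customers_year_policy'
        , function_schema => 'APP'
        , policy_function => 'year_predicate' );
    end;
    /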