"YOU AND THE ART OF ONLINE DATING" is the only product on the market that will take you step-by-step through the process of online dating, provide you with the resources to help ensure success. Get it now!
That's some needs for an OLAP system, interesting and challenging :-) :

- Load original multi-level data into cube and preserve known parents; don't overwrite or display their values as calculated aggregates of children (which may be incomplete).

You can change the way cubes aggregate values in a hierarchy. Doing this in one hierarchy is fine; doing it in multiple hierarchies might start to get complicated. It's worth checking twice whether there is a mathematically unique solution to the problem with multiple 'special' hierarchies (see the sketch below).
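As a rough illustration (not something from your post): in an SSAS-style MDX calculation script you can override the natural rollup of a hierarchy. All names here ([Measures].[Value], [Measures].[Loaded Value], the [Geography] hierarchy and its [Region] level) are made up, and the fallback to the regular rollup relies on SSAS calculation-pass semantics, so treat it as a sketch only.

    /* Sketch: keep a loaded parent figure where one exists,
       otherwise fall back to the normal aggregate of the children. */
    SCOPE ([Measures].[Value], [Geography].[Geography].[Region].MEMBERS);
        THIS = IIF(NOT ISEMPTY([Measures].[Loaded Value]),
                   [Measures].[Loaded Value],   -- value loaded for the parent itself
                   [Measures].[Value]);         -- regular rollup of the children
    END SCOPE;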
- Create/update/delete cells in a cube based on results from complicated queries/joins of other cubes. Sometimes a cube needs to be transformed to use a slightly different dimension definition.

Here you can use writeback (the MDX UPDATE CUBE statement), but I think it's a bit too simple for your needs; see the example below.
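For reference, a minimal writeback statement might look like the sketch below. The cube, dimension and measure names ([Sales], [Product], [Time], [Measures].[Amount]) are invented, and the allocation keyword shown is only one of the options UPDATE CUBE accepts.

    /* Sketch: write 100000 into an aggregate cell; the engine spreads it
       down to the leaf cells, here with equal allocation. */
    UPDATE CUBE [Sales]
    SET ( [Measures].[Amount],
          [Product].[Product].[Bikes],
          [Time].[Time].[CY 2011] ) = 100000
    USE_EQUAL_ALLOCATION;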
Implementation depends on the vendor. Pay attention: creating cells can kill your memory, since for large cubes you can quickly have millions of cells in a subcube. What is the sparsity of your model, i.e. the number of cells with data divided by the number of total cells? Some models have sparsities of 1e-30; there it's easy to explode if you're updating all cells ;-)
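Just to illustrate the ratio with made-up numbers: a cube with three dimensions of 1,000 members each has 10^9 potential cells; if only about 10^6 of them actually hold data, the sparsity is 10^6 / 10^9 = 0.001, and "updating all cells" means touching a billion of them rather than a million.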
- Users require estimates for unknown values. I can create decent estimates, but need to adjust them so they conform to known parents/children across all dimensions and levels (this is much harder than it sounds).
- I am already doing this, but it involves pulling the data out of the RDBMS into a custom executable.

This is looking complicated. The issues here are the complexity of the algorithms, whether a solution can be expressed in the MDX language, and how well they match the OLAP engine (fast enough). You're taking the risk that it explodes, but have a look at the SCOPE function (a small sketch follows).
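A very rough sketch of what a SCOPE-based estimation step could look like; [Measures].[Amount] and [Measures].[Estimate] are assumed names, and the real difficulty - reconciling the estimates with the known parents/children - is not addressed here.

    /* Sketch: fill empty leaf cells with a precomputed estimate,
       leaving known values untouched. */
    SCOPE (LEAVES(), [Measures].[Amount]);
        THIS = IIF(ISEMPTY([Measures].[Amount]),
                   [Measures].[Estimate],
                   [Measures].[Amount]);
    END SCOPE;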
- Data can be large... up to tens of millions of fact table rows. Performance needs to be decent for batch jobs (minutes are ok, hours not so much).

That should not be a real challenge.

To answer your question, I don't think so. We have a similar problem - in the genetics field - and we are going to solve it by adding a dedicated calculation module to our OLAP solution. It's an interesting ongoing project.
Thank you for the pointers. I suspected that OLAP/MDX would not be a good fit, but wanted to get feedback from an expert who has more experience with it. The sparsity of my data tends to be in the 0.10 to 0.3 range, so it is not very sparse.
I think I might load some sample data into an OLAP system and try a few things, but I'm not too hopeful at this point. I would be interested to hear about your custom module for the similar data. – J. Miller Sep 13 at 22:14