Hi,

I recently came across this issue and problems relating to it, and took the following approach, which I would like to share with the wider community and get reviews on:

Problem:

Many times when designing cubes, there are situations in which a cube may not have all the dimensions required for reporting purposes. The reason is that the planning users want to splash data on only a certain limited set of dimensions, but for analytical purposes they need some additional dimensions (which may be attributes of the dimensions already used in the planning cubes).

The same applies in many other cases where lookups or complex computations are involved between the planning and reporting cubes. But the summary of the problem is: keeping the cubes in sync in real time, so that the numbers remain homogeneous across the system and do not get out of balance, as confirmed by the numerous reconciliation reports that are tedious to keep an eye on at all times.

Some argue that Rules can help link multiple cubes. In my experience so far, Rules are great when used for computations on variables within one cube itself (and even good when cube sizes are relatively small, with lookup formulas), but as soon as lookup functions such as “PALO.DATAC” start to appear in the rules, none of my rules were ever performant from a system-usability point of view (my projects involved cubes with large data sets). And this is not just Jedox: even Microsoft confirms the same nature in SSAS and suggests keeping cubes as comprehensive of data as possible.

Microsoft:

Any calculation using the LookupCube function is likely to suffer from poor performance. Instead of using this function, consider redesigning your solution so that all of the data you need is present in one cube.

http://msdn.microsoft.com/en-au/library/ms144720.aspx

Solution:

At work I have realized one should always keep things configurable (my golden rule), as something changes every day: rules, logic, constants. So the best mechanism I could plan to suit the flexibility requirements of keeping the cubes in sync was to leverage Jedox ETL, custom-designing the ETL project to sync the cubes by the required variables, and kicking off the ETL via the web service triggered by the Supervision Server on the cell change event. This way we won’t lose any cube change event, and all the affected cubes also get synced up:

The configuration of the setup is as follows:

1. Configure the “PALO.INI” for monitoring cell change events via Supervision Server

Add the following command:

“use-cube-worker”
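For context, the relevant part of “PALO.INI” would look something like the sketch below. I am assuming here that the worker entries pointing at the Supervision Server are already in place from a default install; the file’s location and those entries vary by Jedox version and operating system:

# palo.ini of the OLAP server (location varies by Jedox version/OS).
# The worker entries pointing at the Supervision Server are normally
# present from the default install; the addition for monitoring
# cell change events is this single line:
use-cube-worker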

2. In the “SVS/sample_script” folder of Jedox, make a copy of “sep.inc.on_cell_change.php”, place it in the “custom_script” folder, and rename it to a name of your choice; in my case it is “sep.inc.on_cell_change_cube_sync.php”.

Amend the code in the file as follows:
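Something along these lines, as a minimal sketch: the database and cube names “Demo” and “PlanningCube” are placeholders for your own, trigger_etl_job() is the hypothetical helper defined in “etl_code.php” in the next step, and the event handler function name and signature should be kept exactly as in the sample script you copied.

<?php
// custom_script/sep.inc.on_cell_change_cube_sync.php
// Sketch only: keep the event handler name/signature from the sample
// sep.inc.on_cell_change.php; the body below is the custom part.

require_once __DIR__ . '/etl_code.php';

function on_cell_change_cube_sync($database, $cube)
{
    // React only to the cube region we want to keep in sync;
    // system cubes and unrelated user cubes are ignored.
    if ($database === 'Demo' && $cube === 'PlanningCube') {  // placeholder names
        // ETL project and job names as used later in this post.
        trigger_etl_job('LearnDummy', 'JustFromEvent');
    }
}
?>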

Additionally, in the same folder, create a file called “etl_code.php” for the custom code relating to the ETL kick-off:
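The kick-off itself goes through the Jedox ETL server’s SOAP web service. As a sketch: the endpoint URL and the addExecution/runExecution calls below are assumptions about the ETL server’s SOAP API, so verify the method names and the response shape against your server’s WSDL.

<?php
// custom_script/etl_code.php
// Hypothetical helper that kicks off a Jedox ETL job via the SOAP service.

define('ETL_WSDL', 'http://localhost:7775/etlserver/services/ETL-Server?wsdl');

function trigger_etl_job($project, $job)
{
    try {
        $client = new SoapClient(ETL_WSDL);
        // addExecution/runExecution and the response field names are
        // assumptions; check them against the WSDL of your ETL version.
        $exec = $client->addExecution(array(
            'project'   => $project,
            'type'      => 'jobs',
            'name'      => $job,
            'variables' => array(),
        ));
        $client->runExecution(array('id' => $exec->id));
    } catch (Exception $e) {
        // Never block the user's cell write on a failed kick-off; just log it.
        error_log('ETL kick-off failed: ' . $e->getMessage());
    }
}
?>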

3. Following the above changes, amend the “SVS” configuration file “sep.inc.php” to point to your new custom script:
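The change itself is just the include, something like the lines below (the paths are illustrative, so keep whatever form your “sep.inc.php” already uses):

<?php
// sep.inc.php — swap the sample include for the custom script.
// require_once 'sample_script/sep.inc.on_cell_change.php';         // before
require_once 'custom_script/sep.inc.on_cell_change_cube_sync.php';  // after
?>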

4. Finally, restart the MOLAP service, fire up a Jedox spreadsheet, and with your inputs flowing into the cube, keep an eye on the Jedox ETL log accumulating, indicating that the process works and your ETL is being called upon any change occurring in the targeted cube region. (In my case the ETL project name was “LearnDummy”, with the job name “[J][JustFromEvent]”.)

[Screenshot: Cube Sync Real Time]

Note: One thing to be very careful about with this process: the mechanism is primarily built for user-based inputs to the cube from spreadsheets, to track them and sync them over. For any ETL-based cube load, ensure that the SVS event is deactivated, for optimal performance in the nightly load processes; if this is missed, it can lead to a huge overburdening of a process this mechanism was never designed to cater for.
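One simple way to enforce this, as a sketch: have the nightly ETL create a flag file before it starts loading and remove it when done, and short-circuit the cell change handler while the flag exists (the path below is a placeholder):

// At the top of the cell change handler body:
$etl_load_flag = 'C:/JedoxData/etl_load_in_progress.flag';  // placeholder path
if (file_exists($etl_load_flag)) {
    return;  // nightly bulk load in progress: skip the real-time sync
}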

The main benefit I experienced from this solution is that it is an asynchronous operation, so users don’t notice any performance impact: the trapped user inputs are only used to kick off the ETL job, which works on its own, effectively queuing jobs under heavy loads and ensuring all operations are accomplished, with logs in place to review and audit.

Hope it helps, Enjoy !!!