Disclaimer

All of the topics discussed in this blog come from my real-life encounters. They serve as references for future research. All of the data, content and information presented in my entries have been altered and edited to protect the confidentiality and privacy of the clients.

Various scenarios of designing RPD and data modeling

Find the easiest and most straightforward way of designing RPD and data models that are dynamic and robust

Countless examples of dashboard and report design cases

Making the dashboard truly interactive

The concept of Business Intelligence

The most important concepts you ever need to understand to implement any successful OBIEE project

Making it easy for beginners and business users

The perfect place for beginners to learn and get educated with Oracle Business Intelligence

Saturday, November 22, 2014

OBIEE -- Writeback update while keeping the update history


Hello

I implemented writeback in OBIEE as mentioned in this post; however, the writeback feature in OBIEE is very basic: it simply overwrites the data and doesn't keep the historic changes. I tried playing around with the writeback.xml file with insert or update tags, but I had no luck with it. Therefore, I decided on a different approach.

The original table I was using for writeback and reporting will be kept as it is. I am going to create another table with the same table structure, then create a trigger that inserts a record into this new table whenever the original table receives a writeback update. Last but not least, I will create a view that unions these 2 tables and use that in the OBIEE RPD. Let me illustrate:

The original table from the older post is FA_TABLE, which is the one used in the RPD.

Create a new table called FA_Table_AUX:

CREATE TABLE FA_TABLE_AUX
(
  LEDGER_NAME      VARCHAR2(30),
  PERIOD_NAME      VARCHAR2(15),
  JE_HEADER_NAME   VARCHAR2(100),
  JE_LINE_NUM      NUMBER,
  .
  .  -- other columns, same structure as FA_TABLE
  .
);

Then create the trigger called FA_WRITEBACK:

CREATE OR REPLACE TRIGGER FA_WRITEBACK
BEFORE UPDATE
   ON FA_TABLE
   FOR EACH ROW
 
BEGIN
INSERT
INTO FA_TABLE_AUX
  (
    LEDGER_NAME,
    PERIOD_NAME,
    JE_HEADER_NAME,
    JE_LINE_NUM,
    CURRENCY_CODE,
    CURRENCY_CONVERSION_RATE,
    CURRENCY_CONVERSION_TYPE,
    CURRENCY_CONVERSION_DATE,
    POSTED_DATE,
    JE_BATCH_ID,
    JE_CATEGORY,
    JE_SOURCE,
    ENTERED_DR,
    ENTERED_CR,
    ACCOUNTED_DR,
    ACCOUNTED_CR,
    EXCHANGE_RATE_USD,
    TRANSACTION_LOCAL_AMOUNT,
    TRANSACTION_USD_AMOUNT,
    JOURNAL_LINE_DESCRIPTION,
    EFFECTIVE_DATE,
    COMPANY,
    COST_CENTER,
    COST_CENTER_NAME,
    ACCOUNT,
    ACCOUNT_NAME,
    PRODUCT,
    PROJECT,
    PROJECT_NAME,
    INTERCOMPANY,
    GEOGRAPHY,
    GEOGRAPHY_NAME,
    FUTURE2,
    INVOICE_NUM,
    INVOICE_AMOUNT,
    INVOICE_LINE_DESCRIPTION,
    VENDOR_NAME,
    INVOICE_DATE,
    INVOICE_GL_DATE,
    INVOICE_PERIOD_NAME,
    DIST_GL_PERIOD_NAME,
    PO_NUM,
    RECORD_TYPE,
    PERIOD_NUM,
    ASSET_NUMBER,
    GLOBAL_TRANSFER,
    REGION,
    INTERIM_CATEGORY,
    CAPEX_CATEGORY,
    CAPEX_YES_NO,
    RECORD_ID,
    REVISION_NUMBER,
    CURRENT_REVISION,
    LAST_UPDATED_BY,
    LAST_UPDATED_DATE,
    RAW_DATA_FLAG,
    FINAL_DATA_FLAG
  )
  VALUES
  (
    :old.LEDGER_NAME,
    :old.PERIOD_NAME,
    :old.JE_HEADER_NAME,
    :old.JE_LINE_NUM,
    :old.CURRENCY_CODE,
    :old.CURRENCY_CONVERSION_RATE,
    :old.CURRENCY_CONVERSION_TYPE,
    :old.CURRENCY_CONVERSION_DATE,
    :old.POSTED_DATE,
    :old.JE_BATCH_ID,
    :old.JE_CATEGORY,
    :old.JE_SOURCE,
    :old.ENTERED_DR,
    :old.ENTERED_CR,
    :old.ACCOUNTED_DR,
    :old.ACCOUNTED_CR,
    :old.EXCHANGE_RATE_USD,
    :old.TRANSACTION_LOCAL_AMOUNT,
    :old.TRANSACTION_USD_AMOUNT,
    :old.JOURNAL_LINE_DESCRIPTION,
    :old.EFFECTIVE_DATE,
    :old.COMPANY,
    :old.COST_CENTER,
    :old.COST_CENTER_NAME,
    :old.ACCOUNT,
    :old.ACCOUNT_NAME,
    :old.PRODUCT,
    :old.PROJECT,
    :old.PROJECT_NAME,
    :old.INTERCOMPANY,
    :old.GEOGRAPHY,
    :old.GEOGRAPHY_NAME,
    :old.FUTURE2,
    :old.INVOICE_NUM,
    :old.INVOICE_AMOUNT,
    :old.INVOICE_LINE_DESCRIPTION,
    :old.VENDOR_NAME,
    :old.INVOICE_DATE,
    :old.INVOICE_GL_DATE,
    :old.INVOICE_PERIOD_NAME,
    :old.DIST_GL_PERIOD_NAME,
    :old.PO_NUM,
    :old.RECORD_TYPE,
    :old.PERIOD_NUM,
    :old.ASSET_NUMBER,
    :old.GLOBAL_TRANSFER,
    :old.REGION,
    :old.INTERIM_CATEGORY,
    :old.CAPEX_CATEGORY,
    :old.CAPEX_YES_NO,
    :old.RECORD_ID,
    :old.REVISION_NUMBER,
    'N',
    'Trigger',
    SYSDATE,
    :old.RAW_DATA_FLAG,
    'N');

:new.REVISION_NUMBER := :old.REVISION_NUMBER+1;
END;

--------------------------

As you can see, the old record from FA_TABLE gets inserted into FA_TABLE_AUX, with the last 5 fields hard-coded. The trigger also increments REVISION_NUMBER by 1 on the updated row to indicate the update version.

Now create a view that unions both of these tables. The history-related columns (such as REVISION_NUMBER, LAST_UPDATED_BY and LAST_UPDATED_DATE) will be included in the report and will reflect the historical changes, because the rows in FA_TABLE_AUX are inserted by the trigger.

create or replace view FA_TABLE_AUD_TRAIL
    (LEDGER_NAME,
    PERIOD_NAME,
    JE_HEADER_NAME,
    JE_LINE_NUM,
    CURRENCY_CODE,
    CURRENCY_CONVERSION_RATE,
    CURRENCY_CONVERSION_TYPE,
    CURRENCY_CONVERSION_DATE,
    POSTED_DATE,
    JE_BATCH_ID,
    JE_CATEGORY,
    JE_SOURCE,
    ENTERED_DR,
    ENTERED_CR,
    ACCOUNTED_DR,
    ACCOUNTED_CR,
    EXCHANGE_RATE_USD,
    TRANSACTION_LOCAL_AMOUNT,
    TRANSACTION_USD_AMOUNT,
    JOURNAL_LINE_DESCRIPTION,
    EFFECTIVE_DATE,
    COMPANY,
    COST_CENTER,
    COST_CENTER_NAME,
    ACCOUNT,
    ACCOUNT_NAME,
    PRODUCT,
    PROJECT,
    PROJECT_NAME,
    INTERCOMPANY,
    GEOGRAPHY,
    GEOGRAPHY_NAME,
    FUTURE2,
    INVOICE_NUM,
    INVOICE_AMOUNT,
    INVOICE_LINE_DESCRIPTION,
    VENDOR_NAME,
    INVOICE_DATE,
    INVOICE_GL_DATE,
    INVOICE_PERIOD_NAME,
    DIST_GL_PERIOD_NAME,
    PO_NUM,
    RECORD_TYPE,
    PERIOD_NUM,
    ASSET_NUMBER,
    GLOBAL_TRANSFER,
    REGION,
    INTERIM_CATEGORY,
    CAPEX_CATEGORY,
    CAPEX_YES_NO,
    RECORD_ID,
    REVISION_NUMBER,
    CURRENT_REVISION,
    LAST_UPDATED_BY,
    LAST_UPDATED_DATE,
    RAW_DATA_FLAG,
    FINAL_DATA_FLAG)
    as
    select
    LEDGER_NAME,
    PERIOD_NAME,
    JE_HEADER_NAME,
    JE_LINE_NUM,
    CURRENCY_CODE,
    CURRENCY_CONVERSION_RATE,
    CURRENCY_CONVERSION_TYPE,
    CURRENCY_CONVERSION_DATE,
    POSTED_DATE,
    JE_BATCH_ID,
    JE_CATEGORY,
    JE_SOURCE,
    ENTERED_DR,
    ENTERED_CR,
    ACCOUNTED_DR,
    ACCOUNTED_CR,
    EXCHANGE_RATE_USD,
    TRANSACTION_LOCAL_AMOUNT,
    TRANSACTION_USD_AMOUNT,
    JOURNAL_LINE_DESCRIPTION,
    EFFECTIVE_DATE,
    COMPANY,
    COST_CENTER,
    COST_CENTER_NAME,
    ACCOUNT,
    ACCOUNT_NAME,
    PRODUCT,
    PROJECT,
    PROJECT_NAME,
    INTERCOMPANY,
    GEOGRAPHY,
    GEOGRAPHY_NAME,
    FUTURE2,
    INVOICE_NUM,
    INVOICE_AMOUNT,
    INVOICE_LINE_DESCRIPTION,
    VENDOR_NAME,
    INVOICE_DATE,
    INVOICE_GL_DATE,
    INVOICE_PERIOD_NAME,
    DIST_GL_PERIOD_NAME,
    PO_NUM,
    RECORD_TYPE,
    PERIOD_NUM,
    ASSET_NUMBER,
    GLOBAL_TRANSFER,
    REGION,
    INTERIM_CATEGORY,
    CAPEX_CATEGORY,
    CAPEX_YES_NO,
    RECORD_ID,
    REVISION_NUMBER,
    CURRENT_REVISION,
    LAST_UPDATED_BY,
    LAST_UPDATED_DATE,
    RAW_DATA_FLAG,
    FINAL_DATA_FLAG
from FA_TABLE
union
select
LEDGER_NAME,
    PERIOD_NAME,
    JE_HEADER_NAME,
    JE_LINE_NUM,
    CURRENCY_CODE,
    CURRENCY_CONVERSION_RATE,
    CURRENCY_CONVERSION_TYPE,
    CURRENCY_CONVERSION_DATE,
    POSTED_DATE,
    JE_BATCH_ID,
    JE_CATEGORY,
    JE_SOURCE,
    ENTERED_DR,
    ENTERED_CR,
    ACCOUNTED_DR,
    ACCOUNTED_CR,
    EXCHANGE_RATE_USD,
    TRANSACTION_LOCAL_AMOUNT,
    TRANSACTION_USD_AMOUNT,
    JOURNAL_LINE_DESCRIPTION,
    EFFECTIVE_DATE,
    COMPANY,
    COST_CENTER,
    COST_CENTER_NAME,
    ACCOUNT,
    ACCOUNT_NAME,
    PRODUCT,
    PROJECT,
    PROJECT_NAME,
    INTERCOMPANY,
    GEOGRAPHY,
    GEOGRAPHY_NAME,
    FUTURE2,
    INVOICE_NUM,
    INVOICE_AMOUNT,
    INVOICE_LINE_DESCRIPTION,
    VENDOR_NAME,
    INVOICE_DATE,
    INVOICE_GL_DATE,
    INVOICE_PERIOD_NAME,
    DIST_GL_PERIOD_NAME,
    PO_NUM,
    RECORD_TYPE,
    PERIOD_NUM,
    ASSET_NUMBER,
    GLOBAL_TRANSFER,
    REGION,
    INTERIM_CATEGORY,
    CAPEX_CATEGORY,
    CAPEX_YES_NO,
    RECORD_ID,
    REVISION_NUMBER,
    CURRENT_REVISION,
    LAST_UPDATED_BY,
    LAST_UPDATED_DATE,
    RAW_DATA_FLAG,
    FINAL_DATA_FLAG
    from
    FA_TABLE_AUX

Then, in the RPD, simply replace the physical-layer table with FA_TABLE_AUD_TRAIL. There is no need to change writeback.xml, because it still updates FA_TABLE as implemented in the older post. When it updates FA_TABLE with user input, the trigger is invoked, so the row as it was before the user's update is inserted into FA_TABLE_AUX while FA_TABLE holds the updated data. Since we are reporting against the view that unions both tables, the OBIEE dashboard will show both the new and the old records.

If you are in doubt, you can always query FA_TABLE_AUX directly from the DB: if it is empty, no insertion has occurred; if it returns a row, a user has triggered an update.
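The archive-then-update flow the trigger implements can be sketched in plain Python (hypothetical in-memory stand-ins for the tables, just to illustrate the idea; only 3 of the 5 hard-coded fields are shown):

```python
from datetime import datetime

# Hypothetical in-memory stand-ins for FA_TABLE and FA_TABLE_AUX (illustration only).
fa_table = {1: {"JE_CATEGORY": "Accrual", "REVISION_NUMBER": 1}}
fa_table_aux = []

def writeback_update(record_id, changes):
    """Mimic the BEFORE UPDATE trigger: archive the :old row, then apply the update."""
    old = dict(fa_table[record_id])          # snapshot of the :old values
    old["CURRENT_REVISION"] = "N"            # hard-coded, as in the trigger
    old["LAST_UPDATED_BY"] = "Trigger"
    old["LAST_UPDATED_DATE"] = datetime.now()
    fa_table_aux.append(old)                 # the history row
    fa_table[record_id].update(changes)      # the :new values
    fa_table[record_id]["REVISION_NUMBER"] = old["REVISION_NUMBER"] + 1

writeback_update(1, {"JE_CATEGORY": "Adjustment"})

# The audit-trail view is simply the union of current and historical rows.
audit_trail = list(fa_table.values()) + fa_table_aux
```

After one update, the "view" holds 2 rows: the current row at revision 2 and the archived row at revision 1, which is exactly what the report shows.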

Thank you

Until next time..

Tuesday, August 19, 2014

OBIEE: Dynamically calculate quarterly difference without using time series function

Sometimes you have a custom report built from a custom schema; it could be a materialized view or a custom view where the numbers are at a certain granularity with no hierarchy defined. Say you have a quarterly report, but now the requirement is to display the numbers not only for the quarter the user picks, but also for the previous quarter, and on top of that the difference between the quarters as well as the monthly measures under these 2 quarters. Like the report below, for instance:





As you can see from the report, the user just picks 2014 Q3 and the report comes back with both Q3 and Q2 data, with the month and quarter columns pivoted. The pivot view allows us to get the total per quarter, which, as you can see, is done. Now the challenge is: how are we going to calculate the difference between these 2 quarters?


So, let me go back a little bit. Before I got to this report, I created a new column in the RPD that concatenates Year and Quarter based on what this table has. In other words, I take '2014' and '3' and concatenate them with 'Q' in the middle to get '2014 Q3'. The TRIM function is used here to get rid of any unwanted trailing spaces that result from converting a number into a string.


The next step is to create a simple report that takes the user's input quarter value and returns both that quarter and the previous quarter, using the RPD column I just created above.


In the above report, 'Current Year Quarter' is nothing but the RPD column, and 'Previous Year QTR' has the following formula that returns the previous quarter:

Since the time dimension has a Date (DATE_VALUE) column, doing TIMESTAMPADD at month level with -3 automatically gives you the month that is 3 months prior to the month of that DATE_VALUE. The formula also needs to address the fact that the prior quarter of Q1 is Q4 of the previous year, so a CASE statement is embedded to take care of this situation:

TRIM(TRAILING ' ' FROM CAST(case when "- XXFB_TIME_DIMENSION"."QUARTER" = 1 then "- XXFB_TIME_DIMENSION"."YEAR" - 1 else "- XXFB_TIME_DIMENSION"."YEAR" end AS CHAR))||' '||'Q'||Trim(trailing ' ' from cast(QUARTER_OF_YEAR(timestampadd(sql_tsi_month,-3,"- XXFB_TIME_DIMENSION"."DATE_VALUE")) as char))
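To make the CASE logic easy to verify, here is the same previous-quarter arithmetic as a small Python sketch (illustration only, not OBIEE code):

```python
def previous_quarter(year: int, quarter: int) -> str:
    """Return the label of the quarter before (year, quarter), e.g. '2014 Q2'.

    Mirrors the CASE statement above: the quarter before Q1 is Q4 of the prior year.
    """
    if quarter == 1:
        return f"{year - 1} Q4"
    return f"{year} Q{quarter - 1}"

print(previous_quarter(2014, 3))  # 2014 Q2
print(previous_quarter(2014, 1))  # 2013 Q4
```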

After concatenating these components together, it finally gives me what I want:


This report will accept the presentation variable from the dashboard quarter prompt, so I call the variable 'Quarter'. Save this report as 'Previous Quarter'.

Now, go back to the main report that I have been creating, as you can see below:



Here in the filter section, the Year&Quarter column (which is the new RPD column) gets its value from 'Current Year Quarter' OR from 'Previous Year QTR' of the 'Previous Quarter' report created above. This means that when the user selects 2014 Q3 from the dashboard, the filter returns both 2014 Q3 and 2014 Q2, and both are passed to the Year&Quarter column of this report. Then, by pivoting the quarter and month columns, it displays the view that you saw.

Now the second task is to come up with a way to calculate the difference between these 2 quarters. Unfortunately, this has to be done with a different kind of calculation, because a time series function would make it very complicated and difficult to validate the logic.

So the first thing is to find a way to calculate the measure for each quarter and its previous quarter. This can be done using the FILTER function, making the Year&Quarter column equal to the 'Current Year Quarter' or 'Previous Year QTR' value from the 'Previous Quarter' report.


As it turns out, OBIEE does allow the FILTER function with a nested query, so this is what the expression looks like:

FILTER(ifnull(FACT."Accounted Net USD",0) + ifnull(FACT."Accounted Net USD",0) USING ("- XXFB_TIME_DIMENSION"."Year&Quarter" IN (SELECT saw_0 FROM (SELECT "- XXFB_TIME_DIMENSION"."Year&Quarter" saw_0, TRIM(TRAILING ' ' FROM CAST(case when "- XXFB_TIME_DIMENSION"."QUARTER" = 1 then "- XXFB_TIME_DIMENSION"."YEAR" - 1 else "- XXFB_TIME_DIMENSION"."YEAR" end AS CHAR))||' '||'Q'||Trim(trailing ' ' from cast(QUARTER_OF_YEAR(timestampadd(sql_tsi_month,-3,"- XXFB_TIME_DIMENSION"."DATE_VALUE")) as char)) saw_1 FROM "FB - Custom Views for Oracle Financials" WHERE "- XXFB_TIME_DIMENSION"."Year&Quarter" IN ('@{Quarter}')) nqw_1 ))) -  FILTER(ifnull(FACT."Accounted Net USD",0) + ifnull(FACT."Accounted Net USD",0) USING ("- XXFB_TIME_DIMENSION"."Year&Quarter" IN (SELECT saw_1 FROM (SELECT "- XXFB_TIME_DIMENSION"."Year&Quarter" saw_0, TRIM(TRAILING ' ' FROM CAST(case when "- XXFB_TIME_DIMENSION"."QUARTER" = 1 then "- XXFB_TIME_DIMENSION"."YEAR" - 1 else "- XXFB_TIME_DIMENSION"."YEAR" end AS CHAR))||' '||'Q'||Trim(trailing ' ' from cast(QUARTER_OF_YEAR(timestampadd(sql_tsi_month,-3,"- XXFB_TIME_DIMENSION"."DATE_VALUE")) as char)) saw_1 FROM "FB - Custom Views for Oracle Financials" WHERE "- XXFB_TIME_DIMENSION"."Year&Quarter" IN ('@{Quarter}')) nqw_1 )))

Now the breakdown of this expression is below:

(The part below gets you the current quarter measure)
FILTER(ifnull(FACT."Accounted Net USD",0) + ifnull(FACT."Accounted Net USD",0) USING
("- XXFB_TIME_DIMENSION"."Year&Quarter" IN
----- (The first part above is the measure we are dealing with)


(SELECT saw_0 FROM (SELECT "- XXFB_TIME_DIMENSION"."Year&Quarter" saw_0, TRIM(TRAILING ' ' FROM CAST(case when "- XXFB_TIME_DIMENSION"."QUARTER" = 1 then "- XXFB_TIME_DIMENSION"."YEAR" - 1 else "- XXFB_TIME_DIMENSION"."YEAR" end AS CHAR))||' '||'Q'||Trim(trailing ' ' from cast(QUARTER_OF_YEAR(timestampadd(sql_tsi_month,-3,"- XXFB_TIME_DIMENSION"."DATE_VALUE")) as char)) saw_1 FROM "FB - Custom Views for Oracle Financials" WHERE "- XXFB_TIME_DIMENSION"."Year&Quarter" IN ('@{Quarter}')) nqw_1 )))  
---- This is the nested query from the 'Previous Quarter' report we created earlier; the presentation variable needs to be entered here. saw_0 represents 'Current Year Quarter' from that report.

 -
(The part below gets you the previous quarter measure)
FILTER(ifnull(FACT."Accounted Net USD",0) + ifnull(FACT."Accounted Net USD",0)

USING ("- XXFB_TIME_DIMENSION"."Year&Quarter" IN

(SELECT saw_1 FROM (SELECT "- XXFB_TIME_DIMENSION"."Year&Quarter" saw_0, TRIM(TRAILING ' ' FROM CAST(case when "- XXFB_TIME_DIMENSION"."QUARTER" = 1 then "- XXFB_TIME_DIMENSION"."YEAR" - 1 else "- XXFB_TIME_DIMENSION"."YEAR" end AS CHAR))||' '||'Q'||Trim(trailing ' ' from cast(QUARTER_OF_YEAR(timestampadd(sql_tsi_month,-3,"- XXFB_TIME_DIMENSION"."DATE_VALUE")) as char)) saw_1 FROM "FB - Custom Views for Oracle Financials" WHERE "- XXFB_TIME_DIMENSION"."Year&Quarter" IN ('@{Quarter}')) nqw_1 )))
--- saw_1 represents 'Previous Year QTR' (the previous year and quarter) from that report.

Now, test the new report:



It's working nicely..

Thanks

Friday, August 1, 2014

OBIEE vs Tableau Report

Having used both technologies quite extensively, I can't help but make some comparisons between the 2 tools.

For those not familiar with Tableau, it is a pretty cool technology that also provides a very good look and feel for reports. It is quite popular in a lot of companies.

Here are some of the cool things that tableau has to offer:
1.
Tableau allows developers to bypass the metadata design and go directly into the report-building phase. In other words, there is no equivalent of the Admin Tool in Tableau. You develop your SQL or PL/SQL, then create reports and views based on that SQL. This makes development very flexible because there is no need for dimensional modeling. You can join as many tables, create as many self-joins, and make as many snowflaked or normalized dimensions as you want; as long as your query gives you the right data, you can put it in Tableau and it becomes the report you want. The same thing in OBIEE would probably have to be done through custom views or materialized views, which still need to be migrated across the 3 layers in the Admin Tool.

2.
Tableau makes map views much easier to build. All you need is spatial data in your table: latitude, longitude and city. Once these fields are in your query, they show up in Tableau with icons letting you know they are available. You can then put them in the map view, and the map will be generated with every city in your table on it. This process is much simpler than doing the same in OBIEE.

3.
The latest versions of Tableau provide a calendar for any date filter (I am surprised it took this long for such an important feature to arrive). The date filter has some pretty cool options: it allows users not only to filter on dates, but also offers choices such as 'last 2 weeks', 'last 2 months' or 'between each Monday', all just a click away. This can also be done in OBIEE, but it requires the use of variables, which takes some engineering.

4.
Tableau reports allow users to pick their own filters while viewing the report. In OBIEE, users normally have to contact the report developers if they want to change filters and prompts.

5. The common features that OBIEE has, such as agents and navigation among reports and views, can also be found in Tableau.

6. Tableau is less complicated in its server architecture compared with OBIEE and WebLogic.

On the other hand, there are some downsides of Tableau as well compared with OBIEE:

1. One of the main things about Tableau is that it doesn't seem to have a query engine. This means it will try to query the entire DB first before applying any filters. This causes a lot of overhead, and often, at an enterprise level of reporting, the Tableau server is less stable than OBIEE because of the size of the data it is dealing with.

2. For enterprise reporting, there may be reports that have to go through auditing to meet certain compliance requirements based on formatting and other things. OBIEE reporting has been tested for this purpose, while Tableau still has to be trialed and tested. What's the use of having fancy reports that don't pass the audit?

3. Another disadvantage of Tableau, which can be major for some companies, is that it doesn't allow writeback. We all know that OBIEE has a writeback feature, which lets report users update records on the dashboard without having to go to the DB. This can be important in a lot of financial reporting, where users typically want to update certain records during their month-end close. This feature is absent in Tableau.

Overall, Tableau is more of a point solution than an enterprise-wide solution like OBIEE. But Tableau is far less costly than OBIEE and much more flexible in its design, which also means the level of SQL programming required for Tableau is much higher than in OBIEE.

I hope this helps

 

Friday, July 25, 2014

How to report on 2 datasets across different databases in OBIEE

Hello

Today I want to talk about this feature in OBIEE that allows you to do cross-db reporting, which I think is pretty cool.

Say you have 2 different result sets; they could be 2 tables or 2 queries. Now I want an OBIEE report that joins these 2 result sets based on some common dimensions, so how do we do it?

First of all, these 2 objects reside in different physical folders with different connection pools, which makes physical joins complicated.

In my example below, I have created 2 opaque views that contain the 2 SQL queries, but they are located in 2 different physical folders:


Table 1 is GL_Balance, which is from one DB, and table 2 is A_GL_Balance, which is from another DB.

Both tables are actually SQL queries that already contain dimensions and measures from multiple DB tables.


Now, in order to create a report that queries both tables across the DBs, I will leave these 2 tables as they are in the physical layer, but in the logical layer I will create a conformed dimension that separates the dimensional attributes from the measures. So in the BMM layer there will be 3 tables: F1, F2 and D. F1 will only have BEGIN_BAL and CLOSE_BAL from Table 1, F2 will only have BALANCE_ACCT_AMT and BALANCE_LOC_AMT from Table 2, and D will have all of the attributes mapped to both physical tables. You can put these in any BMM folder.





Now, in the dimension table, I will map the logical columns to both physical tables as sources, which means this table will have 2 logical table sources (LTSs) and each column will have 2 sources mapped to it:


You can see that this table has 2 LTSs and each column (Company, for instance) is mapped to 2 sources.

The rest is a pretty standard process: I create a logical dimension hierarchy out of this table, set the hierarchy level on each fact, and move everything into the presentation layer.


The reason I am putting all of the attributes in this table is that I want OBIEE to join these 2 datasets based on all of them, since they are all common to one another. You can identify which columns are common between the 2 in your case and choose the right list.

What OBIEE does at run time is create 2 queries that execute in each DB to get the results separately; it then merges them together into one report.
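Conceptually, this run-time behavior is like the sketch below: each source returns its own result set, and the server merges them on the common dimension key (a plain Python illustration of the idea, not OBIEE internals; the sample values are made up):

```python
# Hypothetical results of the two physical queries, keyed by the common COMPANY dimension.
db1_results = {"304": {"BEGIN_BAL": 1200.0}, "305": {"BEGIN_BAL": 800.0}}
db2_results = {"304": {"BALANCE_ACCT_AMT": 950.0}}

def stitch(left, right):
    """Merge two result sets on their shared key, keeping keys from either side."""
    merged = {}
    for key in set(left) | set(right):
        row = {}
        row.update(left.get(key, {}))   # measures from source 1, if present
        row.update(right.get(key, {}))  # measures from source 2, if present
        merged[key] = row
    return merged

report = stitch(db1_results, db2_results)
# report["304"] now carries measures from both databases in one row.
```

In this sketch, a key present in only one source simply lacks the other side's measures, which is the outer-join-like behavior you see in the merged report.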

So I am creating a simple report by picking Company and 1 measure from each table:


The resulting data:



Now the query log shows the following:

Below is the SQL that was generated for each table.

[2014-07-25T19:19:03.000-07:00] [OracleBIServerComponent] [TRACE:2] [USER-18] [] [ecid: 1710f6cbdf3264b0:6b7606d0:1476ec8e12a:-8000-00000000000061e2,0:1:9:5] [tid: eb631940] [requestid: 54700010] [sessionid: 54700000] [username: ] -------------------- Sending query to database named Omega Custom (id: <<1390911>>), connection pool named Omega Custom CP, logical request hash , physical request hash 8d6cb66e: [[

select sum(T2060476.BEGIN_BAL) as c1,
     T2060476.SEGMENT1 as c2
from
     (select 
gcc.concatenated_segments, 
gcc.segment1,
gcc.segment2,
gcc.segment3,
gcc.segment4,
gcc.segment5,
gcc.segment6,
gcc.segment7,
gcc.segment8,
gcc.enabled_flag ,
gcc.end_date_active, 
gb.period_name,
gb.currency_code,
gb.translated_flag,
fv.flex_value_set_id
--,gcc.segment3||'-'||gcc.segment1||'-'||gcc.segment6 combo,
,(NVL(gb.begin_balance_dr, 0) - NVL(gb.begin_balance_cr, 0)) begin_bal
, (NVL(gb.period_net_dr, 0) - NVL(gb.period_net_cr, 0)) period_net
, ((NVL(gb.begin_balance_dr, 0) - NVL(gb.begin_balance_cr, 0)) + (NVL(gb.period_net_dr, 0) - NVL(gb.period_net_cr, 0))) close_bal
/*, (NVL(gb.begin_balance_dr, 0) - NVL(gb.begin_balance_cr, 0))) begin_bal
, SUM((NVL(gb.period_net_dr, 0) - NVL(gb.period_net_cr, 0))) period_net
, sum(((NVL(gb.begin_balance_dr, 0) - NVL(gb.begin_balance_cr, 0)) + (NVL(gb.period_net_dr, 0) - NVL(gb.period_net_cr, 0)))) close_bal */
from 
gl_balances gb, 
gl_code_combinations gcc, 
fnd_flex_values fv
where gcc.code_combination_id = gb.code_combination_id
and gb.actual_flag='A'
and gb.currency_code !='STAT'
and gcc.segment2 = fv.flex_value
--and gcc.segment3 like '213530'
--and gb.translated_flag IS NULL
--and gcc.segment1 like '304'
--group by gcc.segment3||'-'||gcc.segment1||'-'||gcc.segment6
order by 4
) T2060476
group by T2060476.SEGMENT1
order by c2

]]
[2014-07-25T19:19:03.000-07:00] [OracleBIServerComponent] [TRACE:2] [USER-18] [] [ecid: 1710f6cbdf3264b0:6b7606d0:1476ec8e12a:-8000-00000000000061e2,0:1:9:5] [tid: eb631940] [requestid: 54700010] [sessionid: 54700000] [username: ] -------------------- Sending query to database named Oracle Data Warehouse (id: <<1390933>>), connection pool named Oracle Data Warehouse Connection Pool, logical request hash 5f0261e4, physical request hash 597bac0c: [[

select sum(T2060495.BALANCE_ACCT_AMT) as c1,
     T2060495.COMPANY as c2
from
     (Select
C.GL_ACCOUNT_NAME,
C.ACCOUNT_SEG1_CODE COMPANY,
C.ACCOUNT_SEG2_CODE COST_CENTER,
C.ACCOUNT_SEG3_CODE ACCOUNT,
C.ACCOUNT_SEG4_CODE PRODUCT,
C.ACCOUNT_SEG5_CODE PROJECT,
C.ACCOUNT_SEG6_CODE INTERCOMPANY,
C.ACCOUNT_SEG7_CODE GEOGRAPHY,
C.ACCOUNT_SEG1_CODE FUTURE2,
A.MCAL_PERIOD_NAME,
B.BALANCE_ACCT_AMT,
B.BALANCE_LOC_AMT
from
W_MCAL_PERIOD_D A,
W_GL_BALANCE_F B,
W_GL_ACCOUNT_D C
WHERE A.ROW_WID = B.BALANCE_DT_WID
AND B.GL_ACCOUNT_WID = C.ROW_WID
) T2060495
group by T2060495.COMPANY
order by c2


So what OBIEE does is get the result from each query individually, then at the server level merge the 2 datasets into one report.

The caveat here is that if each query brings back a huge data set, this technique should be used with caution, as the merging may kill performance.

Thanks

Until next time

Wednesday, June 25, 2014

How to update a report and automatically store updated by and updated time through writeback in OBIEE

Today I am going to talk about writeback in OBIEE. As you know, writeback is a feature in OBIEE that allows users to update the reports they see without having to deal with the ETL process or reload the table.

There can be many reasons why users would want to update the records they see in a report. There could be a known error in their accounting system that is producing the wrong journal entries; the long-term solution is to correct the data in the source system, but for auditing purposes at quarter-end they just manually update the record through the OBIEE report so that the numbers balance out.

Another use case is to have users manually update their username and password in OBIEE on their own. I know that in big enterprises user credentials are typically synchronized with their other credentials, so such a requirement is unlikely there, but in some cases you may have external users who are not part of the company and want their own password that can be changed. They can do so by using writeback against the user table to update the username and password fields.

Anyway, the setup of writeback is pretty standard, so I am not going to go into too much detail. Google around and you will find many articles that show you how. Basically, it comes down to the following steps:

1. Update the instanceconfig.xml file to enable the writeback feature.

2. Enable writeback on the columns you want in the RPD (BMM layer).

3. Give the application roles permission to write back, done in the RPD (check permissions in the Identity Manager and on the presentation column).

4. Create the writeback template, save it as writeback.xml and put it in the following directory on the OBIEE server machine: /opt/app/OBIEE/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/analyticsRes/customMessages/

5. Enable writeback in the presentation service by checking the setting in the column properties.

6. Define the writeback template name in the table properties (Table view setting only).

So here I am only going to talk about step 4: how to create the writeback template so that it not only updates the columns, but also automatically records the username and timestamp whenever a user executes a writeback.

In my case I have 7 updatable columns, and I have created 3 columns to help with the writeback: LAST_UPDATED_BY, LAST_UPDATED_DATE and RECORD_ID. LAST_UPDATED_BY initially stores a default system-generated value, and LAST_UPDATED_DATE stores the system timestamp of the day the data was loaded into the table through ETL. RECORD_ID is like a row ID: a unique identifier for each row of the table.

With that requirement, I first create my report in OBIEE.

Notice I added a new column called 'User'. The expression of this column is VALUEOF(NQ_SESSION.USER), which gives you the user ID of whoever is using the report (and, of course, whoever is likely going to update a record next). If you have successfully configured user authentication in the RPD, you will have this session variable; it could be named differently in your environment.

In the Advanced tab, you will find the XML code for this report; there you can pick up the column ID for each of the columns you are interested in. I will show you an easy way to identify the column IDs:

So here is a part of the entire XML code:



Now, if I want to find the column ID of Record ID, I do Ctrl+F and type 'Record ID' in the search box. You will find the column name highlighted by the search; the column ID right above this column name is the one you want:

Note: RECORD_ID is the physical column name and 'Record ID' is the displayed column name that I made:

Using this method, I found all of the column IDs of the columns I am interested in.

Now let's start writing the writeback template:


Template (the XML tags were lost in this page's formatting; the structure below is the standard writeback template layout with placeholder names where this post used its own):

<?xml version="1.0" encoding="utf-8" ?>
<WebMessageTables xmlns:sawm="com.siebel.analytics.web/message/v1">
<WebMessageTable lang="en-us" system="WriteBack" table="Messages">
<WebMessage name="YourTemplateName">      ------ The name of your template; you name it however you want
<XML>
<writeBack connectionPool="Your Connection Pool">      ---- This is the name of the connection pool of your physical table
<insert>SELECT NULL from DUAL</insert>
<update>UPDATE fa_table SET JE_CATEGORY = '@{c4532bb3c8bf3fa07}', COST_CENTER = '@{c2133b77cfebd0a7c}', PRODUCT = '@{cfd5c9816d8ca43fa}',
PROJECT = '@{c06bafe4ddddd622e}', GEOGRAPHY = '@{c55fdf58fcd9b827e}', FUTURE2 = '@{c44356274c12d521f}', YES_NO = '@{c1e7217420772545b}', LAST_UPDATED_BY = '@{cf5dd00aa4eeabdf3}',
LAST_UPDATED_DATE = sysdate
WHERE RECORD_ID = @{c70c57d49eba128a6}</update>
</writeBack>
</XML>
</WebMessage>
</WebMessageTable>
</WebMessageTables>
Let's talk about this part:

UPDATE fa_table SET JE_CATEGORY = '@{c4532bb3c8bf3fa07}', COST_CENTER = '@{c2133b77cfebd0a7c}', PRODUCT = '@{cfd5c9816d8ca43fa}' ,
PROJECT = '@{c06bafe4ddddd622e}', GEOGRAPHY = '@{c55fdf58fcd9b827e}', FUTURE2 = '@{c44356274c12d521f}', YES_NO = '@{c1e7217420772545b}', LAST_UPDATED_BY = '@{cf5dd00aa4eeabdf3}',
LAST_UPDATED_DATE = sysdate
WHERE RECORD_ID= @{c70c57d49eba128a6}

fa_table is the physical table name. The following 7 columns are updatable by users:

JE_CATEGORY, COST_CENTER, PRODUCT, PROJECT, GEOGRAPHY, FUTURE2, YES_NO

So each of these columns is set to its own column ID.

Now, the column LAST_UPDATED_BY = '@{cf5dd00aa4eeabdf3}'. This column ID is not the ID of LAST_UPDATED_BY; it is actually the column ID of the USER column. The column ID of LAST_UPDATED_BY is never used.

The idea is that LAST_UPDATED_BY will be updated with the value of the USER column, which is the username of the session.

Now, LAST_UPDATED_DATE = sysdate. Whenever a user updates the record, it is happening at the present moment, so this column is updated with whatever date the update happens.

In the WHERE clause I am using RECORD_ID as the criterion to ensure that when a user makes an update, it is applied to exactly one row, not a whole bunch of rows. Without RECORD_ID in the WHERE clause, when a user updates a cost center, every row carrying the same cost center name would get updated.
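To make the substitution concrete, here is a sketch of what the statement might look like after OBIEE replaces each column ID with the value the user typed into the row. All of the values below are made-up examples, not data from this project:

```sql
-- Hypothetical resolved statement for one edited row (example values only)
UPDATE fa_table
SET JE_CATEGORY       = 'Accrual',
    COST_CENTER       = 'CC100',
    PRODUCT           = 'Widgets',
    PROJECT           = 'P-2014-01',
    GEOGRAPHY         = 'North America',
    FUTURE2           = 'N/A',
    YES_NO            = 'YES',
    LAST_UPDATED_BY   = 'jsmith',   -- supplied by the session's USER column
    LAST_UPDATED_DATE = sysdate
WHERE RECORD_ID = 1072;             -- limits the update to exactly one row
```

This is why the RECORD_ID predicate matters: every other column in the SET list can legitimately repeat across rows, but the record ID pins the update to a single row.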

After putting this writeback.xml in the right place and restarting OBIEE, let's go back to the report. Here it is important to uncheck the writeback option on the columns Last Updated By and Last Updated On:



Define the template name in the table view properties:

Now let's run the report and make some updates:



Updating JE Category with random stuff:


After saving, the record is updated. Moreover, the LAST_UPDATED_BY column is automatically populated with my username.


Now, it is important to sort the report by the record ID column. Every time you update a text field with a different value, the rows re-sort, because text fields always sort alphabetically. To avoid having your updated record 'disappear' on you after you save, just sort your report by record ID.

Thanks and until next time.
 

Tuesday, June 3, 2014

About Merging 2 rpd in OBIEE -- The only article that matters

Hello All

About merging rpds. You all know this merge button under 'File' in the Admin tool, right?

This launches the merge repository wizard, which is easy.

The confusing part is how to determine the Original Master RPD, the Modified RPD and the Current RPD.

This is what I do to remember:

Let's say there is a scenario that I have RPD1 and RPD2. I want to merge these 2 rpds so that there will be a new RPD that has both RPD1 and RPD2 objects. 

In this case (or any case), you need to create a new rpd with nothing in it, basically a dummy rpd; call it RPD(Dummy)3.

Now open either RPD1 or RPD2 (which one doesn't matter) and, once it's open, use the merge button from that rpd to start merging. The rpd that is currently open (the one you click the 'merge' button from) automatically becomes your 'Current' RPD.

Pick RPD(Dummy)3 as your Original Master RPD and the other rpd (RPD2 or RPD1) as your Modified RPD, then merge.

To make it simple, when merging RPD1 and RPD2 into RPD3, this is the identification in the merging wizard:

Dummy RPD (empty RPD made by you ) = Original Master RPD

RPD1 or RPD2 = Current RPD

RPD2 or RPD1 = Modified RPD

Save Modified RPD AS: Give the resultant RPD a name and location as you wish =  RPD4.

RPD4 becomes the finishing product of this merge and you can leave Dummy rpd there for your next merge.

In the merging strategy step, if you are ever in doubt whether to select 'Current' or 'Modified', just check which rpd the flagged objects came from.
In my case, all of them are from the 'Current Repository', so in the decision list I pick 'Current' for all.


Let the application take care of the rest after you hit 'Finish' when you are finished selecting decisions.

Note:
The merging of rpds can take a while. If one of the RPDs is 20 MB, the whole merging process can take up to 3 hours.

Try to have 2 rpds that are completely unrelated to each other when merging. A lot of the time, merging is unreliable, especially in this version; you never know what has gone missing after the merge. If RPD1 and RPD2 have absolutely nothing in common, then merging the two minimizes the risk of conflicts. If both rpds have a lot in common, I don't recommend merging; it's easier to manually reconcile the differences than to merge.


Thanks 

Until next time

Tuesday, March 4, 2014

Evaluating BI tools: Comparing OBIEE with open source reporting tools

Having worked on various BI projects, the time often comes to decide what tool or technology to use to implement the requirements. Often the company has a great vision, knowing where it wants to go and what roadmap it wants to follow, but it doesn't know what technology will best achieve those goals within budget. Many OBIEE projects run into difficulties and issues because, from the beginning, OBIEE simply wasn't the ideal tool for what the clients wanted to do. That's not to say OBIEE can't do certain things (actually, there are certain things it can't); it's that the complexity, resources and time involved don't always match the expectations the clients formed during POCs. It is therefore very important to understand that a POC and a real-life implementation are two different things. I am just going to be as honest as I can be, for the benefit of all.

Any tool can look great during a POC, where you don't have real data or real scalability issues. Every problem can be solved 'in theory' if the simple POC works. Look at TV commercials and advertisements; these are the work of the best marketing geniuses in the world. They know how to make things look great on TV, in ads and in demos, which are just a variation of the 'proof of concept' for general consumers. What happens after you blow your wallet on some magical weight-loss pills, some total-package home care system, or even a weekend self-help workshop that promises to change your life? You realize that things aren't as good as they seemed at first sight. The bottom line: a POC is just a simple sales demo, not constrained by any specific environment or by specific wants and needs. It is then the buyer's responsibility to apply it to their specific requirements and make it work. Anyways, enough of that.

Now let's look at some of the things OBIEE isn't very efficient at doing compared to some open source Java reporting tools:

1. The implementation of localization and internationalization of OBIEE reports isn't very straightforward. There are blogs and documents out there that explain how to do such things in OBIEE, but go through them and try to implement it yourself; you will see that it takes a lot of work. Integrating it with other home-grown Java applications while keeping the OBIEE local languages and time zones in sync with that application is another matter. I am not saying it can't work, but it is complicated and takes time and resources (money). The same thing can be done fairly easily in tools like Jasper.

2. New reporting requirements from the business users can't always be fulfilled right away. Often the requirements are identified on the OLTP side, but the data has to go through an ETL process and an OBIEE configuration process before it can be made available for reporting. Several places can cause issues, and there are many interdependencies that can cause data or performance problems. Most of all, this is not something you can get back to the user on in 2 hours with new reports up and running; it requires a development cycle. In a reporting tool like Jasper, by contrast, you can develop these requirements directly by writing PL/SQL packages against the OLTP system; it can be quick if you have a good, healthy framework.

3. OBIEE has a high cost. We are not just talking about the cost of the OBIEE application itself (which is already pretty high). You have to decide what ETL tool to use with OBIEE; if you happen to pick Informatica, that's another large expense. On top of that, you are going to need more resources: data architects, ETL developers, OBIEE developers, a DBA and maybe more. That can be an expensive team right there. You are also not going to have one machine take care of all these projects; you are going to need more machines and more hardware and software resources. That's another expense. Now, how do you feel after spending all this money but still having to wait for development cycles (with a high probability of delays and issues) to get jobs 1 and 2 done, while you see other projects that use less than 1/50 of the resources yours does get requirements done more agilely, more dynamically and in much less time?

4. OBIEE is less flexible and less customizable (graphs, look and feel, and other programmable integration features). Requirements come in many shapes, and not all OBIEE features are built to satisfy them. That's why there are always new enhancement requests being filed against the product, but it's up to future releases to decide which ERs get fulfilled. If you are currently implementing your project, don't count on future releases. Open source reporting tools are much more flexible; they are good for people who know how to program and like to tinker. It's like iPhone vs Android (oops, I said it again).

5. OBIEE works strictly on relational data models. For OBIEE to work, your BMM layer needs its logical tables set up and arranged as a standard star schema. This can be a big restriction if your requirements are complex and don't translate into a standard star schema. Often you will have to take the data fields from the source and design pre-computed aggregate tables to make OBIEE's job easier; in other words, the complexity moves to the ETL layer. Then again, why spend so much money to have OBIEE do the easy stuff when the pre-computations are spoon-fed to it? Open source tools, and some other BI tools like Business Objects, are much more flexible about which data models reporting can work against directly.

The above are some of the most common areas where OBIEE projects face difficult challenges, specifically when expectations are not met. So before you get into OBIEE, you have to know what you are getting into, so you can best avoid the difficult situations later on.

In my opinion, the value of OBIEE lies in the solution delivered to you. Take BI Apps, for instance; it is the best example of how OBIEE can be advantageous when the entire solution framework is already provided after years of research into various business scenarios. The package, which includes DB schemas, an ETL framework, and reports and dashboards already designed and engineered in the most optimized way, can save clients tons of money and time in getting the information they want. Imagine what it would take if you had to do your own research, requirement gathering, design and engineering of a BI solution. Well, unless you are a genius :)

Another advantage of OBIEE is that it is easier to get into. Since the programming aspect is minimal, a lot of the education is concept-based. Therefore, OBIEE can be favored by those who want to rely on GUI-driven tools to get the job done instead of doing their own coding on a black screen. Jasper is more for programmers; it requires a good understanding and design of the DB model to help optimize the report queries, or it can run very slowly. For OBIEE, performance can be much faster with standard star-schema designs (the price to pay is the time spent on the ETL development cycle).

There is no perfect tool; it's up to the individuals, from company visionaries to architects, from engineers to project managers, to make things work. But by knowing the limitations of each tool, I hope we can drive future improvement of these technologies and, more importantly, help the decision makers make the right decision from the beginning.

Thanks

Tuesday, February 18, 2014

Interview for OBIEE position and how to succeed

Hello All

Today's topic is everybody's favorite: interviews. Interviewing is a pain in the ass for a lot of people. The more senior you are (or think you are), the more you dislike interviews. Doing the work and talking about your work are two completely different things. At work you may run into challenges, but at least you don't have to come up with a working solution in the next 30 minutes if you can't. In interviews, however, you are expected to provide proper responses in 5 minutes. Therefore, if you want to excel at interviews, you should work on your emotional intelligence, not so much on memorizing things you can google.

Some of the most overlooked questions are:

Tell me a little bit about yourself, in particular, your experience related to OBIEE, BI Apps, Informatica, project management etc.

What is the most challenging problem you have encountered in your past experience with OBIEE, BI Apps, Informatica, project management etc, and how you solved that problem.

What is your approach to _____?

See, these are open-ended questions; depending on how you respond, the interviewer can throw anything at you. First and foremost, it is important to prepare a well-structured response to all of these questions before any interview. You don't have to memorize your speech word for word, but you do need to follow your structure. Over time, as your interview experience grows, you can refine your speech.

Often, interviewers aren't much better than you either. However, they are in the position to make the decision; therefore, you need to learn to interact with them properly.

Sometimes interviewers make offhand comments about your responses that can affect you emotionally. You were describing how great your previous project was, or how complex your design was, and here he comes with 'That's not a big deal', 'Aren't you over-complicating things?' or even 'How is that going to work?'. Suddenly you feel disrespected, looked down on or misunderstood. This is very common among experienced job candidates. The more experience you have, the more likely you are to resist others' criticism, especially in the engineering fields, where people rarely work on people skills. You have been through so many rounds of technical discussions at work, you are so used to arguing with your coworkers and always winning, that you can't handle it when people don't show appreciation for how great you think you are during interviews. Your subconscious treats the interaction as just another technical discussion, or even a 'political fight' in the office. You start raising your voice, and so does the interviewer. In the end, you may win the argument but lose the interview; or the exchange leaves such a mark on your psyche that all you do afterward is mumble 'this guy is an asshole' to yourself, totally forgetting that the interview isn't over.

Over the years of interacting with engineers and managers, I have realized something about technical discussions. If one occurs during an interview, it's easy: just answer the question with whatever you know, and don't worry about trying to convince the other person. If the interviewer says something you don't like, take it easy. Reply with 'OK, that's an interesting point', 'OK, I see your point' or 'What you said also makes sense; we can definitely discuss this in more detail offline.' For God's sake, just get this question over with and move to the next one. If the interviewer wants to show off his knowledge, even better: let him do all the talking; he might enjoy it so much that he hires you based on that. Of course, if you are applying for management, sales or customer service jobs, that's different. For engineering jobs, the less you talk, the better.

In today's world, competition in IT is getting fierce. Especially in niche markets like OBIEE, Hadoop, Salesforce and Tibco, where there are plenty of candidates, interviewers rarely ask questions whose answers you can google anymore. They want to find out whether you really have the experience or not. Therefore, a lot of questions are open-ended, opinion-based questions. While it is important to prepare your speeches, it is just as important to realize that there is nothing wrong with you if you don't get the job afterwards.

Everything happens for a reason. How you perform during an interview is always an accurate reflection of where you stand in the job market and of your relationship with that particular interviewer. There is no surprise, whatever the outcome. Take it easy. Try to learn something from the interview if there is anything to be learned; remember the questions you couldn't answer and research them afterwards. Do what you can to improve your chances for the next interview. If you fail, don't take it too seriously. I have worked in this field long enough to tell you that you can be rejected for all kinds of reasons, some of which might even sound ridiculous to you. Just chill out; don't dwell on an interview after it's done.

Hopefully, after working in the field long enough, you will have built up your personal network well enough that opportunities get referred to you rather than you having to apply and compete for them. Building contacts is one of the most important things to do in life, and it's how you can minimize your need to interview in the future.

Until then:




Saturday, February 1, 2014

Create Pixel Perfect Report using BI Publisher that includes charts, tables and dynamic SQL query

Hello

This time I want to share with you how to create a pixel perfect BI Publisher report out of an existing dashboard that includes different type of reports.

When it comes to reporting, the BI dashboard is highly interactive; it allows navigation, ad-hoc queries and other dynamic features. That's the value of OBIEE. However, sometimes when you send the results out as PDF or other formats, the output is not so pixel perfect. In some settings people don't mind, but in a lot of inter-company communications the printed version has to be pixel perfect. I mean, have you ever seen an invoice letter from your bank whose components are out of proportion? This is why BI Publisher has been integrated into the OBIEE platform.

Take a look at the following dashboard:




It has multiple charts, tables and texts arranged in a well-articulated way. There are even reports created using direct SQL queries:


So reports like the following are regular reports displayed in charts or tables:

Some reports like the following are created using direct SQL with presentation variables passed from the dashboard prompt:

So let's see how we can create the same version in BI Publisher, one that is pixel perfect when you print it out.

We will start by creating a new data model. In the data model window, go to the diagram and select Oracle BI Analysis for the reports that were created the regular way. I then locate all of the saved BI analyses I want to include and select them one by one.


For the ones created using a SQL query with presentation variables, you need to change the data set type to SQL Query and paste the SQL query into the window.
As shown in the screenshot, the presentation variable syntax needs to change from @{PV} to :PV.
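As a quick sketch of the conversion (the table, column and variable names below are hypothetical, not the ones from this project):

```sql
-- OBIEE direct-SQL analysis: the dashboard prompt arrives as a
-- presentation variable
SELECT period_name, SUM(amount)
FROM   fact_sales
WHERE  manufacturer = '@{MFG_NAME}'
GROUP  BY period_name;

-- BI Publisher data set: the same filter becomes a bind parameter,
-- which you then define as a data model parameter of the same name
SELECT period_name, SUM(amount)
FROM   fact_sales
WHERE  manufacturer = :MFG_NAME
GROUP  BY period_name;
```

Note that the quotes around the presentation variable are dropped as well; :MFG_NAME is a bind variable, not a string substitution.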


After this is done, close the box and it will automatically ask you to create parameters. Just create the parameters with the same presentation variable names and give them default values:




After some time, all of the data sets (each one corresponding to a BI dashboard analysis) are created as shown:

Now save the data model:



Now create a new BI Publisher report using the data model that was just created as the data source. As you go through each window, it will ask you for layout templates or to add tables; just click next and go through them without doing anything:


Then save it:


Now open the saved report. It is going to run and will most likely not return anything. But it will let you edit the report, so do just that:


It will open up a blank template; this is where you are free to design your own report arrangement:

On the left side, all of the data sets that you created earlier for this data model are available to be placed on the blank sheet.


The insert menu at the top of the sheet allows you to insert all kinds of objects, from gauges to tables to charts. I used layout grids to break the blank sheet into smaller subdivisions and then inserted each of the reports into those subdivisions:


The properties pane on the left side of the sheet lets you adjust each object's size, color and arrangement to make it look good:


After it's done (this is not technical work; it is more tedious artwork), save the report.


Now we can add this report to the original BI Dashboard:



The BIP report can be launched here:



Now, when the BIP report is launched, it has to receive the prompt values passed from the BI dashboard:


The result works:



Thanks

Until next time

Friday, January 24, 2014

Case Study: Emailing OBIEE dashboards with ad-hoc agent that dynamically passes dashboard prompt values based on user request -- Part 3

After rpd work is done, let's go to the dashboard.

Well, before that, I am manually creating these users with '_1' and so on in WebLogic and making sure they all have their OBIEE accounts set up with the proper emails.


Now go to the dashboard; we will copy the existing dashboard and reports and create another version for the emailing agents.

First, modify the date and month prompt and the manufacturer prompt.

Here in the default section, we are adding session variables as filters:

The idea is to use the TIMESTAMPADD function to derive new year and month values from the integer session variables 'startmonth' and 'endmonth', then combine 'yyyy' with 'mm' to form a numeric 'yyyymm'. This value is then filtered against the year-month column of the date dimension.
If the dashboard is at the date level, then by the same logic we use TIMESTAMPADD to compute a new date from the integers in the session variables 'startmonth' and 'endmonth'.
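As an illustrative sketch of the month-level version in OBIEE logical SQL (the column and session variable names below are assumptions, not the actual ones from this project), the filter expression could look something like this:

```sql
-- Build numeric YYYYMM boundaries from integer month offsets held in
-- session variables, then filter the date dimension's year-month column
"Dim Date"."Year Month Num" BETWEEN
  YEAR(TIMESTAMPADD(SQL_TSI_MONTH, VALUEOF(NQ_SESSION.STARTMONTH), CURRENT_DATE)) * 100
    + MONTH(TIMESTAMPADD(SQL_TSI_MONTH, VALUEOF(NQ_SESSION.STARTMONTH), CURRENT_DATE))
  AND
  YEAR(TIMESTAMPADD(SQL_TSI_MONTH, VALUEOF(NQ_SESSION.ENDMONTH), CURRENT_DATE)) * 100
    + MONTH(TIMESTAMPADD(SQL_TSI_MONTH, VALUEOF(NQ_SESSION.ENDMONTH), CURRENT_DATE))
```

Multiplying the year by 100 and adding the month is what turns 'yyyy' plus 'mm' into a single comparable number like 201307.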


For the other prompts, I realized they are variable prompts with no computation like the one the Date and Month prompt performs, so we will use a variable expression here to call the session variable:
Syntax is below:
@{biserver.variables['nq_session.pay_type']}


After getting done with the prompts, we will make the necessary changes to the reports and dashboard structure.


Now take a look at the dashboard. Initially the prompt values will mostly be empty because the dashboard is run by the admin user, who doesn't have anything defined in the security table.



Now create an agent and set it as 'run as recipient':


Recipients of the agent would be all those test users I created:


Save and manually trigger this agent and see what we get:


Now I got a few emails since all of these users have the same email account, which is mine:


Now I have to open each one to make sure the contents and values are different. The first 3 emails are all for user 'charles', i.e. recipients charles_1, charles_2 and charles_3:

OK, one for manufacturer 'HMA': customer pay, dealer cost $300, between July 2013 and Sept 2013.


For manufacturer 'AUDIUSA' and different other prompt values:


For Manufacturer 'PORSCHEUSA' and different prompt values:

This report is for user 'dchadsey', and all of the parameters defined for this user in the table are passed along.


Now let's test the error handling. I will manually update dchadsey's MFG_HIERARCHY value to 'BMW'. This is a wrong value for this user because, in the security table, this user is not scoped to that manufacturer.

Now rerun the agent; I should get an email with no data. It would be a breach of confidentiality if this user did get data for BMW, which he is not scoped to.

Success!

By changing the value to BMW, the query defined in the session initialization block returns nothing, so the Manufacturer prompt will be empty. Of course, I then played some tricks by adding conditions on each dashboard section to hide or display it based on whether the Manufacturer prompt is blank. There are many ways to do that, which I am going to skip for now.

So, the proof of concept works for a few test users. However, this is still not enough to implement at a larger scale. As I see it, there are a few remaining pieces of the puzzle:

1. Users will probably document their requests in a spreadsheet and upload it somewhere. For each request to deliver a different dashboard, there will be a separate table like the one I designed. The user-defined parameters will then be loaded into our DB table, and appending '_' plus a sequence number to each user name will need to be done. A process for managing the flow between user inputs and DB tables will need to be in place, and a lot of decisions will need to be made. Users could even request a UI for updating their requests, which would need to be engineered to communicate with the DB table systematically.

2. Manually creating all these user_1s in OBIEE will be a nightmare. I know OBIEE has a feature called the 'SA System subject area' which, once implemented, automatically populates users into OBIEE and associates them with emails. However, the requirements for this to work are pretty rigid. If I could use the SA System subject area to automatically populate the 'user_name' column (all of the _1 values) rather than the actual 'users' column, that would be a huge win.

3. Different dashboards are built differently; some are more complex than others. Each specific dashboard to be delivered will have to be modified differently to pass the session variables correctly. There is no one-size-fits-all solution with this approach.

Having said all that, I think I have taken this solution as far as I can. It's not a perfect solution; in fact, it may be a terrible solution in someone's eyes, but it does work to some extent. Maybe Oracle has plans for this type of requirement in its future products.

For now, I am open to seeing other solutions.

Until then 