Friday 16 August 2013

OBIEE Architecture



When a client runs a report, the request first goes to the Presentation Server, is then routed to the BI Server, and is finally routed to the underlying database or data source.
Client -> Presentation Server -> BI Server -> Data source
The response travels back along the same route: the data is fetched from the data source, passes through the BI Server to the Presentation Server, and is then returned to the client.
Client <- Presentation Server <- BI Server <- Data Source
The above flows give a very basic idea of how data is fetched and displayed in a report in OBIEE.

Now, let us understand it more thoroughly by dividing the above diagram into segments:
1) Client and User Interface
2) Presentation Server & Presentation Catalog
3) BI Server & Admin Tool
4) Data Source

Client & User Interface:
This level is the UI of OBIEE that is accessible to clients and users. The OBIEE UI has several components such as OBIEE Answers, Interactive Dashboards etc.
§  Oracle BI Answers is a powerful, ad hoc query and analysis tool that works against a logical view of information from multiple data sources in a pure Web environment.
§  Oracle BI Interactive Dashboards are interactive Web pages that display personalized, role-based information to guide users to precise and effective decisions.
§  BI Delivers is an alerting engine which gives users flexibility to schedule their reports and get them delivered to their handheld devices or interactive dashboards or any other delivery profile and helps in making quick business decisions.
In simpler terms, this is a web application that users access to prepare their reports/dashboards and do ad-hoc reporting to cater to business needs.

Presentation Server & Presentation Catalog:
The BI Presentation Server is basically a web server on which the OBIEE web application runs. It processes client requests and routes them to the BI Server, and vice versa. It can be deployed on either IIS or OC4J. It makes use of the Presentation Catalog, which contains the presentation aspects of the application.
The Presentation Catalog stores the application dashboards, reports, folders and filters. It also contains information regarding the permissions of dashboards and reports created by users. It is created when the Presentation Server starts and can be administered using a tool called Catalog Manager.
In other words, the Presentation Server and the Presentation Catalog are together responsible for providing the clients with a web server on which the web application runs, and for administering the look and feel of the user interface.

BI SERVER AND ADMIN TOOL
BI Server is a highly scalable query and analysis server. It is the heart of the entire architecture. It efficiently integrates data from multiple relational, unstructured and OLAP sources, both Oracle and non-Oracle.
It interacts with the Presentation Server over TCP/IP and takes the reporting request from the Presentation Server. The BI Server then processes the request, forms logical and physical queries (when a database is the data source), and sends the physical query to the underlying data source, where the data is processed. The BI Server interacts with the underlying database using ODBC. Hence, the entire processing of the request is done by the BI Server.
In the above paragraph I mentioned that the BI Server creates a logical and a physical query. But how does the BI Server generate this query? How does it know which joins need to be used?
The BI Server makes use of the BI Repository for converting the user request into logical and physical queries. The BI Repository is the metadata from which the server gets the information about the joins and filters to be used in the query. It is the backbone of the architecture.
Now, this is the place where all the modelling is done, and this is where the role of OBIEE developers comes into the picture. The BI Repository is created using the Administration Tool. The repository contains three layers: Physical, BMM and Presentation Layer.

Physical Layer: Contains the tables imported from the underlying DB, with appropriate joins between them.
BMM Layer: This is the Business Model and Mapping layer, and all the business logic is implemented in this layer, e.g. calculation of percentage sales, revenue etc.
Presentation Layer: As the name specifies, this layer is used for presenting the required tables and columns to the users. The columns pulled into this layer are directly visible to the users.

Where do the BI Server and Admin Tool come into the picture?
Now, when the users log into BI Answers, i.e. the user interface, they see all the columns that are pulled into the Presentation Layer of the repository. They choose the desired columns from there and click the Results button to view the report. The request is then sent to the BI Server through the Presentation Server, and the BI Server uses the BI Repository to formulate a query for the requested report based on the joins and tables specified in the repository. This query is sent to the underlying DB, and the results are fetched.
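As a rough illustration (all table and column names here are hypothetical, not taken from any particular repository), the request built in Answers arrives at the BI Server as logical SQL against the Presentation layer, and the BI Server rewrites it into physical SQL against the source tables using the joins defined in the repository:

-- Logical SQL received from the Presentation Server (hypothetical subject area)
SELECT "Time"."Year", "Sales"."Revenue" FROM "Sales Subject Area";

-- Physical SQL the BI Server might generate for a relational source
SELECT T.CAL_YEAR, SUM(F.REVENUE)
FROM   W_DAY_D T, W_SALES_F F
WHERE  T.ROW_ID = F.DAY_ID
GROUP BY T.CAL_YEAR;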

Data Source:
This is a rather simple one. As we know by now, OBIEE is a reporting tool and works on data from underlying databases, so here the data sources are the underlying databases with which the OBIEE server interacts. OBIEE is a very smart tool and has the capability of reporting on multiple databases, and also on multiple types of data sources like XML, Oracle, SQL Server etc.
Now, in the previous posts you have seen what is an OBIEE Repository and what is the Physical Layer and what are connection pools.
Now, when we design the OBIEE metadata or repository for reporting, we import the tables on which we need to report into the Physical layer from the respective DBs. We then apply appropriate joins between the tables and pull them further into the BMM layer and then into the Presentation layer for reporting.
The question that comes up here is: "How does the BI Server interact with the underlying DBs to show the reports?"
The answer to this question lies in the connection pools. If we open a connection pool we can see that we need to select the Call Interface, give the name of the DSN, and give a username and password. These settings help us connect to the database.
Call Interface – There is a drop-down from which we can select the appropriate call interface. Some examples are ODBC, OCI etc. Both ODBC and OCI can be used for Oracle. The main difference between them is that with ODBC we need to create a DSN on the system where the server is installed, whereas OCI is Oracle's native interface and can be used directly without creating a DSN on the system.
DSN- This is the name of the DSN which OBIEE uses to connect to the underlying DB.
Username- The user with which OBIEE connects to the DB. Generally the user used for reporting should have only read privileges on the DB.
Password- Password of the user with which OBIEE connects to the DB.
Now, when a user runs a report in Answers, the OBIEE server accesses the DB through the connection pool using the specified call interface and username, and returns the data.
The next question is: "How does the BI Server take care of a report built using columns and tables from multiple DBs?"

As mentioned earlier, the BI Server is very intelligent and is built in such a way that it can process requests spanning multiple DBs. When a user generates a report involving multiple DBs, the request goes to the navigator component in the BI Server, which determines the underlying DBs with which OBIEE needs to interact. The BI Server then generates separate queries for the DBs and fires them against the respective DBs. It fetches the data from the underlying DBs, combines the result sets in its own memory, and displays the result in the report.
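For example (a sketch with hypothetical table names, one per database), if the customer dimension lives in an Oracle DB and the sales facts live in a SQL Server DB, the BI Server might issue two physical queries like the following and then stitch the two result sets together in its own memory:

select CUSTOMER_ID, CUSTOMER_NAME from CUSTOMERS;                          -- sent to the Oracle DB
select CUSTOMER_ID, sum(SALES_AMOUNT) from SALES group by CUSTOMER_ID;     -- sent to the SQL Server DB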

Basic Terminology...

What Is Siebel Analytics?


It is a reporting tool which provides insight, processing and pre-built solutions that allow users to seamlessly access critical business information and acquire the business intelligence required to achieve optimal results.
Purpose of Siebel Analytics
• To provide data and tools to users to answer questions that are important for business
• To cater to large & changing data volumes
• To take care of differing requirements
• To replace existing tools that are not aligned to business needs of an organization
• To leverage and extend common industry practices — Data Warehousing & Dimensional Modeling
• Other reporting tools are often difficult to master and also static or fixed and do not allow for interactivity
Siebel Analytics Components
• Intelligence Dashboards
• Siebel Answers
• Siebel Delivers
• Siebel Analytics Server and Siebel Analytics Web
• Siebel Relationship Management Warehouse
• Siebel Analytics Administration Tool
Intelligence Dashboards
A page in an Analytics application that is used to display the results (corporate and external information) of Siebel Analytics requests and other kinds of content. Based on your permissions, you can view pre-configured dashboards, and create or modify dashboards
Siebel Answers
Siebel Answers provides answers to business questions. Allows exploring and interacting with information, and presenting and visualizing information using charts, pivot tables, and reports
Results can be saved, organized, and shared in the Siebel Analytics Web Catalog and can be enhanced through charting, result layout, calculation, and drilldown features
Siebel Delivers
Interface used to create alerts based on analytics results. Detect specific results and immediately notify the appropriate person or group through Web, wireless, mobile, and voice communications channels.
Siebel Analytics Server and Siebel Analytics Web
Is the core server behind Siebel Analytics. Provides the power behind Siebel Intelligence Dashboards for access and analysis of structured data distributed across an organization.
A single request can query multiple data sources, providing information access to members of the enterprise and, in Web-based applications, to suppliers, customers, prospects, or any authorized user with Web access.
Siebel Relationship Management Warehouse
Is a predefined data source to support analysis of Siebel application data
Is in star schema format
Is included with Siebel Analytics Applications (not available with standalone Analytics)
Siebel Analytics Administration Tool
To create and edit repositories and manage Jobs, Sessions, Cache, Clusters, Security, Joins, Variables, Projects — by Administrator
Is a graphical representation of the three parts (Physical layer, Business Model and Mapping layer, Presentation layer) of a repository.
Siebel Analytics Architecture : Comprised of five components:
• Clients
• Siebel Analytics Web Server
• Siebel Analytics Server
• Siebel Analytics Scheduler
• Data Sources

Siebel Analytics Web Server
• Provides the processing to visualize the information for client consumption
• Receives data from Siebel Analytics Server and provides it to the client that requested it
• Uses the web catalog file (.webcat) to store aspects of the application.
Siebel Analytics Web Catalog (.webcat)
• Stores the application dashboards, request definitions, pages and filters
• Contains information regarding permissions and accessibility of the dashboards by groups and users
• Is created when the web server starts
• Is specified in the registry of the machine running the web server
• Is administered using Siebel Analytics Catalog Manager
Siebel Analytics Server
Provides efficient processing to intelligently access the physical data sources and structures the information
Uses metadata to direct processing
Generates dynamic SQL to query data in the data sources
Connects natively or via ODBC to the RDBMS
Structures results to satisfy requests — Merge results & calculate measures
• Provides the data to the Siebel Analytics Web Server
• Repository file (.rpd)
• Cache
• NQSConfig.ini
• DBFeatures.ini
• Log files

Repository File (rpd)
• Contains metadata that represents the analytical model
• Is created using the Siebel Analytics Administration Tool

Cache
• Contains results of queries
• Is used to eliminate redundant queries to database and Speeds up results processing
• Query caching is optional and can be disabled
NQSConfig.ini
• Is a configuration file used by the Siebel Analytics Server at startup
• Specifies values that control processing, such as:
• Defining the repository (.rpd) to load
• Enabling or disabling caching of results
• Setting server performance parameters
DBFeatures.ini
• Is a configuration file used by the Siebel Analytics Server
• Specifies values that control SQL generation
• Defines the features supported by each database
Log Files
• NQSServer.log records Siebel Analytics Server messages
• NQQuery.log records information about query requests
Siebel Analytics Scheduler
• Manages and executes jobs requesting data analytics
• Schedules reports to be delivered to users at specified times
• In Windows, the scheduler runs as a service
Physical Layer
• Is the metadata that describes the source of the analytical data
• Defines what the data is, how the data relates and how to access the data
• Is used by the Siebel Analytics Server to generate SQL to access the business data to provide answers to business questions
• Is created using the Analytics Administration Tool. Can be imported from the source information.
• Is typically the first layer built in the repository.
Connection Pool
• Specifies the ODBC or native data source name
• Defines how the Siebel Analytics Server connects to the data source
• Allows multiple users to share a pool of database connections
• May create multiple connection pools to improve performance for groups of users
Creating Dimension Levels and Keys:
• A dimension contains two or more levels.
• The recommended sequence for creating levels is to create a grand total level and then create child levels, working down to the lowest level.
• Grand total level. A special level representing the grand total for a dimension. Each dimension can have just one Grand Total level. A grand total level does not contain dimensional attributes and does not have a level key.
• Level. All levels, except the Grand Total level, need to have at least one column.
• Hierarchy. In each business model, in the logical levels, you need to establish the hierarchy (parent-child levels). One model might be set up so that weeks roll up into a year.
• Level keys. Each level (except the topmost level defined as a Grand Total level) needs to have one or more attributes that compose a level key. The level key defines the unique elements in each level. The dimension table logical key has to be associated with the lowest level of a dimension and has to be the level key for that level.
Associating a Logical Column and Its Table with a Dimension Level
After you create all levels within a dimension, you need to drag and drop one or more columns from the dimension table to each level except the Grand Total level. The first time you drag a column to a dimension it associates the logical table to the dimension. It also associates the logical column with that level of the dimension. To change the level to be associated with that logical column, you can drag a column from one level to another.
After you associate a logical column with a dimension level, the tables in which these columns exist appear in the Tables tab of the Dimensions dialog box.
To verify the tables that are associated with a dimension
1. In the Business Model and Mapping layer, double-click a dimension.
2. In the Dimensions dialog box, click the Tables tab.
The tables list contains tables that you associated with that dimension. This list of tables includes only one logical dimension table and one or more logical fact tables (if you created level-based measures).
3. Click OK or Cancel to close the Dimensions dialog box.
Defining a Non Aggregated Measure of a Fact Table
Two methods to do this
Method 1:
• Check whether any dimension logical table is available to which these fields can be added
• If so, add the fact table as a source to the existing dimension logical table
Method 2:
• If there is no suitable dimension logical table
• Create a new logical table
• Make the fact table its source
• Create a dimension hierarchy for the new logical table
• In the Business Model diagram, create a complex join between the dimension logical table and the fact logical table
• Also create a complex join to any other fact logical table mapped to the same physical table
Defining an Aggregated Measure of a Dimension Table:
1) Create new Fact Logical Table
2) Use the dimension table as the source table for the new fact logical table
3) Include the logical columns that should be measures of the fact table.

If aggregated calculations are performed directly from a dimension logical table field, an error similar to the following will appear:

A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: ). Please fix the metadata consistency warnings.

To resolve this type of error, put the measure indicated by the error message in a fact table object.

OBIEE?
Oracle Business Intelligence Enterprise Edition
Note :
Jobs, Cache, and Sessions can be viewed only in Online mode.

Project Contains ?
Presentation Catalogs ,
Logical Fact Tables (only logical fact tables can be seen; no dimension tables or hierarchies),
Variables,
Groups,
Users ,
Initialization Blocks

Where do we use Projects ?
We use projects in a Multi-user Development Environment.

Where are Primary Keys and Foreign Keys available ?
PKs and FKs are available in physical and logical tables.

Can we create a Physical column for an alias table ?
No, we can't create one.
We can create it only for a physical table.

Use of Alias tables ?
To avoid circular joins
For situations where we have to use the same table more than once

Base Line Column ?
A column that has no aggregation rule defined in the Aggregation tab of the logical column.
Baseline columns map to non-aggregated data at the level of granularity of the logical table source.

Case 1: If there is no GROUP BY clause specified, the level of aggregation is grouped by all of the non-aggregate columns in the SELECT list.
select year, product, sum(revenue) from time, products, facts
Here the grouping happens on year and product.
Case 2: If there is a GROUP BY clause specified, the level of aggregation is based on the columns specified in the GROUP BY clause.
select year, product, sum(revenue) from time, products, facts group by year, product
Offline Mode ?
The RPD is not loaded into the SAS server.
If the RPD is already open elsewhere, it opens in read-only mode.
Only one Admin Tool session can edit it at a time; saved changes are reflected in the UI only after the SAS server is restarted.

Online Mode ?
The RPD is loaded into the SAS server.
After Check In and Save, clicking 'Reload Server Metadata' displays the saved changes without restarting the SAS server.

Load all Objects on Startup ?
This option is available only in Online mode.

It loads all objects immediately, rather than as selected. The initial connect time may increase slightly, but opening items in the tree and checking out items will be faster.

Data Source Name (DSN) in the Online 'Open Repository' dialog box ?
AnalyticsWeb is the DSN. This option is available only in Online mode.

From there we need to select the DSN. We can see all User and System DSNs that are configured using the SAS (Oracle BI) ODBC driver. This DSN has to be configured in SAW (10.195.120.48), providing a value for the option
'Which SAS Server do we need to connect to' ---- SAS (10.195.120.49)
To configure Siebel Analytics Web installed on a different machine from the Siebel
Analytics Server
1 On the machine where Siebel Analytics Web is installed, modify the odbc.ini file (located in the folder $INSTALLDIR/setup) as follows:
[AnalyticsWeb]
Driver=[client $INSTALLDIR]/Bin/libnqsodbc.[$libsuffix]
Description=Siebel Analytics Server
ServerMachine=
Port=
NOTE: The string [$libsuffix] represents the library suffix appropriate to the specific UNIX operating system you are using.
For example, for Solaris or AIX, use libnqsodbc.so; for HP-UX, use libnqsodbc.sl.
2 Save and close the file.
Consistency Check Manager can provide following types of messages ?
Error Messages
Warning Messages
Best Practices
Check Consistency levels ?
Repository level
Object Level ( in 3 layers )
What is the use of "Options -> Display qualified names in diagrams"?
When this option is checked, objects in the Physical and Business Model diagrams are shown with their fully qualified names (for example, qualified with the database and schema names) instead of just the object names.
What is the use of "Tools -> Options -> Allow import from repository"?
With this enabled, "Import from Repository" becomes available on the File menu.
However, it is recommended to create Projects and use the Merge option instead.

Use of Display Folders ?
To organize objects in the Physical and BMM layers.
They have no metadata meaning.
Selected objects appear in the folder as shortcuts, and in the BMM or Physical layer as objects.
We can hide the objects in the BMM and Physical layers so that only the shortcuts are visible.

Update Row Count works in 2 ways ?
Update row count is possible at
Table level
Column level

Update Row Count is not possible in the following scenarios ?
Stored procedure object types
XML data sources
Multidimensional data sources
In Online mode, if the connection pool uses the session variables
User name: USER and
Password: PASSWORD
In Online mode, after importing or manually creating tables and columns, Update Row Count becomes available only after check-in.

Use of Level Counts ?
Level counts are used by the query engine to determine the most optimal query plan and to optimize overall system performance.
Types of Physical Schemas?
E-R Schema
Dimensional Schema
Types of Dimensional Schema?
Star Schema
Snowflake Schema
Note: In a snowflake schema, one or more dimensions are normalized to some extent.
What does the RPD contain ?
The SAS or OBI server stores its metadata in the repository.
Tips while designing the Physical Layer ?
Before importing from the DW, eliminate all outer joins.
Import physical data without PKs and FKs.

Tips while designing the BMM layer ?
Create the BMM layer with 1:N complex joins between Dim and Fact tables.
Every dimension table should be associated with a dimension hierarchy.
All fact sources should be linked to the proper level in the hierarchy using aggregation content.
Use alias tables to eliminate circular joins.

Physical Layer

What is the use of “Allow Direct Database Request By default ?”
This property allows all users to execute physical queries.


What is the use of “Allow Populate Queries By default ?”
It allows POPULATE SQL to be executed.

SQL Features ?
These SQL features are automatically populated with the default values for the database type.

Ex: if the data source supports left outer joins but we want to prohibit the SAS server from sending such queries to a particular database, we can change the default settings in the Features table.


Connection Pool -> Enable Connection Pooling ?
A single database connection remains open for the specified time for further query usage.
This reduces the overhead of opening and creating a new connection for every request.
Persist Connection Pool property ?
To use this property we must first use a temporary table.
This is a database property, and it is used for specific types of queries.

Ex: For some queries, the whole logical query cannot be sent to the transactional DB because that DB may not support the functions used in the query. This can be solved by temporarily creating a table in the DB and rewriting the query the SAS server sends so that it references the new temp table.

A persistent connection pool also enables the write-back option. If this is enabled, the user name specified in the connection pool must have privileges to execute DDL and DML in the DB.

Use Default Specific SQL?
For the table types
Stored Procedure
Select

the above check box needs to be selected.
If selected: the stored procedure or SELECT statement that has been defined is executed at run time.
If not selected: the default configuration is executed.
Where can we give a 1:1 relation ?
We can give a 1:1 relation between a Dim and a Mini-Dim, or between a Dim and a Dim Extension table.
Bridge Table ?
If a many-to-many relation is required between a dimension and a fact, we have to go for a bridge table.
We can create a bridge table that resides between the fact table and the dimension table.
The bridge table stores the multiple records corresponding to the dimension table.
Fact -> Bridge -> Dimension

Example: for each patient admission, there can be multiple diagnoses;
a patient can be diagnosed with the flu and with a broken wrist.
The bridge table then needs to have a weight factor column in it so that all of the diagnoses for a single admission add up to a value of 1.
The weight factor has to be calculated as part of the process of building the data.
For the case of the patient diagnosed with the flu and a broken wrist, there would be one record in the Admission Records table, two records in the Diagnosis Record table, and two records in the Diagnosis table (see the sketch below).
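As a sketch of how such a weight factor is typically used when aggregating across the bridge (the column names here are hypothetical), the admission charge is multiplied by the weight factor so that one admission is not double-counted across its diagnoses:

select d.DIAGNOSIS_NAME, sum(f.CHARGE_AMOUNT * b.WEIGHT_FACTOR)
from   ADMISSION_RECORDS f, DIAGNOSIS_RECORD b, DIAGNOSIS d
where  f.ADMISSION_ID = b.ADMISSION_ID
and    b.DIAGNOSIS_ID = d.DIAGNOSIS_ID
group by d.DIAGNOSIS_NAME;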

Deleting a Physical Table ?
When we delete a physical table, all dependent objects are deleted as well.
Note: View Data ?
View Data is not possible if we use the session variables User: USER and Password: PASSWORD for the connection pool.
Hierarchy in Physical Layer?
This is possible for a multidimensional data source, i.e. adding a hierarchy to a physical cube table.
Catalog Folder ?
A catalog folder contains one or more schema folders.
Catalog folders are optional folders in the Physical layer.
Schema Folder ?
A schema folder contains tables and columns.
Schema folders are optional.
Usage of variables to specify the name of a Catalog or Schema ?
We can use variables to specify the names of catalog and schema objects.
Ex: we have data for separate clients.
We can create a separate catalog for each client.
For this, create a session variable named Client.
It can be used to set the name of the catalog dynamically when the user signs in to SAS.


Display Folder in Physical Layer ?
To organize the table objects in the Physical layer.
It has no metadata meaning.
Selected tables appear in the folder as shortcuts and also in the Physical layer tree as objects.
We can hide the objects in the physical tree so that only the shortcuts are visible in the display folder.
Note: Joins ?
Imported physical primary key and foreign key joins are not used in the metadata.
Note: Joins ?
A join between multiple databases is possible, i.e. a table in one database can be joined with a table in another database.
But this is significantly slower than a join between 2 tables in the same DB.
Fragmented Data ?
Data from a single domain that is split between different tables.

For example, a database might store sales data for customers with last names beginning with the letters A through M in one table and last names from N through Z in another table. With fragmented tables, you need to define all of the join conditions between each fragment and all the tables it relates to.
Complex join ?
It is a non PK-FK join.

In the Physical layer an expression is possible,
but no cardinality.
In the BMM layer no expression is possible,
but cardinality is.
Physical and Logical Foreign Key Joins ?
In both the Physical and BMM layers an expression is possible, but not the cardinality.
It is always 1:N.
Opaque View ?
A Physical layer table that consists of a SELECT statement. An opaque view appears as a view in the Physical layer, but it doesn't actually exist in the database. An opaque view needs to be deployed using the deploy utility; after it is deployed it is called a deployed view. It can be used without being deployed, but the SAS server generates a more complex query when such a view is encountered. XLS and non-relational DBs don't support this feature.
Make sure the CREATE_VIEW_SUPPORTED SQL feature is selected in the Database dialog box. The Deploy Opaque Views utility is available in offline mode.
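For illustration, the SELECT statement behind an opaque view is just ordinary SQL typed into the physical table object, for example (hypothetical table and columns):

select PRODUCT_ID, sum(QUANTITY) as TOTAL_QTY
from   ORDER_ITEMS
group by PRODUCT_ID

Until the view is deployed, the SAS server has to fold this SELECT into the physical query it generates, which is why the resulting query is more complex; deploying it creates a real view in the database that can be referenced directly.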
Driving Table ?
It is available in the BMM layer for both logical foreign key joins and logical (complex) joins (in the Physical layer it is disabled). It is used where the SAS server processes cross-DB joins, when one table is very small (the driving table) and the other table is very big. Driving tables can be used with inner joins; for outer joins, if it is a left outer join the driving table is the left table, and if it is a right outer join the driving table is the right table.
What are the 2 entries (performance tuning parameters) in the DB features table that control and tune driving table performance ?
MAX_PARAMETERS_PER_DRIVE_JOIN
MAX_QUERIES_PER_DRIVE_JOIN
The above parameters are available in the C:\OracleBI\server\Config\DBFeatures.INI file.
Database Hints ?
Database hints are instructions placed within a SQL statement that tell the DB query optimizer the most efficient way to execute the statement.
Hints override the optimizer's execution plan. Hints are DB specific; this option is available only for Oracle 8i, 9i and 10g servers.
Note: In the Physical layer, under DB -> General, the Hint option appears in the table's General properties only if the database type is Oracle.

For alias tables the Hint field is disabled.
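For example, the text entered in the Hint field is placed into the generated SQL as an Oracle optimizer comment. An illustrative index hint (table and index names are hypothetical) would appear in the physical query like this:

select /*+ index(W_SALES_F W_SALES_F_IDX1) */ PRODUCT_ID, REVENUE
from   W_SALES_F
where  DAY_ID = 20130816;

Only the part inside /*+ ... */ (here: index(W_SALES_F W_SALES_F_IDX1)) is what gets typed into the Hint field.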
Caching for Alias Tables ?
By default it is disabled. If we select "Override Source Table Caching Properties", the options become enabled.
BMM Layer :
Complex joins in the BMM Layer ?
In the BMM layer we use complex joins to establish which logical tables are joined to which tables.
The SAS server then goes to the Physical layer to find the physical joins needed to build the query.
We can also set a complex join in the Physical layer, but then SAS won't be able to construct the physical query.
BMM -> Table -> Properties -> Sources -> Edit (Add) -> Content -> Aggregation Content, Group By ?
If we select Logical Level, the group by (aggregation) happens at the dimension hierarchy level (Month, Year, Week etc.).

If we select Column, the group by (aggregation) happens at the table/column level.

Note: Do not mix aggregation by logical level and by column level in the same business model.
It is recommended to use logical level.

Logical Primary Key ?
A logical primary key is mandatory for a logical dimension table and optional for a logical fact table.
Logical Foreign Key ?
Do not create foreign keys for logical tables.
Default Aggregation Rule?
Is Count Distinct
Grand Total Level ?
Each dimension can have one grand total level. It doesn't contain a level key or attributes.
Preferred Drill Path ?
Identifies the preferred drill path to use when SAW users drill down in their data requests.
Use this feature to specify a drill path that is outside the normal drill path defined by the dimensional hierarchy.
It is commonly used to drill from one dimension to another (select the level from the current dimension or another dimension).
Creating Dimensions Automatically ?
A dimension can be created automatically from a logical dimension table if the dimension does not already exist.
Dimension Specific Aggregation ?
Most measures have the same aggregation rule for each dimension, but some do not; e.g. bank balances might be averaged over time but summed over the individual accounts. SAS allows dimension-specific aggregation.
Can we provide Aggregation for Multiple rows at a time ?
Yes
Logical Joins in the BMM Layer ?
Logical joins are nothing but complex joins.
Logical tables are related to each other; how they are related is expressed in logical joins.
The key property of a logical join is cardinality.
Cardinality expresses how rows in one table are related to rows in the second table.

Logical table joins are required so that SAS has the necessary metadata to translate logical requests against the BMM layer into SQL queries against the physical data sources.
In the BMM layer we should create only complex joins with a one-to-many relation, and not any FK joins.
The existence of a physical join doesn't require a matching join in the BMM layer.
Usage of Logical Foreign Keys ?
Logical foreign key joins may be needed if the SAS server is to be used as an ODBC data source for certain third-party query and reporting tools.

Presentation Layer:

Column Alias Name ?
Whenever we change the name of a presentation column, an alias is automatically created for the old name, so compatibility with the old name remains.
Note: Aliases are available for the Presentation Catalog,
Presentation Table and
Presentation Column.

Presentation Catalog ?
The contents of a catalog can be populated from only a single business model. It cannot span business models.

Nested Folders in Answers ?
Prefix the name of the presentation folder to be nested with a hyphen and a space ("- ") and place it after the folder in which it nests.

Presentation Column Name ?
By default the presentation column name is identical to the BMM layer column name.
However, we can give a different column name by unchecking
'Use Logical Column Name'
('Display Custom Name').

Availability of the "Permissions" tab ?
It is available for the
Presentation Catalog,
Presentation Table and
Presentation Column.




Variables :

Repository Variable ?
Has a single value at any point in time.
Static
Dynamic

Session variable ?
Created and assigned a value when each user logs on.

Initialization Block ?
It is used to initialize dynamic repository variables and session (system and non-system) variables.

Where can we use Static Repository variables ?
Variables can be used instead of literals and constants in the Expression Builder in the tool.
Ex: instead of hard-coding the literals, as in
CASE WHEN "Hour" >= 17 AND "Hour" < 23 THEN 'Prime Time' WHEN... ELSE...END
the boundaries can come from variables; assuming static variables VAR1 and VAR2 hold the two hour values (VAR2 is only illustrative here), this becomes
CASE WHEN "Hour" >= VALUEOF("VAR1") AND "Hour" < VALUEOF("VAR2") THEN 'Prime Time' WHEN... ELSE...END

Dynamic Repository Variables ?
They are the same as static variables, but their values are refreshed by data returned from queries.
For this we need to use an initialization block which executes a SQL query,
and we also schedule when SAS will refresh the value of the variable periodically.

Session variables ?
These are similar to dynamic variables, but they are not scheduled.
Unlike repository variables, they have many instances (one per session).

Non-System Session Variables ?
They are defined in the same way as system session variables.
A common use is setting user filters.
Ex: Create a non-system variable called SalesRegion.
It would be initialized to the name of the user's sales region.
We can then set a security filter for all members of a group that allows them to view only data related to their region (see the sketch below).
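A minimal sketch of such a filter (the variable and column names are illustrative): with a non-system session variable SALESREGION populated at logon, the group's security filter on the logical table can be an expression like

"Markets"."Region" = VALUEOF(NQ_SESSION."SALESREGION")

so each user automatically sees only the rows for his or her own region.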

Session variable -> Enable Any User to Set the Value ?

This allows any user to set the value of the variable, after the initialization block has populated it, by calling the ODBC stored procedure NQSetSessionValue().

What is the NQ_SYSTEM initialization block ?
It is the initialization block used to refresh system session variables.

Session variable -> DISPLAYNAME
Is used to display the user's name in the UI, e.g. "Welcome Swapna".
If we do not provide the DISPLAYNAME session variable and log in to the app as v-swapna, it will display "Welcome v-swapna",
because DISPLAYNAME uses the initialization block's login properties (SELECT P.NAME
from VALUEOF(TBO).S_PARTY P, VALUEOF(TBO).S_USER U
WHERE U.LOGIN=':USER' AND U.PAR_ROW_ID=P.ROW_ID)

Row-wise Initialization ?
It allows session variables to be created dynamically and their values set when the session starts.
The names and values of the session variables reside in an external table that is accessed through a connection pool.

Create the session variables using values contained in table XXXXX,
which contains the columns
USERID: the user's unique identifier
NAME: the session variable name
VALUE: the session variable value


Create an initialization block and select the Row-wise Initialization check box.

Select NAME, VALUE from XXXXX where USERID = 'VALUEOF(NQ_SESSION.USERID)'

Here NQ_SESSION.USERID has already been initialized by another initialization block.
When JOHN logs in, his session contains 2 session variables (LEVEL, STATUS).
When JANE logs in, her session contains 3 session variables (LEVEL, STATUS, GRADE).

Dedicated connection pool for initialization blocks ?
Create a dedicated connection pool for initialization blocks.

Value of a Repository variable ?
When we open the repository in Online mode, the value of the variable is the default value we defined.

Note: If the number of variables differs from the number of columns returned, then:
if there are fewer variables than columns, the extra column values are ignored;
if there are more variables than columns, the additional variables are not refreshed.

Note on Row-wise initialization ?
It can be used only for session variable initialization blocks.

Initialization Block -> Execution Precedence ?
If the repository contains more than one initialization block, we can set the order in which the blocks are initialized.
Ex: we have blocks A and B.
Open B and specify that A will execute before B.

Setting Up Aggregate Navigation:

Use of the WHERE clause filter in Logical Table -> Source -> Content ?
It is used to limit or restrict the physical data that is referenced by the logical table source.
If there is no limit, leave it blank.

Each logical table source should contain data at a single aggregation level; for example, we should not create a source that has the sales data at both the Brand and the Manufacturing level.

If the physical table includes data at more than one level, add an appropriate WHERE clause limit to filter values to a single level (see the example below).
Any limit in the WHERE clause filter is applied to the physical table in the source.
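For example (the table and column names are hypothetical), if a physical table SALES_AGG holds rows at both the Brand and the Manufacturer level distinguished by a LEVEL_TYPE column, the logical table source that should expose only Brand-level data would get a content filter such as:

"DB"."SCHEMA"."SALES_AGG"."LEVEL_TYPE" = 'BRAND'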




Use of Fragmentation Content in Logical Table -> Source -> Content ?
If a logical table source doesn't contain the entire set of data at a given level, we need to specify which portion, or fragment, it holds.
Describe the content in terms of logical columns (an illustrative example follows the list):
Fragment 1:
Logical column IN <values held by fragment 1>

Fragment 2:
Logical column IN <values held by fragment 2>
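As an illustration (the column and the values are hypothetical), for sales data split across two physical tables by region, the fragmentation content expressions could look like:

Fragment 1: "Markets"."Region" IN ('EAST', 'CENTRAL')
Fragment 2: "Markets"."Region" IN ('WEST', 'SOUTH')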




Security:
Usage of Filters?
Use filters to limit the data accessible by a user.

User ?
User accounts can be defined explicitly in SAS, in an external DB, or in LDAP.

Granting permission rights ?
We can grant permissions to an individual user, a group, or a combination of both.

Creation of a user ?
After creation of a user, it has the default rights that were granted.
In NQSConfig.ini, the default rights are specified by DEFAULT_PRIVILEGES.

Administrator Account ?
It can't be deleted or modified, other than changing the logging level and password.
The minimum password length can be set in the NQSConfig.ini file using MINIMUM_PASSWORD_LENGTH.

User Privileges?
Users can have privileges granted explicitly, and also through groups.

Privileges Hierarchy ?
Privileges granted explicitly to a user have priority over privileges granted through a group
(in the example, the user ends up with Read permission on table A).
Privileges granted explicitly to one group have priority over privileges granted through another group
(in the example, the user ends up with Read privileges on tables A, B and C).

Note: If Group 1 and Group 2 are at the same level, the less restrictive privilege applies (Deny, Read = Read).

LDAP vs. Repository Security ?
If the same user is defined in both the repository and LDAP, the local repository user definition takes priority and LDAP authentication will not occur.

Authentication
Authentication?
It is the process of checking that a user has the necessary permissions and authorizations to log in to the application and access data.

Authentication types?
OS
LDAP
External Table
Database
SAS user Authentication

OS Authentication?
It is only for ODBC client applications, not for SAW.
It is used only for logging in to the SAS server from such clients.

LDAP?
Lightweight Directory Access Protocol.
Along with user authentication, it can also provide
the display name,
the groups the user belongs to,
and the names of DB catalogs and schemas.

External table?
Along with user authentication, it can also provide
the display name,
the groups the user belongs to,
and the names of DB catalogs and schemas.

External table authentication can be used in conjunction with database authentication.

DB Authentication ?
If the user has read permissions on a specific DB, then the user is trusted by the SAS server.
Unlike OS authentication, this can be applied to SAW as well.

Bypassing (avoiding) Siebel Analytics Security ?
There is an option for this in the NQSConfig.ini file:
AUTHENTICATION_TYPE = BYPASS_NQ

Caching :

Ways to purge the cache ?
Manually, using the Administration Tool Cache Manager facility (in online mode).
Automatically, by setting the Cache Persistence Time field on the physical table.
Automatically, via an event polling table.
Automatically, as the cache storage space fills up.

Initializing cache entries for a user ID ?
To do this, the connection pool needs to be set up for shared logon with the session variables USER and PASSWORD.

What happens when cache storage fills up ?
The least recently used (LRU) entries are discarded to make space for new entries.

Max cache values ?
If the number of rows returned by a query is more than the value specified in the MAX_ROWS_PER_CACHE_ENTRY parameter, the query will not be cached.

Event Polling Tables ?
These tables store information about updates in the underlying DB.
Create the table with the following schema (Database Name, Catalog Name, Schema Name, Table Name, Other, Update Time, Update Type).
To mark the table object as an event polling table:
1. Click on the Tools > Utilities menu item.
2. Select the option Oracle BI Event Tables from the list of options.
3. Click Execute.
4. Select the table to register as an Event Table and click the >> button.
5. Specify the polling frequency in minutes, and click OK.
The default value is 60 minutes.
NOTE: You should not set the polling frequency to less than 10 minutes. If you want a very short polling interval, consider marking some or all of the tables non-cacheable.
Disabling Caching ?
Caching can be disabled for the whole system in NQSConfig.ini by setting ENABLE = NO and restarting SAS.
Disabling the cache
stops all new cache entries and
stops new queries from using the existing cache.

Caching can be re-enabled without losing any entries already stored in the cache.

Purge Cache Programmatically ?
Call SAPurgeCacheByQuery ('select lastname, firstname from employee where salary > 100000’);
Call SAPurgeCacheByTable('DBName', 'CatName', 'SchName', 'TabName' );
Call SAPurgeAllCache();
Call SAPurgeCacheByDatabase( 'DBName' );
Nulls passed as input parameters to SAPurgeCacheByTable serve as wild cards.
For example, specifying a database name but leaving the catalog, schema and table names null will direct the function to purge all entries associated with the specified database.
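So, for instance, this call would purge every cache entry associated with the database DBName (catalog, schema and table are left as wild cards):

Call SAPurgeCacheByTable( 'DBName', NULL, NULL, NULL );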

Cache Hits ?
For a query to result in a cache hit, certain conditions must be met.

Making changes to the repository ? What happens when changes occur in Online mode, in Offline mode, or when switching between repositories ?
Online Mode:
If we change any object, the cache related to that changed object is purged automatically.
Any change made to the BMM layer purges all cache entries for that business model.
The purge occurs when check-in takes place.

Offline Mode:
In offline mode, the purge does not happen automatically.

Switching between repositories:
Before switching between repositories, purge the cache and then switch to the other repository.

Ways of purging the cache ?
Manually, using the Admin Tool
Cache Persistence Time on physical tables
Event polling tables
Automatically, when cache storage fills up

Administering the Query Environment:
What does the NQServer.log file contain ?
Startup time
Business models that are started
Errors, if any occurred

Controlling the size of the NQQuery.log file ?
The parameter USER_LOG_FILE_SIZE in NQSConfig.INI file determines the size of the NQQuery.log file.
When the log file grows to one-half the size specified by the USER_LOG_FILE_SIZE parameter, the file is renamed to NQQuery.log.old, and a new log file is created automatically.
Only one copy of the old file is kept.
If you change the value of the USER_LOG_FILE_SIZE parameter, you need to restart the Siebel Analytics Server

Enabling Logging Levels ?
It is possible to enable a logging level for individual users,
but not for groups.
Logging levels greater than 2 should be used only with the assistance of Siebel Technical Support.
Usage Tracking ?
We can enable this in the NQSConfig.ini file:
ENABLE = YES;
Setup and Managing Repository:
Import Repository ?
To enable this: Tools -> Options -> General.
It works in Offline mode.
Comparing Repositories ?
This compares 2 repositories,
e.g. compare your customized repository to the new version of the repository.
It works in Offline mode.
Steps:
Open the repository in Offline mode; this repository is the Current repository.
File -> Compare
In the Select Original Repository dialog box, select the repository we want to compare against.
Use the Compare Repositories dialog box.
Merge Repositories ?
This option is used to upgrade a customized repository.
The process involves 3 versions of the repository:
Original - the previous version of the repository (like a dummy repository, the 1st repository)
Modified - the customizations made to the Original repository (this is the repository whose objects we would like to copy into the current repository)
Current - the repository installed with the new version and currently opened as the main repository (the 3rd repository)
During the merge process we can compare
Original to Modified and
Original to Current.

We have 2 repositories, each with its own Physical, BMM and Presentation layers;
we use the Merge option to merge the above 2 repositories into a 3rd repository:
1 + 2 = 3

Ex: We have the Paint repository,
and another one, the UsageTracking repository;
our aim is to get the UsageTracking repository merged into the Paint repository.
Projects ?
A project consists of a subset of the metadata.
It contains presentation catalogs and the associated BMM objects (fact tables only), groups, users, variables and initialization blocks.
Usage of Projects ?
Mostly we use them in Multi-User Development (MUD).
Projects can only be created in the master repository.
Multi-User Development ?
Developers need to work concurrently on subsets of the metadata and merge those back into the master repository.
Important steps: the admin creates projects;
the repository is copied to a shared network path;
developers check out their projects.

Full steps: the admin creates projects;
the repository is copied to a shared network path;
before checkout, each developer must point the Admin Tool to the shared path;

check out the repository projects
(Multiuser -> Checkout);
Compare with Original (compares the working extracted local repository to the original repository);
Merge Local Changes (locks the master repository to allow you to check in changes)
or Discard Local Changes (at any time after checkout and before check-in, the local changes can be discarded);
Publish to Network (after a successful merge, the master repository is opened locally and this item becomes available; after selecting this option the lock is removed, the repository is published and the repository is closed).
Only one developer at a time can merge metadata from a local repository into the master repository.
Other :

Calculation Wizard ?
Used to create new calculation columns that compare 2 existing columns, and to create metrics in bulk (along with aggregation).

Start this wizard in the BMM layer by right-clicking a logical column with a numeric data type.

Dimension Hierarchy -> Number of Elements at This Level ?
For example, the number of elements at a level might be set to 3. This number does not have to be exact; the ratio from one level to the next is more important than the absolute number. These numbers only affect which aggregate source is used (they are an optimization, not a matter of query correctness).

Case sensitive Option ?
CASE_SENSITIVE_CHARACTER_COMPARISON = OFF
In NQSConfig.ini

Siebel Analytics Server :- It generates dynamic SQL to query data in the data sources. The Siebel Analytics Server user IDs are stored in non-encrypted form in a Siebel Analytics Server repository and are case insensitive. Passwords are stored in encrypted form and are case-sensitive.
Siebel relationship management warehouse(SRMW):- It is a database that contains the data extracted, transformed and loaded from Siebel eBusiness Applications.
Siebel analytics scheduler :- Schedules reports to be delivered to users at specified times.
NQQuery.log :- Records query requests.
Siebel Analytics Web server :- It receives data from the Siebel analytics server and provides data to the client that requested it.
Clients :- Provides the interface to access the data.
Siebel Delivers :- It automates requests that have been created and saved with Siebel Answers.
Repository File(.rpd) :- Contains metadata that represents the analytical model.
NQSServer.log :- Records Siebel analytics server messages.
NQSConfig.ini :- Configuration file used by Siebel analytics server at start up.
.webcat :- Stores application dashboards, request definitions, pages and filters.
Datasources :- Contain the business data users want to analyze.
Pivot Table :- The Pivot Table view allows you to take row, column, and section headings, and swap them around to obtain different perspectives of the data.
Funnel Chart:- The Funnel Chart view displays a three-dimensional chart representing target and actual values using volume, level and color.
Ibots:- Siebel Delivers uses intelligence agents called ibots. iBots provide delivery of real-time and personalized analytics alerts throughout your organization’s network.
Siebel Alerts:- The Siebel Alerts page shows your currently active alerts, along with information about when the content was delivered. When alerts are present, the link Alerts! appears at the top of each Siebel Answers, Siebel Delivers, and Siebel Intelligence Dashboard page.
Global filters:- They act as an independent control for the entire dashboard, and can update any report on that dashboard that shares columns with the global filter.
Query Caching:- The query cache in Siebel Analytics Server is a facility that stores the results from queries. It is used for improvement of query performance, less network traffic.
Repository Variables:- A repository variable has a single value at any point in time. There are two types of repository variables: static and dynamic. Repository variables are represented by a question mark icon.
Static variable: The value of a static repository value is initialized in the Variable dialog box. This value persists, and does not change until a Siebel Analytics Server administrator decides to change it.
Dynamic variable: You initialize dynamic repository variables in the same way as static variables, but the values are refreshed by data returned from queries. When defining a dynamic repository variable, you will create an initialization block or use a preexisting one that contains a SQL query. You will also set up a schedule that the Siebel Analytics Server will follow to execute the query and periodically refresh the value of the variable.
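As a small illustration (the table and variable names are made up, not from the actual warehouse), a dynamic repository variable such as CURRENT_MONTH could be refreshed by an initialization block whose SQL is simply

select MAX(CAL_MONTH) from W_DAY_D

with the block scheduled to run, say, once a day, so the variable always reflects the latest loaded month.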
Session Variables:- Session variables are created and assigned a value when each user logs on. If a user is authenticated successfully, session variables can be used to set filters and permissions for that session. There are two types of session variables: system and non-system. System and non-system variables are represented by a question mark icon.
System Variables: System variables are session variables that the Siebel Analytics Server and Siebel Analytics Web use for specific purposes. System variables have reserved names, which cannot be used for other kinds of variables. When using these variables in the Web, preface their names with NQ_SESSION.
Non-system Variables: The procedure for defining non-system session variables is the same as for system session variables. When using these variables in the Web, preface their names with NQ_SESSION. A common use for non-system session variables is setting User filters.
Initialization Blocks:- An initialization block contains the SQL that will be executed to initialize or refresh the variables associated with that block. Initialization blocks are used to initialize dynamic repository variables, system session variables, and non-system session variables. (The NQ_SYSTEM initialization block is used to refresh system session variables.)
Stand-Alone Siebel Analytics (Siebel Analytics Server):- The stand-alone configuration involves the Siebel Analytics Server only. You must develop your own analytics applications and configure them to connect to legacy data warehouses or other data sources.
Integrated Siebel Analytics (Siebel Analytics applications):- You can configure Siebel Analytics to run with Siebel eBusiness Applications and with Siebel Industry Applications to use the Siebel Data Warehouse or pre-built (and sometimes specialized) data warehouses.
Security:- The Siebel Analytics Server and Web client support industry-standard security for login and password encryption. When an end user enters a login and password in the Web browser, the Siebel Analytics Server uses the Hyper Text Transport Protocol Secure (HTTPS) standard to send the information to a secure port on the Web server. From the Web server, the information is passed through ODBC to the Siebel Analytics Server, using Triple DES (Data Encryption Standard). This provides an extremely high level of security (168 bit), preventing unauthorized users from accessing data or analytics metadata. The Siebel Analytics Server Administrator account (user ID of Administrator) is a default user account in every Siebel Analytics Server repository. This is a permanent account. When you create a new repository, the Administrator account is created automatically and has no password assigned to it. It cannot be deleted or modified other than to change the password and logging level. It is designed to perform all administrative tasks in a repository, such as importing physical schemas, creating business models, and creating users and groups.
Authentication:- Authentication is the process, by which a system verifies, through the use of a user ID and password, that a user has the necessary permissions and authorizations to log in and access data.
OS Authentication:- Users with identical Windows and Siebel Analytics Server user IDs do not need to submit a password when logging in to the Siebel Analytics Server from a trusted domain. When operating system authentication is enabled, users connecting to the Siebel Analytics Server should not type a user ID or password in the logon prompt. If a user enters a user ID and (optionally) a password in the logon prompt, that user ID and password overrides the operating system authentication and the Siebel Analytics Server performs the authentication. NOTE: Operating system authentication cannot be used with Analytics Web. It can only be used with ODBC client applications.
LDAP (Lightweight Directory Access Protocol) Authentication:- It is used for hierarchical data access. To configure LDAP authentication, you define a system variable called USER and associate it with an LDAP initialization block, which is associated with an LDAP server. Whenever a user logs into the Siebel Analytics Server, the user ID and password will be passed to the LDAP server for authentication. After the user is authenticated successfully, other session variables for the user could also be populated from information returned by the LDAP server.
Database Authentication:- The Siebel Analytics Server can authenticate users through database logons. If a user has read permission on a specified database, the user will be trusted by the Siebel Analytics Server. NOTE: Siebel Delivers does not work with database authentication.
Mini Dimension Tables:- Contain the combinations of the most frequently queried attributes.
Aggregate Tables:- Aggregate tables store pre-computed results — measures that have been aggregated (typically summed) over a set of dimensional attributes. Using aggregate tables is a very popular technique for speeding up query response times in decision support systems
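A sketch of how such a table is typically built (the table and column names are hypothetical): the detailed fact rows are pre-summed to a coarser grain once, so that queries at that grain can read the small aggregate instead of scanning the full fact table.

create table SALES_MONTH_AGG as
select MONTH_ID, PRODUCT_ID, sum(REVENUE) as REVENUE
from   SALES_FACT
group by MONTH_ID, PRODUCT_ID;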
About Dimensions and Hierarchical Levels
In a business model, a dimension represents a hierarchical organization of logical columns (attributes) belonging to a single logical dimension table. Common dimensions might be time periods, products, markets, customers, suppliers, promotion conditions, raw materials, manufacturing plants, transportation methods, media types, and time of day. Dimensions exist in the Business Model and Mapping (logical) layer and end users do not see them.
In each dimension, you organize attributes into hierarchical levels. These levels represent the organizational rules and reporting needs required by your business. They provide the structure (metadata) that the Siebel Analytics Server uses to drill into and across dimensions to get more detailed views of the data.
Dimension hierarchical levels are used to perform the following actions:
• Aggregate navigation
• Configure level-based measure calculations (see "Level-Based Measure Calculations Example" on page 149)
• Determine what attributes appear when Siebel Analytics Web users drill down in their data requests
Message numbers are listed in the format nnxxx, where nn is the message prefix that identifies the category of the message, and xxx is the numeric identifier of the message in that category.

Siebel Analytics Scheduler

Siebel Analytics Scheduler manages and schedules jobs. A job is a task performed by the Siebel Analytics Server. Siebel Analytics Scheduler supports two types of jobs:
• Scripted jobs that you set up and submit using the Job Manager feature of the Server Administration Tool
• Unscripted jobs, called iBots, that you set up and submit using Siebel Delivers




Siebel Analytics Complete Solution
Summary of Siebel Analytics as defined in this module:




Subject Areas

• Contain information about the areas of your organization's business
• Have names that correspond to the type of information they contain

• Select columns from subject area virtual tables in the selection pane to create request criteria

By default, results are displayed in compound layout format, which includes the Title and Table views

Use Save Request to save a request in a personal or shared folder


Intelligence Dashboards
n Are pages in a Siebel Analytics application used to display:
 Results of one
} or more saved Siebel Analytics requests
 Other content items, such as
}
n Links to Web sites
 ActiveX objects
n
 HTML text
n
 Links to
n documents
 Embedded content: images, text, charts, tables
n
 Are provided
n in Siebel Analytics applications
 Can be created by Siebel Analytics users
n or application developers
 Can be shared by common groups of users
n
 Can
n be modified based on personal preferences and business needs
Accessing Intelligence Dashboards

To access Intelligence Dashboards in the standalone version of Siebel Analytics, select Start > Programs > Siebel Analytics > Siebel Analytics Web

Accessing Saved Intelligence Dashboards
§  Select the Dashboards tab to access saved dashboards in Siebel Answers

Dashboards provide prebuilt, fully interactive access to analytics information.

Siebel Analytics Architecture
§  Is made up of five main components:
   - Clients
   - Siebel Analytics Web Server
   - Siebel Analytics Server
   - Siebel Analytics Scheduler
   - Data Sources

Siebel Analytics Web Administration
Is used to access administrative functions of Siebel Analytics Web and view information about the installed system


Siebel Analytics Web Catalog (.webcat)
§  Stores the application dashboards, request definitions, pages, and filters
§  Contains information regarding permissions and accessibility of the dashboards by groups and users
§  Is created when the Web Server starts
§  Is specified in the registry of the machine running the Web Server
§  Is administered using Siebel Analytics Catalog Manager

Repository File (.rpd)
§  Contains metadata that represents the analytical model
§  Is created using the Siebel Analytics Administration Tool
§  Is divided into three layers:
   - Physical: represents the data sources
   - Business: models the data sources into facts and dimensions
   - Presentation: specifies the user's view of the model; rendered in Siebel Answers

Cache
§  Contains results of queries
§  Is used to eliminate redundant queries to the database
§  Speeds up results processing
§  Query caching is optional and can be disabled



NQSConfig.ini
§  Is a configuration file used by the Siebel Analytics Server at startup
§  Specifies values that control processing (see the illustrative excerpt below), such as:
   - Defining the repository (.rpd) to load
   - Enabling or disabling caching of results
   - Setting server performance parameters
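As a rough illustration only (section and parameter names vary by version, and the values below are made up for this sketch), an NQSConfig.ini excerpt might look something like this:

    [ REPOSITORY ]
    Star = SiebelAnalytics.rpd, DEFAULT;      # repository file loaded at startup

    [ CACHE ]
    ENABLE = YES;                             # turn query caching on or off
    DATA_STORAGE_PATHS = "C:\Data\Cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000;

    [ SERVER ]
    SERVER_THREAD_RANGE = 40-100;             # example performance parameter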

DBFeatures.ini
§  Is a configuration file used by the Siebel Analytics Server
§  Specifies values that control SQL generation
§  Defines the features supported by each database


Log Files
§  NQServer.log records Siebel Analytics Server messages
§  NQQuery.log records information about query requests


Siebel Analytics Scheduler
§  Manages and executes jobs requesting data analytics
§  Schedules reports to be delivered to users at specified times
§  In Windows, the Scheduler runs as a service


Data Sources
§  Contain the business data users want to analyze
§  Are accessed by the Siebel Analytics Server
§  Can be in any format, such as:
   - Relational databases
   - Online Analytical Processing (OLAP) databases
   - Flat files
   - Spreadsheets or other ODBC data sources
   - XML

Siebel Relationship Management Warehouse
§  Is a predefined data source to support analysis of Siebel application data
§  Relevant data structures support Siebel eBusiness Applications
§  Is in a star schema format
§  Is included with Siebel Analytics Applications (not available with standalone Analytics purchases)


DAC and Informatica Server
§  Data Warehouse Application Console (DAC) Client
   - Used to schedule, monitor, configure, and customize SRMW extraction, transformation, and load
   - Accesses metadata about ETL mappings and dependencies in the DAC repository
§  DAC Server
   - Organizes ETL requests for processing
§  Third-party Informatica Server populates the SRMW from the Siebel eBusiness Application database (Siebel OLTP)
   - Uses extract, transform, and load (ETL) routines

Siebel RMW: Siebel Relationship Management Warehouse

Informatica Server ETL
§  Uses Source Dependent Extraction (SDE) routines to extract data
§  Loads data into staging tables within the SRMW
§  Uses Source Independent Loading (SIL) routines to transform data into stars within the SRMW (see the sketch below)
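Conceptually, an SDE routine copies changed rows from the transactional source into a staging table, and an SIL routine then reshapes the staged rows into the warehouse star. The SQL below is only a rough sketch of that idea; all table and column names are hypothetical and do not come from the actual SRMW mappings:

    -- SDE: extract changed source rows into a staging table
    INSERT INTO w_orders_stage (order_id, product_id, order_date, amount)
    SELECT order_id, product_id, order_date, amount
    FROM   oltp_orders
    WHERE  last_upd > :last_extract_date;

    -- SIL: load the staged rows into the star, resolving surrogate keys
    -- against the dimension tables
    INSERT INTO w_sales_f (product_key, time_key, sales_amount)
    SELECT p.product_key, t.time_key, s.amount
    FROM   w_orders_stage s
    JOIN   w_product_d p ON p.product_id    = s.product_id
    JOIN   w_time_d    t ON t.calendar_date = s.order_date;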


Sample Request Processing
1. The user views a dashboard or submits an Answers request
2. The Siebel Analytics Web Server makes a request to the Siebel Analytics Server to retrieve the requested data
3. The Siebel Analytics Server, using the .rpd file, builds optimized queries to request the data from the data sources
4. The Siebel Analytics Server receives the data from the data sources and processes it as necessary
5. The Siebel Analytics Server passes the data to the Siebel Analytics Web Server
6. The Siebel Analytics Web Server formats the data and sends it to the client
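To make steps 3 and 4 concrete: a request such as "revenue by product" is first expressed as a logical query against the Presentation layer and then translated into physical SQL against the underlying source. The following is only a hedged illustration; the subject area, table, and column names are hypothetical:

    -- Logical request (against the Presentation layer)
    SELECT Products."Product Name", "Sales Measures"."Revenue"
    FROM   "Sales Subject Area"

    -- One possible physical query generated against the underlying star
    SELECT p.product_name, SUM(f.revenue) AS revenue
    FROM   w_product_d p, w_sales_f f
    WHERE  p.product_key = f.product_key
    GROUP BY p.product_name;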


Siebel Analytics Standalone Architecture
Does not require any Siebel eBusiness Applications



Siebel Analytics Integrated Architecture
§  Supports the Siebel Analytics Applications
§  Parallels the Siebel eBusiness Applications architecture




Implementation
§  Siebel Analytics components are often implemented across several computers on the network


Clustering Siebel Analytics Servers
§  Cluster Server feature
   - Allows up to 16 Siebel Analytics Servers in a network domain to act as a single server
   - Servers in a cluster share requests from multiple Siebel Analytics clients, including Siebel Analytics Answers and Siebel Analytics Delivers
§  Cluster Controller is the primary component of the Cluster Server feature
   - Monitors the status of resources in a cluster and performs session assignment as resources change
   - Supports detection of server failures and failover for ODBC clients of failed servers

Data Warehousing
§  Brings together data from many sources
§  Organizes data for analytical processing:
   - Denormalize data: duplicate and flatten data structures
   - Reduce joins: reduce the number of tables and relationships
   - Simplify keys: use surrogate keys such as a sequence number
   - Employ star schemas: simplify relationships between tables
§  There are two major ways to organize data, each optimized for different uses:
   - Transactional systems (OLTP)
     Organize data to optimize transactional throughput: inserts, updates, and deletes
     The transactional schema is optimized for read/write and uses multiple joins
     Example: Siebel transactional database
   - Analytical systems (OLAP)
     Organize data to optimize queries on large datasets, on a separate database instance
     The analytical schema is optimized for querying large datasets and uses few joins
     Example: Siebel Relationship Management Warehouse (SRMW)
Star Schema
§  Organizes data into a central fact table with surrounding dimension tables
§  Each dimension row has many associated fact rows
§  Dimension tables do not directly relate to each other

(Example: a Sales fact table with its dimension tables and relationships)

Fact table:
§  Contains business measures or metrics
§  Data is often numerical
§  Is the central table in the star

Dimension table:
§  Contains attributes or characteristics about the business
§  Data is often descriptive (alphanumeric)
§  Qualifies the fact data
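Under the assumption of a simple sales star like the one just described, the table definitions might look roughly like this; all names are hypothetical and purely illustrative:

    -- Dimension tables: descriptive attributes, one surrogate key each
    CREATE TABLE dim_product  (product_key  INT PRIMARY KEY, product_name  VARCHAR(100));
    CREATE TABLE dim_customer (customer_key INT PRIMARY KEY, customer_name VARCHAR(100));
    CREATE TABLE dim_time     (time_key     INT PRIMARY KEY, calendar_month VARCHAR(7), calendar_year INT);

    -- Central fact table: numeric measures plus foreign keys to each dimension.
    -- Note that the dimensions relate only to the fact table, not to each other.
    CREATE TABLE fact_sales (
        product_key  INT REFERENCES dim_product(product_key),
        customer_key INT REFERENCES dim_customer(customer_key),
        time_key     INT REFERENCES dim_time(time_key),
        sales_amount DECIMAL(12,2)
    );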

The star schema:
§  Is a technique for logically organizing business data in a way that helps end users understand it
§  Data is separated into facts and dimensions
§  Users view facts in any combination of the dimensions
§  Allows users to answer "Show me X by Y by Z" type questions
§  Example: Show me sales by product by month (see the query sketch below)
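Assuming the hypothetical tables sketched above, that request could translate into a query along these lines:

    -- "Show me sales by product by month"
    SELECT p.product_name,
           t.calendar_month,
           SUM(f.sales_amount) AS total_sales
    FROM   fact_sales f
    JOIN   dim_product p ON p.product_key = f.product_key
    JOIN   dim_time    t ON t.time_key    = f.time_key
    GROUP BY p.product_name, t.calendar_month;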


Siebel Analytics is sold in two varieties:
§  Siebel Analytics standalone
§  Siebel Analytics Applications
   - Access Siebel data only (CRM Edition)
   - Access Siebel and/or other data (Enterprise Edition)

Siebel Analytics Standalone
§  Provides a platform to model data so users can understand it
§  Provides a server to generate SQL and seamlessly access and manipulate data from multiple sources
§  Provides a simple-to-use, highly interactive, Web-based analysis tool and the ability to pre-construct dynamic reports and alerts

Siebel Analytics Applications
§  Provides all that the standalone application does, plus:
   - Applications for common industry analytical processing such as Service Analytics, Sales Analytics, Pharma Analytics, and so on
   - Prebuilt role-based dashboards to support the needs of line managers to chief executive officers
   - A prebuilt database (Siebel Relationship Management Warehouse) designed for analytical processing, with prebuilt routines to extract, load, and transform data from the Siebel eBusiness application (transactional) database




§  Siebel Intelligence Dashboards
§  Siebel Answers
§  Siebel Delivers
§  Siebel Analytics Server and Siebel Analytics Web
§  Siebel Relationship Management Warehouse (SRMW)
§  Siebel Analytics Administration Tool

Siebel Answers
§  On-demand user interface to analytical information
§  Is the Siebel Analytics user interface used to query an organization's data
§  Provides a set of graphical tools to create and execute requests for information
§  Provides a self-service analysis platform
§  Is rendered from information in the Siebel Analytics Server and Siebel Analytics Web Server

To access the standalone version of Siebel Answers, select Start > Programs > Siebel Analytics > Siebel Analytics Web, which calls http://localhost/analytics/saw.dll?answers

Siebel Delivers
§  Platform to launch jobs and proactively deliver results to users
   - Scheduled intelligence Bots (iBots)
   - Proactive delivery of real-time, personalized, and actionable intelligence via Web, wireless, mobile, and voice
§  Capabilities and content tailored to the device
§  Client application that:
   - Is used to create iBots
   - Delivers alerts to subscribed users
   - Is integrated with Dashboards and Answers
§  A job identifies what information to filter, when it should run, and who to send alerts to


Siebel Analytics Server and Siebel Analytics Web Server
§  Services that access data and return results to the user
§  Determine the appropriate source, generate SQL, and merge and sort as necessary





Siebel Analytics Web Server
§  Provides the processing to visualize the information for client consumption
§  Is implemented as an extension to a Web server
§  Uses the web catalog file (.webcat) to store aspects of the application
§  Receives data from the Siebel Analytics Server and provides it to the client that requested it



Siebel Analytics Server
§  Provides efficient processing to intelligently access the physical data sources and structures the information
§  Uses metadata to direct processing
§  Generates dynamic SQL to query data in the data sources
§  Connects natively or via ODBC to the RDBMS
§  Structures results to satisfy requests
§  Merges results when it generates multiple queries
§  Calculates measures on result sets when necessary
§  Provides the data to the Siebel Analytics Web Server

Siebel Analytics Server Details
§  Several important components are used by the Siebel Analytics Server:
   - Repository file (.rpd)
   - Cache
   - NQSConfig.ini
   - DBFeatures.ini
   - Log files


Siebel Relationship Management Warehouse
§  Prebuilt database in star schema format
§  Uses Siebel Analytics tools to design, manage, and run routines to extract, transform, and load (ETL) data from the Siebel eBusiness Applications (transactional) database and external databases


Siebel Analytics Administration Tool
§  Tool to build a metadata model
§  Outputs a repository file that is used by the services to resolve requests in an optimized fashion