CHAPTER 4

Preparing the Data Model for Reporting and Deployment

Once we have completed the design of the data model and added the required calculated columns and measures using DAX, we next prepare the data model for reporting and deployment. In this chapter, we will learn a few tips for preparing the data model for reporting by hiding undesired columns, creating perspectives, and improving the end-user experience. In the latter half of the chapter, we will learn about defining roles and partitions, and about deploying the data model to an SSAS instance.

Hiding undesired columns and tables from the data model

In our data model, we import some key columns because they are used to define relationships with other tables in the model. Besides key columns, we also import columns that are not directly consumed by end users but are used to define calculated columns. Neither the key columns nor these intermediate columns need to be exposed to end users. If a column is not used for reporting, it should be hidden from the reporting view to enhance the user experience.

Note: If the column is not used in the data model at all, it should not be imported, since any redundant columns in the data model add to the processing time and storage and impact the overall performance of the system.

In our data model, we have key columns and a few columns that are used only for defining relationships or intermediate calculations; we will hide those columns.

In order to hide a column from the data model, we select the table and right-click the column as shown in the following figure.

Figure: Selecting a column to hide

Click Hide from Client Tools, and the column will appear unavailable as shown in the following figure.

Figure: A hidden column

Similarly, if a table is not required for reporting, we can hide it from client tools. To hide a table from client tools, right-click the table’s tab and select Hide from Client Tools.

Figure: Hiding a table

Once a table or a column is hidden from client tools, it is not visible when users interact with the data model using Excel, Power View, or PerformancePoint Services (PPS) dashboards.

Setting the Default Field Set and Table Behavior properties

The tabular data model supports a couple of table properties, default field set and table behavior, which were specifically designed to enhance the user interactivity and experience for the Power View reporting client.

Default field set

The default field set property is used to define the set of columns of a table that are added to the report automatically when the end user clicks the table in a Power View report.

In order to set the default field set for a given table, we need to select the table tab at the bottom in the grid view and press F4 to open the properties window. In the Properties window for the SalesTerritory table, we see the table property Default Field Set as shown in the following figure.

Figure: SalesTerritory table properties

When we click the Click to Edit cell next to the Default Field Set cell, we can select the default field set as shown in the following figure.

Figure: Setting default fields for the SalesTerritory table

We then select Region and Country as the default fields for the SalesTerritory table. Now, when users click the SalesTerritory table in Power View, the report automatically populates the Region and Country fields.

Figure: Power View with default fields

Table Behavior

Table behavior properties are another set of properties designed specifically to enhance interactivity for Power View users.

Setting table behavior properties is recommended for tables where the individual rows are of primary interest, such as employee or customer records. In contrast, tables that do not benefit from these properties include those that act as lookup tables (for example, a date table, a product category table, or a department table, where the table consists of a relatively small number of rows and columns), or summary tables containing rows that are only relevant when summarized (for example, census data that rolls up by gender, age, or geography). For lookup and summary tables, the default grouping behavior produces the best result.

Table behavior properties include the following:

  • Row Identifier: Specifies a column that contains only unique values, allowing that column to be used as an internal grouping key.
  • Keep Unique Rows: Specifies which columns provide values that should be treated as unique even if they are duplicates (for example, employee first name and last name in cases where two or more employees share the same name).
  • Default Label: Specifies which column provides a display name to represent row data (for example, employee name in an employee record).
  • Default Image: Specifies which column provides an image that represents the row data (for example, a photo ID in an employee record).

In our data model, we have a Product table whose rows will generally be reported as individual records, which makes it a good candidate for table behavior properties.

In the grid view (also called the data view), we click the Product table tab at the bottom and press F4. The table behavior properties appear just below the default field set properties. For the Product table, we set the following values:

  • Row Identifier: ProductKey
  • Keep Unique Rows: Product
  • Default Label: Product
  • Default Image: LargePhoto

Setting the Data Category property for columns

In the tabular data model, we can classify column data into a set of predefined categories. This classification is useful for client tools like Power View, which use it to report the data with the best possible visualization.

In our data model, we have a SalesTerritory table with a Country column; we can set the Data Category of that column to Country. To change the Data Category property for the Country column, select the column and press F4 to open the Properties window, as shown in the following figure:

Figure: Country column properties

After setting the Data Category for the Country column, when we browse the data model in Power View, we see the map icon next to the Country column.

Figure: Map icon next to the Country column

Power View uses these Data Category values to map the data with the best visualization available for that type of data.

Similarly, we can classify other columns if their data matches one of the available predefined data categories.

Setting the Format property for measures

In the previous chapter, we defined various calculated measures (Sales, Cost, Profit, Margin, etc.). When these calculated measures are used as-is in an Excel or Power View report, they will not make much sense to the end user, since they are displayed as plain numbers. It is important to set the Format property for each measure, specifying whether it should be formatted as currency, a percentage, and so on.

In our data model, we have defined the Sales measure using the following DAX formula.

Sales:=SUM(ResellerSales[SalesAmount])

When this DAX measure is used as is in a Power View or Excel report, we will see the following report.

Figure: Sales measure without a data format

In order to set the Format property for the Sales measure, select the Sales Measure cell in the measure grid and press F4 to open the Properties window. In the Properties window, we set the Format property to Currency as shown in the following figure.

Figure: Setting the Format property

We can further specify the number of decimal places, which is set to 2 by default, and Currency Symbol for localization purposes.

Similarly, we can define the properties for other measures to appropriately format values before we deploy the data model.
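For reference, the remaining measures from the previous chapter might be defined along the following lines; this is only a sketch, in which the TotalProductCost column name is assumed from the AdventureWorksDW schema, so the exact definitions may differ in your model. Cost and Profit would use the Currency format, while Margin would use Percentage.

Cost:=SUM(ResellerSales[TotalProductCost])
Profit:=[Sales]-[Cost]
Margin:=[Profit]/[Sales]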

Setting the Summarize property for columns

In the tabular data model, we can set a default aggregation for a column so that whenever the column is selected in a Power View report, the data is aggregated automatically.

By default, the aggregation for a column of numeric data type is set to Sum, which is sometimes undesirable. For example, when we browse the data model in Power View and expand the Date table, we see the following:

Figure: Date table with summation columns

In order to ensure these columns do not have the summation sign next to them, press F4 to open the Properties window for that column and select Do not summarize for the Summarize By property as shown in the following figure.

Figure: Changing the Summarize By property

We need to select Do not summarize for all the columns with whole number data types that we do not want to aggregate. For the columns we wish to aggregate, we can select from the default aggregations in the Properties window for each column.

Adding descriptions for columns, tables, and measures

In the tabular data model, we can add descriptions to columns, tables, and calculated measures. These descriptions serve as helpful tips for end users and analysts, telling them what the column, table, or measure is meant for and what data it displays. The description is shown as a tooltip in the Power View report.

In our data model, we can add a description to a table, column, or field by right-clicking it and clicking Description as shown in the following figures.

Figure: Adding a description to a table

Figure: Adding a description to a column

Figure: Adding a description to a field

The description we write will appear as a tooltip for end users in the Power View report:

Figure: Field description

Descriptions are useful because they provide additional information about columns, tables, and calculated fields, which enhances the user's reporting experience.

Defining perspectives

A tabular data model can contain multiple data marts in a single database to avoid redundancy for shared dimension tables. As a result, a single data model can have a large number of tables, columns, and measures. If all of these columns and measures are exposed to end users via reporting clients like Excel or Power View, the experience can be overwhelming. To ensure users see only the tables, columns, and measures that are relevant to their analysis, we can define perspectives.

Perspectives are similar to views in an RDBMS: they act as windows into the data model, displaying only the fields relevant for analysis. We can define multiple perspectives in the data model based on end users' analysis needs.

In our data model, in order to define perspectives, we click the Perspectives button on the toolbar as shown in the following figure:

Figure: Perspectives option

The Perspectives dialog will appear. When we click New Perspective, we can provide a name for the new perspective and use the check boxes to include the columns, hierarchies, measures, and tables needed. In our data model, we define the perspective “Reseller Sales” as shown in the following figure.

Figure: Defining a new perspective

To create the “Reseller Sales” perspective, we select the following columns and fields from each table.

  • Date: Calendar Hierarchy, Fiscal Hierarchy
  • Employee: Name, BirthDate, EmailAddress, Gender, Marital Status
  • Product: Products, Color, Size, LargePhoto
  • ResellerSales: Sales, OrderQuantity, % Geography Contribution, Profit, Revised Sales Target, YoY, YoY %
  • SalesTerritory: Geography

For the Date, Product, and SalesTerritory tables, we include the hierarchies we defined earlier instead of the individual columns, to avoid redundancy in the perspective.

In order to connect to the perspective, the end user needs to specify the perspective name while defining the connection string for the data model.
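For example, with the MSOLAP provider, the perspective name is typically supplied through the Cube property of the connection string; in this hedged example, the server and database names are placeholders for your environment:

Provider=MSOLAP;Data Source=localhost\TABULAR;Initial Catalog=AdventureWorks Tabular Model;Cube=Reseller Sales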

After the model is deployed, we define the data connection in Excel using the Data Connection Wizard, which gives us the option to select the perspective, as shown in the following figure.

Figure: Connecting to a perspective

When end users connect to the Reseller Sales perspective, the fields we included in defining the perspective are displayed as shown in the following figure.

Figure: Reseller Sales perspective fields

For large data models or cubes, it is always recommended to define multiple perspectives to provide end users only the fields required for their analysis.

Defining roles and security

The SSAS tabular model uses role-based security, in which a valid authenticated user sees data depending on the roles he or she belongs to. Like the multidimensional model, the SSAS tabular model supports only Windows authentication. Unlike the multidimensional model, which has cell security and dimension security to restrict users at the cell level and to dimension members respectively, the tabular model has row-level security, which restricts users to the rows of a table they are allowed to view. Row filters also apply to related tables; for example, if a row is restricted in the Product table, the corresponding rows in the Reseller Sales table are also restricted.

Roles are used in Microsoft SQL Server Analysis Services to manage security for Analysis Services and data. There are two types of roles in Analysis Services:

  • The server role: A fixed role that provides administrator access to an instance of Analysis Services.
  • Database roles: Roles defined by model authors and administrators to control access to a model database and data for non-administrator users.

Roles defined for a tabular model are database roles. That is, the roles contain members consisting of Windows users or groups that have specific permissions that define the actions those members can perform in the model database. A database role is created as a separate object in the database, and applies only to the database in which that role is created. Windows users, Windows groups, or both are included in the role by the model author, who, by default, has administrator permissions on the workspace database server. In a deployed model, the roles are managed by an administrator.

Roles in tabular models can be further defined with row filters. Row filters use DAX expressions to define the rows in a table, and any related rows that a user can query. Row filters using DAX expressions can only be defined for the Read and Read and Process permissions.

By default, when you create a new tabular model project, the project does not have any roles. You can define roles using the Role Manager in SQL Server Data Tools. When roles are defined during model authoring, they are applied to the model workspace database. When the model is deployed, the same roles are applied to the deployed model. After a model has been deployed, members of the server role (Analysis Services administrators) and database administrators can manage the roles associated with the model and the members associated with each role by using SQL Server Management Studio.

In our data model, we need to define security such that each country’s users should see the sales data specific to their region, while top-level management should have unrestricted access to all the data. In other words, we need to define roles for each country to restrict the data to that country and further assign users to roles based on the country they belong to.

To define roles in our data model, click the Roles button in the toolbar as shown in the following figure.

Figure: Roles option

This launches the Role Manager. Click New to create a new role. By default, the name of the new role created is “Role.” Click on the role to rename it to United States, which is meant for all users who are allowed to view U.S. sales transactions.

When we click on the Permissions drop-down, we see the following options.

Figure: Permissions options in the Role Manager

The permissions options are similar to those in the multidimensional model.

  • None: Members cannot make any modifications to the model database schema and cannot query data. Row filters do not apply; no data is visible to users in this role.
  • Read: Members are allowed to query data based on row filters, but cannot see the model database in SSMS, make changes to the model database schema, or process the model. Row filters can be applied; only the data specified in the row filter DAX formula is visible to users.
  • Read and Process: Members are allowed to query data based on row filters and to run process operations by running a script or package that contains a process command, but cannot make changes to the database or view the model database in SQL Server Management Studio. Row filters can be applied; only the data specified in the row filter DAX formula can be queried.
  • Process: Members can run process operations by running a script or package that contains a process command, but cannot modify the model database schema, query data, or view the model database in SQL Server Management Studio. Row filters do not apply; no data can be queried in this role.
  • Administrator: Members can modify the model schema and can query all data in the model designer, reporting clients, and SQL Server Management Studio. Row filters do not apply; all data can be queried in this role.

In our data model, we define permissions and a description for the role, and add a DAX filter as shown in the following figure.

Figure: Defining a role

The DAX filter used to define the role is simply a DAX expression that is evaluated in row context (like a calculated column) and returns True or False. In the previous role definition, the DAX filter is evaluated for each row of the SalesTerritory table; rows where the country is the United States return True, while all other rows return False.
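Based on that description, the row filter entered for the SalesTerritory table in the Role Manager would be a one-line DAX expression similar to the following sketch (the exact column reference should match your model):

=[Country] = "United States"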

To assign Windows groups or individual users to this role, click the Members tab next to the Row Filters tab and click Add to add the Windows user or group.

Figure: Adding members to the role

Similarly, we need to define a role for each country. To expedite development, we can click the Copy button in the Role Manager, which creates a duplicate of the United States role we defined earlier. In the copy, we modify the DAX filter to a different country name and add the members who belong to that role.

Note: If a member is part of multiple roles, he or she inherits the permissions of all of those roles. If one role denies a permission while another role allows it, the permission is allowed.

To test the role, connect to the data model using Excel by clicking Analyze in Excel in the toolbar as shown in the following figure.

Figure: Analyze in Excel

We get the option to connect to the model as the current Windows user, as another Windows user, or as a role. Choose the Role option and select United States from the drop-down list as shown in the following figure.

Figure: Choosing the role to use when analyzing in Excel

If we browse the data model using a pivot table, we will see the data in the table is restricted to the United States as shown in the following figure.

Figure: Data model restricted to United States data
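Outside of SSDT, a role can also be tested from any client tool by adding the Roles property to the Analysis Services connection string, which restricts the connection to the permissions of the named role; in this hedged example, the server and database names are placeholders:

Provider=MSOLAP;Data Source=localhost\TABULAR;Initial Catalog=AdventureWorks Tabular Model;Roles=United States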

Dynamic security

Dynamic security provides a way to define row-level security based on the user name of the user currently logged in or the CustomData property returned from a connection string. In order to implement dynamic security, you must include in your model a table with login values (Windows user names) for users as well as a field that can be used to define a particular permission; for example, a DimEmployees table with a login ID (domain and username) as well as a Country value for each employee.
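A minimal sketch of such a table in the relational source might look like the following; the table and column names are chosen to match the LOOKUPVALUE formula shown later in this section, and your schema may differ:

CREATE TABLE [dbo].[EmployeeSecurity] (
    [Login Id] NVARCHAR(128),  -- Windows login in domain\username form
    [Country]  NVARCHAR(50)    -- the country whose rows this user may see
);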

To implement dynamic security, you can use the following functions as part of a DAX formula to return the user name of the user currently logged in, or the CustomData property in a connection string:

  • USERNAME (DAX): Returns the domain and user name of the user currently logged on.
  • CUSTOMDATA (DAX): Returns the CustomData property in a connection string.

We can use the LOOKUPVALUE function to return values for a column in which the Windows user name is the same as the user name returned by the USERNAME function, or a string returned by the CUSTOMDATA function. Queries can then be restricted to rows where the values returned by LOOKUPVALUE match values in the same or a related table. For example:

='Sales Territory'[Country] =
    LOOKUPVALUE (
        'Employee Security'[Country],
        'Employee Security'[Login Id], USERNAME (),
        'Employee Security'[Country], 'Sales Territory'[Country]
    )

Note: This formula was written for a customized version of the AdventureWorksDW2012 database and will not work as is for the sample AdventureWorksDW2012 database.

For more information on setting up dynamic security, refer to the following white paper by Cathy Dumas, a program manager on the SSAS team during the SQL Server 2012 release: http://msdn.microsoft.com/en-us/library/jj127437.aspx.

Defining partitions

Partitions in tabular models divide a table into logical partition objects. Each partition can then be processed independently of the others. For example, a table may include certain row sets that rarely change, while other row sets change often. In these cases, there is no need to process all of the data when you really just want to process a portion of it. Partitions enable you to separate the data you need to process frequently from the data that can be processed less frequently.

Effective model design utilizes partitions to eliminate unnecessary processing and subsequent processor load on Analysis Services servers, while making sure that data is processed and refreshed often enough to reflect the most recent data from data sources. One of the objectives of using partition tables in our data model is to expedite data refreshing in the model.

In our data model, we have the Reseller Sales table, which currently contains around 65,000 records. These can be processed quickly, but while developing the data model we must also account for the growth of the data warehouse in the coming years. In most data models, fact or transaction tables are good candidates for partitioning, since their data grows daily. However, partitioning is not restricted to fact tables and can be applied to dimension tables as well.

In most data models, we also prefer to partition by date or time, since once a transaction enters the data warehouse, it is rarely updated or deleted. Hence, we design yearly partitions for previous years, while for the current year we can design monthly partitions to minimize the processing requirements.

To define partitions, select the relevant table and click the Partitions button in the toolbar as shown in the following figure.

Figure: Partitions button

This will launch the Partition Manager window.

Figure: Partition Manager

The Partition Manager shows the current partition for the Reseller Sales table: a single partition containing all the records.

We choose to partition the Reseller Sales table by fiscal year using the OrderDateKey column. The minimum OrderDateKey value in our FactResellerSales table is 20010701, and the maximum is 20040601. We divide the table into five partitions, as follows.

  • FY01 Sales: OrderDateKey < 20010701
  • FY02 Sales: OrderDateKey between 20010701 and 20020630
  • FY03 Sales: OrderDateKey between 20020701 and 20030630
  • FY04 Sales: OrderDateKey between 20030701 and 20040630
  • FY05 Sales: OrderDateKey > 20040630

In order to define the first partition, change the Partition Name to FY01 Sales and switch to the SQL query view, where we define the query for the partition as:

SELECT [dbo].[FactResellerSales].* FROM [dbo].[FactResellerSales]
WHERE OrderDateKey < 20010701

Figure: Adding the FY01 Sales partition

Next, click Copy to create another partition, which we name FY02 Sales and modify the query as:

SELECT [dbo].[FactResellerSales].* FROM [dbo].[FactResellerSales]
WHERE OrderDateKey BETWEEN 20010701 AND 20020630

Figure: Adding the FY02 Sales partition

Similarly, we define the partitions FY03 Sales, FY04 Sales, and FY05 Sales. While defining partitions, we need to ensure that the partitions do not overlap and that no rows from the fact table are missed; otherwise, the calculation results will be incorrect.
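The queries for the remaining partitions follow the same pattern, with only the OrderDateKey boundaries changing:

-- FY03 Sales
SELECT [dbo].[FactResellerSales].* FROM [dbo].[FactResellerSales]
WHERE OrderDateKey BETWEEN 20020701 AND 20030630

-- FY04 Sales
SELECT [dbo].[FactResellerSales].* FROM [dbo].[FactResellerSales]
WHERE OrderDateKey BETWEEN 20030701 AND 20040630

-- FY05 Sales
SELECT [dbo].[FactResellerSales].* FROM [dbo].[FactResellerSales]
WHERE OrderDateKey > 20040630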

Once we have all the partitions defined, we can either process all the partitions at once by clicking Process Table, or process one partition at a time by clicking Process Partitions.

Note: The Process All option processes the entire data model, so if other tables are already processed, we might not want to use this option.

Figure: Process options

While developing the data model, it is always recommended to define all partitions, but not process all partitions. When we process the partitions while developing the data model in SSDT, the data is stored in the workspace database, which is like a temporary storage location. Processing the partitions will also add to the memory usage, since the data is cached in memory.

As a best practice during development, we should define all the partitions but process only one of the smallest ones, defining the calculations based on that partition's data. Later, when the model is deployed to an SSAS tabular instance, we can process it using Management Studio or a script, which will populate all the data from the data warehouse.
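In SQL Server 2012, a deployed tabular database is processed with an XMLA command. The following is a minimal sketch that could be run from an SSMS XMLA query window or scheduled through a SQL Server Agent job; the DatabaseID is a placeholder for your deployed database name:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <!-- placeholder: use the ID of your deployed model database -->
    <DatabaseID>AdventureWorks Tabular Model</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>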

If we need to add or delete partitions after deployment, we can manage them using SQL Server Management Studio and then process them. This ensures that partition management does not require redeployment, and administrators on the team can manage the partitions in the data model.

Deploying the data model

Once we have completed the design and development of the tabular data model, it is time to deploy the model to the SSAS tabular instance.

There are several methods you can use to deploy a tabular model project. Most of the deployment methods that can be used for other Analysis Services projects, such as multidimensional model projects, can also be used to deploy tabular model projects.

  • Deploy command in SQL Server Data Tools: The deploy command provides a simple and intuitive method to deploy a tabular model project from the SQL Server Data Tools environment.
  • Analysis Management Objects (AMO) automation: AMO provides a programmatic interface to the complete command set for Analysis Services, including commands that can be used for solution deployment. As an approach for solution deployment, AMO automation is the most flexible, but it also requires some programming effort. A key advantage of AMO is that you can use SQL Server Agent with your AMO application to run deployment on a preset schedule.
  • XMLA: Use SQL Server Management Studio to generate an XMLA script of the metadata of an existing Analysis Services database, and then run that script on another server to re-create the initial database. XMLA scripts are easily created in SQL Server Management Studio by defining the deployment process, then codifying it and saving it as an XMLA script. Once you have the XMLA script in a saved file, you can run the script according to a schedule, or embed the script in an application that connects directly to an instance of Analysis Services. You can also run XMLA scripts on a preset schedule using SQL Server Agent, but you do not have the same flexibility as with AMO, which provides a wider breadth of functionality by hosting the complete spectrum of administrative commands.
  • Deployment Wizard: Use the Deployment Wizard to deploy the XMLA output files generated by an Analysis Services project build to a destination server. The primary advantage of the Deployment Wizard is convenience: just as you can save an XMLA script for later use in SQL Server Management Studio, you can save Deployment Wizard scripts. The wizard can be run both interactively and at the command prompt via the Deployment utility.
  • Deployment utility: The Deployment utility lets you start the Analysis Services deployment engine from a command prompt (see the example after this list).
  • Synchronize Database Wizard: Use the Synchronize Database Wizard to synchronize the metadata and data between any two Analysis Services databases. The wizard can copy both data and metadata from a source server to a destination server. If the destination server does not have a copy of the database you want to deploy, a new database is copied to it; if it already has a copy, the database on the destination server is updated with the metadata and data of the source database.
  • Backup and restore: Backup offers the simplest approach to transferring Analysis Services databases. From the Backup dialog box, you can set configuration options and run the backup directly, or create a script that can be saved and run as often as required. Backup and restore is not used as frequently as the other deployment methods, but it is a way to quickly complete a deployment with minimal infrastructure requirements.
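As an illustration of the command-prompt option, the Deployment utility is typically invoked against the .asdatabase file produced by the project build. In this hedged example, the file name and log file name are placeholders; the /s switch runs the deployment silently, writing output to the specified log:

Microsoft.AnalysisServices.Deployment.exe "Model.asdatabase" /s:deploy.log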

In this chapter, we will use the first method: deploying the data model from SSDT.

Important: Before we deploy the data model, we must change the data source connection for the data model to point to the production data warehouse.

In order to deploy the data model, we need to set some of the deployment properties. Right-click the project name in Solution Explorer and select Properties as shown in the following figure.

Figure: Data model in Solution Explorer

Figure: Data model properties

When you deploy the data model, a connection to the Analysis Services instance specified in the Server property is established. A new model database with the name specified in the Database property is created on that instance if one does not already exist. Metadata from the model project's Model.bim file is used to configure objects in the model database on the deployment server. The Processing Option property controls whether only the model metadata is deployed or whether data is processed as well: if Default or Full is specified, the impersonation credentials used to connect to data sources are passed in memory from the model workspace database to the deployed model database, and Analysis Services then runs processing to populate the deployed model with data. Once the deployment process is complete, client applications can connect to the model using a data connection or a .bism connection file in SharePoint.

The deployment options properties include the following.

Processing Option (default setting: Default): This property specifies the type of processing required when changes to objects are deployed. It has the following options:

  • Default: Analysis Services will determine the type of processing required. Unprocessed objects will be processed, and if required, attribute relationships, attribute hierarchies, user hierarchies, and calculated columns will be recalculated. This setting generally results in a faster deployment time than using the Full processing option.
  • Do Not Process: Only the metadata will be deployed. After deploying, it may be necessary to run a process operation on the deployed model to update and recalculate data.
  • Full: The metadata is deployed and a Process Full operation is performed. This ensures that the deployed model has the most recent updates to both metadata and data.

Transactional Deployment (default setting: False): This property specifies whether the deployment is transactional. By default, the deployment of all or changed objects is not transactional with the processing of those deployed objects, so deployment can succeed and persist even if processing fails. You can change this setting to incorporate deployment and processing in a single transaction.

Query Mode (default setting: In-Memory): This property specifies whether query results are returned from the in-memory cache or from the relational source in DirectQuery mode. It has the following options:

  • DirectQuery: Specifies that all queries to the model should use the relational data source only. DirectQuery mode requires the Impersonation settings to be set, specifying the credentials used to authenticate when connecting to the relational data source.
  • DirectQuery with In-Memory: Specifies that, by default, queries should be answered by using the relational source, unless otherwise specified in the connection string from the client.
  • In-Memory: Specifies that queries should be answered by using the cache only.
  • In-Memory with DirectQuery: Specifies that, by default, queries should be answered by using the cache, unless otherwise specified in the connection string from the client (an example follows this list).
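For the two hybrid modes, a client can override the default behavior through its connection string. The following is a hedged sketch, assuming the model was deployed with In-Memory with DirectQuery and using placeholder server and database names:

Provider=MSOLAP;Data Source=localhost\TABULAR;Initial Catalog=AdventureWorks Tabular Model;DirectQueryMode=DirectQuery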

After setting the deployment properties, right-click on the project again in the Solution Explorer and click Deploy to deploy the data model to the SSAS tabular instance specified in the Server property. After deployment is completed successfully, we see the following screen.

Figure: Successful deployment

We can further verify the deployment by logging into the target server and connecting to the SSAS tabular instance using SSMS as shown in the following figure.

Figure: Successful deployment as shown in SSMS

Post-deployment tasks

After the data model has been deployed, we need to complete certain post-deployment tasks:

  • Ensure the data source points to the production data warehouse, not the development data warehouse; this is easy to miss if we forget to update the data source before deployment.
  • Process the data model if the model was deployed with the Do Not Process option.
  • Create a processing plan and a job to process the tables or partitions frequently, ensuring the data stays up to date.
  • Assign Windows users or groups to the appropriate roles to give them the right access and permissions to the data model.
  • Create or add partitions to the table using Management Studio to minimize the data to process and improve the data refresh time.

Summary

In this chapter, we prepared the data model for reporting by hiding unneeded columns, setting reporting properties, and defining perspectives, roles, and partitions, and we deployed the completed model to an SSAS tabular instance. In the next chapter, we will discuss how to use Power View to explore the data model and create visually appealing reports, which can help end users derive meaningful information from their data model.
