CHAPTER 6

Managing Analysis Services Databases



Our focus up to this point has been the development process. Periodically during development, you use the Deploy command in SSDT to deploy and process the Analysis Services database. When you do this, you are able to browse dimensions and launch Excel to browse the cube. However, you have yet to learn what it really means to deploy and process a database. In this chapter, I start with an overview of deployment and how to use the Deployment Wizard to move an Analysis Services database from development to production. Then I explain what happens during processing. Next, I show you how to configure security to enable users and administrators to access the database. Lastly, I review the alternatives you have for making copies of your database for disaster recovery and for scale-out.

Deployment Options

There are two ways to deploy an Analysis Services database after completing development tasks:

  • Deploy Command
  • Deployment Wizard

Deploy Command

In Chapter 3, “Developing dimensions,” you learned how to use the Deploy command on the Build menu in SSDT. When you make changes to the Analysis Services database design, whether updating a measure group in a cube or changing an attribute in a dimension, you must deploy the changes made in the development environment so that the database on the server reflects those changes, which allows you to review the results. Now let’s take a closer look at the three steps this command performs:

  • Build
  • Deploy
  • Process

The Build step consolidates the database object definitions from each of the following files in the project: DS, DSV, CUBE, DIM, and ROLE. The result of the consolidation is a single ASDATABASE file that contains the XML definitions for each of the database objects.

The Deploy step copies the ASDATABASE file to the Data folder for the Analysis Services server. By default, this folder is located at \Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Data. The individual object definitions are then read from the file and decomposed into separate files again, matching the file structure in the SSDT project.

The operations performed in the Process step depend on the options defined on the Deployment page of the Project Properties dialog box. Here you can specify settings to determine whether processing occurs, whether deployment operations are transactional, and which objects to deploy.

You can choose from the following settings when configuring the Processing Option in Configuration Properties, as shown in Figure 83:

  • Default. Analysis Services determines the type of processing to perform based on the changes it detects when you use the Deploy command.
  • Do Not Process. Choose this option if you prefer to control the processing operation manually as a separate step following deployment.
  • Full. Use this option if you want Analysis Services to fully process each object after you use the Deploy command.

Figure 83: Deployment Properties

Another option in the Project Properties is the Transactional Deployment setting. By default, this setting is False, but you can change it to True if you prefer. When you use transactional deployment, the deployment and processing of the database objects must both succeed to persist the changes on the server. Otherwise, if either step fails, Analysis Services returns the database on the server to the state it was in prior to deployment of the changes.

The final setting affecting deployment is Deployment Mode, which has the following options:

  • Deploy Changes Only. This is the default setting. Analysis Services deploys only the objects that do not exist or no longer match objects on the server. This is also the faster of the two options.
  • Deploy All. When you use this setting, Analysis Services copies all database objects to the server, replacing any existing objects if applicable.

Deployment Wizard

As you now know, the deployment process places a new copy of changed files on the server, which the server uses to retrieve the current definition of database objects during processing. Although it’s possible to use XMLA scripts to modify database objects, it’s considered best practice to use SSDT to make design changes so that you can save your work in a source control system. However, on an ongoing basis, you might manually update Roles on the server to manage security or you might use scripts or automated processes to update the database object files or to add new partitions to hold newer data. In those cases, any partition or role definitions in SSDT that you deploy to the server would wipe out anything you update on the server directly by using XMLA scripts.

To better manage partitions or roles after a database has been placed in production, or when you as a developer lack the permissions to deploy a database directly to production, a good alternative is the Analysis Services Deployment Wizard. Before launching the wizard, use the Build command in SSDT to generate the ASDATABASE file. Then launch the wizard from the Microsoft SQL Server program group on the Start menu, in the Analysis Services folder. The wizard walks you through the following pages (after the Welcome page, if it is still enabled):

  • Specify Source Analysis Services Database. Here you must provide the path and file name for the ASDATABASE file, or click the ellipsis button to navigate through the file system to locate and open this file. The file is located by default in the bin folder of your project.
  • Installation Target. On this page, you must specify the target server and target database names for your project.
  • Specify Options for Partitions and Roles. Your choices here affect what happens to existing partitions and roles. By default, existing partitions are replaced by whatever partition definition happens to be in your development files. If you have partitions on the server that you want to keep, you need to choose the retain option for partitions instead. In that case, any partitions in your project for measure groups that already exist on the server are ignored, and only new measure groups and their partitions are deployed to the server. On the other hand, the default for roles is to deploy new roles and keep the roles already on the server intact. Your other options are either to replace all roles on the server with the roles defined in the project, or to ignore anything in the project and keep the roles on the server intact.
  • Specify Configuration Properties. You can configure the deployment process to keep any existing configuration and optimization settings already stored on the server. In addition, you can change the connection string to the data source, impersonation information for processing, locations for error files, the report location for reporting actions, and storage locations for the database cube, measure groups, or partitions.
  • Select Processing Options. Here you have the option to set values for the project property settings related to processing: Processing Method and Transactional Deployment. If your cube is configured to support writeback, you have an additional property to set that controls whether Analysis Services should use an existing table for writeback or create a new table.
  • Confirm Deployment. You can continue through the wizard without creating a deployment script if you prefer to have the wizard invoke the deployment process. On the other hand, if you choose to create a deployment script, you must specify a location for storing the XMLA file that the wizard generates. You can later open this file in SSMS and execute it to perform deployment.

Processing Strategies

When you first deploy an Analysis Services database, it is in an unprocessed state and empty. That means no one can query the database. You need to perform a full process of the database to load the dimensions and cube partitions with data. After the initial process, you have the following options for keeping data up-to-date in the database:

  • Full process
  • Process data and process index
  • Process update
  • Process add

Full Process

If a full database process completes quickly enough, you can choose to run one on a nightly basis. You can schedule the full process as a SQL Server Agent job or include it as an Analysis Services Processing Task in an Integration Services package, as shown in Figure 84. However, for large cubes that take longer to process, you might need to implement other strategies. Each time you do a full process, the existing contents are removed from the database and the data is completely reloaded in all dimensions and cube partitions. You can choose to do a full process at the database level, cube level, measure group level, or dimension level. To configure the Analysis Services Processing Task, select the target object(s) for the process, and select Process Full in the Process Options drop-down list.

Figure 84: Selection of Objects to Process

A similar user interface appears when you right-click a database, cube, measure group, or dimension in SSMS and select the Process command on the submenu. However, in SSMS you cannot select multiple objects to process in the same dialog box. Instead, you can click Script in the Process dialog box toolbar to generate a script like this:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Object>
        <DatabaseID>SSAS Succinctly</DatabaseID>
        <CubeID>Sales</CubeID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>

You can repeat this procedure to generate a script for each object to process. You can then combine the Process elements from each script inside the Parallel element of the first XMLA script you create. That way, you can process multiple objects in parallel. If you prefer to process objects sequentially (which might be necessary if your server memory is constrained), you can remove the Parallel element from the script.
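
For example, a combined batch that fully processes the Sales cube and one of its dimensions in parallel might look like the following sketch. The Product dimension ID is a hypothetical stand-in for one of your own dimension IDs, and the namespace declarations from the generated script are omitted for brevity:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <!-- First Process element, copied from the generated cube script -->
    <Process>
      <Object>
        <DatabaseID>SSAS Succinctly</DatabaseID>
        <CubeID>Sales</CubeID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
    <!-- Second Process element, copied from a generated dimension script -->
    <Process>
      <Object>
        <DatabaseID>SSAS Succinctly</DatabaseID>
        <DimensionID>Product</DimensionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>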

The Process dialog box, in both the Analysis Services Processing Task and SSMS, includes a Batch Settings Summary section. Click Change Settings in this section to update the following settings:

  • Processing Order. The default processing order is Parallel. You can let the server decide how many processing tasks to execute in parallel or set a specific maximum number of parallel tasks. As an alternative, you can specify sequential processing.
  • Transaction Mode. If you process tasks sequentially, you can specify whether to treat the tasks as a single transaction or as separate transactions.
  • Dimension Errors. You can keep the default error configuration to automatically fail processing if a problem occurs during dimension processing, or define a custom error configuration as described in Processing Options and Settings.
  • Dimension Key Error Log Path. You can specify a path in which Analysis Services will store a log file containing dimension key errors if you set up a custom error configuration.
  • Process Affected Objects. You can instruct Analysis Services to process all objects that have a dependency on the selected object.

Note: If you perform a full process on a dimension, you must then do a full process on any associated cube.

One way to optimize processing is to create partitions in a cube as described in Chapter 4, “Developing cubes.” A common partitioning strategy is to create separate partitions for separate time periods. Consequently, it becomes necessary only to process the most current partition because data in older partitions rarely changes. Therefore, you can limit the time necessary for processing a cube by running a full process only for the current partition. Analysis Services removes the data in that partition only, reloads data from the source, and rebuilds aggregations. If the current partition is relatively small compared to the overall cube size, the full process on the partition will be faster and more efficient than processing all partitions. All other existing partitions remain unchanged.
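
As a sketch, a nightly job that fully processes only the current partition might execute XMLA along these lines (the measure group and partition IDs shown here are hypothetical placeholders for your own object IDs):

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SSAS Succinctly</DatabaseID>
    <CubeID>Sales</CubeID>
    <MeasureGroupID>Reseller Sales</MeasureGroupID>
    <PartitionID>Reseller Sales Current</PartitionID>
  </Object>
  <!-- Clears and reloads only this partition; all other partitions are untouched -->
  <Type>ProcessFull</Type>
</Process>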

Process Data and Process Index

When working with a large cube or dimension, you might find it more effective to replace a Process Full operation with Process Data and Process Index operations. By splitting up these two operations that are included as part of Process Full, you can enable Analysis Services to focus resources on one operation at a time and thereby get better performance. However, this is typically necessary only when working with large data volumes.
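 
For instance, assuming the same Sales cube, a batch without a Parallel element runs its commands sequentially, so the following sketch loads the data first and then builds indexes and aggregations as a separate step:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Step 1: read data from the source into the cube partitions -->
  <Process>
    <Object>
      <DatabaseID>SSAS Succinctly</DatabaseID>
      <CubeID>Sales</CubeID>
    </Object>
    <Type>ProcessData</Type>
  </Process>
  <!-- Step 2: build indexes and aggregations from the loaded data -->
  <Process>
    <Object>
      <DatabaseID>SSAS Succinctly</DatabaseID>
      <CubeID>Sales</CubeID>
    </Object>
    <Type>ProcessIndexes</Type>
  </Process>
</Batch>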

Process Update

Another option that takes less time than the Process Full operation is to add only new data and update changed data in the database. This option is applicable only to a dimension. The Process Update operation loads all new dimension members and makes changes as necessary to existing dimension members.

Performing a Process Update does not require you to execute a Process Full operation on a cube, as is necessary after executing Process Full on a dimension. Inside a cube, there are internal identifiers for the dimension members that are created when storing data in the cube partitions. When you run Process Full on a dimension, even when loading the same data, there is no guarantee that each dimension member gets the same internal identifier, so the cube becomes invalid. With Process Update, the identifiers remain the same. The only potential problem is that if you move a member from one parent in a hierarchy to another parent, and if flexible attribute relationships are defined, any aggregations for that hierarchy are invalidated. The cube can still be queried, but queries can run more slowly until aggregations are rebuilt with a Process Index or Process Default operation on the cube.
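
A minimal sketch of the command, using the Territory dimension from this chapter's examples:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SSAS Succinctly</DatabaseID>
    <DimensionID>Territory</DimensionID>
  </Object>
  <!-- Adds new members and updates existing ones without invalidating the cube -->
  <Type>ProcessUpdate</Type>
</Process>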

Process Add

If dimension data that is already loaded doesn’t change and there are only new dimension records to add periodically, then you can use the Process Add operation to insert the new records. This operation runs faster than Process Update and has no impact on the cube. It is not necessary to rebuild aggregations nor process the cube when adding new dimension members. On the other hand, because Process Add requires some additional configuration, it’s not a particularly straightforward option. Greg Galloway, a SQL Server MVP and BI architect at Artis Consulting, has blogged about Process Add Examples with explanations that you can adapt to suit your needs.

If you are using Standard edition, you can maintain a maximum of three partitions per measure group, but you can still manage new partition data efficiently by using Process Add to insert that data into an existing partition. (Incidentally, this shows up as Process Incremental when you’re using the graphical interface to configure processing.) This approach is likely to run much faster than running Process Full on the partition. However, it’s important not to let this become the only form of processing you perform, because some fragmentation can occur when you do this often. Consider running Process Add nightly and then running Process Full on a weekly or monthly basis. Aggregations are updated with Process Add, so there is no need to run a separate process afterwards.
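
In XMLA, the operation is a Process command with a type of ProcessAdd, as in this sketch (the measure group and partition IDs are hypothetical). Note that in practice you typically also supply an out-of-line binding in the enclosing Batch element so that Analysis Services reads only the new source rows; without it, rows already in the partition would be added a second time:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SSAS Succinctly</DatabaseID>
    <CubeID>Sales</CubeID>
    <MeasureGroupID>Reseller Sales</MeasureGroupID>
    <PartitionID>Reseller Sales Current</PartitionID>
  </Object>
  <!-- Appends new fact rows to the partition and updates aggregations in place -->
  <Type>ProcessAdd</Type>
</Process>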

Security

As you build and deploy the database throughout the development process, its contents—the cube and dimensions—are secure by default. No one besides the database developer is able to query the cube. To make the database accessible to user queries, there are a few extra tasks to perform. In addition, you might need to configure administrative security for users who need to perform processing tasks. Lastly, the Analysis Services service account requires specific permissions to support processing and, when applicable, query logging and writeback.

User Security

You must grant users access to Analysis Services at the cube level in each database. Before you can do this, each user must have a Windows login and can optionally be assigned to a Windows group in your network’s Active Directory. Analysis Services uses role-based security, which has a dependency on Windows authentication. At minimum, you create a role to which you assign Windows logins, or better yet, Windows groups, and grant the role permission to access a cube. For more precise security, you need to take additional steps. For example, you can configure security at the dimension level, choosing the members you want users to see and excluding all others. You can also get more granular with security by controlling which cells a user can or cannot see.

Role Membership

To create a role, either in SSDT or in SSMS after opening a database folder, right-click the Roles folder, select New Role, and type in a role name. There is no need to assign database permissions to users who only need the ability to query the cube. On the Membership page of the Create Role dialog box, click Add and retrieve user logins or Windows group names from Active Directory.
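
Behind the scenes, a role is just another database object, so you can also create it by script. The following is a minimal sketch that creates a role and assigns a Windows group to it (CONTOSO\SalesAnalysts is a hypothetical group name used for illustration):

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>SSAS Succinctly</DatabaseID>
  </ParentObject>
  <ObjectDefinition>
    <Role>
      <ID>Sales Readers</ID>
      <Name>Sales Readers</Name>
      <Members>
        <Member>
          <!-- Hypothetical Windows group; substitute your own domain group -->
          <Name>CONTOSO\SalesAnalysts</Name>
        </Member>
      </Members>
    </Role>
  </ObjectDefinition>
</Create>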

Cube Security

After creating roles and assigning users or groups to them, you must explicitly grant permissions to authorize what users can see or do. At minimum, you need to assign the role Read access to each cube on the Cubes page of the Create Role dialog box in SSMS, as shown in Figure 85, or on the Cubes tab of the Role Designer in SSDT. You can also optionally specify whether the role will have permission to use drillthrough functionality or create a local cube using a client tool.

Figure 85: Assignment of Read Access to Cube
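
Scripted, the equivalent permission is a CubePermission object that references the role. A sketch, continuing the hypothetical Sales Readers role from above:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>SSAS Succinctly</DatabaseID>
    <CubeID>Sales</CubeID>
  </ParentObject>
  <ObjectDefinition>
    <CubePermission>
      <ID>CubePermission Sales Readers</ID>
      <Name>CubePermission Sales Readers</Name>
      <RoleID>Sales Readers</RoleID>
      <!-- Grants the role Read access to the Sales cube -->
      <Read>Allowed</Read>
    </CubePermission>
  </ObjectDefinition>
</Create>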

Dimension Security

With dimension security, you can create sets of dimension members that are either allowed or denied for any or all attribute hierarchies in a dimension. For example, if you create a set of allowed members, the user can only see the members assigned to that set. Conversely, if you configure a denied set, the user can see all members in the attribute hierarchy except the members in that set. In general, dimension security restricts access to members of a specific attribute hierarchy.

Let’s say that you want to allow a role to see only Europe in the Group attribute hierarchy in the Territory dimension. On the Dimension Data page, select the Territory dimension in the drop-down list at the top of the page, and then select Group in the Attribute Hierarchy drop-down list. Select Deselect All Members to clear the selection of members, and then select the Europe check box, as shown in Figure 86.

Figure 86: Dimension Security

Note: On the Advanced tab for dimension data, you can use MDX expressions to define allowed or denied sets rather than explicitly selecting members for inclusion on the Basic tab.
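
In script form, the allowed set is stored as an MDX set expression in a DimensionPermission object. Here is a sketch for the Europe example, again assuming the hypothetical Sales Readers role; note that the ampersand in a member's unique name must be escaped as &amp; inside XMLA:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>SSAS Succinctly</DatabaseID>
    <DimensionID>Territory</DimensionID>
  </ParentObject>
  <ObjectDefinition>
    <DimensionPermission>
      <ID>DimensionPermission Sales Readers</ID>
      <Name>DimensionPermission Sales Readers</Name>
      <RoleID>Sales Readers</RoleID>
      <Read>Allowed</Read>
      <AttributePermissions>
        <AttributePermission>
          <AttributeID>Group</AttributeID>
          <!-- MDX set of allowed members; all other Group members are hidden -->
          <AllowedSet>{[Territory].[Group].&amp;[Europe]}</AllowedSet>
        </AttributePermission>
      </AttributePermissions>
    </DimensionPermission>
  </ObjectDefinition>
</Create>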

Dimension security affects what the user sees in the metadata for the cube. Figure 87 shows the metadata tree from SSMS for a user assigned to a role that has only Europe as an allowed member in the Group attribute hierarchy. As you can see, none of the other members of the attribute hierarchy are visible: NA, North America, and Pacific. The user simply cannot see that these other members exist in the dimension. Notice also in the user-defined Territories hierarchy that Europe is again the only member visible at the Group level, but the children of Europe—France, Germany, and United Kingdom—are also visible even though the dimension security did not explicitly exclude those members. Furthermore, the Country attribute hierarchy is also restricted to only those countries that are children of Europe in the user-defined Territories hierarchy. The attribute relationship definition for the Territory dimension controls the security across all hierarchies in the dimension based upon the selection of allowed or denied members defined in dimension security.

Figure 87: Visible Members Across Hierarchies

Another option available when configuring dimension security is specification of a default member. By defining a default member, you are adding an automatic filter to the dimension for that member. For example, let's say that you have department managers who want to see sales for their respective departments. You can create a role for each department and then configure a default member for each role—one for the Bikes manager, one for the Accessories manager, and so on. That way, every time managers run a query, they don't have to explicitly ask for their department's category because Analysis Services automatically applies the filter for their respective roles. However, this is not the same as restricting access to dimension members. If managers asked for all categories in a query, they could get that information as long as no dimension security restricted it. To add a default member, select the applicable dimension and attribute hierarchy on the Basic tab of the Dimension Data page, open the Advanced tab, and then provide the member's unique name, as shown in Figure 88.

Figure 88: Default Member Configuration

You also have the ability to control whether the grand total for an attribute displays only the aggregation of the visible members or the aggregation of all members in the hierarchy. This feature is known as Visual Totals and is configurable on the Advanced tab of the Dimension Data page. By default, the Enable Visual Totals check box is not selected. In the case when dimension security restricts a user to Europe only, the grand total for the hierarchy displays the total for all groups, as shown on the left side of Figure 89. However, when the Enable Visual Totals check box is selected, the grand total is adjusted to show only the aggregation of the visible members, as shown on the right side of Figure 89.

Figure 89: Effect of Visual Totals Feature
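
Both of these advanced settings live on the AttributePermission in script form. As a sketch, the fragment below slots into the DimensionPermission shown earlier and sets a default member and visual totals for the Group attribute hierarchy (this assumes the ASSL representation, which stores Enable Visual Totals as 1 or 0):

<AttributePermission>
  <AttributeID>Group</AttributeID>
  <!-- Automatic filter applied to every query for this role -->
  <DefaultMember>[Territory].[Group].&amp;[Europe]</DefaultMember>
  <!-- 1 = grand totals reflect only the visible members -->
  <VisualTotals>1</VisualTotals>
  <AllowedSet>{[Territory].[Group].&amp;[Europe]}</AllowedSet>
</AttributePermission>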

Cell Security

Another way to manage security is to apply cell security, which controls whether the user sees a numeric value at certain intersections of measures and dimensions. You configure cell security on the Cell Data page by selecting a cube and then selecting one of the following permission options (a scripted sketch follows Figure 90):

  • Enable Read Permissions. You provide an expression that evaluates as True or False to instruct Analysis Services whether to allow the user to see a value, as shown in Figure 90. In this case, the expression checks if the current member of the Measures dimension is Total Product Cost and displays a value only if the result is False. Therefore, only the Total Product Cost measure will not be visible to the user. Instead, a placeholder token is displayed to indicate the value is not available.
  • Enable Read-Contingent Permissions. When you select this option, you also use an expression that returns True or False. Typically for read-contingent permissions, you reference a measure that is a calculation based on other measures. If the user is able to view all the other measures, based on Read permissions, the user can also view the calculation. For example, because Gross Profit Margin depends on Sales Amount and Total Product Cost, if Read-Contingent permission is assigned to Gross Profit Margin, the user can only view that value if the user has Read permission on Sales Amount and Total Product Cost.
  • Enable Read/Write Permissions. If the cube is enabled for writeback, you can use this option to control which cells the user can update. Again, you use an expression that returns True or False.

Figure 90: Cell Security
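
In script form, each option becomes a CellPermission nested inside the role's CubePermission. Here is a sketch of the Read case described above, assuming an expression equivalent to the one in Figure 90 (a value is displayed only where the MDX expression returns True):

<CubePermission>
  <!-- ID, Name, RoleID, and Read elements as shown earlier -->
  <CellPermissions>
    <CellPermission>
      <Access>Read</Access>
      <!-- True for every cell except those belonging to Total Product Cost -->
      <Expression>NOT ([Measures].CurrentMember IS [Measures].[Total Product Cost])</Expression>
    </CellPermission>
  </CellPermissions>
</CubePermission>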

Tip: If you have multiple measures to exclude from view or multiple conditions, you use the OR and AND operators to connect the expressions together.

Administrator Security

Besides user security, you might also need to configure administrator security. There are two levels of security for administrators: server and database.

Server Administrator Security

In SSMS, right-click the Analysis Services server node in Object Explorer to access the server properties. In the Properties dialog box, the Security page allows you to add Windows groups or users as server administrators. At the server level, these users can configure security options and access any database deployed to the server.

Note: Members of the server's built-in Administrators group automatically have server-wide permissions, even without explicit assignment as server administrators.

You can also create database-level administrators. To do this, open a database and add a new role for administrators, and then define database permissions, as shown in Figure 91. For example, if you want a set of users to have the ability to process a database or read the XMLA definition of the database, you can create a role for those users and then grant those permissions at the database level. Having database permissions does not automatically grant access to the database contents. You must also grant cube access at minimum and define dimension and cell security where applicable.

Figure 91: Database Permissions for Administrators

Service Account Security

When Analysis Services is installed, a service account is defined to run the Windows service. When you define the service account, you can select a local user account, a domain user account, or a built-in system account. The best practice is to use a domain user account with low privileges.

Regardless of which type of account you use, you must make sure you grant the account permissions to access the data source for your Analysis Services database. If you use MOLAP or HOLAP storage for your cube, the service account requires only Read permission on the data source. However, if you use ROLAP storage, you must configure Read permission at minimum. If the ROLAP partition also includes aggregations, you must also configure Write permission.

When you install Analysis Services, the correct file system permissions should be set automatically. If the service account is changed without using the proper method, it’s possible that the new account will lack the necessary permissions. In other cases, when you enable query logging or writeback, there is no automatic assignment of permissions, so you must manually update the applicable databases. In particular, the service account needs the following permissions:

  • Backup folder. The service account must have Full Control permission on the \Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Backup folder in order to save backup files.
  • Query Log Database. If you enable the query log in preparation for usage-based optimization (described in Chapter 4, “Developing cubes”), the service account must be a member of the db_owner database role for the database in which the query log is to be created.
  • Writeback Database. The service account also needs to be a member of the db_owner database role for the database storing the writeback table if you have writeback enabled on a partition in your cube.

Note: To make sure all permissions are correct for the service account on the file system, be sure to use SQL Server Configuration Manager if you need to change the service account or password for local or domain user accounts.

Database Copies

After putting an Analysis Services database into production, you should make a backup for disaster recovery. Backups are also a useful way to move a database from one server to another. For example, you might want to process a database on one server and then make it available on another server for querying, or you might want to place a copy on multiple servers to spread the query load across multiple servers. Whatever the reason, there is more than one way to copy and move a database.

Backup and Restore

Backup and restore is a simple method to use. An advantage of this approach is that you do not have to add yet another operation to your normal routine, because you should already perform a backup each time you process the database. If you have a large database, you can get your database back into working order after a failure much more quickly from a backup than by performing a full process after redeploying the Analysis Services project files. If you are distributing a single Analysis Services database to multiple servers, you can create one backup, make as many copies as you need, distribute each copy to a separate server, and then run the restore operation.

Note: Users cannot query the database during the restore process.

To perform a backup manually, right-click the database in SSMS and select the Back Up command. In the dialog box, specify a name for the backup file and select options to control whether Analysis Services can overwrite an existing backup file, compress the file, or encrypt the backup with a password, as shown in Figure 92. As another option, you can click the Script button to generate an XMLA script. You can execute the script in an SSIS package by using an Execute DDL task, or as a SQL Server Agent job by using a SQL Server Analysis Services Command step.

Figure 92: Database Backup
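
A sketch of the generated script might look like the following; the file name and options are illustrative:

<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SSAS Succinctly</DatabaseID>
  </Object>
  <!-- Written to the instance's Backup folder unless a full path is given -->
  <File>SSAS Succinctly.abf</File>
  <AllowOverwrite>true</AllowOverwrite>
</Backup>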

There is no limit to the size of a database you can back up. The actual contents of the backup depend on the storage mode you use for the cube partitions:

  • MOLAP stores data, aggregations, and database metadata.
  • HOLAP stores aggregations and metadata only.
  • ROLAP stores metadata only.

Synchronization

To use synchronization, you have one Analysis Services server to host the source database and a second Analysis Services server to host a synchronized copy. In SSMS, connect to the target server, right-click the Databases folder, and select the Synchronize command to launch the process, pointing it to the source database. Analysis Services compares the two databases to determine whether there are any differences between them. If the databases are not identical, Analysis Services updates the second server by sending only the changes, in compressed form. The number of changes affects performance. On the other hand, users can query a database during synchronization, unlike during the restore process. That said, when synchronization is complete, the second server switches out the older version of the database for the updated version; this doesn't take long, but during that period no queries are permitted. As with backups, you can script the synchronization process to automate it using SSIS or SQL Server Agent.
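
A sketch of the equivalent XMLA, executed against the target server (the source server name and the security option shown are illustrative assumptions):

<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <!-- Connection back to the server hosting the source database -->
    <ConnectionString>Provider=MSOLAP;Data Source=SourceServer;Integrated Security=SSPI</ConnectionString>
    <Object>
      <DatabaseID>SSAS Succinctly</DatabaseID>
    </Object>
  </Source>
  <!-- Copy security definitions but not role membership -->
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>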

Detach and Attach

With this method, you right-click a database on one Analysis Services server and select the Detach command. You can then move the associated database files to another server, where you reverse the process: right-click the Analysis Services server node and select the Attach command. This method is a very fast way to make a database available on a second server. You can also put the database on a SAN in read-only mode and then have multiple servers attach the same database to create a scale-out solution.
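
As a sketch, the two commands look like this in XMLA. The folder path is a hypothetical example, and the ReadWriteMode element (which enables the read-only scale-out scenario) is assumed to use the SQL Server 2008 engine namespace:

<Detach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SSAS Succinctly</DatabaseID>
  </Object>
</Detach>

Then, on the second server:

<Attach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Hypothetical path to the detached .db folder -->
  <Folder>\\SAN\OLAP\SSAS Succinctly.0.db</Folder>
  <ReadWriteMode xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100">readOnly</ReadWriteMode>
</Attach>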
