CHAPTER 7
Mixer Policies
We discussed earlier that the Istio Mixer component receives configuration from the user, combines it with attributes from different data sources, and dispatches the results to different backends. Mixer supports the following three categories of policies:
- Precondition checking: Checks conditions such as ACLs and authorization.
- Quota management: Checks conditions such as rate limits.
- Telemetry collection: Collects metrics, logs, and traces.
Mixer has two distinct areas of responsibility: policy enforcement (precondition and quota checks) and telemetry collection. It is therefore deployed as two separate pods, one enforcing policy and the other collecting and transmitting telemetry. Execute the following command to view the pods that make up Mixer.
Code Listing 87: Get Mixer pods
$ kubectl get pods -n istio-system -l istio=mixer
NAME                               READY   STATUS    RESTARTS   AGE
istio-policy-769664fcf7-8vtz8      2/2     Running   60         6d1h
istio-telemetry-577c6f5b8c-8qb7x   2/2     Running   63         6d1h
Before we discuss the Mixer policies in detail, let’s discuss the high-level architecture of Mixer. The following diagram illustrates the components of Istio and their interactions.

Figure 15: Mixer architecture
The path of the request varies depending on whether the request is for evaluating a policy or for persisting telemetry. Let’s study the two paths that a request can take.
Policy control flow
Service proxies send request details in the form of attributes to Mixer before each request to check preconditions, and again afterward to report telemetry. Mixer determines whether the request complies with precondition and/or quota restrictions by invoking the appropriate handler, which is a logical representation of an adapter. Once a service proxy receives a check result from Mixer, it caches the result locally, which enables it to serve many subsequent requests without consulting Mixer for each one. Mixer also maintains its own cache so that it can serve the result of the same policy evaluation to other Envoy instances.
Note: Mixer is a highly available component. It is stateless and has several active replicas processing requests together. It also uses caching and buffering techniques and a robust underlying design that attempts to ensure an uptime of 99.999 percent.
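The check-and-cache flow described above can be sketched in a few lines of Python. This is a toy model for illustration only (the class, its TTL value, and the request signature are all invented here; Envoy's real cache is far more sophisticated), but it shows why repeated identical requests rarely reach Mixer.

```python
import time

class CheckCache:
    """Toy model of a proxy-side policy check cache (illustrative only;
    not Envoy's actual implementation)."""

    def __init__(self, ttl_seconds=5):
        self.ttl = ttl_seconds
        self.entries = {}      # attribute signature -> (verdict, expiry)
        self.mixer_calls = 0   # counts round trips to Mixer

    def check_with_mixer(self, signature):
        # Stand-in for a real Check RPC to Mixer.
        self.mixer_calls += 1
        return "OK"

    def check(self, signature):
        now = time.monotonic()
        cached = self.entries.get(signature)
        if cached and cached[1] > now:
            return cached[0]                  # served from local cache
        verdict = self.check_with_mixer(signature)
        self.entries[signature] = (verdict, now + self.ttl)
        return verdict

cache = CheckCache()
for _ in range(100):
    cache.check("GET /fruits from curl")
print(cache.mixer_calls)   # only the first request reaches Mixer
```

A hundred identical requests inside the cache's validity window cost only one round trip to Mixer; the remaining ninety-nine are answered locally by the proxy.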
Let’s briefly discuss what attributes are. Attributes are a collection of typed key-value pairs that describe the request traffic and its context. Attributes are not only used for making decisions in Mixer, but are also logged by Mixer after the completion of every request (see telemetry control flow). Istio defines a finite but comprehensive vocabulary of attributes, which is documented here. Attributes are primarily produced by the proxies, but can also be generated by adapters.
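For example, a single request flowing through the mesh might carry attributes such as the following (the attribute names are from Istio's documented vocabulary; the values are invented for illustration):

```yaml
request.path: "/fruits"
request.size: 128
request.useragent: "curl/7.68.0"
source.ip: 10.0.0.71
destination.service.name: "fruits-api-service"
```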
Telemetry control flow
For each request received, Envoy captures an array of attributes describing the request. Envoy buffers these attributes and sends them asynchronously to Mixer’s Report API. Once a buffer of reports accumulates in Envoy, they are pushed to Mixer in one batch; Mixer transforms the attributes and pushes them to the backends through handlers (configured instances of adapters).
In this chapter, we will focus on the policy enforcement aspect of Mixer, and discuss observability in a later chapter. Mixer’s policy rules are a little complex; therefore, their configuration is split into six major parts: the adapter, the handler, the instance, the rule, the quota spec, and the quota spec binding. Let’s discuss all these components one by one.
Adapter
Adapters are independent components of Mixer, each of which performs a logical operation. As you can see in the previous architecture diagram, some adapters connect Mixer to backend systems, while others implement complete functionality by themselves, such as quota enforcement and whitelisting. You can find the complete list of adapters here.
By default, the Mixer binary contains several adapters, such as denier, OPA, and statsd. This ties the vendors' releases of these adapters to Mixer releases. Mixer is now moving to an out-of-process adapter model in which the adapter process executes separately from the Mixer process. Mixer communicates with such adapters over gRPC and is therefore insulated from crashes of the adapter processes.
Handler
A handler is an instance of the adapter that is configured with operation parameters. To draw an analogy with object-oriented programming, if the adapter is a class, then a handler is an object of the class, and the operational parameters are properties defined in the class whose values will be specified in the handler. These properties in handlers are called attributes.
The following handler specification limits the number of requests to the fruits API service to one request every five seconds.

Code Listing 88: Handler specification

apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: throttle-handler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: request-quota-instance.instance.istio-system
      maxAmount: 500
      validDuration: 1s
      overrides:
      - dimensions:
          destination: fruits-api-service
        maxAmount: 1
        validDuration: 5s
The previous handler specification applies a throttle limit of 500 requests per second to all other client requests that map to this handler. The limit drops to one request every five seconds for requests sent to the fruits API service. Note that while maxAmount sets the budget, the amount each request deducts from it is configured through another resource, named QuotaSpec. Think of maxAmount as the maximum currency available to a client to spend; the QuotaSpec specifies how much each request costs.
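The budget-and-cost accounting can be sketched as follows. This is a simplified Python model written only to make the maxAmount/validDuration/charge arithmetic concrete; the real memquota adapter uses a rolling-window allocator inside Mixer, not this naive fixed-window reset.

```python
class Quota:
    """Simplified model of quota accounting (illustrative only;
    not the real memquota algorithm)."""

    def __init__(self, max_amount, valid_duration_s):
        self.max_amount = max_amount          # budget per window
        self.valid_duration = valid_duration_s
        self.window_start = 0.0
        self.spent = 0

    def check(self, now, charge=1):
        # Reset the budget once validDuration has elapsed.
        if now - self.window_start >= self.valid_duration:
            self.window_start = now
            self.spent = 0
        if self.spent + charge > self.max_amount:
            return "RESOURCE_EXHAUSTED"
        self.spent += charge
        return "OK"

# The fruits API override: maxAmount 1 per 5 s, each request charging 1.
fruits_quota = Quota(max_amount=1, valid_duration_s=5)
print(fruits_quota.check(now=0.0))   # OK
print(fruits_quota.check(now=1.0))   # RESOURCE_EXHAUSTED
print(fruits_quota.check(now=5.0))   # OK again once the window resets
```

The second request fails because the single unit of budget for that window has already been spent; after five seconds the budget is replenished and requests succeed again.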
Instance
An instance is an object that contains request data. We know that adapters require configuration data that is supplied to them through attributes of handlers. Instances map the attributes of the incoming request to the values that are passed to the handler. Instances support the mapping of attributes through attribute expressions.
Attribute expressions are configurations written in Configuration Expression Language (CEXL), which is a subset of Go expressions. You can read about the syntax of CEXL expressions here. The following configuration demonstrates how to configure an instance, with an attribute expression in use.
Code Listing 89: Instance specification
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: request-quota-instance
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.service.name | "unknown"
The instance specified in the previous listing declares a dimension named destination and sets its value through the attribute expression. The expression consists of a canonical attribute (destination.service.name) followed by a default value ("unknown") that is used when the attribute is absent from the request. To view all the supported attributes, visit this link.
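The | operator in the expression evaluates its operands left to right and yields the first one that is defined. A minimal Python sketch of this behavior (the helper name and the sample attribute dictionary are invented here for illustration):

```python
def first_defined(*values):
    """Toy model of CEXL's '|' operator: return the first operand
    that is defined (illustrative only)."""
    for value in values:
        if value is not None:
            return value
    return None

# destination.service.name | "unknown"
attributes = {"request.path": "/fruits"}   # hypothetical request attributes
print(first_defined(attributes.get("destination.service.name"), "unknown"))
# prints unknown, because the attribute is absent from this request
```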
Rule
Rules map handlers and instances to each other, and specify the condition under which a handler will be invoked with values from an instance. Each rule specifies a match condition and actions in the form of handler-to-instance mappings that are executed when the match condition evaluates to true. You can omit the match condition, or set the value of the match key to true, to make the rule perform its actions unconditionally.
Code Listing 90: Rule specification
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota-rule
  namespace: istio-system
spec:
  match: match(request.headers["user-agent"], "curl*")
  actions:
  - handler: throttle-handler
    instances:
    - request-quota-instance
In the previous listing, we specified that the handler named throttle-handler should be invoked with data from the instance named request-quota-instance when the user agent specified in the request headers is curl. The match value curl* ensures that the match condition succeeds regardless of the version of the curl utility, since curl sends a user-agent header of the form curl/version.
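CEXL's match() function performs a glob-style comparison: a trailing * makes it a prefix match and a leading * makes it a suffix match. A small Python sketch of these semantics (the function name is invented; this models the documented behavior, not Istio's actual code):

```python
def cexl_match(value, pattern):
    """Toy model of CEXL's match(str, pattern): trailing '*' means
    prefix match, leading '*' means suffix match, otherwise an exact
    comparison (illustrative sketch of the documented behavior)."""
    if pattern.endswith("*"):
        return value.startswith(pattern[:-1])
    if pattern.startswith("*"):
        return value.endswith(pattern[1:])
    return value == pattern

print(cexl_match("curl/7.68.0", "curl*"))      # True: any curl version
print(cexl_match("PowerShell/7.0", "curl*"))   # False: rule does not fire
```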
QuotaSpec
Some services are more resource intensive than others, so you may want finer-grained control over the traffic targeted at them. Recall that in the handler specification, we set the maxAmount attribute. The QuotaSpec specifies how much each request depletes from that quota. For this sample, each request will cost one unit.
Code Listing 91: QuotaSpec specification
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: request-quota-instance
The specification in the previous listing charges one unit against the request-quota-instance quota for each request.
QuotaSpecBinding
This is the glue that binds the policies to actual services in the mesh. You may have noticed that the namespace of all the policies is istio-system, whereas the actual service may live in a different namespace. The QuotaSpecBinding attaches services to quota specifications across namespaces. The following policy binds the request-count quota specification to the fruits API service.
Code Listing 92: QuotaSpecBinding specification
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: micro-shake-factory
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: fruits-api-service
    namespace: micro-shake-factory
These are all the building blocks that we need to write any Mixer policy. Note that different policies have their own specifications, and we can’t cover all of them here. However, with the understanding of the building blocks, you will be able to configure any policy with relative ease. Let’s now test what we have built so far.
Testing the Mixer policy
Like before, delete any resources from the micro-shake-factory namespace so that they don’t interfere with our demo.
Note: The required files for this demo are present in the GitHub repository at Policies/Chapter 6.
Let’s first create the fruits API service and its associated virtual service and gateway by executing the following command.
Code Listing 93: Create fruits API service
$ kubectl apply -f fruits-api.yml -f fruits-api-vs-gw.yml
namespace/micro-shake-factory created
deployment.apps/fruits-api-deployment-v1 created
service/fruits-api-service created
gateway.networking.istio.io/fruits-api-gateway created
virtualservice.networking.istio.io/fruits-api-vservice created
Now, aggregate all the configurations that you declared previously in a single YAML file and execute the specification with the following command.
Code Listing 94: Apply throttle policy
$ kubectl apply -f curl-throttle-policy.yml
handler.config.istio.io/throttle-handler created
instance.config.istio.io/request-quota-instance created
rule.config.istio.io/quota-rule created
quotaspec.config.istio.io/request-count created
quotaspecbinding.config.istio.io/request-count created
To test the throttling limit, we have prepared a test script that dispatches two requests to the fruits API service, one after the other. The first request is sent from the curl utility, which succeeds only once every five seconds, and the other from the Invoke-WebRequest PowerShell cmdlet, which succeeds every time. This process is repeated five times in a batch, after which the process waits for five seconds before repeating itself 10 times for a total of 100 requests (50 each from curl and Invoke-WebRequest). Execute the following PowerShell test script.
Code Listing 95: Test fruits API throttling
$ .\curl-throttle-test.ps1
Request 1.1
{"ver":"1","fruit":"Mango"}
{"ver":"1","fruit":"Mango"}
Request 1.2
RESOURCE_EXHAUSTED:Quota is exhausted for: request-quota-instance
{"ver":"1","fruit":"Mango"}
The previous listing shows partial output from the test. For the first iteration, both the curl and PowerShell requests succeeded, whereas in the second iteration only the request originating from PowerShell succeeded; the curl request was throttled.
Delete the policies that you just created by executing the following command, which removes the restrictions from your service.
Code Listing 96: Delete policy
$ kubectl delete -f curl-throttle-policy.yml
handler.config.istio.io "throttle-handler" deleted
instance.config.istio.io "request-quota-instance" deleted
rule.config.istio.io "quota-rule" deleted
quotaspec.config.istio.io "request-count" deleted
quotaspecbinding.config.istio.io "request-count" deleted
We encourage you to experiment with other Mixer policies and find out for yourself where you can use them.
Summary
In this chapter, we discussed the responsibilities of Mixer and how Mixer policies work. We defined the various components that form a Mixer policy and applied them to our mesh. An important point to remember is that Mixer v2 is positioned to move much of Mixer’s functionality into the Envoy proxy. This move will reduce latency and avoid the chatty interaction between the proxy and Mixer. In the next chapter, we will discuss the various approaches to securing the services in an Istio mesh.