Rate Limits

REST API Rate Limits

The REST API limits the rate of requests to 15 requests per second. Certain resource-intensive operations can also adversely affect the performance of your system, including placing large download orders, placing download orders with many transformations such as resizing or watermarking, and uploading large files. Performing a large number of these operations in rapid succession may increase their processing time. If you require a high volume of these operations, we recommend working with our expert services team, who can review your design and recommend optimizations. The default REST API rate limits can be increased; contact your Aprimo Customer Success Director for more information.

Enforcement

Exceeding the REST API rate limit causes requests to receive the HTTP response “429 Too Many Requests”. If you receive a 429 response, reduce the number of calls being made to the API at once. Note that this limit is enforced across all integrations deployed in a tenant: the per-second rate limit applies globally to all calls made to the REST API. If you are seeing 429 Too Many Requests responses, the strategies below can help you handle them.

The rate limiter may also queue your requests rather than rejecting them outright. If too many requests are received at once, the requests above the allowed limit are placed into a queue and processed according to the rate limit. If there is no room in the queue, the requests are rejected with a “429 Too Many Requests” response. By default, the REST API allows 100 requests above the limit in the queue at a time; the responses for these queued requests are delayed according to the rate limit.

Log all 429 responses: Log the user and request that caused each 429 response. This information can be used to identify which operations need to be investigated. If there is a large volume of 429 responses in a short period of time, consider sending an email notification to a system administrator to investigate.

Retry the request: After a set amount of time, retry the request.
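
For example, a minimal retry sketch in Python using the requests library; the URL, headers, and backoff schedule are illustrative assumptions, not part of the Aprimo API:

```python
import time
import requests

def get_with_retry(url, headers, max_retries=5):
    """Call the REST API, retrying with a growing delay whenever a 429 is returned."""
    delay = 1  # seconds to wait before the first retry (illustrative value)
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # Log the rejected request so the offending operation can be investigated.
        print(f"429 received on attempt {attempt + 1}; retrying in {delay}s")
        time.sleep(delay)
        delay *= 2  # back off further on each consecutive 429
    raise RuntimeError(f"Request to {url} still rate limited after {max_retries} retries")
```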

Example Scenario

With the default REST API limit of 15 requests per second and a queue size of 100 requests, what happens if 130 requests are sent all at the same time?

* The first 15 requests are processed normally.
* The next 100 requests are placed into a queue and are allowed out of the queue at the rate of 15 requests per second. That means the very last request in the queue will not begin processing until 6.7 seconds after it was placed in the queue (100 requests / 15 requests per second).
* The last 15 requests are rejected with a response of “429 Too Many Requests”.
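
The same arithmetic can be expressed as a short sketch using the default limits described above:

```python
RATE_LIMIT = 15   # requests per second
QUEUE_SIZE = 100  # requests allowed above the limit
burst = 130       # requests sent at the same time

processed_now = min(burst, RATE_LIMIT)
queued = min(max(burst - RATE_LIMIT, 0), QUEUE_SIZE)
rejected = burst - processed_now - queued
max_queue_delay = queued / RATE_LIMIT  # seconds until the last queued request starts

print(processed_now, queued, rejected, round(max_queue_delay, 1))
# 15 100 15 6.7
```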

Minimizing API Calls

If hitting the rate limit is a concern for a client application, there are several strategies that can be implemented to minimize the number of API calls the application makes.

Space out job check requests: When checking the state of a job, such as a download order, wait at least 1 second between requests. Alternatively, if the download URL is not needed immediately, implement an algorithm that scales the wait time between requests, e.g. 1 second, 3 seconds, 5 seconds, and so on.
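
A polling sketch along those lines, assuming a hypothetical job status endpoint and response shape (check the REST API documentation for the actual contract):

```python
import time
import requests

def wait_for_job(status_url, headers, waits=(1, 3, 5, 10, 30)):
    """Poll a job (e.g. a download order) with increasing wait times between checks.

    status_url and the response fields are placeholders; see the REST API docs
    for the actual job status endpoint and payload.
    """
    for wait in waits:
        response = requests.get(status_url, headers=headers)
        response.raise_for_status()
        job = response.json()
        if job.get("status") in ("Completed", "Failed"):
            return job
        time.sleep(wait)  # scale the delay instead of polling every second
    raise TimeoutError("Job did not finish within the polling schedule")
```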

Use select headers: Whenever possible, use select headers in your requests to retrieve related asset record data instead of making an additional API call. For example, the header select-record: classifications returns the list of classifications the asset record is classified in.
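
A minimal sketch of the header in use; the record URL, record id, and token are placeholders:

```python
import requests

# record_url and the token below are placeholders; the point is the select-record
# header, which asks the API to include related data (here, classifications) in
# the same response instead of requiring a second call.
record_url = "https://<tenant>.dam.aprimo.com/api/core/record/<record-id>"
headers = {
    "Authorization": "Bearer <access-token>",
    "select-record": "classifications",
}
response = requests.get(record_url, headers=headers)
```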

Cache asset files and metadata: This is discussed in more depth in the next section.

Asset uploads: Always use the maximum chunk size, 20 MB, when using the upload service. See https://training3.dam.aprimo.com/api/core/docs#usage_uploading for details.
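
A sketch of reading a file in maximum-size chunks; upload_segment and upload_uri are placeholders for the actual upload service calls documented at the link above:

```python
CHUNK_SIZE = 20 * 1024 * 1024  # 20 MB, the maximum (and preferred) chunk size

def iter_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield the file in maximum-size chunks so the fewest upload calls are made."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# for index, chunk in enumerate(iter_chunks("large-asset.psd")):
#     upload_segment(upload_uri, index, chunk)  # placeholder for the upload service call
```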

Re-use authorization tokens: Re-use an authorization token until a request receives a 401 Unauthorized response. Once that occurs, use the refresh token to get another token.
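
A sketch of that pattern, assuming an OAuth 2.0 refresh-token grant; the token endpoint and credentials are placeholders for your own client registration:

```python
import requests

class TokenSession:
    """Re-use one access token across calls and refresh it only after a 401."""

    def __init__(self, token_url, client_id, client_secret, refresh_token):
        # token_url and the grant parameters are placeholders for your OAuth setup.
        self.token_url = token_url
        self.client_id = client_id
        self.client_secret = client_secret
        self.refresh_token = refresh_token
        self.access_token = None

    def _refresh(self):
        response = requests.post(self.token_url, data={
            "grant_type": "refresh_token",
            "refresh_token": self.refresh_token,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        })
        response.raise_for_status()
        payload = response.json()
        self.access_token = payload["access_token"]
        self.refresh_token = payload.get("refresh_token", self.refresh_token)

    def get(self, url, **kwargs):
        if self.access_token is None:
            self._refresh()
        headers = {**kwargs.pop("headers", {}), "Authorization": f"Bearer {self.access_token}"}
        response = requests.get(url, headers=headers, **kwargs)
        if response.status_code == 401:  # token expired: refresh once and retry
            self._refresh()
            headers["Authorization"] = f"Bearer {self.access_token}"
            response = requests.get(url, headers=headers, **kwargs)
        return response
```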


Asset and Metadata Caching

Caching can be a powerful way to reduce the chatter between two systems. The following approaches can be used to effectively cache information from Aprimo DAM.

Query the DAM for changes: Use the “filter” header with the value “modifiedOn>[datetime of last successful cache update in UTC]” to return only information that is not already part of the current cache.
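
A sketch of such a query; the search URL, token, and timestamp format are assumptions to be checked against your tenant's REST API documentation:

```python
import requests
from datetime import datetime, timezone

# records_url and the token are placeholders; the filter header limits the results
# to records modified since the cache was last refreshed successfully.
last_cache_update = datetime(2024, 1, 1, tzinfo=timezone.utc)  # stored by your application
headers = {
    "Authorization": "Bearer <access-token>",
    "filter": f"modifiedOn>{last_cache_update.strftime('%Y-%m-%dT%H:%M:%SZ')}",
}
records_url = "https://<tenant>.dam.aprimo.com/api/core/records"
response = requests.get(records_url, headers=headers)
```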

Webhook rules: Use rules to trigger webhooks that make requests to the client application when assets are created or updated. The rules should only be applied within the scope of the classifications that are important to the client application.
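
A minimal receiver sketch for such a webhook, assuming a JSON payload that carries a record id; the actual payload depends on how the rule is configured in your tenant:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/aprimo-webhook", methods=["POST"])
def handle_webhook():
    # The payload shape is an assumption; inspect the payload produced by your
    # rule configuration before relying on specific fields.
    event = request.get_json(silent=True) or {}
    record_id = event.get("recordId")
    # Invalidate or refresh only the cached entry for this record instead of
    # re-querying the whole DAM.
    print(f"Record changed: {record_id}")
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```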

Use a long-term cache: Cache data that very rarely changes, such as language IDs, fields, user groups, and classifications, and refresh this data once an hour or once a day.
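
A minimal sketch of such a cache; the loader function is a placeholder for whichever API call fetches the data:

```python
import time

class LongTermCache:
    """Cache rarely-changing data (languages, fields, user groups, classifications)."""

    def __init__(self, loader, ttl_seconds=3600):
        self.loader = loader          # function that fetches the data from the API
        self.ttl_seconds = ttl_seconds
        self.value = None
        self.loaded_at = 0.0

    def get(self):
        if self.value is None or time.time() - self.loaded_at > self.ttl_seconds:
            self.value = self.loader()  # one API call per hour instead of per use
            self.loaded_at = time.time()
        return self.value

# classifications_cache = LongTermCache(fetch_classifications, ttl_seconds=3600)
```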


Related Topic: System Performance

Reducing the number of API calls to Aprimo DAM will improve the overall performance of both the platform and external applications. It is also helpful to implement monitoring techniques to test and gather metrics on how long certain operations take.

Timer: A timer can be implemented that starts when an operation begins, stops when the operation ends, and logs the result. This can be used to identify operations that need to be investigated.
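
A simple sketch of such a timer in Python; the operation name and the wrapped call are placeholders:

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)

@contextmanager
def timed(operation_name):
    """Log how long an operation took so slow operations can be identified."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        logging.info("%s took %.2f seconds", operation_name, elapsed)

# with timed("place download order"):
#     place_download_order(...)  # placeholder for the operation being measured
```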

Hosting: Hosting the client application in the same Microsoft Azure colocation as the Aprimo DAM can increase performance. If this is not possible, host the client application geographically close to the colocation site.