Azure subscription and service limits, quotas, and constraints
This document lists some of the most common Microsoft Azure limits, which are also sometimes called quotas. To learn more about Azure pricing, see Azure pricing overview. There, you can estimate your costs by using the pricing calculator. You can also go to the pricing details page for a particular service, for example, Windows VMs. For tips to help manage your costs, see Prevent unexpected costs with Azure billing and cost management.

Managing limits

Note: Some services have adjustable limits. When a service doesn't have adjustable limits, the following tables use the header Limit. In those cases, the default and the maximum limits are the same. When the limit can be adjusted, the tables include Default limit and Maximum limit headers. The limit can be raised above the default limit but not above the maximum limit. If you want to raise the limit or quota above the default limit, open an online customer support request at no charge.

The terms soft limit and hard limit are often used informally to describe the current, adjustable limit (soft limit) and the maximum limit (hard limit). If a limit isn't adjustable, there's no soft limit, only a hard limit.

Free Trial subscriptions aren't eligible for limit or quota increases. If you have a Free Trial subscription, you can upgrade to a Pay-As-You-Go subscription. For more information, see Upgrade your Azure Free Trial subscription to a Pay-As-You-Go subscription and the Free Trial subscription FAQ.

Some limits are managed at a regional level. Let's use vCPU quotas as an example. To request a quota increase with support for vCPUs, you must decide how many vCPUs you want to use in which regions. You then request an increase in vCPU quotas for the amounts and regions that you want. If you need to use 30 vCPUs in West Europe to run your application there, you specifically request 30 vCPUs in West Europe. Your vCPU quota isn't increased in any other region; only West Europe has the 30-vCPU quota.
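As a sketch of this per-region accounting, the following Python snippet tallies requested vCPUs against per-region quotas and reports which regions need an increase. The quota figures are hypothetical placeholders, not real Azure defaults; check your actual values in the Azure portal.

```python
# Hypothetical per-region vCPU quotas (illustration only; look up your
# real values in the Azure portal before requesting an increase).
quotas = {"westeurope": 10, "eastus": 10}

# vCPUs the workload needs in each region.
requested = {"westeurope": 30, "eastus": 8}

def regions_needing_increase(quotas, requested):
    """Return {region: shortfall} for regions where the request exceeds quota."""
    return {
        region: needed - quotas.get(region, 0)
        for region, needed in requested.items()
        if needed > quotas.get(region, 0)
    }

print(regions_needing_increase(quotas, requested))
# West Europe is 20 vCPUs short; East US is within quota, so no request is needed there.
```

Because quotas are regional, a surplus in East US does nothing for the West Europe shortfall; each region's request must be filed separately.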
As a result, decide what your quotas must be for your workload in any one region, then request that amount in each region into which you want to deploy. For help determining your current quotas for specific regions, see Resolve errors for resource quotas.

General limits

For limits on resource names, see Naming rules and restrictions for Azure resources. For information about Resource Manager API read and write limits, see Throttling Resource Manager requests.

Management group limits

The following limits apply to management groups.
1 The six levels don't include the subscription level.
2 If you reach the limit of 800 deployments, delete deployments from the history that are no longer needed. To delete management group level deployments, use Remove-AzManagementGroupDeployment or az deployment mg delete.

Subscription limits

The following limits apply when you use Azure Resource Manager and Azure resource groups.
1 You can apply up to 50 tags directly to a subscription. However, the subscription can contain an unlimited number of tags that are applied to resource groups and resources within the subscription. The number of tags per resource or resource group is limited to 50.
2 Resource Manager returns a list of tag names and values in the subscription only when the number of unique tags is 80,000 or less. A unique tag is defined by the combination of resource ID, tag name, and tag value. For example, two resources with the same tag name and value would be counted as two unique tags. You can still find a resource by tag when the number exceeds 80,000.
3 Deployments are automatically deleted from the history as you near the limit. For more information, see Automatic deletions from deployment history.

Resource group limits
1 Deployments are automatically deleted from the history as you near the limit. Deleting an entry from the deployment history doesn't affect the deployed resources. For more information, see Automatic deletions from deployment history.

Template limits
You can exceed some template limits by using a nested template. For more information, see Use linked templates when you deploy Azure resources. To reduce the number of parameters, variables, or outputs, you can combine several values into an object. For more information, see Objects as parameters. You may get an error with a template or parameter file smaller than 4 MB if the total size of the request is too large. For more information about how to simplify your template to avoid a large request, see Resolve errors for job size exceeded.

Active Directory limits

Here are the usage constraints and other service limits for the Azure AD service.
API Management limits
1 Scaling limits depend on the pricing tier. For details on the pricing tiers and their scaling limits, see API Management pricing.

App Service limits
1 Apps and storage quotas are per App Service plan unless noted otherwise.
2 The actual number of apps that you can host on these machines depends on the activity of the apps, the size of the machine instances, and the corresponding resource utilization.
3 Dedicated instances can be of different sizes. For more information, see App Service pricing.
4 More are allowed upon request.
5 The storage limit is the total content size across all apps in the same App Service plan. The total content size of all apps across all App Service plans in a single resource group and region cannot exceed 500 GB. The file system quota for App Service hosted apps is determined by the aggregate of App Service plans created in a region and resource group.
6 These resources are constrained by physical resources on the dedicated instances (the instance size and the number of instances).
7 If you scale an app in the Basic tier to two instances, you have 350 concurrent connections for each of the two instances. For the Standard tier and above, there are no theoretical limits to web sockets, but other factors, such as the maximum number of concurrent requests allowed, can limit the number of web sockets.
8 The maximum IP connections are per instance and depend on the instance size: 1,920 per B1/S1/P1V3 instance, 3,968 per B2/S2/P2V3 instance, 8,064 per B3/S3/P3V3 instance.
9 App Service Isolated SKUs can be internally load balanced (ILB) with Azure Load Balancer, so there's no public connectivity from the internet. As a result, some features of an ILB Isolated App Service must be used from machines that have direct access to the ILB network endpoint.
10 Run custom executables and/or scripts on demand, on a schedule, or continuously as a background task within your App Service instance. Always On is required for continuous WebJobs execution. There's no predefined limit on the number of WebJobs that can run in an App Service instance, but there are practical limits that depend on what the application code is trying to do.
11 Only standard certificates are issued (wildcard certificates aren't available). Limited to one free certificate per custom domain.
12 Total storage usage across all apps deployed in a single App Service Environment (regardless of how they're allocated across different resource groups).

Automation limits

Process automation
1 A sandbox is a shared environment that can be used by multiple jobs. Jobs that use the same sandbox are bound by the resource limitations of the sandbox.

Change Tracking and Inventory

The following table shows the tracked item limits per machine for change tracking.
Update Management

The following table shows the limits for Update Management.
Azure App Configuration
Azure Cache for Redis limits
Azure Cache for Redis limits and sizes are different for each pricing tier. To see the pricing tiers and their associated sizes, see Azure Cache for Redis pricing. For more information on Azure Cache for Redis configuration limits, see Default Redis server configuration. Because configuration and management of Azure Cache for Redis instances is done by Microsoft, not all Redis commands are supported in Azure Cache for Redis. For more information, see Redis commands not supported in Azure Cache for Redis.

Azure Cloud Services limits
1 Each Azure Cloud Service with web or worker roles can have two deployments, one for production and one for staging. This limit refers to the number of distinct roles (that is, configurations), not to the number of instances per role (that is, scaling).

Azure Cognitive Search limits

Pricing tiers determine the capacity and limits of your search service. Tiers include:
Limits per subscription

You can create multiple services, limited only by the number of services allowed at each tier. For example, you could create up to 16 services at the Basic tier and another 16 services at the S1 tier within the same subscription. For more information about tiers, see Choose an SKU or tier for Azure Cognitive Search. Maximum service limits can be raised upon request. If you need more services within the same subscription, file a support request.
1 Free is based on infrastructure that's shared with other customers. Because the hardware isn't dedicated, scale-up isn't supported on the Free tier.
2 Search units are billing units, allocated as either a replica or a partition. You need both resources for storage, indexing, and query operations. To learn more about SU computations, see Scale resource levels for query and index workloads.

Limits per search service

A search service is constrained by disk space or by a hard limit on the maximum number of indexes or indexers, whichever comes first. The following table documents storage limits. For maximum object limits, see Limits by resource.
1 Basic has one fixed partition. Additional search units can be used to add replicas for larger query volumes.
2 Service level agreements are in effect for billable services on dedicated resources. Free services and preview features have no SLA. For billable services, SLAs take effect when you provision sufficient redundancy for your service. Two or more replicas are required for query (read) SLAs. Three or more replicas are required for query and indexing (read-write) SLAs. The number of partitions isn't an SLA consideration.

To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and responses, see Service limits in Azure Cognitive Search.

Azure Cognitive Services limits

The following limits are for the number of Cognitive Services resources per Azure subscription. Only one 'Free' account is allowed per Cognitive Services type, per subscription. Each of the Cognitive Services may have other limitations; for more information, see Azure Cognitive Services.
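The replica requirements for Azure Cognitive Search SLAs noted above (two or more replicas for query SLAs, three or more for query-and-indexing SLAs) can be sketched as a simple check. The function name is illustrative, not part of any Azure SDK:

```python
def sla_coverage(replicas: int) -> str:
    """Map a replica count to the SLA coverage described for Azure
    Cognitive Search: 2+ replicas for the query (read) SLA, 3+ for the
    query-and-indexing (read-write) SLA. Illustrative helper only."""
    if replicas >= 3:
        return "read-write SLA"
    if replicas == 2:
        return "read SLA"
    return "no SLA"

for n in (1, 2, 3):
    print(n, "replica(s):", sla_coverage(n))
```

Note that partitions don't enter into this check at all, matching the statement above that the number of partitions isn't an SLA consideration.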
Azure Container Apps limits

For Azure Container Apps limits, see Quotas in Azure Container Apps.

Azure Cosmos DB limits

For Azure Cosmos DB limits, see Limits in Azure Cosmos DB.

Azure Data Explorer limits

The following table describes the maximum limits for Azure Data Explorer clusters.
The following table describes the limits on management operations performed on Azure Data Explorer clusters.
Azure Database for MySQL

For Azure Database for MySQL limits, see Limitations in Azure Database for MySQL.

Azure Database for PostgreSQL

For Azure Database for PostgreSQL limits, see Limitations in Azure Database for PostgreSQL.

Azure Functions limits
1 By default, the timeout for the Functions 1.x runtime in an App Service plan is unbounded. For more information, see Functions Hosting plans comparison.

Azure Health Data Services

Azure Health Data Services limits

Health Data Services is a set of managed API services based on open standards and frameworks. Health Data Services enables workflows to improve healthcare and offers scalable and secure healthcare solutions. Health Data Services includes the Fast Healthcare Interoperability Resources (FHIR) service, the Digital Imaging and Communications in Medicine (DICOM) service, and the MedTech service. FHIR service is an implementation of the FHIR specification within Health Data Services. It enables you to combine one or more FHIR service instances with optional DICOM and MedTech service instances in a single workspace. Azure API for FHIR is generally available as a stand-alone service offering. FHIR service in Azure Health Data Services has a limit of 4 TB for structured storage.
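As a back-of-the-envelope check against the 4 TB structured-storage limit mentioned above, a capacity estimate can be computed from resource counts. The workload numbers below are made-up assumptions for illustration, not measurements from any FHIR deployment:

```python
FHIR_STORAGE_LIMIT_TB = 4  # structured-storage limit stated for FHIR service

def projected_storage_tb(resource_count, avg_resource_kb):
    """Rough structured-storage estimate in TB.

    Uses 2**30 KB per TB (binary units) purely for a ballpark figure;
    actual on-disk size depends on indexing and service overhead.
    """
    return resource_count * avg_resource_kb / 2**30

# Hypothetical workload: 2 billion FHIR resources averaging 2 KB each.
used = projected_storage_tb(2_000_000_000, 2)
print(round(used, 2), "TB:", "within limit" if used <= FHIR_STORAGE_LIMIT_TB else "over limit")
```

If the projection approaches the cap, the data would need to be split across FHIR service instances or workspaces.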
Azure API for FHIR service limits

Azure API for FHIR is a managed, standards-based, compliant API for clinical health data that enables solutions for actionable analytics and machine learning.
Azure Kubernetes Service limits
Azure Lab Services

The following limits are for the number of Azure Lab Services resources.

Per resource type
Per region - Lab plans and labs
For more information about Azure Lab Services capacity limits, see Capacity limits in Azure Lab Services. Contact support to request an increase to your limit.

Azure Load Testing limits

For Azure Load Testing limits, see Service limits in Azure Load Testing.

Azure Machine Learning limits

The latest values for Azure Machine Learning Compute quotas can be found on the Azure Machine Learning quota page.

Azure Maps limits

The following table shows the usage limit for the Azure Maps S0 pricing tier. The usage limit depends on the pricing tier.
The following table shows the cumulative data size limit for Azure Maps accounts in an Azure subscription. The Azure Maps Data service is available only at the S1 pricing tier.
For more information on the Azure Maps pricing tiers, see Azure Maps pricing.

Azure Monitor limits

Alerts
Alerts API

Azure Monitor Alerts have several throttling limits to protect against users making an excessive number of calls. Such behavior can potentially overload the system's backend resources and jeopardize service responsiveness. The following limits are designed to protect customers from interruptions and ensure a consistent service level. The user throttling and limits are designed to impact only extreme usage scenarios and shouldn't be relevant for typical usage.
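When a client does hit throttling limits like these, the usual remedy is to back off and retry. Below is a minimal, generic sketch; the `call` callable and its return shape are hypothetical stand-ins, while real code would inspect an HTTP 429 response and honor any Retry-After header:

```python
import time

def call_with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` with exponential backoff while it signals throttling.

    `call` is a hypothetical helper returning (ok, result); ok=False
    means "throttled, try again". A real client would check for
    HTTP 429 and prefer the server's Retry-After value over this delay.
    """
    for attempt in range(max_attempts):
        ok, result = call()
        if ok:
            return result
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError(f"still throttled after {max_attempts} attempts")

# Example: a fake endpoint that is throttled twice, then succeeds.
attempts = {"n": 0}
def fake_api():
    attempts["n"] += 1
    return (attempts["n"] >= 3, "alert-list")

print(call_with_backoff(fake_api, base_delay=0.01))  # prints "alert-list" on the 3rd try
```

The exponential delay keeps a bursty client from hammering a throttled endpoint, which is exactly the behavior these limits are meant to discourage.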
Action groups

You may have an unlimited number of action groups in a subscription.
Autoscale
Log queries and language

General query limits
User query throttling

Azure Monitor has several throttling limits to protect against users sending an excessive number of queries. Such behavior can potentially overload the system's backend resources and jeopardize service responsiveness. The following limits are designed to protect customers from interruptions and ensure a consistent service level. The user throttling and limits are designed to impact only extreme usage scenarios and shouldn't be relevant for typical usage.
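A client can also avoid concurrent-query throttling by capping its own parallelism. A hedged sketch using a semaphore follows; the cap of 5 and the stubbed query runner are illustrative choices, not actual Azure Monitor values or APIs:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_QUERIES = 5           # illustrative cap, not an Azure value
_gate = threading.Semaphore(MAX_CONCURRENT_QUERIES)

def run_query(query):
    """Run a (stubbed) log query while holding a concurrency slot."""
    with _gate:                      # blocks while 5 queries are in flight
        return f"results for: {query}"

# 20 workers submit queries, but at most 5 ever execute simultaneously.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_query, [f"q{i}" for i in range(20)]))
print(len(results))  # prints 20
```

Throttling the client side like this turns server-side rejections into short local waits, which is usually cheaper than retrying failed queries.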
Log Analytics workspaces

Data collection volume and retention
Number of workspaces per subscription.
Azure portal
Data Collector API
Query API
Azure Monitor Logs connector
General workspace limits
Data ingestion volume rate

Azure Monitor is a high-scale data service that serves thousands of customers sending terabytes of data each month at a growing pace. The volume rate limit is intended to isolate Azure Monitor customers from sudden ingestion spikes in a multitenancy environment. A default ingestion volume rate threshold of 500 MB (compressed) is defined in workspaces; this translates to approximately 6 GB/min uncompressed. The actual size can vary between data types depending on the log length and its compression ratio. The volume rate limit applies to data ingested from Azure resources via diagnostic settings. When the volume rate limit is reached, a retry mechanism attempts to ingest the data four times in a period of 30 minutes and drops it if the operation fails. The limit doesn't apply to data ingested from agents or the Data Collector API. When data sent to your workspace is at a volume rate higher than 80% of the threshold configured in your workspace, an event is sent to the Operation table in your workspace every 6 hours while the threshold continues to be exceeded. When the ingested volume rate is higher than the threshold, some data is dropped, and an event is sent to the Operation table in your workspace every 6 hours while the threshold continues to be exceeded. If your ingestion volume rate continues to exceed the threshold, or you expect to reach it sometime soon, you can request an increase by opening a support request. See Monitor health of Log Analytics workspace in Azure Monitor to create alert rules that proactively notify you when you reach any ingestion limits.

Application Insights

There are some limits on the number of metrics and events per application, that is, per instrumentation key. Limits depend on the pricing plan that you choose.
For more information about pricing and quotas, see Application Insights billing.

Azure Data Factory limits

Azure Data Factory is a multitenant service that has the following default limits in place to make sure customer subscriptions are protected from each other's workloads. To raise the limits up to the maximum for your subscription, contact support.

Version 2
1 The data integration unit (DIU) is used in a cloud-to-cloud copy operation; learn more in Data integration units (version 2). For information on billing, see Azure Data Factory pricing.
2 Azure Integration Runtime is globally available to ensure data compliance, efficiency, and reduced network egress costs. If managed virtual network is enabled, the data integration unit (DIU) limit in all region groups is 2,400.
3 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
4 The payload for each activity run includes the activity configuration, the associated dataset and linked service configurations if any, and a small portion of system properties generated per activity type. The limit on this payload size doesn't relate to the amount of data you can move and process with Azure Data Factory. Learn about the symptoms and recommendations if you hit this limit.

Version 1
1 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
2 On-demand HDInsight cores are allocated out of the subscription that contains the data factory. As a result, the previous limit is the Data Factory-enforced core limit for on-demand HDInsight cores. It's different from the core limit that's associated with your Azure subscription.
3 The cloud data movement unit (DMU) for version 1 is used in a cloud-to-cloud copy operation; learn more in Cloud data movement units (version 1). For information on billing, see Azure Data Factory pricing.
Web service call limits

Azure Resource Manager has limits for API calls. You can make API calls at a rate within the Azure Resource Manager API limits.

Azure NetApp Files

Azure NetApp Files has a regional limit for capacity. The standard capacity limit for each subscription is 25 TiB, per region, across all service levels. To increase the capacity, use the Service and subscription limits (quotas) support request. To learn more about the limits for Azure NetApp Files, see Resource limits for Azure NetApp Files.

Azure Policy limits

There's a maximum count for each object type for Azure Policy. For definitions, an entry of Scope means the management group or subscription. For assignments and exemptions, an entry of Scope means the management group, subscription, resource group, or individual resource.
Policy rules have additional limits on the number of conditions and their complexity. See Policy rule limits for more details.

Azure Quantum limits

Provider Limits & Quota

The Azure Quantum Service supports both first-party and third-party service providers. Third-party providers own their limits and quotas. Users can view offers and limits in the Azure portal when configuring third-party providers. You can find the published quota limits for Microsoft's first-party Optimization Solutions provider below.

Learn & Develop SKU
While on the Learn & Develop SKU, you cannot request an increase on your quota limits. Instead, switch to the Performance at Scale SKU.

Performance at Scale SKU
Reach out to Azure Support to request a limit increase. For more information, review the Azure Quantum pricing page. Review the relevant provider pricing pages in the Azure portal for details on third-party offerings.

1 Describes the number of jobs that can be queued at the same time.

Azure RBAC limits

The following limits apply to Azure role-based access control (Azure RBAC).
Azure SignalR Service limits
To request an update to your subscription's default limits, open a support ticket. For more information about how connections and messages are counted, see Messages and connections in Azure SignalR Service. If your requirements exceed the limits, switch from the Free tier to the Standard tier and add units. For more information, see How to scale an Azure SignalR Service instance. If your requirements exceed the limits of a single instance, add instances. For more information, see How to scale SignalR Service with multiple instances.

Azure Virtual Desktop Service limits

The following table describes the maximum limits for Azure Virtual Desktop.
1 If you require more than 500 application groups, raise a support ticket via the Azure portal.

All other Azure resources used in Azure Virtual Desktop, such as virtual machines, storage, and networking, are subject to their own resource limitations documented in the relevant sections of this article. To visualize the relationship between all the Azure Virtual Desktop objects, see Relationships between Azure Virtual Desktop logical components. To get started with Azure Virtual Desktop, use the getting started guide. For deeper architectural content for Azure Virtual Desktop, use the Azure Virtual Desktop section of the Cloud Adoption Framework. For pricing information for Azure Virtual Desktop, add "Azure Virtual Desktop" within the Compute section of the Azure Pricing Calculator.

Azure VMware Solution limits

The following table describes the maximum limits for Azure VMware Solution.
* For information about Recovery Point Objective (RPO) lower than 15 minutes, see How the 5 Minute Recovery Point Objective Works in the vSphere Replication Administration guide.

For other VMware-specific limits, use the VMware configuration maximums tool.

Backup limits

For a summary of Azure Backup support settings and limitations, see Azure Backup Support Matrices.

Batch limits
1 To request an increase beyond this limit, contact Azure Support.

Note: Default limits vary depending on the type of subscription you use to create a Batch account. Cores quotas shown are for Batch accounts in Batch service mode. View the quotas in your Batch account.

Important: To help us better manage capacity during the global health pandemic, the default core quotas for new Batch accounts in some regions and for some types of subscription have been reduced from the above range of values, in some cases to zero cores. When you create a new Batch account, check your core quota and request a core quota increase, if required. Alternatively, consider reusing Batch accounts that already have sufficient quota.

Classic deployment model limits

If you use the classic deployment model instead of the Azure Resource Manager deployment model, the following limits apply.
1 Extra small instances count as one vCPU toward the vCPU limit despite using a partial CPU core.
2 The storage account limit includes both Standard and Premium storage accounts.

Container Instances limits
1 To request a limit increase, create an Azure Support request. Free subscriptions, including the Azure Free Account and Azure for Students, aren't eligible for limit or quota increases. If you have a free subscription, you can upgrade to a Pay-As-You-Go subscription.

Container Registry limits

The following table details the features and limits of the Basic, Standard, and Premium service tiers.
1 Storage included in the daily rate for each tier. Additional storage may be used, up to the registry storage limit, at an additional daily rate per GiB. For rate information, see Azure Container Registry pricing. If you need storage beyond the registry storage limit, contact Azure Support.
2 ReadOps, WriteOps, and Bandwidth are minimum estimates. Azure Container Registry strives to improve performance as usage requires.
3 A docker pull translates to multiple read operations based on the number of layers in the image, plus the manifest retrieval.
4 A docker push translates to multiple write operations, based on the number of layers that must be pushed.
5 Individual actions of

Content Delivery Network limits
A Content Delivery Network subscription can contain one or more Content Delivery Network profiles. A Content Delivery Network profile can contain one or more Content Delivery Network endpoints. You might want to use multiple profiles to organize your Content Delivery Network endpoints by internet domain, web application, or some other criteria.

Data Lake Analytics limits

Azure Data Lake Analytics makes the complex task of managing distributed infrastructure and complex code easy. It dynamically provisions resources, and you can use it to do analytics on exabytes of data. When the job completes, it winds down resources automatically. You pay only for the processing power that was used. As you increase or decrease the size of data stored or the amount of compute used, you don't have to rewrite code. To raise the default limits for your subscription, contact support.
Data Lake Storage limits

Azure Data Lake Storage Gen2 isn't a dedicated service or storage account type. It's the latest release of capabilities that are dedicated to big data analytics. These capabilities are available in a general-purpose v2 or BlockBlobStorage storage account, and you can obtain them by enabling the Hierarchical namespace feature of the account. For scale targets, see these articles.
Azure Data Lake Storage Gen1 is a dedicated service. It's an enterprise-wide hyper-scale repository for big data analytic workloads. You can use Data Lake Storage Gen1 to capture data of any size, type, and ingestion speed in one single place for operational and exploratory analytics. There's no limit to the amount of data you can store in a Data Lake Storage Gen1 account.
Data Share limits

Azure Data Share enables organizations to simply and securely share data with their customers and partners.
Database Migration Service limits

Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime.
Device Update for IoT Hub limits

Note: When a given resource or operation doesn't have adjustable limits, the default and the maximum limits are the same. When the limit can be adjusted, the following table includes both the default limit and maximum limit. The limit can be raised above the default limit but not above the maximum limit. If you want to raise the limit or quota above the default limit, open an online customer support request.

This table provides the limits for the Device Update for IoT Hub resource in Azure Resource Manager:
This table provides the various limits associated with the operations within Device Update for IoT Hub:
Digital Twins limits

Note: Some areas of this service have adjustable limits, and others don't. This is represented in the tables below with the Adjustable? column. When the limit can be adjusted, the Adjustable? value is Yes.

Functional limits

The following table lists the functional limits of Azure Digital Twins.
Rate limits

The following table reflects the rate limits of different APIs.
Other limits

Limits on data types and fields within DTDL documents for Azure Digital Twins models can be found within its spec documentation in GitHub: Digital Twins Definition Language (DTDL) - version 2. Query latency details are described in Query language. Limitations of particular query language features can be found in the query reference documentation.

Event Grid limits

The following limits apply to Azure Event Grid topics (system, custom, and partner topics).

Note: These limits are per region.
The following limits apply to Azure Event Grid domains.
Event Hubs limits

The following tables provide quotas and limits specific to Azure Event Hubs. For information about Event Hubs pricing, see Event Hubs pricing.

Common limits for all tiers

The following limits are common across all tiers.
Basic vs. standard vs. premium vs. dedicated tiers

The following table shows limits that may be different for basic, standard, premium, and dedicated tiers.

Note
* Depends on various factors such as resource allocation, number of partitions, storage, and so on.

Note: You can publish events individually or batched. The publication limit (according to SKU) applies regardless of whether it is a single event or a batch. Publishing events larger than the maximum threshold will be rejected.

IoT Central limits

IoT Central limits the number of applications you can deploy in a subscription to 100. If you need to increase this limit, contact Microsoft support. To learn more, see Azure IoT Central quota and limits.

IoT Hub limits

The following table lists the limits associated with the different service tiers S1, S2, S3, and F1. For information about the cost of each unit in each tier, see Azure IoT Hub pricing.
Note If you anticipate using more than 200 units with an S1 or S2 tier hub, or more than 10 units with an S3 tier hub, contact Microsoft Support.
The following table lists the limits that apply to IoT Hub resources.
Note If you need more than 50 paid IoT hubs in an Azure subscription, contact Microsoft Support.
Note Currently, the total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. If you want to increase this limit, contact Microsoft Support.
IoT Hub throttles requests when the following quotas are exceeded.
IoT Hub Device Provisioning Service limits
Note Some areas of this service have adjustable limits. This is represented in the tables below with the Adjustable? column. When the limit can be adjusted, the Adjustable? value is Yes. The actual value to which a limit can be adjusted may vary based on each customer's deployment. Multiple instances of DPS may be required for very large deployments.
If your business requires raising an adjustable limit or quota above the default limit, you can submit a request for additional resources by opening a support ticket. Requesting an increase does not guarantee that it will be granted; each request is reviewed on a case-by-case basis. Contact Microsoft support as early as possible during your implementation so that you can determine whether your request can be approved and plan accordingly.
The following table lists the limits that apply to Azure IoT Hub Device Provisioning Service resources.
Tip If the hard limit on symmetric key enrollment groups is a blocking issue, use individual enrollments as a workaround.
The Device Provisioning Service has the following rate limits.
Key Vault limits
Azure Key Vault supports two resource types: vaults and managed HSMs. The following two sections describe the service limits for each of them.
Resource type: vault
This section describes service limits for the vault resource type.
Key transactions (maximum transactions allowed in 10 seconds, per vault per region1):
Note In the previous table, we see that for RSA 2,048-bit software keys, 4,000 GET transactions per 10 seconds are allowed. For RSA 2,048-bit HSM-keys, 2,000 GET transactions per 10 seconds are allowed. The throttling thresholds are weighted, and enforcement is on their sum. For example, as shown in the previous table, when you perform GET operations on RSA HSM-keys, it's eight times more expensive to use 4,096-bit keys compared to 2,048-bit keys, because 2,000/250 = 8. In a given 10-second interval, an Azure Key Vault client can do only one of the following operations before it encounters a 429 throttling HTTP status code:
Secrets, managed storage account keys, and vault transactions:
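The weighted throttling model described above, where a GET on a 4,096-bit RSA HSM-key costs eight times as much as one on a 2,048-bit key, can be illustrated with a short sketch. The weights and budget below are illustrative values derived from the numbers in this section, not an official formula:

```python
# Illustrative sketch of Key Vault's weighted throttling model.
# Weights are relative costs derived from the limits quoted above:
# 2,000 GETs/10s for RSA 2,048-bit HSM-keys, 250/10s for 4,096-bit,
# so a 4,096-bit GET "costs" 2000 / 250 = 8 times a 2,048-bit GET.
WEIGHTS = {
    "get_rsa2048_hsm": 1,
    "get_rsa4096_hsm": 8,
}

# Budget per 10-second window, in units of the cheapest operation.
WINDOW_BUDGET = 2000


def window_cost(operations):
    """Total weighted cost of the operations performed in one window."""
    return sum(WEIGHTS[op] for op in operations)


def is_throttled(operations):
    """True if the weighted sum exceeds the per-window budget."""
    return window_cost(operations) > WINDOW_BUDGET


# 250 GETs on 4,096-bit HSM-keys fill the window exactly (250 * 8 = 2000);
# one more pushes the weighted sum over the budget.
assert not is_throttled(["get_rsa4096_hsm"] * 250)
assert is_throttled(["get_rsa4096_hsm"] * 251)
```

Because enforcement is on the weighted sum, mixing key sizes in the same window consumes the shared budget; see the throttling guidance referenced below for how to back off when a 429 is returned.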
For information on how to handle throttling when these limits are exceeded, see Azure Key Vault throttling guidance.
1 The subscription-wide limit for all transaction types is five times the per-key-vault limit.
Backup keys, secrets, certificates
When you back up a key vault object, such as a secret, key, or certificate, the backup operation downloads the object as an encrypted blob. This blob cannot be decrypted outside of Azure. To get usable data from this blob, you must restore the blob into a key vault within the same Azure subscription and Azure geography.
Note Attempting to back up a key, secret, or certificate object with more versions than the above limit will result in an error. It is not possible to delete previous versions of a key, secret, or certificate.
Limits on count of keys, secrets, and certificates:
Key Vault does not restrict the number of keys, secrets, or certificates that can be stored in a vault. Take the transaction limits on the vault into account to ensure that operations are not throttled. Key Vault does not restrict the number of versions of a secret, key, or certificate, but storing a large number of versions (500+) can impact the performance of backup operations. See Azure Key Vault Backup.
Resource type: Managed HSM
This section describes service limits for the managed HSM resource type.
Object limits
Transaction limits for administrative operations (number of operations per second per HSM instance)
Transaction limits for cryptographic operations (number of operations per second per HSM instance)
RSA key operations (number of operations per second per HSM instance)
EC key operations (number of operations per second per HSM instance)This table describes number of operations per second for each curve type.
AES key operations (number of operations per second per HSM instance)
Managed identity limits
Media Services limits
Note For resources that aren't fixed, open a support ticket to ask for an increase in the quotas. Don't create additional Azure Media Services accounts in an attempt to obtain higher limits.
Account limits
Asset limits
Storage (media) limits
1 The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also to the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260 GB, your Job will likely fail.
2 The storage accounts must be from the same Azure subscription.
Jobs (encoding & analyzing) limits
3 This number includes queued, finished, active, and canceled Jobs. It does not include deleted Jobs. Any Job record in your account older than 90 days will be automatically deleted, even if the total number of records is below the maximum quota.
Live streaming limits
4 For detailed information about Live Event limitations, see Live Event types comparison and limitations.
5 Live Outputs start on creation and stop when deleted.
Packaging & delivery limits
6 When using a custom Streaming Policy, you should design a limited set of such policies for your Media Services account and reuse them for your StreamingLocators whenever the same encryption options and protocols are needed. Do not create a new Streaming Policy for each Streaming Locator.
7 Streaming Locators are not designed for managing per-user access control. To give different access rights to individual users, use Digital Rights Management (DRM) solutions.
Protection limits
Support ticket
For resources that are not fixed, you may ask for the quotas to be raised by opening a support ticket. Include detailed information in the request on the desired quota changes, use-case scenarios, and regions required.
Media Services v2 (legacy)
For limits specific to Media Services v2 (legacy), see Media Services v2 (legacy).
Mobile Services limits
For more information on limits and pricing, see Azure Mobile Services pricing.
Multi-Factor Authentication limits
Networking limits
Networking limits - Azure Resource Manager
The following limits apply only for networking resources managed through Azure Resource Manager per region per subscription. Learn how to view your current resource usage against your subscription limits.
Note We recently increased all default limits to their maximum limits. If there's no maximum limit column, the resource doesn't have adjustable limits. If you had these limits increased by support in the past and don't see updated limits in the following tables, open an online customer support request at no charge.
Public IP address limits
1 Default limits for Public IP addresses vary by offer category type, such as Free Trial, Pay-As-You-Go, and CSP. For example, the default for Enterprise Agreement subscriptions is 1000.
2 The Public IP addresses limit refers to the total amount of Public IP addresses, including Basic and Standard.
Load balancer limits
The following limits apply only for networking resources managed through Azure Resource Manager per region per subscription. Learn how to view your current resource usage against your subscription limits.
Standard Load Balancer
1 An exception to this limit is that two public load balancers can be in front of a VM if an IPv4 address configuration is used for one load balancer and an IPv6 address configuration is used for the second. This limit does not apply to IP-based load balancers. For more information on IP-based backend pools, refer to our documentation on IP-based load balancers.
2 Backend IP configurations are aggregated across all load balancer rules, including load balancing, inbound NAT, and outbound rules. Each rule that a backend pool instance is configured for counts as one configuration.
Gateway Load Balancer
Basic Load Balancer
3 A single discrete resource in a backend pool (standalone virtual machine, availability set, or virtual machine scale-set placement group) can have up to 250 Frontend IP configurations across a single Basic Public Load Balancer and Basic Internal Load Balancer.
The following limits apply only for networking resources managed through the classic deployment model per subscription. Learn how to view your current resource usage against your subscription limits.
Application Gateway limitsThe following table applies to v1, v2, Standard, and WAF SKUs unless otherwise stated.
1 For WAF-enabled SKUs, you must limit the number of resources to 40.
2 The limit is per Application Gateway instance, not per Application Gateway resource.
3 Must be defined via the WAF Policy for Application Gateway.
Azure Bastion limits
* These workload types are defined here: Remote Desktop workloads.
Azure DNS limits
Public DNS zones
1 If you need to increase these limits, contact Azure Support.
Private DNS zones
1 These limits are applied to every individual virtual machine, not at the virtual network level. DNS queries exceeding these limits are dropped.
DNS private resolver
Azure Firewall limits
Azure Front Door (classic) limits
Azure Front Door Standard and Premium tier service limits
Timeout values
Client to Front Door
Front Door to application back-end
Upload and download data limit
Other limits
For more information about limits that apply to Rules Engine configurations, see Rules Engine terminology.
Azure Route Server limits
1 If your NVA advertises more routes than the limit, the BGP session will be dropped. If the BGP session between the gateway and Azure Route Server is dropped, you'll lose connectivity from your on-premises network to Azure.
2 Be aware of the ExpressRoute private peering limit of 1,000 routes per connection from the Virtual Network Gateway toward the ExpressRoute circuit. For instance, the total number of routes from all Virtual Network Address Spaces plus ARS branch-to-branch must not exceed 1,000.
3 The number of VMs that Azure Route Server can support isn't a hard limit; it depends on how the Route Server infrastructure is deployed within an Azure region.
ExpressRoute limits
Number of virtual networks per ExpressRoute circuit
* 100 Gbps ExpressRoute Direct only.
Note Global Reach connections count against the limit of virtual network connections per ExpressRoute circuit. For example, a 10 Gbps Premium circuit would allow for 5 Global Reach connections and 95 connections to the ExpressRoute gateways, or 95 Global Reach connections and 5 connections to the ExpressRoute gateways, or any other combination up to the limit of 100 connections for the circuit.
NAT Gateway limits
The following limits apply to NAT gateway resources managed through Azure Resource Manager per region per subscription. Learn how to view your current resource usage against your subscription limits.
Network Watcher limits
Private Link limits
The following limits apply to Azure Private Link:
Traffic Manager limits
1 If you need to increase these limits, contact Azure Support.
Virtual Network Gateway limits
Virtual WAN limits
Notification Hubs limits
For more information on limits and pricing, see Notification Hubs pricing.
Microsoft Purview limits
The latest values for Microsoft Purview quotas can be found in the Microsoft Purview quota page.
Microsoft Sentinel limits
This section lists the most common service limits you might encounter as you use Microsoft Sentinel.
Analytics rule limits
The following limit applies to analytics rules in Microsoft Sentinel.
Incident limits
The following limits apply to incidents in Microsoft Sentinel.
Machine learning-based limits
The following limits apply to machine learning-based features in Microsoft Sentinel, like customizable anomalies and Fusion.
Multi workspace limits
The following limit applies to multiple workspaces in Microsoft Sentinel. It is applied when working with Sentinel features across more than one workspace at a time.
Notebook limits
The following limits apply to notebooks in Microsoft Sentinel. The limits are related to the dependencies on other services used by notebooks.
Repositories limits
The following limits apply to repositories in Microsoft Sentinel.
Threat intelligence limits
The following limit applies to threat intelligence in Microsoft Sentinel. The limit is related to the dependency on an API used by threat intelligence.
User and Entity Behavior Analytics (UEBA) limits
The following limit applies to UEBA in Microsoft Sentinel and is related to dependencies on another service.
Watchlist limits
The following limits apply to watchlists in Microsoft Sentinel. The limits are related to the dependencies on other services used by watchlists.
Service Bus limits
The following table lists quota information specific to Azure Service Bus messaging. For information about pricing and other quotas for Service Bus, see Service Bus pricing.
Site Recovery limits
The following limits apply to Azure Site Recovery.
SQL Database limits
For SQL Database limits, see SQL Database resource limits for single databases, SQL Database resource limits for elastic pools and pooled databases, and SQL Database resource limits for SQL Managed Instance. The maximum number of private endpoints per Azure SQL Database logical server is 250.
Azure Synapse Analytics limits
Azure Synapse Analytics has the following default limits to ensure customers' subscriptions are protected from each other's workloads. To raise the limits to the maximum for your subscription, contact support.
Azure Synapse limits for workspaces
For Pay-As-You-Go, Free Trial, Azure Pass, and Azure for Students subscription offer types:
For other subscription offer types:
Azure Synapse limits for Apache Spark
For Pay-As-You-Go, Free Trial, Azure Pass, and Azure for Students subscription offer types:
For other subscription offer types:
Azure Synapse limits for pipelines
1 The data integration unit (DIU) is used in a cloud-to-cloud copy operation; learn more from Data integration units (version 2). For information on billing, see Azure Synapse Analytics Pricing.
2 The Azure Integration Runtime is globally available to ensure data compliance, efficiency, and reduced network egress costs.
If managed virtual network is enabled, the data integration unit (DIU) limit in all region groups is 2,400.
3 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Synapse Analytics. Synapse Analytics is designed to scale to handle petabytes of data.
4 The payload for each activity run includes the activity configuration, the associated dataset and linked service configurations if any, and a small portion of system properties generated per activity type. The limit on this payload size doesn't relate to the amount of data you can move and process with Azure Synapse Analytics. Learn about the symptoms and recommendations if you hit this limit.
Azure Synapse limits for dedicated SQL pools
For details of capacity limits for dedicated SQL pools in Azure Synapse Analytics, see dedicated SQL pool resource limits.
Azure Resource Manager limits for web service calls
Azure Resource Manager has limits for API calls. You can make API calls at a rate within the Azure Resource Manager API limits.
Azure Files and Azure File Sync
To learn more about the limits for Azure Files and File Sync, see Azure Files scalability and performance targets.
Storage limits
The following table describes default limits for Azure general-purpose v2 (GPv2), general-purpose v1 (GPv1), and Blob storage accounts. The ingress limit refers to all data that is sent to a storage account. The egress limit refers to all data that is received from a storage account. Microsoft recommends that you use a GPv2 storage account for most scenarios. You can easily upgrade a GPv1 or a Blob storage account to a GPv2 account with no downtime and without the need to copy data. For more information, see Upgrade to a GPv2 storage account.
Note You can request higher capacity and ingress limits. To request an increase, contact Azure Support.
1 Azure Storage standard accounts support higher capacity limits and higher limits for ingress and egress by request. To request an increase in account limits, contact Azure Support. For more information on limits for standard storage accounts, see Scalability targets for standard storage accounts.
Storage resource provider limits
The following limits apply only when you perform management operations by using Azure Resource Manager with Azure Storage.
Azure Blob storage limits
1 Throughput for a single blob depends on several factors, including but not limited to: concurrency, request size, performance tier, speed of the source for uploads, and the destination for downloads. To take advantage of the performance enhancements of high-throughput block blobs, upload larger blobs or blocks. Specifically, call the Put Blob or Put Block operation with a blob or block size that is greater than 4 MiB for standard storage accounts. For premium block blob or Data Lake Storage Gen2 storage accounts, use a block or blob size that is greater than 256 KiB.
2 Page blobs are not yet supported in accounts that have the hierarchical namespace setting enabled.
The following table describes the maximum block and blob sizes permitted by service version.
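A block blob is composed of at most 50,000 committed blocks, so the maximum blob size per service version is the block count multiplied by that version's maximum block size. The sketch below illustrates the arithmetic; the per-version block sizes are examples from the Blob storage documentation and should be checked against the table for this section:

```python
# Sketch: maximum block blob size = 50,000 blocks x max block size,
# where the maximum block size depends on the service version in use.
MAX_BLOCKS = 50_000
MIB = 1024 ** 2

# Example maximum block sizes by service version (from Blob storage docs).
max_block_size = {
    "2019-12-12 and later": 4000 * MIB,
    "2016-05-31 through 2019-07-07": 100 * MIB,
    "before 2016-05-31": 4 * MIB,
}

for version, block_size in max_block_size.items():
    max_blob_tib = MAX_BLOCKS * block_size / 1024 ** 4
    print(f"{version}: max blob size about {max_blob_tib:.2f} TiB")
```

For the newest versions this works out to roughly 190.7 TiB per block blob, which is why uploading with large block sizes is required to reach the largest blob sizes.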
Azure Queue storage limits
Azure Table storage limits
The following table describes capacity, scalability, and performance targets for Table storage.
Virtual machine disk limits
You can attach a number of data disks to an Azure virtual machine (VM). Based on the scalability and performance targets for a VM's data disks, you can determine the number and type of disks that you need to meet your performance and capacity requirements.
Important For optimal performance, limit the number of highly utilized disks attached to the virtual machine to avoid possible throttling. If all attached disks aren't highly utilized at the same time, the virtual machine can support a larger number of disks.
For Azure managed disks: The following table illustrates the default and maximum limits on the number of resources per region per subscription. The limits are the same whether disks are encrypted with platform-managed keys or customer-managed keys. There is no limit on the number of managed disks, snapshots, and images per resource group.
1 An individual disk can have 500 incremental snapshots.
For standard storage accounts: A standard storage account has a maximum total request rate of 20,000 IOPS. The total IOPS across all of your virtual machine disks in a standard storage account should not exceed this limit. For unmanaged disks, you can roughly calculate the number of highly utilized disks supported by a single standard storage account based on the request rate limit. For example, for a Basic tier VM, the maximum number of highly utilized disks is about 66 (20,000/300 IOPS per disk). The maximum number of highly utilized disks for a Standard tier VM is about 40 (20,000/500 IOPS per disk).
For premium storage accounts: A premium storage account has a maximum total throughput rate of 50 Gbps. The total throughput across all of your VM disks should not exceed this limit. For more information, see Virtual machine sizes.
Disk encryption sets
There's a limit of 1,000 disk encryption sets per region, per subscription. For more information, see the encryption documentation for Linux or Windows virtual machines. If you need to increase the quota, contact Azure support.
Managed virtual machine disks
Standard HDD managed disks
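As a side note, the standard storage account arithmetic earlier in this section (20,000 IOPS per account divided by the per-disk IOPS cap) can be sketched as a small calculation. The per-disk figures are the ones quoted in the text:

```python
# Rough sizing for unmanaged disks in a standard storage account:
# the account caps out at 20,000 IOPS, so the number of highly
# utilized disks it can serve is about 20,000 / IOPS-per-disk.
ACCOUNT_IOPS_LIMIT = 20_000


def max_highly_utilized_disks(iops_per_disk):
    """Approximate disk count before the account IOPS cap is reached."""
    return ACCOUNT_IOPS_LIMIT // iops_per_disk


# Basic tier VM disks cap at 300 IOPS each; Standard tier at 500.
print(max_highly_utilized_disks(300))  # 66 disks
print(max_highly_utilized_disks(500))  # 40 disks
```

This is only a rough planning estimate; actual throttling depends on the real I/O mix across the disks in the account.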
Standard SSD managed disks
Premium SSD managed disks: Per-disk limits
*Applies only to disks with on-demand bursting enabled. Premium SSD managed disks: Per-VM limits
Unmanaged virtual machine disks
Standard unmanaged virtual machine disks: Per-disk limits
Premium unmanaged virtual machine disks: Per-account limits
1 Ingress refers to all data from requests that are sent to a storage account. Egress refers to all data from responses that are received from a storage account.
Premium unmanaged virtual machine disks: Per-disk limits
Premium unmanaged virtual machine disks: Per-VM limits
StorSimple System limits
* Maximum throughput per I/O type was measured with 100 percent read and 100 percent write scenarios. Actual throughput might be lower and depends on I/O mix and network conditions.
Stream Analytics limits
Virtual Machines limits
Virtual Machines limits
1 Virtual machines created by using the classic deployment model instead of Azure Resource Manager are automatically stored in a cloud service. You can add more virtual machines to that cloud service for load balancing and availability.
2 Input endpoints allow communications to a virtual machine from outside the virtual machine's cloud service. Virtual machines in the same cloud service or virtual network can automatically communicate with each other.
Virtual Machines limits - Azure Resource Manager
The following limits apply when you use Azure Resource Manager and Azure resource groups.
1 Default limits vary by offer category type, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350. For security, subscriptions default to 20 cores to prevent large core deployments. If you need more cores, submit a support ticket.
2 Properties such as SSH public keys are also pushed as certificates and count towards this limit. To bypass this limit, use the Azure Key Vault extension for Windows or the Azure Key Vault extension for Linux to install certificates.
3 With Azure Resource Manager, certificates are stored in Azure Key Vault. The number of certificates is unlimited for a subscription. There's a 1-MB limit on certificates per deployment, which consists of either a single VM or an availability set.
Note Virtual machine cores have a regional total limit. They also have a limit per regional size series, such as Dv2 and F. These limits are enforced separately. For example, consider a subscription with a US East total VM core limit of 30, an A series core limit of 30, and a D series core limit of 30. This subscription could deploy 30 A1 VMs, or 30 D1 VMs, or a combination of the two not exceeding a total of 30 cores, such as 10 A1 VMs and 20 D1 VMs.
Compute Gallery limits
There are limits, per subscription, for deploying resources using Compute Galleries:
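Separately, the regional core accounting described in the note earlier in this section (a regional total limit plus independent per-series limits) can be sketched as a feasibility check. The numbers and 1-core VM sizes below are the illustrative US East example from the text:

```python
# Sketch of separately enforced VM core limits: a deployment must fit
# within the regional total AND within each per-series limit.
# Example figures from the text: US East total 30, A series 30, D series 30.
REGIONAL_TOTAL = 30
SERIES_LIMITS = {"A": 30, "D": 30}

# Cores per VM size used in the example (A1 and D1 are 1-core sizes here).
CORES_PER_VM = {"A1": 1, "D1": 1}


def deployment_allowed(vm_counts):
    """Check a proposed deployment against both limits.

    vm_counts maps a size name like "A1" to the number of VMs requested.
    """
    per_series = {}
    for size, count in vm_counts.items():
        series = size[0]  # first letter identifies the series in this sketch
        per_series[series] = per_series.get(series, 0) + count * CORES_PER_VM[size]
    total = sum(per_series.values())
    within_series = all(per_series[s] <= SERIES_LIMITS[s] for s in per_series)
    return total <= REGIONAL_TOTAL and within_series


assert deployment_allowed({"A1": 10, "D1": 20})      # 30 cores total: OK
assert deployment_allowed({"A1": 30})                # 30 A-series cores: OK
assert not deployment_allowed({"A1": 20, "D1": 20})  # 40 cores exceed the total
```

Real subscriptions expose these quotas per region and per size series; the point of the sketch is that every limit must hold simultaneously.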
Virtual Machine Scale Sets limits
See also