
Commit ba47b22

Merge pull request #134059 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents af4f625 + 26b7093 commit ba47b22

File tree

9 files changed: +30 additions, -45 deletions


articles/active-directory/develop/tutorial-v2-android.md

Lines changed: 2 additions & 1 deletion

@@ -98,6 +98,7 @@ If you do not already have an Android application, follow these steps to set up
     "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
     "authorization_user_agent" : "DEFAULT",
     "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+    "broker_redirect_uri_registered" : true,
     "account_mode" : "SINGLE",
     "authorities" : [
       {
@@ -149,7 +150,7 @@ If you do not already have an Android application, follow these steps to set up
         jcenter()
     }
     dependencies{
-        implementation 'com.microsoft.identity.client:msal:1.+'
+        implementation 'com.microsoft.identity.client:msal:2.+'
         implementation 'com.microsoft.graph:microsoft-graph:1.5.+'
     }
     packagingOptions{
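Applied together, the two hunks above leave the sample's auth configuration file looking roughly like the sketch below. Only the fields visible in the hunk are confirmed by this commit; the body of the `authorities` entry is truncated in the diff, so its contents here are an assumption based on common MSAL single-account samples:

```json
{
  "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
  "authorization_user_agent" : "DEFAULT",
  "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
  "broker_redirect_uri_registered" : true,
  "account_mode" : "SINGLE",
  "authorities" : [
    {
      "type" : "AAD",
      "audience" : {
        "type" : "AzureADandPersonalMicrosoftAccount"
      }
    }
  ]
}
```

The new `broker_redirect_uri_registered` flag pairs with the MSAL 2.+ dependency bump in the second hunk, which supports brokered authentication.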

articles/analysis-services/analysis-services-gateway.md

Lines changed: 0 additions & 26 deletions

@@ -24,22 +24,6 @@ For Azure Analysis Services, getting setup with the gateway the first time is a
 
 - **Connect the gateway resource to servers** - Once you have a gateway resource, you can begin connecting servers to it. You can connect multiple servers and other resources provided they are in the same region.
 
-
-
-## How it works
-The gateway you install on a computer in your organization runs as a Windows service, **On-premises data gateway**. This local service is registered with the Gateway Cloud Service through Azure Service Bus. You then create an On-premises data gateway resource for an Azure subscription. Your Azure Analysis Services servers are then connected to your Azure gateway resource. When models on your server need to connect to your on-premises data sources for queries or processing, a query and data flow traverses the gateway resource, Azure Service Bus, the local on-premises data gateway service, and your data sources.
-
-![How it works](./media/analysis-services-gateway/aas-gateway-how-it-works.png)
-
-Queries and data flow:
-
-1. A query is created by the cloud service with the encrypted credentials for the on-premises data source. It's then sent to a queue for the gateway to process.
-2. The gateway cloud service analyzes the query and pushes the request to the [Azure Service Bus](https://azure.microsoft.com/documentation/services/service-bus/).
-3. The on-premises data gateway polls the Azure Service Bus for pending requests.
-4. The gateway gets the query, decrypts the credentials, and connects to the data sources with those credentials.
-5. The gateway sends the query to the data source for execution.
-6. The results are sent from the data source, back to the gateway, and then onto the cloud service and your server.
-
 ## Installing
 
 When installing for an Azure Analysis Services environment, it's important you follow the steps described in [Install and configure on-premises data gateway for Azure Analysis Services](analysis-services-gateway-install.md). This article is specific to Azure Analysis Services. It includes additional steps required to setup an On-premises data gateway resource in Azure, and connect your Azure Analysis Services server to the resource.

@@ -71,16 +55,6 @@ The following are fully qualified domain names used by the gateway.
 | *.microsoftonline-p.com |443 |Used for authentication depending on configuration. |
 | dc.services.visualstudio.com |443 |Used by AppInsights to collect telemetry. |
 
-### Forcing HTTPS communication with Azure Service Bus
-
-You can force the gateway to communicate with Azure Service Bus by using HTTPS instead of direct TCP; however, doing so can greatly reduce performance. You can modify the *Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config* file by changing the value from `AutoDetect` to `Https`. This file is typically located at *C:\Program Files\On-premises data gateway*.
-
-```
-<setting name="ServiceBusSystemConnectivityModeString" serializeAs="String">
-    <value>Https</value>
-</setting>
-```
-
 ## Next steps
 
 The following articles are included in the On-premises data gateway general content that applies to all services the gateway supports:

articles/azure-functions/functions-scale.md

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ Both Consumption and Premium plans automatically add compute power when your cod
 
 Premium plan provides additional features, such as premium compute instances, the ability to keep instances warm indefinitely, and VNet connectivity.
 
-App Service plan allows you to take advantage of dedicated infrastructure, which you manage. Your function app doesn't scale based on events, which means is never scales in to zero. (Requires that [Always on](#always-on) is enabled.)
+App Service plan allows you to take advantage of dedicated infrastructure, which you manage. Your function app doesn't scale based on events, which means it never scales in to zero. (Requires that [Always on](#always-on) is enabled.)
 
 For a detailed comparison between the various hosting plans (including Kubernetes-based hosting), see the [Hosting plans comparison section](#hosting-plans-comparison).

articles/cosmos-db/cassandra-support.md

Lines changed: 1 addition & 0 deletions

@@ -146,6 +146,7 @@ Azure Cosmos DB supports the following database commands on Cassandra API accoun
 | CREATE USER (Deprecated in native Apache Cassandra) | No |
 | DELETE | Yes |
 | DELETE (lightweight transactions with IF CONDITION)| Yes |
+| DISTINCT | No |
 | DROP AGGREGATE | No |
 | DROP FUNCTION | No |
 | DROP INDEX | Yes |

articles/iot-edge/how-to-access-built-in-metrics.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ services: iot-edge
 
 The IoT Edge runtime components, IoT Edge Hub and IoT Edge Agent, produce built-in metrics in the [Prometheus exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/). Access these metrics remotely to monitor and understand the health of an IoT Edge device.
 
-As of release 1.0.10, metrics are automatically exposed by default on **port 9600** of the **edgeHub** and **edgeAgent** modules (`http://edgeHub:9600/metrics` and `http://edgeAgent:9600/metics`). They aren't port mapped to the host by default.
+As of release 1.0.10, metrics are automatically exposed by default on **port 9600** of the **edgeHub** and **edgeAgent** modules (`http://edgeHub:9600/metrics` and `http://edgeAgent:9600/metrics`). They aren't port mapped to the host by default.
 
 Access metrics from the host by exposing and mapping the metrics port from the module's `createOptions`. The example below maps the default metrics port to port 9601 on the host:
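The `createOptions` example the last context line refers to falls outside this hunk. A minimal sketch of such a port mapping, assuming the standard Docker container-create options that IoT Edge `createOptions` use, might look like:

```json
{
  "ExposedPorts": {
    "9600/tcp": {}
  },
  "HostConfig": {
    "PortBindings": {
      "9600/tcp": [
        {
          "HostPort": "9601"
        }
      ]
    }
  }
}
```

With a mapping like this in place, the module's metrics endpoint would be reachable from the host at port 9601 rather than only from inside the module network.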

articles/lighthouse/how-to/onboard-customer.md

Lines changed: 10 additions & 10 deletions

@@ -237,18 +237,18 @@ New-AzSubscriptionDeployment -Name <deploymentName> `
 # Log in first with az login if you're not using Cloud Shell
 
 # Deploy Azure Resource Manager template using template and parameter file locally
-az deployment create --name <deploymentName> \
-                     --location <AzureRegion> \
-                     --template-file <pathToTemplateFile> \
-                     --parameters <parameters/parameterFile> \
-                     --verbose
+az deployment sub create --name <deploymentName> \
+                         --location <AzureRegion> \
+                         --template-file <pathToTemplateFile> \
+                         --parameters <parameters/parameterFile> \
+                         --verbose
 
 # Deploy external Azure Resource Manager template, with local parameter file
-az deployment create --name <deploymentName> \
-                     --location <AzureRegion> \
-                     --template-uri <templateUri> \
-                     --parameters <parameterFile> \
-                     --verbose
+az deployment sub create --name <deploymentName> \
+                         --location <AzureRegion> \
+                         --template-uri <templateUri> \
+                         --parameters <parameterFile> \
+                         --verbose
 ```
 
 ## Confirm successful onboarding

articles/mysql/flexible-server/concepts-business-continuity.md

Lines changed: 3 additions & 3 deletions

@@ -50,9 +50,9 @@ Here are some unplanned failure scenarios and the recovery process:
 | **Scenario** | **Recovery process [non-HA]** | **Recovery process [HA]** |
 | :---------- | ---------- | ------- |
 | **Database server failure** | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. Azure will attempt to restart the database server. If that succeeds, then the database recovery is performed. If the restart fails, the database server will be attempted to restart on another physical node. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault such as large transaction and the amount of recovery to be performed during the database server startup process. <br /> <br /> Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the connections are directed to the newly created database server. | If the database server failure is detected, the standby database server is activated, thus reducing downtime. Refer to [HA concepts page](concepts-high-availability.md) for more details. RTO is expected to be 60-120 s, with RPO=0 |
-| **Storage failure** | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. | For non-recoverable errors, the flexible server is failed over to the standby replica to reduce downtime. Refer to [HA concepts page](../concepts-high-availability.md) for more details. |
-| **Logical/user errors** | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](https://docs.microsoft.com/azure/MySQL/concepts-backup) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. | These user errors are not protected with high availability due to the fact that all user operations are replicated to the standby too. |
-| **Availability zone failure** | While it is a rare event, if you want to recover from a zone-level failure, you can perform point-in-time recovery using the backup and choosing custom restore point to get to the latest data. A new flexible server will be deployed in another zone. The time taken to restore depends on the previous backup and the number of transaction logs to recover. | Flexible server performs automatic failover to the standby site. Refer to [HA concepts page](../concepts-high-availability.md) for more details. |
+| **Storage failure** | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. | For non-recoverable errors, the flexible server is failed over to the standby replica to reduce downtime. Refer to [HA concepts page](./concepts-high-availability.md) for more details. |
+| **Logical/user errors** | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup-restore.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. | These user errors are not protected with high availability due to the fact that all user operations are replicated to the standby too. |
+| **Availability zone failure** | While it is a rare event, if you want to recover from a zone-level failure, you can perform point-in-time recovery using the backup and choosing custom restore point to get to the latest data. A new flexible server will be deployed in another zone. The time taken to restore depends on the previous backup and the number of transaction logs to recover. | Flexible server performs automatic failover to the standby site. Refer to [HA concepts page](./concepts-high-availability.md) for more details. |
 | **Region failure** | Cross-region replica and geo-restore features are not yet supported in preview. | |

articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md

Lines changed: 10 additions & 1 deletion

@@ -117,7 +117,7 @@ Let's take a closer look at each pattern.
 
 ### Lift and shift pattern
 
-This is the simplest pattern. 
+This is the simplest pattern.
 
 1. Stop all writes to Gen1.
 
@@ -127,6 +127,8 @@ This is the simplest pattern.
 
 4. Decommission Gen1.
 
+Check out our sample code for the lift and shift pattern in our [Lift and Shift migration sample](https://github.com/rukmani-msft/adlsgen1togen2migrationsamples/blob/master/src/Lift%20and%20Shift/README.md).
+
 > [!div class="mx-imgBorder"]
 > ![lift and shift pattern](./media/data-lake-storage-migrate-gen1-to-gen2/lift-and-shift.png)

@@ -148,6 +150,9 @@ This is the simplest pattern.
 
 4. Decommission Gen1.
 
+Check out our sample code for the incremental copy pattern in our [Incremental copy migration sample](https://github.com/rukmani-msft/adlsgen1togen2migrationsamples/blob/master/src/Incremental/README.md).
+
+
 > [!div class="mx-imgBorder"]
 > ![Incremental copy pattern](./media/data-lake-storage-migrate-gen1-to-gen2/incremental-copy.png)

@@ -169,6 +174,8 @@ This is the simplest pattern.
 
 4. Stop all writes to Gen1 and then decommission Gen1.
 
+Check out our sample code for the dual pipeline pattern in our [Dual Pipeline migration sample](https://github.com/rukmani-msft/adlsgen1togen2migrationsamples/blob/master/src/Dual%20pipeline/README.md).
+
 > [!div class="mx-imgBorder"]
 > ![Dual pipeline pattern](./media/data-lake-storage-migrate-gen1-to-gen2/dual-pipeline.png)

@@ -188,6 +195,8 @@ This is the simplest pattern.
 
 4. Decommission Gen1.
 
+Check out our sample code for the bidirectional sync pattern in our [Bidirectional Sync migration sample](https://github.com/rukmani-msft/adlsgen1togen2migrationsamples/blob/master/src/Bi-directional/README.md).
+
 > [!div class="mx-imgBorder"]
 > ![Bidirectional pattern](./media/data-lake-storage-migrate-gen1-to-gen2/bidirectional-sync.png)

articles/stream-analytics/private-endpoints.md

Lines changed: 2 additions & 2 deletions

@@ -14,7 +14,7 @@ ms.date: 09/22/2020
 
 You can connect your Azure Stream Analytics jobs running on a cluster to input and output resources that are behind a firewall or an Azure Virtual Network (VNet). First, you create a private endpoint for a resource, such as Azure Event Hub or Azure SQL Database, in your Stream Analytics cluster. Then, approve the private endpoint connection from your input or output.
 
-Once you approve the connection, any job running in your Stream Analytics cluster has access the resource through the private endpoint. This article shows you how to create and delete private endpoints in a Stream Analytics cluster.
+Once you approve the connection, any job running in your Stream Analytics cluster can access the resource through the private endpoint. This article shows you how to create and delete private endpoints in a Stream Analytics cluster.
 
 ## Create private endpoint in Stream Analytics cluster
 
@@ -58,4 +58,4 @@ In this section, you learn how to create a private endpoint in a Stream Analytic
 You now have an overview of how to manage private endpoints in an Azure Stream Analytics cluster. Next, you can learn how to scale your clusters and run jobs in your cluster:
 
 * [Scale an Azure Stream Analytics cluster](scale-cluster.md)
-* [Manage Stream Analytics jobs in a Stream Analytics cluster](manage-jobs-cluster.md)
+* [Manage Stream Analytics jobs in a Stream Analytics cluster](manage-jobs-cluster.md)
