Commit 794a0bc

Merge pull request #42269 from sidramadoss/patch-4
{date} in path uses UTC and not local time
2 parents 943a213 + 2f7effb commit 794a0bc

File tree

1 file changed: +2 −2 lines

articles/stream-analytics/stream-analytics-define-outputs.md

Lines changed: 2 additions & 2 deletions
@@ -40,7 +40,7 @@ Stream Analytics supports [Azure Data Lake Store](https://azure.microsoft.com/se
 | --- | --- |
 | Output alias | A friendly name used in queries to direct the query output to this Data Lake Store. |
 | Account name | The name of the Data Lake Storage account where you are sending your output. You are presented with a drop-down list of the Data Lake Store accounts that are available in your subscription. |
-| Path prefix pattern | The file path used to write your files within the specified Data Lake Store account. You can specify one or more instances of the {date} and {time} variables.<br/><ul><li>Example 1: folder1/logs/{date}/{time}</li><li>Example 2: folder1/logs/{date}</li></ul>If the file path pattern does not contain a trailing "/", the last pattern in the file path is treated as a filename prefix.<br/><br/>New files are created in these circumstances:<ul><li>A change in the output schema</li><li>An external or internal restart of a job</li></ul> |
+| Path prefix pattern | The file path used to write your files within the specified Data Lake Store account. You can specify one or more instances of the {date} and {time} variables.<br/><ul><li>Example 1: folder1/logs/{date}/{time}</li><li>Example 2: folder1/logs/{date}</li></ul><br/>Timestamps in the created folder structure follow UTC, not local time.<br/><br/>If the file path pattern does not contain a trailing "/", the last pattern in the file path is treated as a filename prefix.<br/><br/>New files are created in these circumstances:<ul><li>A change in the output schema</li><li>An external or internal restart of a job</li></ul> |
 | Date format | Optional. If the date token is used in the prefix path, you can select the date format in which your files are organized. Example: YYYY/MM/DD |
 | Time format | Optional. If the time token is used in the prefix path, specify the time format in which your files are organized. Currently the only supported value is HH. |
 | Event serialization format | Serialization format for output data. JSON, CSV, and Avro are supported. |
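
As a minimal sketch (not part of the commit), here is how a path prefix pattern such as `folder1/logs/{date}/{time}` could resolve, assuming the YYYY/MM/DD date format and HH time format from the table and the UTC behavior this commit documents. `resolve_path` is a hypothetical helper for illustration, not a Stream Analytics API:

```python
from datetime import datetime, timezone

def resolve_path(prefix_pattern: str, when: datetime) -> str:
    """Substitute {date} and {time} tokens in a path prefix pattern.

    Per the doc's note, folder timestamps follow UTC and not local
    time, so the input timestamp is converted to UTC first. The
    YYYY/MM/DD date format and HH time format are assumed here.
    """
    utc = when.astimezone(timezone.utc)
    return (prefix_pattern
            .replace("{date}", utc.strftime("%Y/%m/%d"))
            .replace("{time}", utc.strftime("%H")))

# Example 1 from the table, for an event at 2017-09-01 00:45 UTC:
print(resolve_path("folder1/logs/{date}/{time}",
                   datetime(2017, 9, 1, 0, 45, tzinfo=timezone.utc)))
# -> folder1/logs/2017/09/01/00
```

Note that because the pattern is resolved in UTC, a job running in a non-UTC region still produces the same folder names for the same events.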
@@ -82,7 +82,7 @@ The table below lists the property names and their description for creating a bl
 | Storage Account | The name of the storage account where you are sending your output. |
 | Storage Account Key | The secret key associated with the storage account. |
 | Storage Container | Containers provide a logical grouping for blobs stored in the Microsoft Azure Blob service. When you upload a blob to the Blob service, you must specify a container for that blob. |
-| Path pattern | Optional. The file path pattern used to write your blobs within the specified container.<br/><br/>In the path pattern, you may choose to use one or more instances of the date and time variables to specify the frequency with which blobs are written: {date}, {time}<br/><br/>If you are signed up for the [preview](https://aka.ms/ASAPreview), you may also specify one custom {field} name from your event data to partition blobs by, where the field name is alphanumeric and can include spaces, hyphens, and underscores. Restrictions on custom fields include the following: <ul><li>Case insensitivity (you cannot differentiate between column "ID" and column "id")</li><li>Nested fields are not permitted (instead, use an alias in the job query to "flatten" the field)</li><li>Expressions cannot be used as a field name</li></ul>Examples: <ul><li>Example 1: cluster1/logs/{date}/{time}</li><li>Example 2: cluster1/logs/{date}</li><li>Example 3 (preview): cluster1/{client_id}/{date}/{time}</li><li>Example 4 (preview): cluster1/{myField} where the query is: SELECT data.myField AS myField FROM Input;</li></ul><br/>File naming follows this convention:<br/>{Path Prefix Pattern}/schemaHashcode_Guid_Number.extension<br/><br/>Example output files:<br/><ul><li>Myoutput/20170901/00/45434_gguid_1.csv</li><li>Myoutput/20170901/01/45434_gguid_1.csv</li></ul> |
+| Path pattern | Optional. The file path pattern used to write your blobs within the specified container.<br/><br/>In the path pattern, you may choose to use one or more instances of the date and time variables to specify the frequency with which blobs are written: {date}, {time}<br/><br/>If you are signed up for the [preview](https://aka.ms/ASAPreview), you may also specify one custom {field} name from your event data to partition blobs by, where the field name is alphanumeric and can include spaces, hyphens, and underscores. Restrictions on custom fields include the following: <ul><li>Case insensitivity (you cannot differentiate between column "ID" and column "id")</li><li>Nested fields are not permitted (instead, use an alias in the job query to "flatten" the field)</li><li>Expressions cannot be used as a field name</li></ul>Examples: <ul><li>Example 1: cluster1/logs/{date}/{time}</li><li>Example 2: cluster1/logs/{date}</li><li>Example 3 (preview): cluster1/{client_id}/{date}/{time}</li><li>Example 4 (preview): cluster1/{myField} where the query is: SELECT data.myField AS myField FROM Input;</li></ul><br/>Timestamps in the created folder structure follow UTC, not local time.<br/><br/>File naming follows this convention:<br/>{Path Prefix Pattern}/schemaHashcode_Guid_Number.extension<br/><br/>Example output files:<br/><ul><li>Myoutput/20170901/00/45434_gguid_1.csv</li><li>Myoutput/20170901/01/45434_gguid_1.csv</li></ul> |
 | Date format | Optional. If the date token is used in the prefix path, you can select the date format in which your files are organized. Example: YYYY/MM/DD |
 | Time format | Optional. If the time token is used in the prefix path, specify the time format in which your files are organized. Currently the only supported value is HH. |
 | Event serialization format | Serialization format for output data. JSON, CSV, and Avro are supported. |
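
To make the blob naming convention concrete, this hedged sketch assembles a name following the {Path Prefix Pattern}/schemaHashcode_Guid_Number.extension pattern described in the table. `blob_name` and its parameter values are illustrative assumptions, not the service's internal implementation:

```python
def blob_name(path_prefix: str, schema_hashcode: str, guid: str,
              number: int, extension: str) -> str:
    """Assemble a blob name per the documented convention:
    {Path Prefix Pattern}/schemaHashcode_Guid_Number.extension.

    path_prefix is the already-resolved path pattern (with {date}
    and {time} substituted in UTC); the other parts are placeholder
    values standing in for what the service generates.
    """
    return f"{path_prefix}/{schema_hashcode}_{guid}_{number}.{extension}"

# Reproduces the first example output file from the table:
print(blob_name("Myoutput/20170901/00", "45434", "gguid", 1, "csv"))
# -> Myoutput/20170901/00/45434_gguid_1.csv
```

The second example file in the table differs only in the resolved {time} folder (`01` instead of `00`), consistent with a new blob being written for each UTC hour.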
