articles/stream-analytics/stream-analytics-define-outputs.md
+2 −2 (2 additions & 2 deletions)
@@ -40,7 +40,7 @@ Stream Analytics supports [Azure Data Lake Store](https://azure.microsoft.com/se
| --- | --- |
| Output alias | A friendly name used in queries to direct the query output to this Data Lake Store. |
| Account name | The name of the Data Lake Storage account where you are sending your output. You are presented with a drop-down list of Data Lake Store accounts that are available in your subscription. |
-| Path prefix pattern | The file path used to write your files within the specified Data Lake Store Account. You can specify one or more instances of the {date} and {time} variables.</br><ul><li>Example 1: folder1/logs/{date}/{time}</li><li>Example 2: folder1/logs/{date}</li></ul>If the file path pattern does not contain a trailing "/", the last pattern in the file path is treated as a filename prefix. </br></br>New files are created in these circumstances:<ul><li>Change in output schema</li><li>External or Internal restart of a job.</li></ul> |
+| Path prefix pattern | The file path used to write your files within the specified Data Lake Store account. You can specify one or more instances of the {date} and {time} variables.<br><ul><li>Example 1: folder1/logs/{date}/{time}</li><li>Example 2: folder1/logs/{date}</li></ul><br>The timestamp of the created folder structure follows UTC, not local time.<br><br>If the file path pattern does not contain a trailing "/", the last pattern in the file path is treated as a filename prefix.<br><br>New files are created in these circumstances:<ul><li>Change in output schema</li><li>External or internal restart of a job</li></ul> |
| Date format | Optional. If the date token is used in the prefix path, you can select the date format in which your files are organized. Example: YYYY/MM/DD |
|Time format | Optional. If the time token is used in the prefix path, specify the time format in which your files are organized. Currently the only supported value is HH. |
| Event serialization format | Serialization format for output data. JSON, CSV, and Avro are supported.|
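For context on the **Output alias** row above, here is a minimal Stream Analytics query sketch that directs results into a Data Lake Store output. The alias `[datalake-output]`, input `[iot-input]`, and field names are hypothetical, not part of the article:

```sql
-- Hypothetical names throughout: [iot-input] is a configured input and
-- [datalake-output] is the Data Lake Store output described above, with a
-- path prefix pattern such as folder1/logs/{date}/{time} (resolved in UTC).
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature
INTO
    [datalake-output]
FROM
    [iot-input] TIMESTAMP BY eventTime
GROUP BY
    deviceId,
    TumblingWindow(minute, 5)
```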
@@ -82,7 +82,7 @@ The table below lists the property names and their description for creating a bl
| Storage Account | The name of the storage account where you are sending your output. |
| Storage Account Key | The secret key associated with the storage account. |
| Storage Container | Containers provide a logical grouping for blobs stored in the Microsoft Azure Blob service. When you upload a blob to the Blob service, you must specify a container for that blob. |
-| Path pattern | Optional. The file path pattern used to write your blobs within the specified container. </br></br> In the path pattern, you may choose to use one or more instances of the date time variables to specify the frequency that blobs are written: </br> {date}, {time} </br> </br>If you are signed up for the [preview](https://aka.ms/ASAPreview), you may also specify one custom {field} name from your event data to partition blobs by, where the field name is alphanumeric and can include spaces, hyphens, and underscores. Restrictions on custom fields include the following: <ul><li>Case insensitivity (cannot different between column "ID" and column "id")</li><li>Nested fields are not permitted (instead use an alias in the job query to "flatten" the field)</li><li>Expressions cannot be used as a field name</li></ul>Examples: <ul><li>Example 1: cluster1/logs/{date}/{time}</li><li>Example 2: cluster1/logs/{date}</li><li>Example 3 (preview): cluster1/{client_id}/{date}/{time}</li><li>Example 4 (preview): cluster1/{myField} where the query is: SELECT data.myField AS myField FROM Input;</li></ul><BR> File naming follows the following convention: </br> {Path Prefix Pattern}/schemaHashcode_Guid_Number.extension </br></br> Example output files: </br><ul><li>Myoutput/20170901/00/45434_gguid_1.csv</li><li>Myoutput/20170901/01/45434_gguid_1.csv</li></ul><br/>
+| Path pattern | Optional. The file path pattern used to write your blobs within the specified container.<br><br>In the path pattern, you may choose to use one or more instances of the date and time variables to specify how frequently blobs are written: {date}, {time}.<br><br>If you are signed up for the [preview](https://aka.ms/ASAPreview), you may also specify one custom {field} name from your event data to partition blobs by, where the field name is alphanumeric and can include spaces, hyphens, and underscores. Restrictions on custom fields include the following: <ul><li>Case insensitivity (cannot differentiate between column "ID" and column "id")</li><li>Nested fields are not permitted (instead use an alias in the job query to "flatten" the field)</li><li>Expressions cannot be used as a field name</li></ul>Examples: <ul><li>Example 1: cluster1/logs/{date}/{time}</li><li>Example 2: cluster1/logs/{date}</li><li>Example 3 (preview): cluster1/{client_id}/{date}/{time}</li><li>Example 4 (preview): cluster1/{myField} where the query is: SELECT data.myField AS myField FROM Input;</li></ul><br>The timestamp of the created folder structure follows UTC, not local time.<br><br>File naming follows this convention: {Path Prefix Pattern}/schemaHashcode_Guid_Number.extension<br><br>Example output files: <ul><li>Myoutput/20170901/00/45434_gguid_1.csv</li><li>Myoutput/20170901/01/45434_gguid_1.csv</li></ul> |
| Date format | Optional. If the date token is used in the prefix path, you can select the date format in which your files are organized. Example: YYYY/MM/DD |
| Time format | Optional. If the time token is used in the prefix path, specify the time format in which your files are organized. Currently the only supported value is HH. |
| Event serialization format | Serialization format for output data. JSON, CSV, and Avro are supported.|
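The custom {field} partitioning described in the path pattern row requires the field to be a top-level column of the output; as the restrictions note, a nested field must first be "flattened" with an alias. A minimal sketch under those assumptions, using hypothetical names `[Input]`, `[blob-output]`, and `data.client_id`, paired with a path pattern of cluster1/{client_id}/{date}/{time}:

```sql
-- Flatten the nested field so it can serve as the {client_id}
-- token in the blob output's path pattern (hypothetical names).
SELECT
    data.client_id AS client_id,
    data.reading AS reading
INTO
    [blob-output]
FROM
    [Input]
```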