Commit 35e58fb (1 parent: 7c3b715)

Update services based on v1.44.122 of AWS Go SDK

Reference: https://github.com/aws/aws-sdk-go/releases/tag/v1.44.122

File tree: 5 files changed (+88, -61 lines)


.latest-tag-aws-sdk-go

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-v1.44.121
+v1.44.122

lib/aws/generated/acmpca.ex

Lines changed: 1 addition & 2 deletions

@@ -3,8 +3,7 @@
 defmodule AWS.ACMPCA do
   @moduledoc """
-  This is the *Certificate Manager Private Certificate Authority (PCA) API
-  Reference*.
+  This is the *Private Certificate Authority (PCA) API Reference*.
 
   It provides descriptions, syntax, and usage examples for each of the actions and
   data types involved in creating and managing a private certificate authority

lib/aws/generated/batch.ex

Lines changed: 34 additions & 36 deletions
@@ -9,20 +9,19 @@ defmodule AWS.Batch do
 Cloud.
 
 Batch computing is a common means for developers, scientists, and engineers to
-access large amounts of compute resources. Batch uses the advantages of this
-computing workload to remove the undifferentiated heavy lifting of configuring
-and managing required infrastructure. At the same time, it also adopts a
-familiar batch computing software approach. Given these advantages, Batch can
-help you to efficiently provision resources in response to jobs submitted, thus
-effectively helping you to eliminate capacity constraints, reduce compute costs,
-and deliver your results more quickly.
+access large amounts of compute resources. Batch uses the advantages of the
+batch computing to remove the undifferentiated heavy lifting of configuring and
+managing required infrastructure. At the same time, it also adopts a familiar
+batch computing software approach. You can use Batch to efficiently provision
+resources d, and work toward eliminating capacity constraints, reducing your
+overall compute costs, and delivering results more quickly.
 
 As a fully managed service, Batch can run batch computing workloads of any
 scale. Batch automatically provisions compute resources and optimizes workload
 distribution based on the quantity and scale of your specific workloads. With
 Batch, there's no need to install or manage batch computing software. This means
-that you can focus your time and energy on analyzing results and solving your
-specific problems.
+that you can focus on analyzing results and solving your specific problems
+instead.
 """
 
 alias AWS.Client
@@ -48,9 +47,9 @@ defmodule AWS.Batch do
 Cancels a job in an Batch job queue.
 
 Jobs that are in the `SUBMITTED`, `PENDING`, or `RUNNABLE` state are canceled.
-Jobs that have progressed to `STARTING` or `RUNNING` aren't canceled, but the
-API operation still succeeds, even if no job is canceled. These jobs must be
-terminated with the `TerminateJob` operation.
+Jobs that progressed to the `STARTING` or `RUNNING` state aren't canceled.
+However, the API operation still succeeds, even if no job is canceled. These
+jobs must be terminated with the `TerminateJob` operation.
 """
 def cancel_job(%Client{} = client, input, options \\ []) do
   url_path = "/v1/canceljob"
@@ -91,10 +90,10 @@ defmodule AWS.Batch do
 Multi-node parallel jobs aren't supported on Spot Instances.
 
 In an unmanaged compute environment, you can manage your own EC2 compute
-resources and have a lot of flexibility with how you configure your compute
-resources. For example, you can use custom AMIs. However, you must verify that
-each of your AMIs meet the Amazon ECS container instance AMI specification. For
-more information, see [container instance AMIs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container_instance_AMIs.html)
+resources and have flexibility with how you configure your compute resources.
+For example, you can use custom AMIs. However, you must verify that each of your
+AMIs meet the Amazon ECS container instance AMI specification. For more
+information, see [container instance AMIs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container_instance_AMIs.html)
 in the *Amazon Elastic Container Service Developer Guide*. After you created
 your unmanaged compute environment, you can use the
 `DescribeComputeEnvironments` operation to find the Amazon ECS cluster that's
@@ -124,7 +123,7 @@ defmodule AWS.Batch do
 To use the enhanced updating of compute environments to update AMIs, follow
 these rules:
 
-Either do not set the service role (`serviceRole`) parameter or set
+Either don't set the service role (`serviceRole`) parameter or set
 it to the **AWSBatchServiceRole** service-linked role.
 
 Set the allocation strategy (`allocationStrategy`) parameter to
@@ -133,27 +132,26 @@ defmodule AWS.Batch do
 Set the update to latest image version
 (`updateToLatestImageVersion`) parameter to `true`.
 
-Do not specify an AMI ID in `imageId`, `imageIdOverride` (in [
+Don't specify an AMI ID in `imageId`, `imageIdOverride` (in [
 `ec2Configuration`
 ](https://docs.aws.amazon.com/batch/latest/APIReference/API_Ec2Configuration.html)),
-or in the launch template (`launchTemplate`). In that case Batch will select the
-latest Amazon ECS optimized AMI supported by Batch at the time the
-infrastructure update is initiated. Alternatively you can specify the AMI ID in
+or in the launch template (`launchTemplate`). In that case, Batch selects the
+latest Amazon ECS optimized AMI that's supported by Batch at the time the
+infrastructure update is initiated. Alternatively, you can specify the AMI ID in
 the `imageId` or `imageIdOverride` parameters, or the launch template identified
-by the `LaunchTemplate` properties. Changing any of these properties will
-trigger an infrastructure update. If the AMI ID is specified in the launch
-template, it can not be replaced by specifying an AMI ID in either the `imageId`
-or `imageIdOverride` parameters. It can only be replaced by specifying a
-different launch template, or if the launch template version is set to
-`$Default` or `$Latest`, by setting either a new default version for the launch
-template (if `$Default`)or by adding a new version to the launch template (if
-`$Latest`).
-
-If these rules are followed, any update that triggers an infrastructure update
-will cause the AMI ID to be re-selected. If the `version` setting in the launch
+by the `LaunchTemplate` properties. Changing any of these properties starts an
+infrastructure update. If the AMI ID is specified in the launch template, it
+can't be replaced by specifying an AMI ID in either the `imageId` or
+`imageIdOverride` parameters. It can only be replaced by specifying a different
+launch template, or if the launch template version is set to `$Default` or
+`$Latest`, by setting either a new default version for the launch template (if
+`$Default`) or by adding a new version to the launch template (if `$Latest`).
+
+If these rules are followed, any update that starts an infrastructure update
+causes the AMI ID to be re-selected. If the `version` setting in the launch
 template (`launchTemplate`) is set to `$Latest` or `$Default`, the latest or
-default version of the launch template will be evaluated up at the time of the
-infrastructure update, even if the `launchTemplate` was not updated.
+default version of the launch template is evaluated up at the time of the
+infrastructure update, even if the `launchTemplate` wasn't updated.
 """
 def create_compute_environment(%Client{} = client, input, options \\ []) do
   url_path = "/v1/createcomputeenvironment"
@@ -525,7 +523,7 @@ defmodule AWS.Batch do
 
 Batch resources that support tags are compute environments, jobs, job
 definitions, job queues, and scheduling policies. ARNs for child jobs of array
-and multi-node parallel (MNP) jobs are not supported.
+and multi-node parallel (MNP) jobs aren't supported.
 """
 def list_tags_for_resource(%Client{} = client, resource_arn, options \\ []) do
   url_path = "/v1/tags/#{AWS.Util.encode_uri(resource_arn)}"
@@ -605,7 +603,7 @@ defmodule AWS.Batch do
 aren't changed. When a resource is deleted, the tags that are associated with
 that resource are deleted as well. Batch resources that support tags are compute
 environments, jobs, job definitions, job queues, and scheduling policies. ARNs
-for child jobs of array and multi-node parallel (MNP) jobs are not supported.
+for child jobs of array and multi-node parallel (MNP) jobs aren't supported.
 """
 def tag_resource(%Client{} = client, resource_arn, input, options \\ []) do
   url_path = "/v1/tags/#{AWS.Util.encode_uri(resource_arn)}"
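The `CancelJob`/`TerminateJob` split documented above can be sketched from the Elixir bindings. This is a minimal sketch, not code from the commit: the credentials, job ID, and reason are hypothetical placeholders, and it assumes the library's usual `{:ok, body, response}` return shape.

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{"jobId" => "example-job-id", "reason" => "No longer needed"}

# CancelJob succeeds even when it cancels nothing, so check the job's status
# afterwards and fall back to TerminateJob for STARTING/RUNNING jobs.
with {:ok, _body, _resp} <- AWS.Batch.cancel_job(client, input),
     {:ok, %{"jobs" => [%{"status" => status}]}, _resp} <-
       AWS.Batch.describe_jobs(client, %{"jobs" => ["example-job-id"]}),
     true <- status in ["STARTING", "RUNNING"] do
  AWS.Batch.terminate_job(client, input)
end
```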

lib/aws/generated/data_sync.ex

Lines changed: 40 additions & 22 deletions
@@ -6,11 +6,15 @@ defmodule AWS.DataSync do
 DataSync
 
 DataSync is a managed data transfer service that makes it simpler for you to
-automate moving data between on-premises storage and Amazon Simple Storage
-Service (Amazon S3) or Amazon Elastic File System (Amazon EFS).
+automate moving data between on-premises storage and Amazon Web Services storage
+services.
 
-This API interface reference for DataSync contains documentation for a
-programming interface that you can use to manage DataSync.
+You also can use DataSync to transfer data between other cloud providers and
+Amazon Web Services storage services.
+
+This API interface reference includes documentation for using DataSync
+programmatically. For complete information, see the * [DataSync User Guide](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html)
+*.
 """
 
 alias AWS.Client
@@ -33,16 +37,17 @@ defmodule AWS.DataSync do
 end
 
 @doc """
-Cancels execution of a task.
+Stops an DataSync task execution that's in progress.
+
+The transfer of some files are abruptly interrupted. File contents that're
+transferred to the destination might be incomplete or inconsistent with the
+source files.
 
-When you cancel a task execution, the transfer of some files is abruptly
-interrupted. The contents of files that are transferred to the destination might
-be incomplete or inconsistent with the source files. However, if you start a new
-task execution on the same task and you allow the task execution to complete,
-file content on the destination is complete and consistent. This applies to
-other unexpected failures that interrupt a task execution. In all of these
-cases, DataSync successfully complete the transfer when you start the next task
-execution.
+However, if you start a new task execution using the same task and allow it to
+finish, file content on the destination will be complete and consistent. This
+applies to other unexpected failures that interrupt a task execution. In all of
+these cases, DataSync successfully completes the transfer when you start the
+next task execution.
 """
 def cancel_task_execution(%Client{} = client, input, options \\ []) do
   meta = metadata()
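The cancel-then-restart pattern this doc describes might look as follows from the Elixir bindings. A minimal sketch only: the ARNs are hypothetical placeholders, and the `{:ok, body, response}` return shape is assumed.

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

exec_arn = "arn:aws:datasync:us-east-1:111122223333:task/task-0example/execution/exec-0example"
task_arn = "arn:aws:datasync:us-east-1:111122223333:task/task-0example"

# Stop the in-progress execution; some files may be left incomplete.
{:ok, _body, _resp} =
  AWS.DataSync.cancel_task_execution(client, %{"TaskExecutionArn" => exec_arn})

# Starting a new execution on the same task and letting it finish leaves the
# destination complete and consistent with the source.
{:ok, _body, _resp} =
  AWS.DataSync.start_task_execution(client, %{"TaskArn" => task_arn})
```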
@@ -51,7 +56,7 @@ defmodule AWS.DataSync do
 end
 
 @doc """
-Activates an DataSync agent that you have deployed on your host.
+Activates an DataSync agent that you have deployed in your storage environment.
 
 The activation process associates your agent with your account. In the
 activation process, you specify information such as the Amazon Web Services
@@ -111,7 +116,13 @@ defmodule AWS.DataSync do
 end
 
 @doc """
-Creates an endpoint for an Amazon FSx for OpenZFS file system.
+Creates an endpoint for an Amazon FSx for OpenZFS file system that DataSync can
+access for a transfer.
+
+For more information, see [Creating a location for FSx for OpenZFS](https://docs.aws.amazon.com/datasync/latest/userguide/create-openzfs-location.html).
+
+Request parameters related to `SMB` aren't supported with the
+`CreateLocationFsxOpenZfs` operation.
 """
 def create_location_fsx_open_zfs(%Client{} = client, input, options \\ []) do
   meta = metadata()
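A call to this operation might look like the sketch below. The ARNs, subdirectory, and mount options are hypothetical placeholders; since SMB request parameters aren't supported here, the `Protocol` uses NFS.

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "FsxFilesystemArn" => "arn:aws:fsx:us-east-1:111122223333:file-system/fs-0example",
  "SecurityGroupArns" => ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0example"],
  # NFS only: SMB-related request parameters aren't supported by this operation.
  "Protocol" => %{"NFS" => %{"MountOptions" => %{"Version" => "AUTOMATIC"}}},
  "Subdirectory" => "/fsx/exports"
}

{:ok, _body, _resp} = AWS.DataSync.create_location_fsx_open_zfs(client, input)
```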
@@ -160,7 +171,8 @@ defmodule AWS.DataSync do
 end
 
 @doc """
-Creates an endpoint for an Amazon S3 bucket.
+Creates an endpoint for an Amazon S3 bucket that DataSync can access for a
+transfer.
 
 For more information, see [Create an Amazon S3 location](https://docs.aws.amazon.com/datasync/latest/userguide/create-locations-cli.html#create-location-s3-cli)
 in the *DataSync User Guide*.
@@ -259,8 +271,8 @@ defmodule AWS.DataSync do
 end
 
 @doc """
-Returns metadata about an Amazon FSx for Lustre location, such as information
-about its path.
+Provides details about how an DataSync location for an Amazon FSx for Lustre
+file system is configured.
 """
 def describe_location_fsx_lustre(%Client{} = client, input, options \\ []) do
   meta = metadata()
@@ -271,6 +283,9 @@ defmodule AWS.DataSync do
 @doc """
 Provides details about how an DataSync location for an Amazon FSx for NetApp
 ONTAP file system is configured.
+
+If your location uses SMB, the `DescribeLocationFsxOntap` operation doesn't
+actually return a `Password`.
 """
 def describe_location_fsx_ontap(%Client{} = client, input, options \\ []) do
   meta = metadata()
@@ -279,8 +294,11 @@ defmodule AWS.DataSync do
 end
 
 @doc """
-Returns metadata about an Amazon FSx for OpenZFS location, such as information
-about its path.
+Provides details about how an DataSync location for an Amazon FSx for OpenZFS
+file system is configured.
+
+Response elements related to `SMB` aren't supported with the
+`DescribeLocationFsxOpenZfs` operation.
 """
 def describe_location_fsx_open_zfs(%Client{} = client, input, options \\ []) do
   meta = metadata()
@@ -491,8 +509,8 @@ defmodule AWS.DataSync do
 end
 
 @doc """
-Updates some of the parameters of a previously created location for self-managed
-object storage server access.
+Updates some parameters of an existing object storage location that DataSync
+accesses for a transfer.
 
 For information about creating a self-managed object storage location, see
 [Creating a location for object storage](https://docs.aws.amazon.com/datasync/latest/userguide/create-object-location.html).
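Updating a subset of an existing location's parameters might look like this sketch. The location ARN, port, and subdirectory are hypothetical placeholders.

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# Only the fields present in the input are changed; omitted parameters keep
# their current values on the location.
input = %{
  "LocationArn" => "arn:aws:datasync:us-east-1:111122223333:location/loc-0example",
  "ServerPort" => 8443,
  "Subdirectory" => "/backups"
}

{:ok, _body, _resp} = AWS.DataSync.update_location_object_storage(client, input)
```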

lib/aws/generated/sage_maker.ex

Lines changed: 12 additions & 0 deletions
@@ -2472,6 +2472,18 @@ defmodule AWS.SageMaker do
   Request.request_post(client, meta, "ListImages", input, options)
 end
 
+@doc """
+Returns a list of the subtasks for an Inference Recommender job.
+
+The supported subtasks are benchmarks, which evaluate the performance of your
+model on different instance types.
+"""
+def list_inference_recommendations_job_steps(%Client{} = client, input, options \\ []) do
+  meta = metadata()
+
+  Request.request_post(client, meta, "ListInferenceRecommendationsJobSteps", input, options)
+end
+
 @doc """
 Lists recommendation jobs that satisfy various filters.
 """
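Calling the newly added operation might look like this sketch. The job name is a hypothetical placeholder; `"BENCHMARK"` matches the benchmark subtasks described in the doc, and the `"Steps"` response key is assumed from the API's response shape.

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# List the benchmark subtasks of an Inference Recommender job.
{:ok, %{"Steps" => steps}, _resp} =
  AWS.SageMaker.list_inference_recommendations_job_steps(client, %{
    "JobName" => "example-recommendation-job",
    "StepType" => "BENCHMARK"
  })

Enum.each(steps, &IO.inspect(&1, label: "step"))
```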
