assistant/v1.ts (2 additions & 2 deletions)
@@ -4228,6 +4228,8 @@ namespace AssistantV1 {
   export interface InputData {
     /** The text of the user input. This string cannot contain carriage return, newline, or tab characters, and it must be no longer than 2048 characters. */
     text: string;
+    /** InputData accepts additional properties. */
+    [propName: string]: any;
   }

   /** Intent. */
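For context, a minimal sketch of how the new index signature could be exercised from this SDK. The credentials, workspace ID, and the extra `integrations` property are placeholders for illustration, not values defined by this diff, and the package path assumes the watson-developer-cloud layout these files belong to:

```typescript
// Sketch only: IDs, credentials, and the extra property are placeholders.
import AssistantV1 = require('watson-developer-cloud/assistant/v1');

const assistant = new AssistantV1({
  version: '2018-09-20',
  iam_apikey: '<iam-api-key>', // placeholder credential
  url: 'https://gateway.watsonplatform.net/assistant/api',
});

assistant.message(
  {
    workspace_id: '<workspace-id>', // placeholder
    input: {
      text: 'Hello',
      // Permitted by the new index signature `[propName: string]: any;`
      integrations: { source: 'example' }, // hypothetical extra property
    },
  },
  (err, response) => {
    if (err) { console.error(err); return; }
    console.log(JSON.stringify(response.output, null, 2));
  }
);
```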
@@ -4364,8 +4366,6 @@ namespace AssistantV1 {
     output: OutputData;
     /** An array of objects describing any actions requested by the dialog node. */
     /** The unique identifier of the ingested document. */
     document_id?: string;
-    /** Status of the document in the ingestion process. */
+    /** Status of the document in the ingestion process. A status of `processing` is returned for documents that are ingested with a *version* date before `2019-01-01`. The `pending` status is returned for all others. */
     status?: string;
     /** Array of notices produced by the document-ingestion process. */
     notices?: Notice[];
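The revised description for `status` means the value reported right after ingestion depends on the `version` date the client was created with. A hedged sketch of checking it, assuming a DiscoveryV1 client from this SDK and placeholder environment and collection IDs:

```typescript
// Sketch only: IDs and credentials are placeholders; the package path assumes
// the watson-developer-cloud SDK layout this diff belongs to.
import DiscoveryV1 = require('watson-developer-cloud/discovery/v1');
import fs = require('fs');

const discovery = new DiscoveryV1({
  version: '2019-01-01', // with this date, newly added documents report `pending`
  iam_apikey: '<iam-api-key>', // placeholder credential
  url: 'https://gateway.watsonplatform.net/discovery/api',
});

discovery.addDocument(
  {
    environment_id: '<environment-id>', // placeholder
    collection_id: '<collection-id>',   // placeholder
    file: fs.createReadStream('./example.json'),
  },
  (err, response) => {
    if (err) { console.error(err); return; }
    // With a version date of 2019-01-01 or later this is expected to be `pending`;
    // earlier version dates report `processing` instead.
    console.log(response.document_id, response.status);
  }
);
```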
@@ -4949,6 +5004,8 @@ namespace DiscoveryV1 {
     processing?: number;
     /** The number of documents in the collection that failed to be ingested. */
     failed?: number;
+    /** The number of documents that have been uploaded to the collection, but have not yet started processing. */
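With the new `pending` count added to this interface, a collection's ingestion progress can be summarized from the counts returned with the collection details. A sketch, reusing the client above and assuming the counts are exposed on the collection object as `document_counts` (IDs remain placeholders):

```typescript
// Sketch only: assumes the DiscoveryV1 client created above and that the
// collection details expose these counts under `document_counts`.
discovery.getCollection(
  {
    environment_id: '<environment-id>', // placeholder
    collection_id: '<collection-id>',   // placeholder
  },
  (err, collection) => {
    if (err) { console.error(err); return; }
    const counts = collection.document_counts || {};
    console.log(
      `available=${counts.available} processing=${counts.processing} ` +
      `failed=${counts.failed} pending=${counts.pending}`
    );
  }
);
```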
speech-to-text/v1-generated.ts (20 additions & 9 deletions)
@@ -22,7 +22,7 @@ import { getMissingParams } from '../lib/helper';
 import { FileObject } from '../lib/helper';

 /**
- * The IBM® Speech to Text service provides APIs that use IBM's speech-recognition capabilities to produce transcripts of spoken audio. The service can transcribe speech from various languages and audio formats. It addition to basic transcription, the service can produce detailed information about many different aspects of the audio. For most languages, the service supports two sampling rates, broadband and narrowband. It returns all JSON response content in the UTF-8 character set. For speech recognition, the service supports synchronous and asynchronous HTTP Representational State Transfer (REST) interfaces. It also supports a WebSocket interface that provides a full-duplex, low-latency communication channel: Clients send requests and audio to the service and receive results over a single connection asynchronously. The service also offers two customization interfaces. Use language model customization to expand the vocabulary of a base model with domain-specific terminology. Use acoustic model customization to adapt a base model for the acoustic characteristics of your audio. For language model customization, the service also supports grammars. A grammar is a formal language specification that lets you restrict the phrases that the service can recognize. Language model customization is generally available for production use with most supported languages. Acoustic model customization is beta functionality that is available for all supported languages.
+ * The IBM® Speech to Text service provides APIs that use IBM's speech-recognition capabilities to produce transcripts of spoken audio. The service can transcribe speech from various languages and audio formats. In addition to basic transcription, the service can produce detailed information about many different aspects of the audio. For most languages, the service supports two sampling rates, broadband and narrowband. It returns all JSON response content in the UTF-8 character set. For speech recognition, the service supports synchronous and asynchronous HTTP Representational State Transfer (REST) interfaces. It also supports a WebSocket interface that provides a full-duplex, low-latency communication channel: Clients send requests and audio to the service and receive results over a single connection asynchronously. The service also offers two customization interfaces. Use language model customization to expand the vocabulary of a base model with domain-specific terminology. Use acoustic model customization to adapt a base model for the acoustic characteristics of your audio. For language model customization, the service also supports grammars. A grammar is a formal language specification that lets you restrict the phrases that the service can recognize. Language model customization is generally available for production use with most supported languages. Acoustic model customization is beta functionality that is available for all supported languages.
  */

 class SpeechToTextV1 extends BaseService {
@@ -2565,16 +2565,18 @@ class SpeechToTextV1 extends BaseService {
    * existing request completes.
    *
    * You can use the optional `custom_language_model_id` parameter to specify the GUID of a separately created custom
-   * language model that is to be used during training. Specify a custom language model if you have verbatim
+   * language model that is to be used during training. Train with a custom language model if you have verbatim
    * transcriptions of the audio files that you have added to the custom model or you have either corpora (text files)
-   * or a list of words that are relevant to the contents of the audio files. For more information, see the **Create a
-   * custom language model** method.
+   * or a list of words that are relevant to the contents of the audio files. Both of the custom models must be based on
+   * the same version of the same base model for training to succeed.
    *
    * Training can fail to start for the following reasons:
    * * The service is currently handling another request for the custom model, such as another training request or a
    * request to add audio resources to the model.
    * * The custom model contains less than 10 minutes or more than 100 hours of audio data.
    * * One or more of the custom model's audio resources is invalid.
+   * * You passed an incompatible custom language model with the `custom_language_model_id` query parameter. Both custom
+   * models must be based on the same version of the same base model.
     /** The customization ID (GUID) of the custom acoustic model that is to be used for the request. You must make the request with credentials for the instance of the service that owns the custom model. */
     customization_id: string;
-    /** The customization ID (GUID) of a custom language model that is to be used during training of the custom acoustic model. Specify a custom language model that has been trained with verbatim transcriptions of the audio resources or that contains words that are relevant to the contents of the audio resources. */
+    /** The customization ID (GUID) of a custom language model that is to be used during training of the custom acoustic model. Specify a custom language model that has been trained with verbatim transcriptions of the audio resources or that contains words that are relevant to the contents of the audio resources. The custom language model must be based on the same version of the same base model as the custom acoustic model. The credentials specified with the request must own both custom models. */
     custom_language_model_id?: string;
     headers?: Object;
   }
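Putting the expanded requirement into practice: the custom language model passed here must be built on the same version of the same base model as the custom acoustic model, and both must be owned by the credentials used. A sketch with placeholder GUIDs, assuming a SpeechToTextV1 client from this SDK:

```typescript
// Sketch only: GUIDs and credentials are placeholders.
import SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

const speechToText = new SpeechToTextV1({
  iam_apikey: '<iam-api-key>', // placeholder credential
  url: 'https://stream.watsonplatform.net/speech-to-text/api',
});

speechToText.trainAcousticModel(
  {
    customization_id: '<acoustic-model-guid>',         // placeholder
    // Must be based on the same version of the same base model as the
    // acoustic model, and owned by the same credentials.
    custom_language_model_id: '<language-model-guid>', // placeholder
  },
  (err) => {
    if (err) { console.error(err); return; }
    console.log('Training started for the custom acoustic model.');
  }
);
```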
@@ -3719,8 +3728,10 @@ namespace SpeechToTextV1 {
   export interface UpgradeAcousticModelParams {
     /** The customization ID (GUID) of the custom acoustic model that is to be used for the request. You must make the request with credentials for the instance of the service that owns the custom model. */
     customization_id: string;
-    /** If the custom acoustic model was trained with a custom language model, the customization ID (GUID) of that custom language model. The custom language model must be upgraded before the custom acoustic model can be upgraded. */
+    /** If the custom acoustic model was trained with a custom language model, the customization ID (GUID) of that custom language model. The custom language model must be upgraded before the custom acoustic model can be upgraded. The credentials specified with the request must own both custom models. */
     custom_language_model_id?: string;
+    /** If `true`, forces the upgrade of a custom acoustic model for which no input data has been modified since it was last trained. Use this parameter only to force the upgrade of a custom acoustic model that is trained with a custom language model, and only if you receive a 400 response code and the message `No input data modified since last training`. See [Upgrading a custom acoustic model](https://cloud.ibm.com/docs/services/speech-to-text/custom-upgrade.html#upgradeAcoustic). */
+    force?: boolean;
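The last added description documents a flag for forcing an upgrade; the property declaration itself is cut off in this excerpt, so the `force` name below is an assumption inferred from that description. A sketch, reusing the client above:

```typescript
// Sketch only: the `force` property name is assumed from the description above
// (the diff excerpt ends before the property declaration); GUIDs are placeholders.
speechToText.upgradeAcousticModel(
  {
    customization_id: '<acoustic-model-guid>',         // placeholder
    custom_language_model_id: '<language-model-guid>', // placeholder
    // Only needed after a 400 response with the message
    // `No input data modified since last training`.
    force: true,
  },
  (err) => {
    if (err) { console.error(err); return; }
    console.log('Upgrade requested for the custom acoustic model.');
  }
);
```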