Commit bdbac59

Merge pull request watson-developer-cloud#766 from watson-developer-cloud/regenerate-sdk-release-6
SDK Release 6
2 parents: 6a67b5d + 4e822cd

File tree

7 files changed: +835 −26 lines changed
README.md

Lines changed: 5 additions & 5 deletions

````diff
@@ -88,10 +88,10 @@ Watson services are migrating to token-based Identity and Access Management (IAM
 ### Getting credentials
 To find out which authentication to use, view the service credentials. You find the service credentials for authentication the same way for all Watson services:
 
-1. Go to the IBM Cloud **[Dashboard][watson-dashboard]** page.
-1. Either click an existing Watson service instance or click **Create**.
-1. Click **Show** to view your service credentials.
-1. Copy the `url` and either `apikey` or `username` and `password`.
+1. Go to the IBM Cloud [Dashboard](https://console.bluemix.net/dashboard/apps?category=ai) page.
+1. Either click an existing Watson service instance or click [**Create resource > AI**](https://console.bluemix.net/catalog/?category=ai) and create a service instance.
+1. Click **Show** to view your service credentials.
+1. Copy the `url` and either `apikey` or `username` and `password`.
 
 ### IAM
 
@@ -283,7 +283,7 @@ function (err, token) {
 
 Use the [Assistant][conversation] service to determine the intent of a message.
 
-Note: you must first create a workspace via Bluemix. See [the documentation](https://console.bluemix.net/docs/services/conversation/index.html#about) for details.
+Note: You must first create a workspace via IBM Cloud. See [the documentation](https://console.bluemix.net/docs/services/conversation/index.html#about) for details.
 
 ```js
 var AssistantV1 = require('watson-developer-cloud/assistant/v1');
````
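The credential steps in the README change above end with either an `apikey` or a `username`/`password` pair. A minimal sketch of turning a copied credentials object into constructor options; the helper is hypothetical (not part of the SDK), and the `iam_apikey` option name is an assumption about how this SDK generation accepts IAM keys:

```javascript
// Hypothetical helper: map a service-credentials object copied from the
// IBM Cloud dashboard to SDK constructor options. Not part of the SDK.
function authOptions(creds) {
  if (creds.apikey) {
    // IAM-style credentials; assumed to be passed to the SDK as `iam_apikey`.
    return { url: creds.url, iam_apikey: creds.apikey };
  }
  if (creds.username && creds.password) {
    // Cloud Foundry-style credentials.
    return { url: creds.url, username: creds.username, password: creds.password };
  }
  throw new Error('Credentials must contain `apikey` or `username` and `password`.');
}
```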

discovery/v1-generated.ts

Lines changed: 569 additions & 13 deletions
Large diffs are not rendered by default.
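The un-rendered Discovery diff introduces the feedback/metrics surface exercised by the integration tests below (`createEvent`, `getMetricsEventRate`, `queryLog`, and friends). As an illustrative sketch only, here is a hypothetical validator for the payload shape that `createEvent` is given in those tests; the field names come from the test file, not from the generated SDK code:

```javascript
// Hypothetical validator mirroring the createEvent payload used by the
// integration test; not part of the generated SDK.
function validateCreateEventParams(params) {
  if (!params || typeof params.type !== 'string' || !params.type) {
    throw new Error('params.type is required (e.g. "click")');
  }
  const required = ['environment_id', 'session_token', 'collection_id', 'document_id'];
  const missing = required.filter(key => !params.data || !params.data[key]);
  if (missing.length) {
    throw new Error('missing data fields: ' + missing.join(', '));
  }
  return true;
}
```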

natural-language-classifier/v1-generated.ts

Lines changed: 4 additions & 4 deletions

```diff
@@ -62,7 +62,7 @@ class NaturalLanguageClassifierV1 extends BaseService {
    *
    * @param {Object} params - The parameters to send to the service.
    * @param {string} params.classifier_id - Classifier ID to use.
-   * @param {string} params.text - The submitted phrase.
+   * @param {string} params.text - The submitted phrase. The maximum length is 2048 characters.
    * @param {Object} [params.headers] - Custom request headers
    * @param {Function} [callback] - The callback that handles the response.
    * @returns {NodeJS.ReadableStream|void}
@@ -342,7 +342,7 @@ namespace NaturalLanguageClassifierV1 {
   export interface ClassifyParams {
     /** Classifier ID to use. */
     classifier_id: string;
-    /** The submitted phrase. */
+    /** The submitted phrase. The maximum length is 2048 characters. */
     text: string;
     headers?: Object;
   }
@@ -446,13 +446,13 @@ namespace NaturalLanguageClassifierV1 {
 
   /** Request payload to classify. */
   export interface ClassifyInput {
-    /** The submitted phrase. */
+    /** The submitted phrase. The maximum length is 2048 characters. */
     text: string;
   }
 
   /** Response from the classifier for a phrase in a collection. */
   export interface CollectionItem {
-    /** The submitted phrase. */
+    /** The submitted phrase. The maximum length is 2048 characters. */
     text?: string;
     /** The class with the highest confidence. */
     top_class?: string;
```
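The newly documented 2048-character limit can be enforced client-side before calling `classify()`. A minimal illustrative guard (not part of the SDK):

```javascript
// Illustrative client-side guard for the classify() text parameter,
// based on the 2048-character maximum documented above.
const MAX_CLASSIFY_LENGTH = 2048;

function checkClassifyText(text) {
  if (typeof text !== 'string' || text.length === 0) {
    throw new TypeError('params.text must be a non-empty string');
  }
  if (text.length > MAX_CLASSIFY_LENGTH) {
    throw new RangeError('params.text exceeds ' + MAX_CLASSIFY_LENGTH + ' characters');
  }
  return text;
}
```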

speech-to-text/v1-generated.ts

Lines changed: 1 addition & 1 deletion

```diff
@@ -21,7 +21,7 @@ import { getMissingParams } from '../lib/helper';
 import { FileObject } from '../lib/helper';
 
 /**
- * The IBM® Speech to Text service provides an API that uses IBM's speech-recognition capabilities to produce transcripts of spoken audio. The service can transcribe speech from various languages and audio formats. In addition to basic transcription, the service can produce detailed information about many aspects of the audio. For most languages, the service supports two sampling rates, broadband and narrowband. It returns all JSON response content in the UTF-8 character set. For more information about the service, see the [IBM® Cloud documentation](https://console.bluemix.net/docs/services/speech-to-text/index.html). ### API usage guidelines * **Audio formats:** The service accepts audio in many formats (MIME types). See [Audio formats](https://console.bluemix.net/docs/services/speech-to-text/audio-formats.html). * **HTTP interfaces:** The service provides three HTTP interfaces for speech recognition. The sessionless interface includes a single synchronous method. The session-based interface includes multiple synchronous methods for maintaining a long, multi-turn exchange with the service. And the asynchronous interface provides multiple methods that use registered callbacks and polling for non-blocking recognition. See [The HTTP REST interface](https://console.bluemix.net/docs/services/speech-to-text/http.html) and [The asynchronous HTTP interface](https://console.bluemix.net/docs/services/speech-to-text/async.html). * **WebSocket interface:** The service also offers a WebSocket interface for speech recognition. The WebSocket interface provides a full-duplex, low-latency communication channel. Clients send requests and audio to the service and receive results over a single connection in an asynchronous fashion. See [The WebSocket interface](https://console.bluemix.net/docs/services/speech-to-text/websockets.html). * **Customization:** Use language model customization to expand the vocabulary of a base model with domain-specific terminology. Use acoustic model customization to adapt a base model for the acoustic characteristics of your audio. Language model customization is generally available for production use by most supported languages; acoustic model customization is beta functionality that is available for all supported languages. See [The customization interface](https://console.bluemix.net/docs/services/speech-to-text/custom.html). * **Customization IDs:** Many methods accept a customization ID to identify a custom language or custom acoustic model. Customization IDs are Globally Unique Identifiers (GUIDs). They are hexadecimal strings that have the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. * **`X-Watson-Learning-Opt-Out`:** By default, all Watson services log requests and their results. Logging is done only to improve the services for future users. The logged data is not shared or made public. To prevent IBM from accessing your data for general service improvements, set the `X-Watson-Learning-Opt-Out` request header to `true` for all requests. You must set the header on each request that you do not want IBM to access for general service improvements. Methods of the customization interface do not log corpora, words, and audio resources that you use to build custom models. Your training data is never used to improve the service's base models. However, the service does log such data when a custom model is used with a recognition request. You must set the `X-Watson-Learning-Opt-Out` request header to `true` to prevent IBM from accessing the data to improve the service. * **`X-Watson-Metadata`**: This header allows you to associate a customer ID with data that is passed with a request. If necessary, you can use the **Delete labeled data** method to delete the data for a customer ID. See [Information security](https://console.bluemix.net/docs/services/speech-to-text/information-security.html).
+ * The IBM® Speech to Text service provides an API that uses IBM's speech-recognition capabilities to produce transcripts of spoken audio. The service can transcribe speech from various languages and audio formats. In addition to basic transcription, the service can produce detailed information about many aspects of the audio. For most languages, the service supports two sampling rates, broadband and narrowband. It returns all JSON response content in the UTF-8 character set. For more information about the service, see the [IBM® Cloud documentation](https://console.bluemix.net/docs/services/speech-to-text/index.html). ### API usage guidelines * **Audio formats:** The service accepts audio in many formats (MIME types). See [Audio formats](https://console.bluemix.net/docs/services/speech-to-text/audio-formats.html). * **HTTP interfaces:** The service provides three HTTP interfaces for speech recognition. The sessionless interface includes a single synchronous method. The session-based interface includes multiple synchronous methods for maintaining a long, multi-turn exchange with the service. And the asynchronous interface provides multiple methods that use registered callbacks and polling for non-blocking recognition. See [The HTTP REST interface](https://console.bluemix.net/docs/services/speech-to-text/http.html) and [The asynchronous HTTP interface](https://console.bluemix.net/docs/services/speech-to-text/async.html). **Important:** The session-based interface is deprecated as of August 8, 2018, and will be removed from service on September 7, 2018. Use the sessionless, asynchronous, or WebSocket interface instead. For more information, see the August 8 service update in the [Release notes](https://console.bluemix.net/docs/services/speech-to-text/release-notes.html#August2018). * **WebSocket interface:** The service also offers a WebSocket interface for speech recognition. The WebSocket interface provides a full-duplex, low-latency communication channel. Clients send requests and audio to the service and receive results over a single connection in an asynchronous fashion. See [The WebSocket interface](https://console.bluemix.net/docs/services/speech-to-text/websockets.html). * **Customization:** Use language model customization to expand the vocabulary of a base model with domain-specific terminology. Use acoustic model customization to adapt a base model for the acoustic characteristics of your audio. Language model customization is generally available for production use by most supported languages; acoustic model customization is beta functionality that is available for all supported languages. See [The customization interface](https://console.bluemix.net/docs/services/speech-to-text/custom.html). * **Customization IDs:** Many methods accept a customization ID to identify a custom language or custom acoustic model. Customization IDs are Globally Unique Identifiers (GUIDs). They are hexadecimal strings that have the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. * **`X-Watson-Learning-Opt-Out`:** By default, all Watson services log requests and their results. Logging is done only to improve the services for future users. The logged data is not shared or made public. To prevent IBM from accessing your data for general service improvements, set the `X-Watson-Learning-Opt-Out` request header to `true` for all requests. You must set the header on each request that you do not want IBM to access for general service improvements. Methods of the customization interface do not log corpora, words, and audio resources that you use to build custom models. Your training data is never used to improve the service's base models. However, the service does log such data when a custom model is used with a recognition request. You must set the `X-Watson-Learning-Opt-Out` request header to `true` to prevent IBM from accessing the data to improve the service. * **`X-Watson-Metadata`**: This header allows you to associate a customer ID with data that is passed with a request. If necessary, you can use the **Delete labeled data** method to delete the data for a customer ID. See [Information security](https://console.bluemix.net/docs/services/speech-to-text/information-security.html).
  */
 
 class SpeechToTextV1 extends BaseService {
```
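The service comment above states that customization IDs are GUID-format hexadecimal strings (`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). A tiny illustrative format check, not part of the SDK:

```javascript
// Illustrative check for the GUID format described in the service comment;
// hypothetical helper, not part of the generated SDK.
const GUID_PATTERN = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isCustomizationId(id) {
  return typeof id === 'string' && GUID_PATTERN.test(id);
}
```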

test/integration/test.discovery.js

Lines changed: 170 additions & 0 deletions

```diff
@@ -22,6 +22,7 @@ describe('discovery_integration', function() {
   let configuration_id;
   let collection_id;
   let collection_id2;
+  let document_id;
 
   before(function() {
     environment_id = auth.discovery.environment_id;
@@ -158,6 +159,7 @@ describe('discovery_integration', function() {
     discovery.addDocument(document_obj, function(err, response) {
       assert.ifError(err);
       assert(response.document_id);
+      document_id = response.document_id;
       done(err);
     });
   });
@@ -325,4 +327,172 @@ describe('discovery_integration', function() {
       );
     });
   });
+
+  describe('events tests', function() {
+    let document_id;
+    let session_token;
+
+    before(function(done) {
+      const addDocParams = {
+        environment_id,
+        collection_id,
+        file: fs.createReadStream('./test/resources/sampleWord.docx'),
+      };
+
+      discovery.addDocument(addDocParams, function(error, response) {
+        document_id = response.document_id;
+
+        const queryParams = {
+          environment_id,
+          collection_id,
+          natural_language_query: 'jeopardy',
+        };
+
+        discovery.query(queryParams, function(err, res) {
+          session_token = res.session_token;
+          done();
+        });
+      });
+    });
+
+    it('should create event', function(done) {
+      const type = 'click';
+      const createEventParams = {
+        type,
+        data: {
+          environment_id,
+          session_token,
+          collection_id,
+          document_id,
+        },
+      };
+      discovery.createEvent(createEventParams, function(err, res) {
+        assert.ifError(err);
+        assert.equal(res.type, type);
+        assert.equal(res.data.environment_id, environment_id);
+        assert.equal(res.data.collection_id, collection_id);
+        assert.equal(res.data.document_id, document_id);
+        assert.equal(res.data.session_token, session_token);
+        assert(res.data.result_type);
+        assert(res.data.query_id);
+        done();
+      });
+    });
+
+    after(function(done) {
+      const params = {
+        environment_id,
+        collection_id,
+        document_id,
+      };
+      discovery.deleteDocument(params, function(err, res) {
+        done();
+      });
+    });
+  });
+
+  describe('metrics tests', function() {
+    const start_time = '2018-08-07T00:00:00Z';
+    const end_time = '2018-08-08T00:00:00Z';
+
+    it('should get metrics event rate', function(done) {
+      const params = {
+        start_time,
+        end_time,
+        // result_type can only be either document or passage,
+        // but I get no results with either
+      };
+      discovery.getMetricsEventRate(params, function(err, res) {
+        assert.ifError(err);
+        assert(res.aggregations);
+        assert(Array.isArray(res.aggregations));
+        assert(res.aggregations.length);
+        assert(res.aggregations[0].results);
+        assert(Array.isArray(res.aggregations[0].results));
+        assert(res.aggregations[0].results.length);
+        assert.notEqual(res.aggregations[0].results[0].event_rate, undefined);
+        done();
+      });
+    });
+    it('should get metrics query', function(done) {
+      const params = {
+        start_time,
+        end_time,
+      };
+      discovery.getMetricsQuery(params, function(err, res) {
+        assert.ifError(err);
+        assert(res.aggregations);
+        assert(Array.isArray(res.aggregations));
+        assert(res.aggregations.length);
+        assert(res.aggregations[0].results);
+        assert(Array.isArray(res.aggregations[0].results));
+        assert(res.aggregations[0].results.length);
+        assert.notEqual(res.aggregations[0].results[0].matching_results, undefined);
+        done();
+      });
+    });
+    it('should get metrics query event', function(done) {
+      discovery.getMetricsQueryEvent(function(err, res) {
+        assert.ifError(err);
+        assert(res.aggregations);
+        assert(Array.isArray(res.aggregations));
+        assert(res.aggregations.length);
+        assert(res.aggregations[0].results);
+        assert(Array.isArray(res.aggregations[0].results));
+        assert(res.aggregations[0].results.length);
+        assert.notEqual(res.aggregations[0].results[0].matching_results, undefined);
+        done();
+      });
+    });
+    it('should get metrics query no results', function(done) {
+      discovery.getMetricsQueryNoResults(function(err, res) {
+        assert.ifError(err);
+        assert(res.aggregations);
+        assert(Array.isArray(res.aggregations));
+        assert(res.aggregations.length);
+        assert(res.aggregations[0].results);
+        assert(Array.isArray(res.aggregations[0].results));
+        assert(res.aggregations[0].results.length);
+        assert.notEqual(res.aggregations[0].results[0].matching_results, undefined);
+        done();
+      });
+    });
+    it('should get metrics query token event', function(done) {
+      const count = 2;
+      const params = { count };
+      discovery.getMetricsQueryTokenEvent(params, function(err, res) {
+        assert.ifError(err);
+        assert(res.aggregations);
+        assert(Array.isArray(res.aggregations));
+        assert(res.aggregations.length);
+        assert(res.aggregations[0].results);
+        assert(Array.isArray(res.aggregations[0].results));
+        assert.equal(res.aggregations[0].results.length, count);
+        assert.notEqual(res.aggregations[0].results[0].event_rate, undefined);
+        done();
+      });
+    });
+  });
+
+  describe('logs tests', function() {
+    it('should query log', function(done) {
+      const count = 2;
+      const filter = 'stuff';
+      const params = {
+        count,
+        offset: 1,
+        filter,
+        sort: ['created_timestamp'],
+      };
+      discovery.queryLog(params, function(err, res) {
+        assert.ifError(err);
+        assert(res.matching_results);
+        assert(res.results);
+        assert(Array.isArray(res.results));
+        assert.equal(res.results.length, count);
+        assert.notEqual(res.results[0].natural_language_query.indexOf(filter), -1);
+        done();
+      });
+    });
+  });
 });
```
