diff --git a/ml/cloud_shell_tutorials/cloud-nl-intro/request.json b/ml/cloud_shell_tutorials/cloud-nl-intro/request.json
new file mode 100644
index 0000000..cb2b8a8
--- /dev/null
+++ b/ml/cloud_shell_tutorials/cloud-nl-intro/request.json
@@ -0,0 +1,7 @@
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series."
+  },
+  "encodingType":"UTF8"
+}
diff --git a/ml/cloud_shell_tutorials/cloud-nl-intro/request2.json b/ml/cloud_shell_tutorials/cloud-nl-intro/request2.json
new file mode 100644
index 0000000..534295a
--- /dev/null
+++ b/ml/cloud_shell_tutorials/cloud-nl-intro/request2.json
@@ -0,0 +1,7 @@
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"Harry Potter is the best book. I think everyone should read it."
+  },
+  "encodingType": "UTF8"
+}
diff --git a/ml/cloud_shell_tutorials/cloud-nl-intro/request3.json b/ml/cloud_shell_tutorials/cloud-nl-intro/request3.json
new file mode 100644
index 0000000..5b8a238
--- /dev/null
+++ b/ml/cloud_shell_tutorials/cloud-nl-intro/request3.json
@@ -0,0 +1,7 @@
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"I liked the sushi but the service was terrible."
+  },
+  "encodingType": "UTF8"
+}
diff --git a/ml/cloud_shell_tutorials/cloud-nl-intro/request4.json b/ml/cloud_shell_tutorials/cloud-nl-intro/request4.json
new file mode 100644
index 0000000..5f5fd1b
--- /dev/null
+++ b/ml/cloud_shell_tutorials/cloud-nl-intro/request4.json
@@ -0,0 +1,7 @@
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content": "Hermione often uses her quick wit, deft recall, and encyclopaedic knowledge to help Harry and Ron."
+  },
+  "encodingType": "UTF8"
+}
diff --git a/ml/cloud_shell_tutorials/cloud-nl-intro/request5.json b/ml/cloud_shell_tutorials/cloud-nl-intro/request5.json
new file mode 100644
index 0000000..a73e63e
--- /dev/null
+++ b/ml/cloud_shell_tutorials/cloud-nl-intro/request5.json
@@ -0,0 +1,6 @@
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"日本のグーグルのオフィスは、東京の六本木ヒルズにあります"
+  }
+}
diff --git a/ml/cloud_shell_tutorials/cloud-nl-intro/tutorial.md b/ml/cloud_shell_tutorials/cloud-nl-intro/tutorial.md
new file mode 100644
index 0000000..ccc2665
--- /dev/null
+++ b/ml/cloud_shell_tutorials/cloud-nl-intro/tutorial.md
@@ -0,0 +1,498 @@
+# Entity and Sentiment Analysis with the Natural Language API
+
+## GSP038
+
+![Natural Language API](https://lh3.googleusercontent.com/Rs8rJ9Ct1Xr7k8129HkD7h-HcMD5ttJX6NXaw70TwbqCbkh5S0Y1lr-AfND_9grwzgGxvMkmAjgziGRJ_qoqMKfwKG88fg7_IYXwxcQ-H6e_fkLTQ_ypbP4x-pMT9YzAuUb5clXc)
+
+## Overview
+
+The Cloud Natural Language API lets you extract entities from text, perform sentiment and syntactic analysis, and classify text into categories.
+In this lab, we'll learn how to use the Natural Language API to analyze entities, sentiment, and syntax.
+
+What you'll learn:
+
+* Creating a Natural Language API request and calling the API with curl
+* Extracting entities and running sentiment analysis on text with the Natural Language API
+* Performing linguistic analysis on text with the Natural Language API
+* Creating a Natural Language API request in a different language
+
+**Time to complete**: About 30 minutes
+
+Click the **Continue** button to move to the next step.
+
+## Create an API Key
+
+Since we'll be using curl to send a request to the Natural Language API, we'll need to generate an API key to pass in our request URL.
+
+> **Note**: If you've already created an API key in this project during one of the other Cloud Shell tutorials, you can just use the existing key⸺you don't need to create another one.
+
+To create an API key, navigate to:
+
+**APIs & services > Credentials**:
+
+![apis_and_services](https://storage.googleapis.com/aju-dev-demos-codelabs/apis_and_services.png)
+
+Then click __Create credentials__:
+
+![create_credentials1](https://storage.googleapis.com/aju-dev-demos-codelabs/create_credentials1.png)
+
+In the drop-down menu, select __API key__:
+
+![create_credentials2](https://storage.googleapis.com/aju-dev-demos-codelabs/create_credentials2.png)
+
+Next, copy the key you just generated. Click __Close__.
+
+Now that you have an API key, save it to an environment variable to avoid having to insert the value of your API key in each request. You can do this in Cloud Shell. Be sure to replace `<YOUR_API_KEY>` with the key you just copied.
+
+```bash
+export API_KEY=<YOUR_API_KEY>
+```
+
+Next, you'll enable the Natural Language API for your project, if you've not already done so.
+
+## Enable the Natural Language API
+
+To enable the API from the console, navigate to **APIs & services > Library**, search for "Natural Language", and click **Enable**. Alternatively, you can run `gcloud services enable language.googleapis.com` in Cloud Shell.
+
+Next, you'll use the Natural Language API to analyze *entities* in text.
+
+## Make an Entity Analysis Request
+
+The first Natural Language API method we'll use is `analyzeEntities`. With this method, the API can extract entities (like people, places, and events) from text. To try out the API's entity analysis, we'll use the following sentence:
+
+> *Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series.*
+
+Bring up the `request.json` file
+`walkthrough editor-open-file "code-snippets/ml/cloud_shell_tutorials/cloud-nl-intro/request.json" "in the text editor"`.
+
+It should look like this:
+
+```json
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series."
+  },
+  "encodingType":"UTF8"
+}
+```
+
+In the request, you're telling the Natural Language API about the text being sent. Supported `type` values are `PLAIN_TEXT` or `HTML`. In `content`, we pass the text to send to the Natural Language API for analysis. The Natural Language API also supports sending files stored in Cloud Storage for text processing. If you wanted to send a file from Cloud Storage, you would replace `content` with `gcsContentUri` and give it a value of the text file's URI in Cloud Storage. `encodingType` tells the API which type of text encoding to use when processing our text. The API will use this to calculate where specific entities appear in our text.
+
+Next, you'll call the Natural Language API with that request.
+
+## Call the Natural Language API
+
+You can now pass your request body, along with the API key environment variable you saved earlier, to the Natural Language API with the following `curl` command (all on one line):
+
+```bash
+curl "https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}" \
+  -s -X POST -H "Content-Type: application/json" --data-binary @request.json
+```
+
+Notice that the curl command used the API key that you generated.
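+
+> **Tip**: If you'd rather skim just a few fields of a large response, you can pipe the output through `jq`, which comes preinstalled in Cloud Shell. This is an optional sketch; the filter names (`name`, `type`, `salience`) match fields in the response shown below:
+
+```bash
+# Optional: list each entity's name, type, and salience score
+curl -s "https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}" \
+  -X POST -H "Content-Type: application/json" --data-binary @request.json \
+  | jq '.entities[] | {name, type, salience}'
+```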
+
+The beginning of your response should look like this:
+
+```json
+{
+  "entities": [
+    {
+      "name": "Robert Galbraith",
+      "type": "PERSON",
+      "metadata": {
+        "mid": "/m/042xh",
+        "wikipedia_url": "https://en.wikipedia.org/wiki/J._K._Rowling"
+      },
+      "salience": 0.7980405,
+      "mentions": [
+        {
+          "text": {
+            "content": "Joanne Rowling",
+            "beginOffset": 0
+          },
+          "type": "PROPER"
+        },
+        {
+          "text": {
+            "content": "Rowling",
+            "beginOffset": 53
+          },
+          "type": "PROPER"
+        },
+        {
+          "text": {
+            "content": "novelist",
+            "beginOffset": 96
+          },
+          "type": "COMMON"
+        },
+        {
+          "text": {
+            "content": "Robert Galbraith",
+            "beginOffset": 65
+          },
+          "type": "PROPER"
+        }
+      ]
+    },
+    ...
+  ]}
+```
+
+For each entity in the response, we get the entity `type`, the associated Wikipedia URL if there is one, the `salience`, and the indices of where this entity appeared in the text. Salience is a number in the [0,1] range that refers to the centrality of the entity to the text as a whole. The Natural Language API can also recognize the same entity mentioned in different ways. Take a look at the `mentions` list in the response: the API is able to tell that "Joanne Rowling", "Rowling", "novelist" and "Robert Galbraith" all point to the same thing.
+
+Next, we'll use the Natural Language API to perform sentiment analysis.
+
+## Sentiment analysis with the Natural Language API
+
+In addition to extracting entities, the Natural Language API also lets you perform sentiment analysis on a block of text. This JSON request includes the same parameters as the one above, but this time the text is something with a stronger sentiment.
+
+Bring up the `request2.json` file
+`walkthrough editor-open-file "code-snippets/ml/cloud_shell_tutorials/cloud-nl-intro/request2.json" "in the text editor"`.
+
+It should look like the following. (Feel free to replace the `content` below with your own text.)
+
+```json
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"Harry Potter is the best book. I think everyone should read it."
+  },
+  "encodingType": "UTF8"
+}
+```
+
+Next we'll send the request to the API's `analyzeSentiment` endpoint:
+
+```bash
+curl "https://language.googleapis.com/v1/documents:analyzeSentiment?key=${API_KEY}" \
+  -s -X POST -H "Content-Type: application/json" --data-binary @request2.json
+```
+
+Your response should look like this:
+
+```json
+{
+  "documentSentiment": {
+    "magnitude": 0.8,
+    "score": 0.4
+  },
+  "language": "en",
+  "sentences": [
+    {
+      "text": {
+        "content": "Harry Potter is the best book.",
+        "beginOffset": 0
+      },
+      "sentiment": {
+        "magnitude": 0.7,
+        "score": 0.7
+      }
+    },
+    {
+      "text": {
+        "content": "I think everyone should read it.",
+        "beginOffset": 31
+      },
+      "sentiment": {
+        "magnitude": 0.1,
+        "score": 0.1
+      }
+    }
+  ]
+}
+```
+
+Notice that you get two types of sentiment values: sentiment for the document as a whole, and sentiment broken down by sentence. The sentiment method returns two values:
+
+* `score` - a number from -1.0 to 1.0 indicating how positive or negative the statement is.
+* `magnitude` - a number ranging from 0 to infinity that represents the weight of sentiment expressed in the statement, regardless of whether it is positive or negative.
+
+Longer blocks of text with heavily weighted statements have higher magnitude values. The score for the first sentence is positive (0.7), whereas the score for the second sentence is neutral (0.1).
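+
+As a quick sanity check, you can pull out just the per-sentence scores with `jq` (optional; this sketch assumes the response shape shown above):
+
+```bash
+# Optional: print each sentence alongside its sentiment score
+curl -s "https://language.googleapis.com/v1/documents:analyzeSentiment?key=${API_KEY}" \
+  -X POST -H "Content-Type: application/json" --data-binary @request2.json \
+  | jq '.sentences[] | {sentence: .text.content, score: .sentiment.score}'
+```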
+
+In addition to providing sentiment details on the entire text document, the Natural Language API can also break down sentiment by the entities in the text. We'll look at that next.
+
+## Analyzing entity sentiment
+
+To see entity-level sentiment in action, use this sentence as an example:
+
+> *I liked the sushi but the service was terrible.*
+
+In this case, getting a sentiment score for the entire sentence as you did above might not be so useful. If this were a restaurant review and there were hundreds of reviews for the same restaurant, you'd want to know exactly which things people liked and didn't like in their reviews. Fortunately, the Natural Language API has a method that lets you get the sentiment for each entity in the text, called `analyzeEntitySentiment`. Let's see how it works!
+
+Bring up the `request3.json` file
+`walkthrough editor-open-file "code-snippets/ml/cloud_shell_tutorials/cloud-nl-intro/request3.json" "in the text editor"`.
+
+It should look like this:
+
+```json
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"I liked the sushi but the service was terrible."
+  },
+  "encodingType": "UTF8"
+}
+```
+
+Then call the `analyzeEntitySentiment` endpoint with the following curl command:
+
+```bash
+curl "https://language.googleapis.com/v1/documents:analyzeEntitySentiment?key=${API_KEY}" \
+  -s -X POST -H "Content-Type: application/json" --data-binary @request3.json
+```
+
+In the response, you get back two entity objects: one for "sushi" and one for "service." Here's the full JSON response:
+
+```json
+{
+  "entities": [
+    {
+      "name": "sushi",
+      "type": "CONSUMER_GOOD",
+      "metadata": {},
+      "salience": 0.52716845,
+      "mentions": [
+        {
+          "text": {
+            "content": "sushi",
+            "beginOffset": 12
+          },
+          "type": "COMMON",
+          "sentiment": {
+            "magnitude": 0.9,
+            "score": 0.9
+          }
+        }
+      ],
+      "sentiment": {
+        "magnitude": 0.9,
+        "score": 0.9
+      }
+    },
+    {
+      "name": "service",
+      "type": "OTHER",
+      "metadata": {},
+      "salience": 0.47283158,
+      "mentions": [
+        {
+          "text": {
+            "content": "service",
+            "beginOffset": 26
+          },
+          "type": "COMMON",
+          "sentiment": {
+            "magnitude": 0.9,
+            "score": -0.9
+          }
+        }
+      ],
+      "sentiment": {
+        "magnitude": 0.9,
+        "score": -0.9
+      }
+    }
+  ],
+  "language": "en"
+}
+```
+
+You can see that the score returned for "sushi" was 0.9, whereas "service" got a score of -0.9. Cool! You also may notice that there are two sentiment objects returned for each entity. If either of these terms were mentioned more than once, the API would return a different sentiment score and magnitude for each mention, along with an aggregate sentiment for the entity.
+
+The Natural Language API can also be used for analyzing syntax and parts of speech. We'll do that next.
+
+## Analyzing syntax and parts of speech
+
+The Natural Language API's third method - syntax analysis - dives deeper into the linguistic details of the text. `analyzeSyntax` provides a full set of details on the syntactic elements of the text. For each word in the text, the API will tell us the word's part of speech (noun, verb, adjective, etc.) and how it relates to other words in the sentence (Is it the root verb? A modifier?).
+
+Try it out with a simple sentence. This JSON request looks just like the ones above; this time you'll send it to the dedicated `analyzeSyntax` endpoint, which performs syntax annotation.
+
+Bring up the `request4.json` file
+`walkthrough editor-open-file "code-snippets/ml/cloud_shell_tutorials/cloud-nl-intro/request4.json" "in the text editor"`.
+
+It should look like this:
+
+```json
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content": "Hermione often uses her quick wit, deft recall, and encyclopaedic knowledge to help Harry and Ron."
+  },
+  "encodingType": "UTF8"
+}
+```
+
+Then call the API's `analyzeSyntax` method:
+
+```bash
+curl "https://language.googleapis.com/v1/documents:analyzeSyntax?key=${API_KEY}" \
+  -s -X POST -H "Content-Type: application/json" --data-binary @request4.json
+```
+
+The response should return an object like the one below for each token in the sentence:
+
+```json
+{
+  "text": {
+    "content": "uses",
+    "beginOffset": 15
+  },
+  "partOfSpeech": {
+    "tag": "VERB",
+    "aspect": "ASPECT_UNKNOWN",
+    "case": "CASE_UNKNOWN",
+    "form": "FORM_UNKNOWN",
+    "gender": "GENDER_UNKNOWN",
+    "mood": "INDICATIVE",
+    "number": "SINGULAR",
+    "person": "THIRD",
+    "proper": "PROPER_UNKNOWN",
+    "reciprocity": "RECIPROCITY_UNKNOWN",
+    "tense": "PRESENT",
+    "voice": "VOICE_UNKNOWN"
+  },
+  "dependencyEdge": {
+    "headTokenIndex": 2,
+    "label": "ROOT"
+  },
+  "lemma": "use"
+}
+```
+
+Let's break down the response:
+
+* `partOfSpeech` tells us that "uses" is a verb in the third person singular, present tense.
+* `dependencyEdge` includes data that you can use to create a [dependency parse tree](https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees) of the text. Essentially, this is a diagram showing how words in a sentence relate to each other. A dependency parse tree for the sentence above would look like this:
+
+![1fb62ed60618e914.png](img/1fb62ed60618e914.png)
+
+* `headTokenIndex` is the index of the token that has an arc pointing at the current token. We can think of each token in the sentence as a word in an array.
+* The `headTokenIndex` of 2 for "uses" points at its own position in that array: "uses" is the `ROOT` of the tree, the verb the rest of the sentence hangs off. The `label` describes each word's role in the sentence; for the root verb it is simply `ROOT`.
+* `lemma` is the canonical form of the word; for "uses" it is *use*. For example, the words *run*, *runs*, *ran*, and *running* all have a lemma of *run*. The lemma value is useful for tracking occurrences of a word in a large piece of text over time.
+
+The Natural Language API also supports languages other than English. Let's look at a Japanese example next.
+
+## Multilingual natural language processing
+
+The Natural Language API also supports languages other than English (full list [here](https://cloud.google.com/natural-language/docs/languages)).
+
+Bring up the `request5.json` file
+`walkthrough editor-open-file "code-snippets/ml/cloud_shell_tutorials/cloud-nl-intro/request5.json" "in the text editor"`.
+
+It should look like this:
+
+```json
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "content":"日本のグーグルのオフィスは、東京の六本木ヒルズにあります"
+  }
+}
+```
+
+Notice that you didn't need to tell the API which language the text is — it can automatically detect it!
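+
+If you ever need to override that detection, the request's `document` object also accepts an optional `language` field. The variant below is just an illustration (it assumes an ISO-639-1 code such as `ja`) - for this lab, send `request5.json` exactly as written above:
+
+```json
+{
+  "document":{
+    "type":"PLAIN_TEXT",
+    "language":"ja",
+    "content":"日本のグーグルのオフィスは、東京の六本木ヒルズにあります"
+  }
+}
+```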
+
+Next, you'll send this request to the `analyzeEntities` endpoint:
+
+```bash
+curl "https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}" \
+  -s -X POST -H "Content-Type: application/json" --data-binary @request5.json
+```
+
+You should get the following response:
+
+```json
+{
+  "entities": [
+    {
+      "name": "日本",
+      "type": "LOCATION",
+      "metadata": {
+        "mid": "/m/03_3d",
+        "wikipedia_url": "https://en.wikipedia.org/wiki/Japan"
+      },
+      "salience": 0.23854347,
+      "mentions": [
+        {
+          "text": {
+            "content": "日本",
+            "beginOffset": 0
+          },
+          "type": "PROPER"
+        }
+      ]
+    },
+    {
+      "name": "グーグル",
+      "type": "ORGANIZATION",
+      "metadata": {
+        "mid": "/m/045c7b",
+        "wikipedia_url": "https://en.wikipedia.org/wiki/Google"
+      },
+      "salience": 0.21155767,
+      "mentions": [
+        {
+          "text": {
+            "content": "グーグル",
+            "beginOffset": 9
+          },
+          "type": "PROPER"
+        }
+      ]
+    },
+    ...
+  ],
+  "language": "ja"
+}
+```
+
+The API even resolved the Wikipedia pages for these entities, despite the text being in Japanese - so cool!
+
+## Congratulations!
+
+`walkthrough conclusion-trophy`
+
+You've learned how to perform text analysis with the Cloud Natural Language API by extracting entities, analyzing sentiment, and doing syntax annotation.
+
+#### What we've covered
+
+* Creating a Natural Language API request and calling the API with curl
+* Extracting entities and running sentiment analysis on text with the Natural Language API
+* Performing linguistic analysis on text to create dependency parse trees
+* Creating a Natural Language API request in Japanese
+
+![38616f8aa634e047.png](img/38616f8aa634e047.png)
+
+#### Some next steps
+
+* Sign up for the full [Coursera Course on Machine Learning](https://www.coursera.org/learn/serverless-machine-learning-gcp/)
+* Check out the Natural Language API [tutorials](https://cloud.google.com/natural-language/docs/tutorials) in the documentation.
+
+---------------
+Copyright 2018 Google Inc. All Rights Reserved. Licensed under the Apache
+License, Version 2.0 (the "License"); you may not use this file except in
+compliance with the License. You may obtain a copy of the License at
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+License for the specific language governing permissions and limitations under
+the License.
diff --git a/ml/cloud_shell_tutorials/cloud-nl-text-classification/tutorial.md b/ml/cloud_shell_tutorials/cloud-nl-text-classification/tutorial.md
index f85988d..4b4bad0 100644
--- a/ml/cloud_shell_tutorials/cloud-nl-text-classification/tutorial.md
+++ b/ml/cloud_shell_tutorials/cloud-nl-text-classification/tutorial.md
@@ -10,7 +10,7 @@ What you'll learn:
 * Use the NL API's text classification feature
 * Use text classification to understand a dataset of news articles
 
-![Natural Language API logo](https://storage.googleapis.com/aju-dev-demos-codelabs/NaturalLanguage_Retina_sm.png)
+![Natural Language API logo](https://storage.googleapis.com/aju-dev-demos-codelabs/images/NaturalLanguage_Retina_sm.png)
 
 **Time to complete**: About 30 minutes
 
@@ -19,23 +19,23 @@ Click the **Continue** button to move to the next step.
 
 ## Create an API Key
 
-Since we'll be using curl to send a request to the Vision API, we'll need to generate an API key to pass in our request URL.
+Since we'll be using curl to send a request to the Natural Language API, we'll need to generate an API key to pass in our request URL.
 
-**Note**: If you've already created an API key in this project during one of the other Cloud Shell tutorials, you can just use the existing key⸺you don't need to create another one.
+> **Note**: If you've already created an API key in this project during one of the other Cloud Shell tutorials, you can just use the existing key⸺you don't need to create another one.
 
 To create an API key, navigate to:
 
 **APIs & services > Credentials**:
 
-![apis_and_services](https://storage.googleapis.com/aju-dev-demos-codelabs/apis_and_services.png)
+![apis_and_services](https://storage.googleapis.com/aju-dev-demos-codelabs/images/apis_and_services.png)
 
 Then click __Create credentials__:
 
-![create_credentials1](https://storage.googleapis.com/aju-dev-demos-codelabs/create_credentials1.png)
+![create_credentials1](https://storage.googleapis.com/aju-dev-demos-codelabs/images/create_credentials1.png)
 
 In the drop-down menu, select __API key__:
 
-![create_credentials2](https://storage.googleapis.com/aju-dev-demos-codelabs/create_credentials2.png)
+![create_credentials2](https://storage.googleapis.com/aju-dev-demos-codelabs/images/create_credentials2.png)
 
 Next, copy the key you just generated. Click __Close__.
 
@@ -114,7 +114,6 @@ The API returned 2 categories for this text: **/Food & Drink/Cooking & Recipes**
 
 Classifying a single article is cool, but to really see the power of this feature we should classify lots of text data. We'll do that next.
-
 
 ## Classifying a large text dataset
 
@@ -136,23 +135,23 @@ Next we'll create a BigQuery table for our data.
 
 Before we send the text to the Natural Language API, we need a place to store the text and category for each article - enter BigQuery! Navigate to the BigQuery web UI in your console:
 
-![Navigate to the BigQuery web UI](https://storage.googleapis.com/aju-dev-demos-codelabs/bigquery1.png)
+![Navigate to the BigQuery web UI](https://storage.googleapis.com/aju-dev-demos-codelabs/images/bigquery1.png)
 
 Then click on the dropdown arrow next to your project name and select __Create new dataset__:
 
-![Create a new BigQuery dataset](https://storage.googleapis.com/aju-dev-demos-codelabs/bigquery2.png)
+![Create a new BigQuery dataset](https://storage.googleapis.com/aju-dev-demos-codelabs/images/bigquery2.png)
 
 Name your dataset `news_classification`. You can leave the defaults in the **Data location** and **Data expiration** fields:
 
-![Name your new dataset](https://storage.googleapis.com/aju-dev-demos-codelabs/bigquery3.png)
+![Name your new dataset](https://storage.googleapis.com/aju-dev-demos-codelabs/images/bigquery3.png)
 
 Click on the dropdown arrow next to your dataset name and select __Create new table__. Under Source Data, select "Create empty table". Then name your table __article_data__ and give it the following 3 fields in the schema:
 
-![Create a new table](https://storage.googleapis.com/aju-dev-demos-codelabs/bigquery4.png)
+![Create a new table](https://storage.googleapis.com/aju-dev-demos-codelabs/images/bigquery4.png)
 
 After creating the table you should see the following:
 
-![New table details](https://storage.googleapis.com/aju-dev-demos-codelabs/bigquery5.png)
+![New table details](https://storage.googleapis.com/aju-dev-demos-codelabs/images/bigquery5.png)
 
 Our table is empty right now. In the next step we'll read the articles from Google Cloud Storage, send them to the NL API for classification, and store the result in BigQuery.
@@ -239,7 +238,7 @@ We're using the `google-cloud` [Python client library](https://googlecloudplatfo
 When your script has finished running, it's time to verify that the article data was saved to BigQuery. Navigate to your `article_data` table in the BigQuery web UI and click __Query Table__:
 
-![Query your new BigQuery table](https://storage.googleapis.com/aju-dev-demos-codelabs/bigquery6.png)
+![Query your new BigQuery table](https://storage.googleapis.com/aju-dev-demos-codelabs/images/bigquery6.png)
 
 Enter the following query in the **Compose Query** box, **first replacing `YOUR_PROJECT`** with your project name:
 
@@ -273,7 +272,7 @@ ORDER BY
 
 You should see something like this in the query results:
 
-![Query results](https://storage.googleapis.com/aju-dev-demos-codelabs/query_results.png)
+![Query results](https://storage.googleapis.com/aju-dev-demos-codelabs/images/query_results.png)
 
 Let's say we wanted to find the article returned for a more obscure category like **/Arts & Entertainment/Music & Audio/Classical Music**. We could write the following query (again, replace `YOUR_PROJECT` first):
 
@@ -296,7 +295,7 @@ WHERE cast(confidence as float64) > 0.9
 
 To perform more queries on your data, explore the [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators). BigQuery also integrates with a number of visualization tools. To create visualizations of your categorized news data, check out the [Data Studio quickstart](https://cloud.google.com/bigquery/docs/visualize-data-studio) for BigQuery. Here's an example of a Data Studio chart we could create for the query above:
 
-![Example Data Studio chart](https://storage.googleapis.com/aju-dev-demos-codelabs/data_studio.png)
+![Example Data Studio chart](https://storage.googleapis.com/aju-dev-demos-codelabs/images/data_studio.png)
 
 ## Congratulations!
 
diff --git a/ml/cloud_shell_tutorials/cloud-vision-nl-translate/tutorial.md b/ml/cloud_shell_tutorials/cloud-vision-nl-translate/tutorial.md
index ad9acef..2ae7518 100644
--- a/ml/cloud_shell_tutorials/cloud-vision-nl-translate/tutorial.md
+++ b/ml/cloud_shell_tutorials/cloud-vision-nl-translate/tutorial.md
@@ -12,7 +12,7 @@ What you'll learn:
 * Using the Translation API to translate text from your image
 * Using the Natural Language API to analyze the text
 
-![Some of the ML APIs](https://storage.googleapis.com/aju-dev-demos-codelabs/tutorial_mlapi_initial_image_sm.png)
+![Some of the ML APIs](https://storage.googleapis.com/aju-dev-demos-codelabs/images/tutorial_mlapi_initial_image_sm.png)
 
 **Time to complete**: About 30 minutes
 
@@ -28,15 +28,15 @@ To create an API key, navigate to:
 
 **APIs & services > Credentials**:
 
-![apis_and_services](https://storage.googleapis.com/aju-dev-demos-codelabs/apis_and_services.png)
+![apis_and_services](https://storage.googleapis.com/aju-dev-demos-codelabs/images/apis_and_services.png)
 
 Then click __Create credentials__:
 
-![create_credentials1](https://storage.googleapis.com/aju-dev-demos-codelabs/create_credentials1.png)
+![create_credentials1](https://storage.googleapis.com/aju-dev-demos-codelabs/images/create_credentials1.png)
 
 In the drop-down menu, select __API key__:
 
-![create_credentials2](https://storage.googleapis.com/aju-dev-demos-codelabs/create_credentials2.png)
+![create_credentials2](https://storage.googleapis.com/aju-dev-demos-codelabs/images/create_credentials2.png)
 
 Next, copy the key you just generated. Click __Close__.
@@ -65,10 +65,10 @@ cd ~/code-snippets/ml/cloud_shell_tutorials/cloud-vision-nl-translate
 You'll remain in this directory for the rest of the tutorial.
 
 We've uploaded a picture of a French sign to this Google Cloud Storage
-URL, and made it public: `gs://aju-dev-demos-codelabs/french_sign.png`.
+URL, and made it public: `gs://aju-dev-demos-codelabs/images/french_sign.png`.
 The sign looks like this:
 
-![french_sign](https://storage.googleapis.com/aju-dev-demos-codelabs/french_sign.png)
+![french_sign](https://storage.googleapis.com/aju-dev-demos-codelabs/images/french_sign.png)
 
 You'll use that URL to form a JSON request to analyze the photo. In particular, you're going to use
@@ -86,7 +86,7 @@ It contains the following request:
 {
   "image": {
     "source": {
-      "gcsImageUri": "gs://aju-dev-demos-codelabs/french_sign.png"
+      "gcsImageUri": "gs://aju-dev-demos-codelabs/images/french_sign.png"
     }
   },
   "features": [