# Step 2: Create a Service on IBM Cloud

Creating services on IBM Cloud is easy, with over a hundred different types of services to choose from. To see the full list, visit <https://cloud.ibm.com/catalog> (or <https://cloud.ibm.com/catalog/labs> if you'd like to see our experimental 🧪 features!).

![IBM's Cloud Dashboard](https://845368379-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M0YJkXNeop-VQvo0z8l%2F-M2-l2fCZBdx8-wBWnpC%2F-M20y75JNEBvYdAy3Y4P%2FScreen%20Shot%202020-03-09%20at%206.04.01%20PM.png?alt=media\&token=0fa765f1-626a-40ea-af42-ad9d665bbec2)

For today's lab, we'll create a Visual Recognition service and call it, all within five minutes.

## Step 2a: Create a Service through the Dashboard

First, let's find the Visual Recognition service inside the IBM Cloud resource list. We'll search for `visual`:

![](https://845368379-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M0YJkXNeop-VQvo0z8l%2F-M26pjlMuzJf6Ydoz2Yw%2F-M29Mj12PSkeFE5bPDxc%2FScreen%20Shot%202020-03-11%20at%209.12.39%20AM.png?alt=media\&token=090dc6bb-93e6-410a-93c8-67e020eb6d54)

Click on **Visual Recognition**.

On the service creation page, make sure the **Lite** plan is selected, and click the blue **Create** button:

![](https://845368379-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M0YJkXNeop-VQvo0z8l%2F-M26pjlMuzJf6Ydoz2Yw%2F-M29N31rxO2h-GyY_5yo%2FScreen%20Shot%202020-03-11%20at%209.13.33%20AM.png?alt=media\&token=3286097c-5dbc-43e6-b1e6-8a22361f20d2)

This service is free, but it will be removed after 30 days of inactivity.

## Step 2b: Retrieve your Credentials

Now, let's retrieve our service credentials so that we can classify an image. Click on **Service Credentials** in the left-hand menu:

![](https://845368379-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M0YJkXNeop-VQvo0z8l%2F-M26pjlMuzJf6Ydoz2Yw%2F-M29NXhpK2-GFwHLRvCt%2FScreen%20Shot%202020-03-11%20at%209.15.40%20AM.png?alt=media\&token=d537dcd8-ee6a-419c-85d3-6b12daa27efa)

At the bottom of the page, click **View Credentials**:

![](https://845368379-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M0YJkXNeop-VQvo0z8l%2F-M26pjlMuzJf6Ydoz2Yw%2F-M29NwDbo3ym71bo7DR9%2FScreen%20Shot%202020-03-11%20at%209.16.53%20AM.png?alt=media\&token=b6506df4-c25a-4168-8fce-cfc1e1e51b31)

Take note of the `apikey` and `url` parameters; we will use them in **step 2d**.
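
The credentials appear as a small JSON document. The exact fields vary by service, but for Visual Recognition it includes at least the two we need (the values below are illustrative placeholders, not real credentials):

```
{
  "apikey": "example-api-key",
  "url": "https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/abc123"
}
```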

## Step 2c: Start the Cloud Shell

Now that you've created your service, let's call it from a separate environment. We could do this from our local machine, but IBM provides a handy tool for exactly this: the Cloud Shell.

In the upper right-hand corner of your IBM Cloud Dashboard, you'll see a small terminal icon:

![](https://845368379-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M0YJkXNeop-VQvo0z8l%2F-M2-l2fCZBdx8-wBWnpC%2F-M219MlL13gJyLjQfv7H%2Fimage.png?alt=media\&token=05c62679-22ff-4222-82e1-0eb3651f3663)

This is your **IBM Cloud Shell**. We'll be able to use this shell to ping our newly-created service.
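
The Cloud Shell typically comes with common tools preinstalled (`curl`, the `ibmcloud` CLI, `python3`). Once it opens, a quick check like the one below confirms `curl` is available before we call the service in the next step:

```shell
# Print the installed curl version -- if this prints a version line,
# the shell is ready for the classify request in step 2d.
curl --version | head -n 1
```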

## Step 2d: Call your Service

This is the command we'll execute in the Cloud Shell to test the service. Replace `{apikey}` and `{url}` with the values you noted in step 2b, and `{image_url}` with the URL of a publicly available image you'd like to analyze:

```
curl -u "apikey:{apikey}" "{url}/v3/classify?url={image_url}&version=2020-01-01"
```
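
For reference, here's a sketch of the same command with the placeholders pulled out into shell variables. All three values below are illustrative and must be replaced with your own:

```shell
# Illustrative placeholder values -- substitute your own apikey and
# url from step 2b, and any publicly reachable image URL.
APIKEY="example-api-key"
URL="https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/abc123"
IMAGE_URL="https://example.com/dog.jpg"

# Build the classify request; the version parameter pins the API
# behavior to a specific release date.
REQUEST="${URL}/v3/classify?url=${IMAGE_URL}&version=2020-01-01"
echo "$REQUEST"
# curl -u "apikey:${APIKEY}" "$REQUEST"    # uncomment to send it
```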

Execute this command, and we'll get a JSON response showing the default classifier's analysis of the image. Good job, you've created and called an IBM Watson service!
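
If you'd like to pull a single field out of that JSON, you can pipe it through `python3`, which is available in the Cloud Shell. The response below is a trimmed, illustrative sample in the general shape the classify endpoint returns, not actual service output:

```shell
# Trimmed, illustrative sample of a classify response (a real
# response contains more fields); extract the top class label.
RESPONSE='{"images":[{"classifiers":[{"classes":[{"class":"dog","score":0.96}]}]}]}'
echo "$RESPONSE" | python3 -c '
import json, sys
data = json.load(sys.stdin)
print(data["images"][0]["classifiers"][0]["classes"][0]["class"])
'
```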

