Getting Started

The Clarifai API offers image and video recognition as a service. Whether you have one image or billions, you are only steps away from using artificial intelligence to recognize your visual content.

The API is built around a simple idea. You send inputs (images) to the service and it returns predictions.

The type of prediction is based on what model you run the input through. For example, if you run your input through the 'food' model, the predictions it returns will contain concepts that the 'food' model knows about. If you run your input through the 'color' model, it will return predictions about the dominant colors in your image.


If you haven't yet created an account and received your free API key, please do so before proceeding with this guide. You can begin making API calls for free; no credit card is required.

Please note that your account will be limited to 100 API calls until you verify your email address. After verification, you will receive the full number of API calls under your plan.

All API access is over HTTPS via the https://api.clarifai.com domain. The relative path prefix /v2/ indicates that we are currently using version 2 of the API.

In the examples below, we use curly braces {variable} to indicate a variable you should replace with a real value.

API Clients

We recommend using a client library to access the API. We offer official clients in various languages. View our API Clients.

For REST documentation please see the cURL examples.

Client Installation Instructions

// The JavaScript client works in both Node.js and the browser.


// Install the client from NPM

npm install clarifai

// Require the client

var Clarifai = require('clarifai');

// initialize with your clientId and clientSecret

var app = new Clarifai.App(
  '{clientId}',
  '{clientSecret}'
);

// You can also use the client directly in your browser:

<script type="text/javascript" src="https://sdk.clarifai.com/js/clarifai-latest.js"></script>
<script>
  var app = new Clarifai.App(
    '{clientId}',
    '{clientSecret}'
  );
</script>

# Pip install the client:
# pip install clarifai


# The package will be accessible by importing clarifai:

from clarifai import rest
from clarifai.rest import ClarifaiApp
from clarifai.rest import Image as ClImage

# The client takes the `APP_ID` and `APP_SECRET` you created in your Clarifai
# account. You can set these variables in your environment as:

# - `CLARIFAI_APP_ID`
# - `CLARIFAI_APP_SECRET`

app = ClarifaiApp()

// Our API client is hosted on jCenter, Maven Central, and JitPack.

///////////////////////////////////////////////////////////////////////////////
// Installation - via Gradle (recommended)
///////////////////////////////////////////////////////////////////////////////

// Add the client to your dependencies:
compile 'com.clarifai.clarifai-api2:core:2.2.6'

// For Android users:
compile 'com.clarifai.clarifai-api2:android:2.2.6'
// Also, on Android, you must suppress an invalid Android lint error:
// https://guides.codepath.com/android/Consuming-APIs-with-Retrofit#issues

///////////////////////////////////////////////////////////////////////////////
// Installation - via Maven
///////////////////////////////////////////////////////////////////////////////

/*
<!-- Add the client to your dependencies: -->
<dependency>
  <groupId>com.clarifai.clarifai-api2</groupId>
  <artifactId>core</artifactId>
  <version>2.2.6</version>
</dependency>
*/

///////////////////////////////////////////////////////////////////////////////
// Initialize client
///////////////////////////////////////////////////////////////////////////////

new ClarifaiBuilder("{clientId}", "{clientSecret}")
    .client(new OkHttpClient()) // OPTIONAL. Allows customization of OkHttp by the user
    .buildSync() // or use .build() to get a Future<ClarifaiClient>

    // if a Client is registered as a default instance, it will be used
    // automatically, without the user having to keep it around as a field.
    // This can be omitted if you want to manually manage your instance
    .registerAsDefaultInstance();

// Installation via CocoaPods - https://cocoapods.org

// 1. Create a new XCode project, or use a current one.

// 2. Add the following to your Podfile:

//  pod 'Clarifai'

// 3. Install dependencies and generate workspace.

//  pod install

// 4. Open the workspace in Xcode

//  open YOUR_PROJECT_NAME.xcworkspace

// 5. You are now able to import ClarifaiApp.h and any other classes you need!

  #import "ClarifaiApp.h"

// Note: if you are using Swift in your project, make sure to include use_frameworks! in your Podfile. Then import Clarifai as a module.

  import Clarifai

// Install cURL: https://curl.haxx.se/download.html

Preview UI

In addition to the API, Clarifai now offers a website that allows you to preview your Clarifai applications. You can view all the inputs you have added, perform searches, and train new models.


You can access the Preview UI here.

For a step-by-step guide on building and training a custom model using the Preview UI, check out this walkthrough.

Authentication

Authentication to the API is handled using OAuth2 client credentials. Each application you create has a unique Client ID and Client Secret which you will use to exchange for an Access Token. You then use this Access Token to make authorized API calls.

If you are using a client to access the API, token exchange and refresh is handled for you. You only need to initialize the client with your Client ID and Client Secret.

The three main components of OAuth2 client credentials:

client_id
This identifies which application is trying to access the API. This is unique and generated once for each application in your account.

client_secret
This provides security when authorizing with the API. This is unique and generated once for each application in your account.

access_token
This is used to authorize your access to the API. Access tokens expire regularly and must be renewed on an ongoing basis.

For more information regarding OAuth2, please see the spec.

Retrieve an Access Token

All API calls must include an access_token. To retrieve an access token, you send your client_id and client_secret as Basic Auth to the /token endpoint.


var app = new Clarifai.App('{clientId}', '{clientSecret}');
// This returns a promise with the success handler receiving a token object.
// This gets automatically called for you when you use the JS client to call the API.
app.getToken();

app = ClarifaiApp("{clientId}", "{clientSecret}")

# NOTE: you probably won't have to handle this. The API client automatically refreshes your access token
# before making any network calls if it is expired
app.auth.get_token()

new ClarifaiBuilder("{clientId}", "{clientSecret}").buildSync().registerAsDefaultInstance();

// NOTE: you probably won't have to handle this. The API client automatically refreshes your access token
// before making any network calls if it is expired
client.getDefaultInstance().getToken();

ClarifaiApp *app = [[ClarifaiApp alloc] initWithAppID:@"{clientId}"
                                            appSecret:@"{clientSecret}"];

// NOTE: you probably won't have to handle this. The API client automatically refreshes your access token
// before making any network calls if it is expired
app.accessToken;

curl -X POST \
  -u "$CLARIFAI_APP_ID:$CLARIFAI_APP_SECRET" \
  -d "grant_type=client_credentials" \
  https://api.clarifai.com/v2/token

The JSON response will include your access_token. Please note the expires_in time: access tokens expire regularly and must be renewed on an ongoing basis. You can renew by retrieving a new Access Token as described above.

Response
{
  "status": {
    "code": 10000,
    "description": "Success"
  },
  "access_token": "",
  "expires_in": 176400,
  "scope": "api_access_write api_access api_access_read"
}

You can now use the access_token value to authorize your API calls. If you are using a client, authentication will be handled for you. If using the REST API, this is achieved by using the Authorization header as described below:


Authorization: Bearer {access_token}
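
If you are calling the REST API directly rather than through a client, the token exchange and the Authorization header look like this in Python. This is a minimal sketch using the third-party requests library (not an official Clarifai client); the model id is the general model id used in the predict examples below, and error handling is omitted.

# A minimal sketch of the raw REST flow using the `requests` library.
import requests

CLIENT_ID = '{clientId}'
CLIENT_SECRET = '{clientSecret}'

# Exchange your Client ID and Client Secret for an access token.
token_response = requests.post(
    'https://api.clarifai.com/v2/token',
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={'grant_type': 'client_credentials'},
)
access_token = token_response.json()['access_token']

# Use the access token as a Bearer token on subsequent calls, for example
# a predict request against the general model (covered in the next section).
predict_response = requests.post(
    'https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs',
    headers={'Authorization': 'Bearer ' + access_token},
    json={'inputs': [{'data': {'image': {'url': 'https://samples.clarifai.com/metro-north.jpg'}}}]},
)
print(predict_response.json()['status'])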

Predict

Predict analyzes your images and tells you what's inside of them.

The API will return a list of concepts with corresponding probabilities indicating how likely it is that these concepts are contained within the image.

When you make a prediction through the API, you tell it what model to use. A model contains a group of concepts. A model will only 'see' the concepts it contains.

Via URL

To get predictions for an input, you need to supply an image and the model you'd like to get predictions from. You can supply an image either with a publicly accessible URL or by directly sending image bytes. You can send up to 128 images in one API call. You specify the model you'd like to use with the modelId parameter.

Below is an example of how you would send an image URL and receive back predictions from the general model.

You can learn all about the different public models available later in the guide.


app.models.predict(Clarifai.GENERAL_MODEL, "https://samples.clarifai.com/metro-north.jpg").then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('general-v1.3')
image = ClImage(url='https://samples.clarifai.com/metro-north.jpg')
model.predict([image])

client.getDefaultModels().generalModel().predict()
    .withInputs(
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg"))
    )
    .executeSync();

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
[_app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
    [model predictOnImages:@[image]
                completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
                    NSLog(@"outputs: %@", outputs);
                }];
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "ea68cac87c304b28a8046557062f34a0",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2016-11-22T16:50:25Z",
      "model": {
        "name": "general-v1.3",
        "id": "aaa03c23b3724a16a56b629203edc62c",
        "created_at": "2016-03-09T17:11:39Z",
        "app_id": null,
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept"
        },
        "model_version": {
          "id": "aa9ca48295b37401f8af92ad1af0d91d",
          "created_at": "2016-07-13T01:19:12Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "ea68cac87c304b28a8046557062f34a0",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            "id": "ai_HLmqFqBf",
            "name": "train",
            "app_id": null,
            "value": 0.9989112
          },
          {
            "id": "ai_fvlBqXZR",
            "name": "railway",
            "app_id": null,
            "value": 0.9975532
          },
          {
            "id": "ai_Xxjc3MhT",
            "name": "transportation system",
            "app_id": null,
            "value": 0.9959158
          },
          {
            "id": "ai_6kTjGfF6",
            "name": "station",
            "app_id": null,
            "value": 0.992573
          },
          {
            "id": "ai_RRXLczch",
            "name": "locomotive",
            "app_id": null,
            "value": 0.992556
          },
          {
            "id": "ai_VRmbGVWh",
            "name": "travel",
            "app_id": null,
            "value": 0.98789215
          },
          {
            "id": "ai_SHNDcmJ3",
            "name": "subway system",
            "app_id": null,
            "value": 0.9816359
          },
          {
            "id": "ai_jlb9q33b",
            "name": "commuter",
            "app_id": null,
            "value": 0.9712483
          },
          {
            "id": "ai_46lGZ4Gm",
            "name": "railroad track",
            "app_id": null,
            "value": 0.9690325
          },
          {
            "id": "ai_tr0MBp64",
            "name": "traffic",
            "app_id": null,
            "value": 0.9687052
          },
          {
            "id": "ai_l4WckcJN",
            "name": "blur",
            "app_id": null,
            "value": 0.9667078
          },
          {
            "id": "ai_2gkfMDsM",
            "name": "platform",
            "app_id": null,
            "value": 0.9624243
          },
          {
            "id": "ai_CpFBRWzD",
            "name": "urban",
            "app_id": null,
            "value": 0.960752
          },
          {
            "id": "ai_786Zr311",
            "name": "no person",
            "app_id": null,
            "value": 0.95864904
          },
          {
            "id": "ai_6lhccv44",
            "name": "business",
            "app_id": null,
            "value": 0.95720303
          },
          {
            "id": "ai_971KsJkn",
            "name": "track",
            "app_id": null,
            "value": 0.9494642
          },
          {
            "id": "ai_WBQfVV0p",
            "name": "city",
            "app_id": null,
            "value": 0.94089437
          },
          {
            "id": "ai_dSCKh8xv",
            "name": "fast",
            "app_id": null,
            "value": 0.9399334
          },
          {
            "id": "ai_TZ3C79C6",
            "name": "road",
            "app_id": null,
            "value": 0.93121606
          },
          {
            "id": "ai_VSVscs9k",
            "name": "terminal",
            "app_id": null,
            "value": 0.9230834
          }
        ]
      }
    }
  ]
}

Via Image Bytes

Below is an example of how you would send the bytes of one image and receive back predictions from the general model.


app.models.predict(Clarifai.GENERAL_MODEL, {base64: "G7p3m95uAl..."}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('general-v1.3')
image = ClImage(file_obj=open('/home/user/image.jpeg', 'rb'))
model.predict([image])

client.getDefaultModels().generalModel().predict()
    .withInputs(
        ClarifaiInput.forImage(ClarifaiImage.of(new File("/home/user/image.jpeg")))
    )
    .executeSync();

UIImage *image = [UIImage imageNamed:@"dress.jpg"];
ClarifaiImage *clarifaiImage = [[ClarifaiImage alloc] initWithImage:image];
[_app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
    [model predictOnImages:@[clarifaiImage]
                completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
                    NSLog(@"outputs: %@", outputs);
                }];
}];

// Smaller files (195 KB or less)

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "base64": "'"$(base64 /home/user/image.jpeg)"'"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs

// Larger Files (Greater than 195 KB)

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d @- https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs << FILEIN
  {
    "inputs": [
      {
        "data": {
          "image": {
            "base64": "$(base64 /home/user/image.png)"
          }
        }
      }
    ]
  }
FILEIN
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "e1cf385843b94c6791bbd9f2654db5c0",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2016-11-22T16:59:23Z",
      "model": {
        "name": "general-v1.3",
        "id": "aaa03c23b3724a16a56b629203edc62c",
        "created_at": "2016-03-09T17:11:39Z",
        "app_id": null,
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept"
        },
        "model_version": {
          "id": "aa9ca48295b37401f8af92ad1af0d91d",
          "created_at": "2016-07-13T01:19:12Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "e1cf385843b94c6791bbd9f2654db5c0",
        "data": {
          "image": {
            "url": "https://s3.amazonaws.com/clarifai-api/img/prod/b749af061d564b829fb816215f6dc832/e11c81745d6d42a78ef712236023df1c.jpeg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            "id": "ai_l4WckcJN",
            "name": "blur",
            "app_id": null,
            "value": 0.9973569
          },
          {
            "id": "ai_786Zr311",
            "name": "no person",
            "app_id": null,
            "value": 0.98865616
          },
          {
            "id": "ai_JBPqff8z",
            "name": "art",
            "app_id": null,
            "value": 0.986006
          },
          {
            "id": "ai_5rD7vW4j",
            "name": "wallpaper",
            "app_id": null,
            "value": 0.9722556
          },
          {
            "id": "ai_sTjX6dqC",
            "name": "abstract",
            "app_id": null,
            "value": 0.96476805
          },
          {
            "id": "ai_Dm5GLXnB",
            "name": "illustration",
            "app_id": null,
            "value": 0.922542
          },
          {
            "id": "ai_5xjvC0Tj",
            "name": "background",
            "app_id": null,
            "value": 0.8775655
          },
          {
            "id": "ai_tBcWlsCp",
            "name": "nature",
            "app_id": null,
            "value": 0.87474406
          },
          {
            "id": "ai_rJGvwlP0",
            "name": "insubstantial",
            "app_id": null,
            "value": 0.8196385
          },
          {
            "id": "ai_2Bh4VMrb",
            "name": "artistic",
            "app_id": null,
            "value": 0.8142488
          },
          {
            "id": "ai_mKzmkKDG",
            "name": "Christmas",
            "app_id": null,
            "value": 0.7996079
          },
          {
            "id": "ai_RQccV41p",
            "name": "woman",
            "app_id": null,
            "value": 0.7955615
          },
          {
            "id": "ai_20SCBBZ0",
            "name": "vector",
            "app_id": null,
            "value": 0.7775099
          },
          {
            "id": "ai_4sJLn6nX",
            "name": "dark",
            "app_id": null,
            "value": 0.7715479
          },
          {
            "id": "ai_5Kp5FMJw",
            "name": "still life",
            "app_id": null,
            "value": 0.7657637
          },
          {
            "id": "ai_LM64MDHs",
            "name": "shining",
            "app_id": null,
            "value": 0.7542407
          },
          {
            "id": "ai_swtdphX8",
            "name": "love",
            "app_id": null,
            "value": 0.74926054
          },
          {
            "id": "ai_h45ZTxZl",
            "name": "square",
            "app_id": null,
            "value": 0.7449074
          },
          {
            "id": "ai_cMfj16kJ",
            "name": "design",
            "app_id": null,
            "value": 0.73926914
          },
          {
            "id": "ai_LxrzLJmf",
            "name": "bright",
            "app_id": null,
            "value": 0.73790145
          }
        ]
      }
    }
  ]
}

Search

A common case for using Clarifai is to get the concepts predicted in an image and then use those concepts to power search.

The Search API allows you to send images (url or bytes) to the service and have them indexed by 'general' model concepts and their visual representations.

Once indexed, you can search for images by concept or using reverse image search.


Add Images to Search Index

To get started with search, you must first add images to the search index. You can add one or more images to the index at a time. You can supply an image either with a publicly accessible URL or by directly sending image bytes. You can send up to 128 images in one API call.


app.inputs.create([
  {url: "https://samples.clarifai.com/metro-north.jpg"},
  {url: "https://samples.clarifai.com/wedding.jpg"},
  {base64: "G7p3m95uAl..."}
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# add image from image url
app.inputs.create_image_from_url("https://samples.clarifai.com/metro-north.jpg")

# add image from image filename
app.inputs.create_image_from_filename("local/file.jpg")

# add image from raw image bytes
raw_bytes = open("local/file.jpg", "rb").read()
app.inputs.create_image_from_bytes(raw_bytes)

# add image from base64 encoded image bytes
raw_bytes = open("local/file.jpg", "rb").read()
base64_bytes = base64.b64encode(raw_bytes)
app.inputs.create_image_from_base64(base64_bytes)

client.addInputs()
    .plus(
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")),
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/wedding.jpg"))
    )
    .executeSync();

ClarifaiImage *image1 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
ClarifaiImage *image2 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/wedding.jpg"];

[app addInputs:@[image1, image2] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      },
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/wedding.jpg"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "inputs": [
    {
      "id": "edc70c917475499abdc7151f41d6cf3e",
      "created_at": "2016-11-22T17:06:02Z",
      "data": {
        "image": {
          "url": "https://samples.clarifai.com/metro-north.jpg"
        }
      },
      "status": {
        "code": 30001,
        "description": "Download pending"
      }
    },
    {
      "id": "f96ca3bbf02041c59addcc13e3468b7d",
      "created_at": "2016-11-22T17:06:02Z",
      "data": {
        "image": {
          "url": "https://samples.clarifai.com/wedding.jpg"
        }
      },
      "status": {
        "code": 30001,
        "description": "Download pending"
      }
    }
  ]
}

Search By Concept

Once your images are indexed, you can search for them by concept.


app.inputs.search({ concept: {name: 'people'} }).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search by predicted concept
app.inputs.search_by_predicted_concepts(concept='people')

# search by a list of concepts
app.inputs.search_by_predicted_concepts(concepts=['people'])

# search by concept id
app.inputs.search_by_predicted_concepts(concept_id='ai_dP13sXL4')

# search by a list of concept ids
app.inputs.search_by_predicted_concepts(concept_ids=['ai_dP13sXL4'])

client.searchInputs(SearchClause.matchConcept(Concept.forName("people")))
    .buildSync()
    .getPage(1)
    .executeSync();

// First create a search term with a concept you want to search.
ClarifaiConcept *conceptFromGeneralModel = [[ClarifaiConcept alloc] initWithConceptName:@"people"];
ClarifaiSearchTerm *searchTerm = [ClarifaiSearchTerm searchByPredictedConcept:conceptFromGeneralModel];

[app search:@[searchTerm] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of predicted concept: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "output": {
            "data": {
              "concepts": [
                {
                  "name": "people"
                }
              ]
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "hits": [
    {
      "score": 0.98155165,
      "input": {
        "id": "f96ca3bbf02041c59addcc13e3468b7d",
        "created_at": "2016-11-22T17:06:02Z",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/wedding.jpg"
          }
        },
        "status": {
          "code": 30000,
          "description": "Download complete"
        }
      }
    }
  ]
}

You can also search for images using reverse image search. In this case, you provide an image (url or bytes) and the results will return all the images in your search index that are visually similar to the one provided.


app.inputs.search({ input: {url: 'https://samples.clarifai.com/puppy.jpeg'} }).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search by image url
app.inputs.search_by_image(url="https://samples.clarifai.com/metro-north.jpg")

# search by existing input id
input_id = "some_existing_input_id"
app.inputs.search_by_image(image_id=input_id)

# search by raw bytes
data = "image_raw_bytes"
app.inputs.search_by_image(imgbytes=data)

# search by base64 bytes
base64_data = "image_bytes_encoded_in_base64"
app.inputs.search_by_image(base64bytes=base64_data)

# search by local filename
filename="filename_on_local_disk.jpg"
app.inputs.search_by_image(filename=filename)

# search from fileio
fio = open("filename_on_local_disk.jpg", 'rb')
app.inputs.search_by_image(fileobj=fio)

client.searchInputs(SearchClause.matchImageVisually(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")))
    .getPage(1)
    .executeSync();

ClarifaiSearchTerm *searchTerm = [ClarifaiSearchTerm searchVisuallyWithImageURL:@"https://samples.clarifai.com/metro-north.jpg"];

[app search:@[searchTerm] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of predicted concept: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "output": {
            "input": {
              "data": {
                "image": {
                  "url": "https://samples.clarifai.com/metro-north.jpg"
                }
              }
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "hits": [
    {
      "score": 0.9999997,
      "input": {
        "id": "edc70c917475499abdc7151f41d6cf3e",
        "created_at": "2016-11-22T17:06:02Z",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        },
        "status": {
          "code": 30000,
          "description": "Download complete"
        }
      }
    },
    {
      "score": 0.3915897,
      "input": {
        "id": "f96ca3bbf02041c59addcc13e3468b7d",
        "created_at": "2016-11-22T17:06:02Z",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/wedding.jpg"
          }
        },
        "status": {
          "code": 30000,
          "description": "Download complete"
        }
      }
    }
  ]
}

Train

Clarifai provides many different models that 'see' the world differently. A model contains a group of concepts. A model will only see the concepts it contains.

There are times when you wish you had a model that sees the world the way you see it. The API allows you to do this. You can create your own model and train it with your own images and concepts. Once you train it to see how you would like it to see, you can then use that model to make predictions.

You do not need many images to get started. We recommend starting with 10 and adding more as needed.


Add Images With Concepts

To get started training your own model, you must first add images that already contain the concepts you want your model to see.


app.inputs.create({
  url: "https://samples.clarifai.com/puppy.jpeg",
  concepts: [
    {
      id: "boscoe",
      value: true
    }
  ]
});

app.inputs.create_image_from_url(url="https://samples.clarifai.com/puppy.jpeg", concepts=['boscoe'])

client.addInputs()
    .plus(
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg"))
            .withConcepts(Concept.forID("boscoe"))
    )
    .executeSync();

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg" andConcepts:@"cute puppy"];
[_app addInputs:@[image] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          },
          "concepts":[
            {
              "id": "boscoe",
              "value": true
            }
          ]
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "inputs": [
    {
      "id": "e82fd13b11354d808cc48dc8f94ec3a9",
      "created_at": "2016-11-22T17:16:00Z",
      "data": {
        "image": {
          "url": "https://samples.clarifai.com/puppy.jpeg"
        },
        "concepts": [
          {
            "id": "boscoe",
            "name": "boscoe",
            "app_id": "f09abb8a57c041cbb94759ebb0cf1b0d",
            "value": 1
          }
        ]
      },
      "status": {
        "code": 30000,
        "description": "Download complete"
      }
    }
  ]
}

Create A Model

Once your images with concepts are added, you are now ready to create the model. You'll need a name for the model and you'll also need to provide it with the concepts you added above.

Take note of the model id that is returned in the response. You'll need that for the next two steps.


app.models.create(
  "pets",
  [
    { "id": "boscoe" }
  ]
).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

model = app.models.create('pets', concepts=['boscoe'])

client.createModel("pets")
    .withOutputInfo(ConceptOutputInfo.forConcepts(
        Concept.forID("boscoe")
    ))
    .executeSync();

[app createModel:@[concept] name:modelName conceptsMutuallyExclusive:NO closedEnvironment:NO
      completion:^(ClarifaiModel *model, NSError *error) {
        NSLog(@"model: %@", model);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "model": {
      "id": "pets",
      "output_info": {
        "data": {
          "concepts": [
            {
              "id": "boscoe"
            }
          ]
        },
        "output_config": {
          "concepts_mutually_exclusive": false,
          "closed_environment":false
        }
      }
    }
  }'\
  https://api.clarifai.com/v2/models
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "model": {
    "name": "pets",
    "id": "a10f0cf48cf3426cbb8c4805e246c214",
    "created_at": "2016-11-22T17:17:36Z",
    "app_id": "f09abb8a57c041cbb94759ebb0cf1b0d",
    "output_info": {
      "message": "Show output_info with: GET /models/{model_id}/output_info",
      "type": "concept",
      "output_config": {
        "concepts_mutually_exclusive": false,
        "closed_environment": false
      }
    },
    "model_version": {
      "id": "e7bcd534b61b4874a3ab69fba974c012",
      "created_at": "2016-11-22T17:17:36Z",
      "status": {
        "code": 21102,
        "description": "Model not yet trained"
      }
    }
  }
}

Train The Model

Now that you've added images with concepts and created a model with those concepts, the next step is to train the model. When you train a model, you are telling the system to look at all the images with concepts you've provided and learn from them. This train operation is asynchronous. It may take a few seconds for your model to be fully trained and ready.


app.models.train("{model_id}").then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

// or if you have an instance of a model

model.train().then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('{model_id}')
model.train()

client.trainModel("{model_id}").executeSync();

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg"];
[app getModel:@"{id}" completion:^(ClarifaiModel *model, NSError *error) {
    [model train:^(ClarifaiModel *model, NSError *error) {
        NSLog(@"model: %@", model);
    }];
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  https://api.clarifai.com/v2/models/{model_id}/versions
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "model": {
    "name": "pets",
    "id": "a10f0cf48cf3426cbb8c4805e246c214",
    "created_at": "2016-11-22T17:17:36Z",
    "app_id": "f09abb8a57c041cbb94759ebb0cf1b0d",
    "output_info": {
      "message": "Show output_info with: GET /models/{model_id}/output_info",
      "type": "concept",
      "output_config": {
        "concepts_mutually_exclusive": false,
        "closed_environment": false
      }
    },
    "model_version": {
      "id": "d1b38fd2251148d08675c5542ef00c7b",
      "created_at": "2016-11-22T17:21:13Z",
      "status": {
        "code": 21103,
        "description": "Custom model is currently in queue for training, waiting on inputs to process."
      }
    }
  }
}
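
Because training is asynchronous, you may want to wait until the model version reports a trained status before predicting. Below is a minimal sketch of one way to poll for this over REST; it assumes the GET /models/{model_id}/output_info endpoint referenced in the responses above returns the model object together with its model_version, and uses status code 21100 ('Model trained successfully') as the signal to stop.

# A minimal polling sketch (assumption: output_info includes model_version).
import time
import requests

MODEL_ID = '{model_id}'
ACCESS_TOKEN = '{access_token}'

while True:
    resp = requests.get(
        'https://api.clarifai.com/v2/models/%s/output_info' % MODEL_ID,
        headers={'Authorization': 'Bearer ' + ACCESS_TOKEN},
    )
    status_code = resp.json()['model']['model_version']['status']['code']
    if status_code == 21100:  # model trained successfully
        break
    time.sleep(2)             # still queued or training; wait and retry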

Predict With The Model

Now the moment you've been waiting for. First you added images with concepts. Then you created a model with those concepts. Then you trained the model on those images. Now you are ready to use your new model to get predictions. The predictions returned will only contain the concepts that you told it to see.

Note: you can repeat the above steps as often as you like. By adding more images with concepts and training, you can get the model to predict exactly how you want it to.


app.models.predict("{model_id}", ["https://samples.clarifai.com/puppy.jpeg"]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

// or if you have an instance of a model

model.predict("https://samples.clarifai.com/puppy.jpeg").then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('{model_id}')

image = ClImage(url='https://samples.clarifai.com/puppy.jpeg')
model.predict([image])

client.predict("{model_id}")
    .withInputs(
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg"))
    )
    .executeSync();

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg"];
[app getModel:@"{id}" completion:^(ClarifaiModel *model, NSError *error) {
    [model predictOnImages:@[image]
                completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
                    NSLog(@"outputs: %@", outputs);
                }];
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/models/{model_id}/outputs
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "e8b6eb27de764f3fa8d4f7752a3a2dfc",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2016-11-22T17:22:23Z",
      "model": {
        "name": "pets",
        "id": "a10f0cf48cf3426cbb8c4805e246c214",
        "created_at": "2016-11-22T17:17:36Z",
        "app_id": "f09abb8a57c041cbb94759ebb0cf1b0d",
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept",
          "output_config": {
            "concepts_mutually_exclusive": false,
            "closed_environment": false
          }
        },
        "model_version": {
          "id": "d1b38fd2251148d08675c5542ef00c7b",
          "created_at": "2016-11-22T17:21:13Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "e8b6eb27de764f3fa8d4f7752a3a2dfc",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            "id": "boscoe",
            "name": "boscoe",
            "app_id": "f09abb8a57c041cbb94759ebb0cf1b0d",
            "value": 0.98308545
          }
        ]
      }
    }
  ]
}

Public Models

Clarifai provides a variety of pre-trained models that you can use to make predictions. These models were developed in-house and have been thoroughly tested. Before you train your own model, we suggest trying these out to see if they fit your needs.

You can browse all of our models here.

Applications

API calls are tied to an account and an application. Any models you create or search indexes you add images to will be contained within an application.

You can create as many applications as you want and can edit or delete them as you see fit. Each application has a unique Client ID and Client Secret. These are used for authentication. You can learn more about authentication below.

Create an Application

To create an application, head on over to the applications page and press the 'Create a New Application' button.


At a minimum, you'll need to provide an application name. You may also set the default model and language. If you plan on using a language other than English, you must use the 'general-v1.3' model. You can learn more about models and languages in the public model guide above.


Edit an Application

If at any point you'd like to change the application name or default settings, you may do so by visiting the application page and changing the values.


Delete an Application

If you'd like to delete an application, you may do so at any time by visiting the application page and pressing the 'Delete application' button. You'll be asked to confirm your change. Please note that once you delete an application, we cannot recover it. You will also lose all images, concepts and models associated with that application. Proceed with caution.

Languages

The Clarifai API supports many languages in addition to English. When making a predict API request, you can pass in the language you would like the concepts returned in.

Supported Languages

Language                 Code
Arabic                   ar
Bengali                  bn
Danish                   da
German                   de
English                  en
Spanish                  es
Finnish                  fi
French                   fr
Hindi                    hi
Hungarian                hu
Italian                  it
Japanese                 ja
Korean                   ko
Dutch                    nl
Norwegian                no
Punjabi                  pa
Polish                   pl
Portuguese               pt
Russian                  ru
Swedish                  sv
Turkish                  tr
Chinese Simplified       zh
Chinese Traditional      zh-TW

Supported Models

The only public model which supports languages other than English is the General model. If you make a predict request using a language other than English on a public model other than General, you will receive an error.

Default Language

When you create a new Application, you must specify a default language. This will be the default language concepts are returned in when you do not explicitly set a language in an API request. You cannot change the default language. You can, however, change languages per request.

If your application has a default language that is not English, and you would like to use a model other than the General model, you must explicitly pass in English in each request as described below.
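
For example, if your application's default language is not English and you want predictions from a public model other than General, a minimal sketch with the Python client (following the predict_by_url pattern shown in the example request below) is to pass lang='en' explicitly. The '{model_name}' value is a placeholder for the public model you want to use.

from clarifai.rest import ClarifaiApp

app = ClarifaiApp()

# '{model_name}' is a placeholder for a public model other than General.
model = app.models.get('{model_name}')

# Explicitly request English concepts, overriding the app's non-English default.
model.predict_by_url('https://samples.clarifai.com/metro-north.jpg', lang='en')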


Example Predict API Request

You can predict concepts in a language other than the Application's default by explicitly passing in the language. Here is how you predict concepts in Chinese:


app.models.predict({ id: Clarifai.GENERAL_MODEL, language: 'zh' }, "https://samples.clarifai.com/metro-north.jpg").then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

m = app.models.get('general-v1.3')

# predict labels in simplified Chinese
m.predict_by_url('https://samples.clarifai.com/metro-north.jpg', lang='zh')

# predict labels in Japanese
m.predict_by_url('https://samples.clarifai.com/metro-north.jpg', lang='ja')

client.predict(client.getDefaultModels().generalModel().id())
        .withInputs(ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")))
        .withLanguage("zh")
        .executeSync();

// first get the general model.
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
  // create input to predict on.
  ClarifaiImage *input = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];

  // predict with the general model in Chinese.
  [model predictOnImages:@[input] withLanguage:@"zh" completion:^(NSArray<ClarifaiOutput *> *outputs, NSError *error) {
    for (ClarifaiConcept *concept in outputs[0].concepts) {
      NSLog(@"tag: %@", concept.conceptName);
      NSLog(@"probability: %f", concept.score);
    }
  }];
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
  "inputs": [
    {
      "data": {
        "image": {
          "url": "https://samples.clarifai.com/metro-north.jpg"
        }
      }
    }
  ],
  "model":{
    "output_info":{
      "output_config":{
        "language":"zh"
      }
    }
  }
}'\
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "b9f3c12f1534440fa984dc463e491780",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2017-01-31T20:59:27Z",
      "model": {
        "name": "general-v1.3",
        "id": "aaa03c23b3724a16a56b629203edc62c",
        "created_at": "2016-03-09T17:11:39Z",
        "app_id": null,
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept"
        },
        "model_version": {
          "id": "aa9ca48295b37401f8af92ad1af0d91d",
          "created_at": "2016-07-13T01:19:12Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "b9f3c12f1534440fa984dc463e491780",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            "id": "ai_HLmqFqBf",
            "name": "铁路列车",
            "app_id": null,
            "value": 0.9989112
          },
          {
            "id": "ai_fvlBqXZR",
            "name": "铁路",
            "app_id": null,
            "value": 0.9975532
          },
          {
            "id": "ai_Xxjc3MhT",
            "name": "运输系统",
            "app_id": null,
            "value": 0.9959158
          },
          {
            "id": "ai_6kTjGfF6",
            "name": "站",
            "app_id": null,
            "value": 0.992573
          },
          {
            "id": "ai_RRXLczch",
            "name": "火车",
            "app_id": null,
            "value": 0.992556
          },
          {
            "id": "ai_VRmbGVWh",
            "name": "旅游",
            "app_id": null,
            "value": 0.98789215
          },
          {
            "id": "ai_SHNDcmJ3",
            "name": "地铁",
            "app_id": null,
            "value": 0.9816359
          },
          {
            "id": "ai_jlb9q33b",
            "name": "通勤",
            "app_id": null,
            "value": 0.9712483
          },
          {
            "id": "ai_46lGZ4Gm",
            "name": "铁路",
            "app_id": null,
            "value": 0.9690325
          },
          {
            "id": "ai_tr0MBp64",
            "name": "交通",
            "app_id": null,
            "value": 0.9687052
          },
          {
            "id": "ai_l4WckcJN",
            "name": "模煳",
            "app_id": null,
            "value": 0.9667078
          },
          {
            "id": "ai_2gkfMDsM",
            "name": "平台",
            "app_id": null,
            "value": 0.9624243
          },
          {
            "id": "ai_CpFBRWzD",
            "name": "城市的",
            "app_id": null,
            "value": 0.960752
          },
          {
            "id": "ai_786Zr311",
            "name": "沒有人",
            "app_id": null,
            "value": 0.95864904
          },
          {
            "id": "ai_6lhccv44",
            "name": "商业",
            "app_id": null,
            "value": 0.95720303
          },
          {
            "id": "ai_971KsJkn",
            "name": "跑道",
            "app_id": null,
            "value": 0.9494642
          },
          {
            "id": "ai_WBQfVV0p",
            "name": "城市",
            "app_id": null,
            "value": 0.94089437
          },
          {
            "id": "ai_dSCKh8xv",
            "name": "快速的",
            "app_id": null,
            "value": 0.9399334
          },
          {
            "id": "ai_TZ3C79C6",
            "name": "马路",
            "app_id": null,
            "value": 0.93121606
          },
          {
            "id": "ai_VSVscs9k",
            "name": "终点站",
            "app_id": null,
            "value": 0.9230834
          }
        ]
      }
    }
  ]
}

Example Search By Tag API Request

You can search for concepts in other languages even if the default language of your application is English. When you add inputs to your application, concepts are predicted for every language. Here is an example of searching for '人', which is Simplified Chinese for 'people'.


app.inputs.search({
  concept: {
    name: '人'
  },
  language: 'zh'
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search '人' in simplified Chinese
app.inputs.search_by_predicted_concepts(u'人', lang='zh')

client.searchInputs(
        SearchClause.matchImageURL(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg"))).withLanguage("zh")
        .getPage(1)
        .executeSync();

// create search term with concept you want to search predicted inputs with.
ClarifaiConcept *concept1 = [[ClarifaiConcept alloc] initWithConceptName:@"人"];
ClarifaiSearchTerm *searchTerm = [[ClarifaiSearchTerm alloc] initWithSearchItem:concept1 isInput:NO];

// search will find inputs predicted to be associated with the given concept.
[_app search:@[searchTerm] page:@1 perPage:@20 language:@"zh" completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  for (ClarifaiSearchResult *result in results) {
    NSLog(@"image url: %@", result.mediaURL);
    NSLog(@"probability: %f", [result.score floatValue]);
  }
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "output": {
            "data": {
              "concepts": [
                {
                  "name": "人"
                }
              ]
            }
          }
        }
      ],
      "language": "zh"
    }
  }'\
  https://api.clarifai.com/v2/searches

Example Search Concepts API Request

You can also search for concepts in a different language:


app.concepts.search('人*', 'zh').then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.concepts.search(u'人*', lang='zh')

client.searchConcepts("人*")
    .withLanguage("zh")
    .getPage(1)
    .executeSync();

// Search for all concept names in chinese, beginning with "人".
[_app searchForConceptsByName:@"人*" andLanguage:@"zh" completion:^(NSArray<ClarifaiConcept *> *concepts, NSError *error) {
  for (ClarifaiConcept *concept in concepts) {
    NSLog(@"tag name: %@", concept.conceptName);
  }
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "concept_query": {
      "name":"人*",
      "language": "zh"
    }
  }'\
  https://api.clarifai.com/v2/concepts/searches
Response
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "concepts": [
    {
      "id": "ai_l8TKp2h5",
      "name": "人",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_ZKJ48TFz",
      "name": "人",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_GlPlRlTZ",
      "name": "人为破坏",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_8ZsdCrVZ",
      "name": "人体模型",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_K1KL0zgk",
      "name": "人力的",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_Tm9d2BZ2",
      "name": "人口",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_NLF8h1fJ",
      "name": "人口",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_8bHdFtsg",
      "name": "人口",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_vLnr3Mcj",
      "name": "人孔",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_HRt4nfvL",
      "name": "人工智能",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_Qc3mqxTJ",
      "name": "人才",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_VFKQ0qD6",
      "name": "人物",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_Wz8JXXMB",
      "name": "人类免疫缺陷病毒",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_bzp3Lg81",
      "name": "人类的",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_dJ15S9s6",
      "name": "人群",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_MNCVrmml",
      "name": "人行天桥",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_CChWH41S",
      "name": "人行横道",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_4lbXrFgT",
      "name": "人造",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_277LRf4d",
      "name": "人造卫星",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    },
    {
      "id": "ai_H3RDmvSn",
      "name": "人造奶油",
      "created_at": "2016-03-17T11:43:01Z",
      "updated_at": "2016-03-17T11:43:01Z",
      "app_id": null,
      "language": "zh"
    }
  ]
}

Inputs

The API is built around a simple idea. You send inputs (images) to the service and it returns predictions. In addition to receiving predictions on inputs, you can also 'save' inputs and their predictions to later search against. You can also 'save' inputs with concepts to later train your own model.

Add Inputs

You can add inputs one by one or in bulk. If you send them in bulk, you are limited to 128 inputs per API call.

Images can either be publicly accessible URLs or file bytes. If you are sending file bytes, you must use base64 encoding.

You are encouraged to send inputs with your own id. This will help you later match the input to your own database. If you do not send an id, one will be created for you.
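
If you have more images than the 128-per-call limit mentioned above allows, one simple approach is to add them in batches. Below is a minimal sketch using the Python client's bulk_create_images method (shown later in this section); the list of URLs is a placeholder for your own.

from clarifai.rest import ClarifaiApp
from clarifai.rest import Image as ClImage

app = ClarifaiApp()

urls = [
    'https://samples.clarifai.com/metro-north.jpg',
    'https://samples.clarifai.com/wedding.jpg',
    # ... the rest of your image URLs
]

BATCH_SIZE = 128  # maximum number of inputs per API call

for start in range(0, len(urls), BATCH_SIZE):
    batch = [ClImage(url=u) for u in urls[start:start + BATCH_SIZE]]
    app.inputs.bulk_create_images(batch)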

Add an input using a publicly accessible URL

app.inputs.create({
  url: "https://samples.clarifai.com/metro-north.jpg"
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

image = app.inputs.create_image_from_url("https://samples.clarifai.com/metro-north.jpg")

client.addInputs()
    .plus(ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")))
    .executeSync();

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
[app addInputs:@[image] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs
Add an input using bytes

The data must be base64 encoded. When you add an image as base64, a copy will be stored and hosted on our servers. If you already have an image hosting service, we recommend using it and adding images via the url parameter.


app.inputs.create({
  base64: "Zvfauhti4D..."
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# add from filename
app.inputs.create_image_from_filename(filename)

# add from base64 bytes
app.inputs.create_image_from_base64(base64_bytes)

client.addInputs()
    .plus(ClarifaiInput.forImage(ClarifaiImage.of(new File("image.png"))))
    .executeSync();

ClarifaiImage *imageFromImage = [[ClarifaiImage alloc] initWithImage:@"dress.jpg"];
[app addInputs:@[imageFromImage] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "base64": '"`base64 /home/user/image.jpeg`"'"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs
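
If you are starting from a local file, you first need to read its raw bytes and base64-encode them. Here is a minimal Python sketch of that step (the file path is only an example), using the create_image_from_base64 call shown above:

import base64

from clarifai.rest import ClarifaiApp

app = ClarifaiApp()

# Read the raw bytes of a local file and base64-encode them before sending.
with open('/home/user/image.jpeg', 'rb') as f:
    encoded_bytes = base64.b64encode(f.read())

app.inputs.create_image_from_base64(encoded_bytes)
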
Add multiple inputs with ids

app.inputs.create([
  {
    url: "https://samples.clarifai.com/metro-north.jpg",
    id: 'train1'
  },
  {
    url: "https://samples.clarifai.com/puppy.jpeg",
    id: 'puppy1'
  }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

img1 = ClImage(url="https://samples.clarifai.com/metro-north.jpg", image_id="train1")
img2 = ClImage(url="https://samples.clarifai.com/puppy.jpeg", image_id="puppy1")

app.inputs.bulk_create_images([img1, img2])

client.addInputs()
    .plus(
        ClarifaiInput.forImage(
            ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")
        ).withConcepts(
            Concept.forID("id1")
        ),
        ClarifaiInput.forImage(
            ClarifaiImage.of("https://samples.clarifai.com/wedding.jpg")
        ).withConcepts(
            Concept.forID("id2")
        )
    )
    .executeSync();

ClarifaiImage *train = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
train.inputID = @"train";

ClarifaiImage *puppy = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg"];
puppy.inputID = @"puppy";

[app addInputs:@[train, puppy] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        },
        "id": "{id1}"
      },
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          }
        },
        "id": "{id2}"
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs

Add Inputs With Concepts

If you would like to add an input with concepts, you can do so like this. Concepts play an important role in creating your own models using your own concepts. You can learn more about creating your own models in the Models section below. Concepts also help you search for inputs, which is covered in the Searches section below.

When you add a concept to an input, you need to indicate whether the concept is present in the image or if it is not present.

You can add inputs with concepts as either a URL or bytes.


app.inputs.create({
  url: "https://samples.clarifai.com/puppy.jpeg",
  concepts: [
    {
      id: "boscoe",
      value: true
    }
  ]
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# add by url
app.inputs.create_image_from_url("https://samples.clarifai.com/puppy.jpeg", concepts=['boscoe'])

# add by base64 bytes
app.inputs.create_image_from_base64(base64_bytes, concepts=['boscoe'])

# add by raw bytes
app.inputs.create_image_from_bytes(raw_bytes, concepts=['boscoe'])

# add by local file
app.inputs.create_image_from_filename(local_filename, concepts=['boscoe'])

# add multiple with concepts
img1 = ClImage(url="https://samples.clarifai.com/puppy.jpeg", concepts=['boscoe'], not_concepts=['our_wedding'])
img2 = ClImage(url="https://samples.clarifai.com/wedding.jpg", concepts=['our_wedding'], not_concepts=['cat','boscoe'])

app.inputs.bulk_create_images([img1, img2])

client.addInputs()
    .plus(ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg"))
        .withConcepts(
            // To mark a concept as being absent, chain `.withValue(false)`
            Concept.forID("boscoe")
        )
    )
    .executeSync();

ClarifaiImage *puppy = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg"
                                              andConcepts:@[@"cute puppy"]];

[app addInputs:@[puppy] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          },
          "concepts":[
            {
              "id": "boscoe",
              "value": true
            }
          ]
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs

Add Inputs With Custom Metadata

In addition to adding an input with concepts, you can also add an input with custom metadata. This metadata will then be searchable. Metadata can be any arbitrary JSON.


app.inputs.create({
  url: "https://samples.clarifai.com/puppy.jpeg",
  metadata: {id: 'id001', type: 'plants', size: 100}
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# metadata must be defined as JSON object
metadata = {'id':'id001', 'type':'plants', 'size':100}

# adding metadata along with url, filename, etc
app.inputs.create_image_from_url(url="https://samples.clarifai.com/puppy.jpeg", metadata=metadata)
app.inputs.create_image_from_filename(filename="aa.jpg", metadata=metadata)

# define an image with metadata for bulk import
img = ClImage(url="", metadata=metadata)

app.inputs.bulk_create_images([img])

final JsonObject metadata = new JsonObject();
metadata.addProperty("isPuppy", true);
client.addInputs()
  .plus(
    ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg")).withMetadata(metadata)
  ).executeSync();

ClarifaiImage *puppy = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg"
                                              andConcepts:@[@"cute puppy"]];
puppy.metadata = @{@"my_key": @[@"my",@"values"], @"cuteness": @"extra-cute"};
[app addInputs:@[puppy] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
  NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg",
            "allow_duplicate_url": true
          },
          "metadata": {
            "key": "value",
            "list":[1,2,3]
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs

Add Input With A Crop

When adding an input, you can specify crop points. The API will crop the image and use the resulting image. Crop points are given as percentages from the top left point in the order of top, left, bottom and right.

As an example, if you provide a crop as 0.2, 0.4, 0.3, 0.6 that means the cropped image will have a top edge that starts 20% down from the original top edge, a left edge that starts 40% from the original left edge, a bottom edge that starts 30% from the original top edge and a right edge that starts 60% from the original left edge.
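
To make the arithmetic concrete, here is a small Python sketch (purely illustrative, assuming a hypothetical image that is 1000 pixels wide and 800 pixels tall) converting those fractional crop points into pixel coordinates:

# Hypothetical example: convert the fractional crop points [top, left, bottom, right]
# into pixel coordinates for a 1000x800 image.
width, height = 1000, 800
top, left, bottom, right = 0.2, 0.4, 0.3, 0.6

pixel_box = {
    'top': int(top * height),       # 160 px down from the original top edge
    'left': int(left * width),      # 400 px in from the original left edge
    'bottom': int(bottom * height), # 240 px down from the original top edge
    'right': int(right * width),    # 600 px in from the original left edge
}
print(pixel_box)  # {'top': 160, 'left': 400, 'bottom': 240, 'right': 600}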


app.inputs.create(
  {
    "url": "https://samples.clarifai.com/metro-north.jpg",
    "crop": [0.2, 0.4, 0.3, 0.6]
  }
).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# add url with crop
app.inputs.create_image_from_url(url="https://samples.clarifai.com/metro-north.jpg", crop=[0.2, 0.4, 0.3, 0.6])

client.addInputs()
    .plus(
        ClarifaiInput.forImage(
            ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")
                .withCrop(Crop.create()
                    .top(0.2F)
                    .left(0.4F)
                    .bottom(0.3F)
                    .right(0.6F)
                )
        )
    )
    .executeSync();

ClarifaiCrop *crop = [[ClarifaiCrop alloc] initWithTop:0.2 left:0.3 bottom:0.7 right:0.8];
ClarifaiImage *puppy = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg" andCrop:crop];

[app addInputs:@[puppy] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg",
            "crop": [0.2, 0.4, 0.3, 0.6]
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs

Get Inputs

You can list all the inputs (images) you have previously added, whether for search or for training.

If you added inputs with concepts, they will be returned in the response as well.

This request is paginated.


app.inputs.list({page: 1, perPage: 20}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# this is a generator
app.inputs.get_all()

# get a page of inputs
app.inputs.get_by_page(page=1, per_page=20)

client.getInputs() // optionally takes a perPage parameter
    .getPage(1)
    .executeSync();

[app getInputsOnPage:1 pageSize:20 completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/inputs
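
Since the request is paginated, a common pattern is to walk through pages until an empty page comes back. Here is a minimal Python sketch using the get_by_page call shown above (the page size of 20 is arbitrary):

all_inputs = []
page = 1
while True:
    # Fetch one page of previously added inputs.
    batch = app.inputs.get_by_page(page=page, per_page=20)
    if not batch:
        break
    all_inputs.extend(batch)
    page += 1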

Get Input By Id

If you'd like to get a specific input by id, you can do that as well.


app.inputs.get({id}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

image = app.inputs.get(input_id)

client.getInputByID("{id}").executeSync();

[_app getInput:input_id completion:^(ClarifaiInput *input, NSError *error) {
    NSLog(@"input": %@, input);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/inputs/{id}

Get Inputs Status

If you add inputs in bulk, they will process in the background. You can get the status of all your inputs (processed, to_process and errors) like this:


app.inputs.getStatus().then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.inputs.check_status()

client.getInputsStatus().executeSync();

[app getInputsStatus:^(int numProcessed, int numToProcess, int errors, NSError *error) {
    NSLog(@"number of inputs processed: %d", numProcessed);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/inputs/status
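
Because bulk inputs are processed in the background, you may want to poll this status until processing settles. A rough Python sketch built on the check_status call above (the exact fields of the returned status object are not reproduced here, so it is simply printed; the retry count and sleep interval are arbitrary):

import time

for _ in range(10):
    # check_status() reports counts of processed, to_process and errored inputs.
    status = app.inputs.check_status()
    print(status)
    time.sleep(5)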

Update Input With Concepts

To update an input with a new concept, or to change a concept's value between true and false, you can do the following:


app.inputs.mergeConcepts([
  {
    id: "{id}",
    concepts: [
      {
        id: "tree"
      },
      {
        id: "water",
        value: false
      }
    ]
  },
])


// or if you have an input instance
app.inputs.get({id}).then(
  function(input) {
    input.mergeConcepts([
      {
        id: "tree",
        value: true
      },
      {
        id: "water",
        value: false
      }
    ])
  },
  function(err) {
    // there was an error
  }
);

app.inputs.merge_concepts('{id}', concepts=['tree'], not_concepts=['water'])

client.modifyInput("")
    .withConcepts(
        Action.MERGE,
        Concept.forID("tree"),
        Concept.forID("water").withValue(false)
    )
    .executeSync();

ClarifaiConcept *concept = [[ClarifaiConcept alloc] initWithConceptName:@"cute cat"];
[_app addConcepts:@[concept] forInputWithID:@"{id}" completion:^(ClarifaiInput *input, NSError *error) {
    NSLog(@"input: %@", input);
}];

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "id": "{id}",
        "data": {
          "concepts": [
            {
              "id": "tree",
              "value": true
            },
            {
              "id": "water",
              "value": false
            }
          ]
        }
      }
    ],
    "action":"merge"
}'\
  https://api.clarifai.com/v2/inputs

Delete Concepts From An Input

To remove concepts that were already added to an input, you can do this:


app.inputs.deleteConcepts([
  {
    id: "{id}",
    concepts: [
      {
        id: "tree"
      },
      {
        id: "water",
        value: false
      }
    ]
  },
])

// or if you have an input instance
app.inputs.get({id}).then(
  function(input) {
    input.deleteConcepts([
      {
        id: "tree",
        value: true
      },
      {
        id: "water",
        value: false
      }
    ])
  },
  function(err) {
    // there was an error
  }
);

app.inputs.delete_concepts('{id}', concepts=['tree', 'water'])

client.modifyInput("")
    .withConcepts(
        Action.REMOVE,
        Concept.forID("tree"),
        Concept.forID("water")
    )
    .executeSync();

ClarifaiConcept *concept = [[ClarifaiConcept alloc] initWithConceptName:@"cute cat"];
[app deleteConcepts:@[concept] forInputWithID:@"{id}" completion:^(ClarifaiInput *input, NSError *error) {
    NSLog(@"input: %@", input);
}];

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "id":"",
        "data": {
            "concepts":[
                {"id":"mattid2", "value":true},
                {"id":"ferrari", "value":false}
            ]
        }
      }
    ],
    "action":"remove"
  }'\
  https://api.clarifai.com/v2/inputs/

Bulk Update Inputs With Concepts

You can update existing inputs using their ids. This is useful if you'd like to add concepts to an input after it has already been added.


app.inputs.mergeConcepts([
  {
    id: "{id1}",
    concepts: [
      {
        id: "tree",
        value: true
      },
      {
        id: "water",
        value: false
      }
    ]
  },
  {
    id: "{id2}",
    concepts: [
      {
        id: "animal",
        value: true
      },
      {
        id: "fruit",
        value: false
      }
    ]
  }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# bulk merge concepts
input_ids = ["{id1}", "{id2}"]
concept_pairs = [
                 [('tree', True), ('water', False)],
                 [('animal', True), ('fruit', False)],
                ]
app.inputs.bulk_merge_concepts(input_ids, concept_pairs)

client.modifyInput("")
    .withConcepts(
        Action.MERGE,
        Concept.forID("tree"),
        Concept.forID("water").withValue(false)
    )
    .executeSync();

ClarifaiConcept *newConcept = [[ClarifaiConcept alloc] initWithConceptID:@"tree"];
[_app getInput:@"{input_id}" completion:^(ClarifaiInput *input, NSError *error) {
  // Add tree concept to each current input's concept list.
  NSMutableArray *newConceptList = [NSMutableArray arrayWithArray:input.concepts];
  [newConceptList addObject:newConcept];
  input.concepts = newConceptList;

  // Merge the new list for one or more inputs.
  [_app mergeConceptsForInputs:@[input] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error)   {
    NSLog(@"updated inputs: %@", inputs);
  }];
}];

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "id": "{id1}",
        "data": {
          "concepts": [
            {
              "id": "tree",
              "value": true
            },
            {
              "id": "water",
              "value": false
            }
          ]
        }
      },
      {
        "id": "{id2}",
        "data": {
          "concepts": [
            {
              "id": "tree",
              "value": true
            },
            {
              "id": "water",
              "value": false
            }
          ]
        }
      }
    ],
    "action":"merge"
}'\
  https://api.clarifai.com/v2/inputs

Bulk Delete Concepts From A List Of Inputs

You can bulk delete multiple concepts from a list of inputs:


app.inputs.deleteConcepts([
  {
    id: "{id1}",
    concepts: [
      { id: "tree" },
      { id: "water" }
    ]
  },
  {
    id: "{id2}",
    concepts: [
      { id: "animal" },
      { id: "fruit" }
    ]
  }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

input_ids = ["{id1}", "{id2}"]
concept_pairs = [
                 ['tree', 'water'],
                 ['animal', 'fruit']
                ]
app.inputs.bulk_delete_concepts(input_ids, concept_pairs)

client.modifyInput("")
    .withConcepts(
        Action.REMOVE,
        Concept.forID("tree"),
        Concept.forID("water")
    )
    .executeSync();

ClarifaiConcept *concept = [[ClarifaiConcept alloc] initWithConceptName:@"cute cat"];
[app deleteConcepts:@[concept] forInputWithID:input_id completion:^(ClarifaiInput *input, NSError *error) {
    NSLog(@"input: %@", input);
}];

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "id": "{id1}",
        "data": {
          "concepts":[
            {
              "id": "mattid2"
            },
            {
              "id": "ferrari"
            }
          ]
        }
      },
      {
        "id": "{id2}",
        "data": {
          "concepts":[
            {
              "id": "mattid2"
            },
            {
              "id": "ferrari"
            }
          ]
        }
      }
    ],
    "action":"remove"
  }'\
  https://api.clarifai.com/v2/inputs

Delete Input By Id

You can delete a single input by id:


app.inputs.delete({id}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.inputs.delete("{id}")

client.deleteInputs()
    .delete("{id}")
    .executeSync();

[_app deleteInputsByIDList:@[@"{id1}"] completion:^(NSError *error) {
    NSLog(@"input has been deleted");
}];

curl -X DELETE \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/inputs/{id}

Delete A List Of Inputs

You can also delete multiple inputs in one API call. This will happen asynchronously.


app.inputs.delete(['{id1}', '{id2}']).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.delete(["{id1}", "{id2}"])

client.deleteInputs()
    .delete("id")
    .executeSync();

[_app deleteInputsByIDList:@[@"{id1}", @"{id2}"] completion:^(NSError *error) {
    NSLog(@"inputs have been deleted");
}];

curl -X DELETE \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "ids":["{id1}","{id2}"]
  }'\
  https://api.clarifai.com/v2/inputs

Delete All Inputs

If you would like to delete all inputs from an application, you can do that as well. This will happen asynchronously.


app.inputs.delete().then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.inputs.delete_all()

client.deleteAllInputs().executeSync();

[app deleteAllInputs:^(ClarifaiInput *input, NSError *error) {
  NSLog(@"all inputs have been deleted");
}];

curl -X DELETE \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "delete_all":true
  }'\
  https://api.clarifai.com/v2/inputs

Models

There are many methods to work with models.

Create Model

You can create your own model and train it with your own images and concepts. Once you train it to see how you would like it to see, you can then use that model to make predictions.

When you create a model you give it a name and an id. If you don't supply an id, one will be created for you.


app.models.create("petsID").then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.models.create('petsID')

client.createModel("petsID").executeSync();

[_app createModel:nil name:@"petsModel" modelID:@"petsID" conceptsMutuallyExclusive:NO closedEnvironment:NO completion:^(ClarifaiModel *model, NSError *error) {
    NSLog(@"model: %@", model);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "model": {
      "id": "petsID"
    }
  }'\
  https://api.clarifai.com/v2/models

Create Model With Concepts

You can also create a model and initialize it with the concepts it will contain. You can always add and remove concepts later.


app.models.create(
  "petsID",
  [
    { "id": "boscoe" }
  ]
).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

model = app.models.create('petsID', concepts=['boscoe'])

client.createModel("petsID")
    .withOutputInfo(ConceptOutputInfo.forConcepts(
        Concept.forID("boscoe")
    ))
    .executeSync();

[_app createModel:@[@"cat", @"dog"] name:@"petsModel" modelID:@"petsID" conceptsMutuallyExclusive:NO closedEnvironment:NO completion:^(ClarifaiModel *model, NSError *error) {
    NSLog(@"model: %@", model);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "model": {
      "id": "petsID",
      "output_info": {
        "data": {
          "concepts": [
            {
              "id": "boscoe"
            }
          ]
        },
        "output_config": {
          "concepts_mutually_exclusive": false,
          "closed_environment":false
        }
      }
    }
  }'\
  https://api.clarifai.com/v2/models

Add Concepts To A Model

You can add concepts to a model at any point. As you add concepts to inputs, you may want to add them to your model.


app.models.initModel({model_id}).then(
  updateModel,
  function(err) {
    // there was an error
  }
);

function updateModel(model) {
  model.mergeConcepts({"id": "boscoe"}).then(
    function(response) {
      // do something with response
    },
    function(err) {
      // there was an error
    }
  );
}

model = app.models.get('{model_id}')
model.add_concepts(['boscoe'])

client.modifyModel("")
    .withConcepts(Action.MERGE, Concept.forID("dogs"))
    .executeSync();

// Or, if you have a ConceptModel object, you can do it in an OO fashion
final ConceptModel model = client.getModelByID("{model_id}").executeSync().get().asConceptModel();
model.modify()
    .withConcepts(Action.MERGE, Concept.forID("dogs"))
    .executeSync();

ClarifaiConcept *concept = [[ClarifaiConcept alloc] initWithConceptName:@"dress"];
[app addConcepts:@[concept] toModelWithID:@"{model_id}" completion:^(ClarifaiModel *model, NSError *error) {
    NSLog(@"model: %@", model);
}];

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "models": [
      {
        "id": "{model_id}",
        "output_info": {
          "data": {
            "concepts": [
              {
                "id": "dogs"
              }
            ]
          }
        }
      }
    ],
    "action": "merge"
  }'\
  https://api.clarifai.com/v2/models/

Remove Concepts From A Model

Conversely, if you'd like to remove concepts from a model, you can also do that.


app.models.initModel({model_id}).then(
  updateModel,
  function(err) {
    // there was an error
  }
);

function updateModel(model) {
  model.deleteConcepts({"id": "boscoe"}).then(
    function(response) {
      // do something with response
    },
    function(err) {
      // there was an error
    }
  );
}

model = app.models.get('{model_id}')
model.delete_concepts(['boscoe'])

client.modifyModel("")
    .withConcepts(Action.REMOVE, Concept.forID("dogs"))
    .executeSync();

// Or, if you have a ConceptModel object, you can do it in an OO fashion
final ConceptModel model = client.getModelByID("").executeSync().get().asConceptModel();
model.modify()
    .withConcepts(Action.REMOVE, Concept.forID("dogs"))
    .executeSync();

ClarifaiConcept *concept = [[ClarifaiConcept alloc] initWithConceptName:@"dress"];
[app deleteConcepts:@[concept] fromModelWithID:@"{model_id}" completion:^(ClarifaiModel *model, NSError *error) {
    NSLog(@"model: %@", model);
}];

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "models": [
      {
        "id": "{model_id}",
        "output_info": {
          "data": {
            "concepts": [
              {
                "id": "dogs"
              }
            ]
          }
        }
      }
    ],
    "action": "remove"
  }'\
  https://api.clarifai.com/v2/models/

Update Model Name and Configuration

Here we will change the model name to 'newname' and the model's configuration to have concepts_mutually_exclusive=true and closed_environment=true.


app.models.initModel({model_id}).then(
  updateModel,
  function(err) {
    // there was an error
  }
);

function updateModel(model) {
  model.update({
    name: 'newname',
    conceptsMutuallyExclusive: true,
    closedEnvironment: true,
    concepts: ['birds', 'hurd']
  }).then(
    function(response) {
      // do something with response
    },
    function(err) {
      // there was an error
    }
  );
}

model = app.models.get('{model_id}')

# only update the name
model.update(model_name="newname")

# update the model attributes
model.update(concepts_mutually_exclusive=True, closed_environment=True)

# update more together
model.update(model_name="newname",
             concepts_mutually_exclusive=True, closed_environment=True)

# update attributes together with concepts
model.update(model_name="newname",
             concepts_mutually_exclusive=True,
             concepts=["birds", "hurd"])

client.modifyModel("")
    .withName("newname")
    .withConceptsMutuallyExclusive(true)
    .withClosedEnvironment(true)
    .executeSync();

[_app updateModel:@"{model_id}" name:@"newName" conceptsMutuallyExclusive:NO closedEnvironment:NO completion:^(ClarifaiModel *model, NSError *error) {
    NSLog(@"model: %@", model);
}];

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "models": [
      {
        "id": "",
        "name": "newname",
        "output_info": {
          "output_config": {
            "concepts_mutually_exclusive": true,
            "closed_environment": true
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/models/

Get Models

To get a list of all models including models you've created as well as public models:


app.models.list().then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# this is a generator
app.models.get_all()

client.getModels().getPage(1).executeSync();

[_app getModels:1 resultsPerPage:30 completion:^(NSArray<ClarifaiModel *> *models, NSError *error) {
    NSLog(@"models: %@", models);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models

Get Model By Id

All models have unique Ids. You can get a specific model by its id:


app.models.get({model_id}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# get model by id
model = app.models.get('{model_id}')

# get model by name
model = app.models.get('my_model1')

client.getModelByID("{model_id}").executeSync();

[_app getModel:@"model_id" completion:^(ClarifaiModel *model, NSError *error) {
    NSLog(@"model: %@", model);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}

Get Model Output Info By Id

The output info of a model lists what concepts it contains.


app.models.initModel({model_id}).then(
  getModelOutputInfo,
  handleError
);

function getModelOutputInfo(model) {
  model.getOutputInfo().then(
    function(response) {
      // do something with response
    },
    function(err) {
      // there was an error
    }
  );
}

model = app.models.get('my_model1')
model.get_info(verbose=True)

client.getModelByID("{model_id}").executeSync();

[_app getModelByID:@"{model_id}" completion:^(ClarifaiModel *model, NSError *error) {
    NSLog(@"model: %@", model);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}/output_info

List Model Versions

Every time you train a model, it creates a new version. You can list all the versions created.


app.models.initModel('{id}').then(
  function(model) {
    model.getVersions().then(
      function(response) {
        // do something with response
      },
      function(err) {
        // there was an error
      }
    );
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('{id}')
model.list_versions()

client.getModelVersions("{model_id}").getPage(1).executeSync();

[app listVersionsForModel:@"{model_id}" page:1 resultsPerPage:30 completion:^(NSArray<ClarifaiModelVersion *> *versions, NSError *error) {
    NSLog(@"versions: %@", versions);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}/versions

Get Model Version By Id

To get a specific model version, you must provide the modelId as well as the versionId. You can inspect the model version status to determine if your model is trained or still training.


app.models.initModel('{id}').then(
  function(model) {
    model.getVersion('{version_id}').then(
      function(response) {
        // do something with response
      },
      function(err) {
        // there was an error
      }
    );
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('{id}')
model.get_version('{version_id}')

client.getModelVersionByID("{model_id}", "{version_id}").executeSync();

// Or in a more object-oriented manner:
client.getModelByID("{model_id}")
    .executeSync().get() // Returns Model object
    .getVersionByID("{version_id}").executeSync();

[app getVersionForModel:@"{model_id}" versionID:@"{version_id}" completion:^(ClarifaiModelVersion *version, NSError *error) {
    NSLog(@"version: %@", version);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}/versions/{version_id}
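
Since training is asynchronous, you can poll a version until its status indicates training has finished. A rough Python sketch built on the get_version call above (the status fields are not reproduced here, so the version object is simply printed; the retry count and sleep interval are arbitrary):

import time

model = app.models.get('{model_id}')

for _ in range(10):
    # Inspect the version to see whether the model is trained or still training.
    version = model.get_version('{version_id}')
    print(version)
    time.sleep(2)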

Get Model Training Inputs

You can list all the inputs that were used to train the model.


app.models.initModel('{id}').then(
  function(model) {
    model.getInputs().then(
      function(response) {
        // do something with response
      },
      function(err) {
        // there was an error
      }
    );
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('{id}')
model.get_inputs()

client.getModelInputs("{model_id}").getPage(1).executeSync();

[app listTrainingInputsForModel:@"{model_id}" page:1 resultsPerPage:30 completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}/inputs

Get Model Training Inputs By Version

You can also list all the inputs that were used to train a specific model version.


app.models.initModel({id: '{model_id}', version: '{version_id}'}).then(
  function(model) {
    model.getInputs().then(
      function(response) {
        // do something with response
      },
      function(err) {
        // there was an error
      }
    );
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('{id}')
model.get_inputs('{version_id}')

client.getModelInputs("{model_id}")
    .fromSpecificModelVersion("{version_id}")
    .getPage(1)
    .executeSync();

[_app listTrainingInputsForModel:@"{model_id}" page:1 resultsPerPage:30 completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}/versions/{version_id}/inputs

Delete A Model

You can delete a model using the modelId.


app.models.delete('{id}').then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.models.delete('{id}')

client.deleteModel("{model_id}").executeSync();

[app deleteModel:@"{model_id}" completion:^(NSError *error) {
    NSLog(@"model is deleted");
}];

curl -X DELETE \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}

Delete A Model Version

You can also delete a specific version of a model with the modelId and versionId.


app.models.delete('{model_id}', '{version_id}').then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.models.delete('{id}', '{version_id}')

# or

model = app.models.get('{id}')
model.delete_version('{version_id}')

client.deleteModelVersion("{model_id}", "{version_id}").executeSync();

// Or in a more object-oriented manner:
client.getModelByID("{model_id}")
    .executeSync().get() // Returns Model object
    .deleteVersion("{version_id}")
    .executeSync();

[app deleteVersionForModel:@"{model_id}" versionID:@"{version_id}" completion:^(NSError *error) {
    NSLog(@"model version deleted");
}];

curl -X DELETE \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/{model_id}/versions/{version_id}

Delete All Models

If you would like to delete all models associated with an application, you can also do that. Please proceed with caution as these cannot be recovered.


app.models.delete().then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

app.models.delete_all()

client.deleteAllModels().executeSync();

[_app deleteAllModels:^(NSError *error) {
    NSLog(@"delete all models");
}];

curl -X DELETE \
  -H "Authorization: Bearer {access_token}" \
  https://api.clarifai.com/v2/models/

Train A Model

When you train a model, you are telling the system to look at all the images with concepts you've provided and learn from them. This train operation is asynchronous. It may take a few seconds for your model to be fully trained and ready.

Note: you can repeat this operation as often as you like. By adding more images with concepts and training, you can get the model to predict exactly how you want it to.


app.models.train("{model_id}").then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

// or if you have an instance of a model

model.train().then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

model = app.models.get('{model_id}')
model.train()

client.trainModel("{model_id}").executeSync();

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg"]
[app getModel:@"{id}" completion:^(ClarifaiModel *model, NSError *error) {
    [model train:^(ClarifaiModel *model, NSError *error) {
        NSLog(@"model: %@", model);
    }];
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  https://api.clarifai.com/v2/models/{model_id}/versions

Predict With A Model

Once you have trained a model you are ready to use your new model to get predictions. The predictions returned will only contain the concepts that you told it to see.


app.models.predict("{model_id}", ["https://samples.clarifai.com/puppy.jpeg"]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

// or if you have an instance of a model

model.predict("https://samples.clarifai.com/puppy.jpeg").then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

from clarifai.rest import Image as ClImage

model = app.models.get('{model_id}')

image = ClImage(url='https://samples.clarifai.com/puppy.jpeg')
model.predict([image])

client.predict("{model_id}")
    .withInputs(
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg"))
    )
    .executeSync();

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/puppy.jpeg"]
[app getModel:@"{model_id}" completion:^(ClarifaiModel *model, NSError *error) {
    [model predictOnImages:@[image]
                completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
                    NSLog(@"outputs: %@", outputs);
                }];
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/models/{model_id}/outputs
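
Putting the pieces together, here is a hedged end-to-end Python sketch using only calls shown in this guide: add a labelled input, create and train a model on that concept, then predict with it. Depending on the client, you may need to wait for the new model version to finish training before predicting.

from clarifai.rest import ClarifaiApp
from clarifai.rest import Image as ClImage

app = ClarifaiApp()

# Add a training image labelled with the concept the model should learn.
app.inputs.create_image_from_url("https://samples.clarifai.com/puppy.jpeg", concepts=['boscoe'])

# Create a model that knows about that concept and train it.
model = app.models.create('petsID', concepts=['boscoe'])
model.train()

# Use the trained model to get predictions on an image.
image = ClImage(url='https://samples.clarifai.com/puppy.jpeg')
print(model.predict([image]))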

Search Models By Name And Type

You can search all your models by name and type of model.


app.models.search('general-v1.3', 'concept').then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search model name
app.models.search('general-v1.3')

# search model name and type
app.models.search(model_name='general-v1.3', model_type='concept')

client.findModel()
    .withModelType(ModelType.CONCEPT)
    .withName("general-v1.3")
    .getPage(1)
    .executeSync();

[app searchForModelByName:@"general-v1.3" modelType:ClarifaiModelTypeConcept completion:^(NSArray<ClarifaiModel *> *models, NSError *error) {
    NSLog(@"models: %@", models);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "model_query": {
      "name": "general-v1.3",
      "type": "concept"
    }
  }'\
  https://api.clarifai.com/v2/models/searches

Searches

Search By Predicted Concepts

When you add an input, it automatically gets predictions from the general model. You can search for those predictions.


app.inputs.search([
  {
    concept: {
      name: 'cat'
    }
  },
  {
    concept: {
      name: 'dog'
    }
  }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search by single concept name
app.inputs.search_by_predicted_concepts(concept='cat')

# search by single concept id
app.inputs.search_by_predicted_concepts(concept_id='ai_mFqxrph2')

# search by multiple concepts with name
app.inputs.search_by_predicted_concepts(concepts=['cat', 'cute'])

# search by multiple concepts with ids
app.inputs.search_by_predicted_concepts(concept_ids=['ai_mFqxrph2', 'ai_4CRlSvbV'])

# search by multiple concepts with not logic
app.inputs.search_by_predicted_concepts(concepts=['cat', 'dog'], values=[True, False])

// Search concept by name
client.searchInputs(SearchClause.matchConcept(Concept.forName("cat")))
    .getPage(1)
    .executeSync();

// Search concept by ID
client.searchInputs(SearchClause.matchConcept(Concept.forID("ai_mFqxrph2")))
    .getPage(1)
    .executeSync();

// Search multiple concepts
client.searchInputs(SearchClause.matchConcept(Concept.forID("cat")))
    .and(SearchClause.matchConcept(Concept.forID("cute")))
    .getPage(1)
    .executeSync();

// Search NOT by concept
client.searchInputs(SearchClause.matchConcept(Concept.forID("cat").withValue(false)))
    .getPage(1)
    .executeSync();

// First create a search term with a concept you want to search.
ClarifaiConcept *conceptFromGeneralModel = [[ClarifaiConcept alloc] initWithConceptName:@"fast"];
ClarifaiSearchTerm *searchTerm = [ClarifaiSearchTerm searchByPredictedConcept:conceptFromGeneralModel];

[app search:@[searchTerm] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "output": {
            "data": {
              "concepts": [
                {
                  "name":"dog"
                }
              ]
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches

Search By User Supplied Concept

After you have added inputs with concepts, you can search by those concepts.


app.inputs.search([
  {
    concept: {
      type: 'input',
      name: 'cat'
    }
  },
  {
    concept: {
      type: 'input',
      name: 'dog'
    }
  }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search by single concept name
app.inputs.search_by_annotated_concepts(concept='cat')

# search by single concept id
app.inputs.search_by_annotated_concepts(concept_id='ai_mFqxrph2')

# search by multiple concepts with name
app.inputs.search_by_annotated_concepts(concepts=['cat', 'cute'])

# search by multiple concepts with ids
app.inputs.search_by_annotated_concepts(concept_ids=['ai_mFqxrph2', 'ai_4CRlSvbV'])

# search by multiple concepts with not logic
app.inputs.search_by_annotated_concepts(concepts=['cat', 'dog'], values=[True, False])

// Search concept by name
client.searchInputs(SearchClause.matchUserTaggedConcept(Concept.forName("cat")))
    .getPage(1)
    .executeSync();

// Search concept by ID
client.searchInputs(SearchClause.matchUserTaggedConcept(Concept.forID("ai_mFqxrph2")))
    .getPage(1)
    .executeSync();

// Search multiple concepts
client.searchInputs(SearchClause.matchUserTaggedConcept(Concept.forID("cat")))
    .and(SearchClause.matchUserTaggedConcept(Concept.forID("cute")))
    .getPage(1)
    .executeSync();

// Search NOT by concept
client.searchInputs(SearchClause.matchUserTaggedConcept(Concept.forID("cat").withValue(false)))
    .getPage(1)
    .executeSync();

// If you have previously added inputs tagged with "dog", you can search for them by the same tag. 
ClarifaiConcept *concept = [[ClarifaiConcept alloc] initWithConceptName:@"dog"];
ClarifaiSearchTerm *term = [ClarifaiSearchTerm searchInputsByConcept:concept];

[app search:@[term] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "input": {
            "data": {
              "concepts": [
                {
                  "name":"dog"
                }
              ]
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches

Search By Custom Metadata

After you have added inputs with custom metadata, you can search by that metadata.

Below is an example of searching over custom metadata. You can exact-match any key: value pair, no matter how deeply it is nested. For example, if the metadata on an input is:

{
  "keyname": "value1",
  "somelist": [1,2,3],
  "somenesting": {
     "keyname2":"value2",
     "list2":[4,5]
   }
}

Then the following searches will find this:

{
  "keyname": "value1"
}
{
  "somelist": [1,2,3]
}
{
  "somelist": [1,2]
}
{
  "somenesting": {"keyname2":"value2"}
}
{
  "somenesting": {"list2":[5]}
}

How to perform searches:


// Search with only metadata
app.inputs.search({
  input: {
    metadata: {
      key: 'value'
    }
  }
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

// Search with nested metadata
app.inputs.search({
  input: {
    metadata: {
      parent: {
        key: 'value'
      }
    }
  }
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

// Search with metadata and concepts or input source
app.inputs.search([
  {
    input: { metadata: { key: 'value' } }
  },
  {
    concept: { name: 'cat' }
  },
  {
    concept: { type: 'output', name: 'group', value: false }
  }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search with simple metadata only
app.inputs.search_by_metadata(metadata={'name':'bla'})

# search with nested metadata only
app.inputs.search_by_metadata(metadata={'my_class1': { 'name' : 'bla' }})

# search with metadata combined with others

from clarifai.rest import InputSearchTerm
from clarifai.rest import OutputSearchTerm
from clarifai.rest import SearchQueryBuilder

query = SearchQueryBuilder()
query.add_term(InputSearchTerm(concept='cat'))
query.add_term(InputSearchTerm(metadata={'name':'value'}))
query.add_term(OutputSearchTerm(concept='group', value=False))

app.inputs.search(query)

JsonObject metadata = new JsonObject();
metadata.addProperty("isPuppy", true);

List<SearchHit> hits = client
  .searchInputs(SearchClause.matchMetadata(metadata))
  .executeSync();

// Search by metadata only.
[_app searchByMetadata:@{@"my_key": @[@"my", @"values"]} page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

// Search metadata in conjunction with other ClarifaiSearchTerms. For example, the
// following will search for inputs with predicted tag "fast" and matching metadata.
ClarifaiConcept *conceptFromGeneralModel = [[ClarifaiConcept alloc] initWithConceptName:@"fast"];
ClarifaiSearchTerm *searchTerm1 = [ClarifaiSearchTerm searchByPredictedConcept:conceptFromGeneralModel];

ClarifaiSearchTerm *searchTerm2 = [ClarifaiSearchTerm searchInputsWithMetadata:@{@"my_key": @[@"my", @"values"]}];

[app search:@[searchTerm1, searchTerm2] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "input":{
            "data": {
              "metadata": {
                "key": "value"
              }
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches

Search By Reverse Image

You can use images to do reverse image search on your collection. The API will return ranked results based on how similar the results are to the image you provided in your query.


app.inputs.search(
  {
    input: {
      url: 'https://samples.clarifai.com/puppy.jpeg'
    }
  }
).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

# search by image url
app.inputs.search_by_image(url="https://samples.clarifai.com/metro-north.jpg")

# search by existing input id
input_id = "some_existing_input_id"
app.inputs.search_by_image(image_id=input_id)

# search by raw bytes
data = "image_raw_bytes"
app.inputs.search_by_image(imgbytes=data)

# search by base64 bytes
base64_data = "image_bytes_encoded_in_base64"
app.inputs.search_by_image(base64bytes=base64_data)

# search by local filename
filename="filename_on_local_disk.jpg"
app.inputs.search_by_image(filename=filename)

# search from fileio
fio = open("filename_on_local_disk.jpg", 'rb')
app.inputs.search_by_image(fileobj=fio)

// Search by image URL (String or java.net.URL)
client.searchInputs(SearchClause.matchImageVisually(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")))
    .getPage(1)
    .executeSync();

// Search by local image (java.io.File or byte[])
client.searchInputs(SearchClause.matchImageVisually(ClarifaiImage.of(new File("image.png"))))
    .getPage(1)
    .executeSync();

ClarifaiSearchTerm *searchTerm = [ClarifaiSearchTerm searchVisuallyWithImageURL:@"https://samples.clarifai.com/metro-north.jpg"];

[app search:@[searchTerm] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "output":{
            "input":{
              "data": {
                "image": {
                  "url": "https://samples.clarifai.com/metro-north.jpg"
                }
              }
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches

Search Match Url

You can also search for an input by URL.


app.inputs.search(
  {
    input: {
      type: 'input',
      url: 'https://samples.clarifai.com/puppy.jpeg'
    }
  }
).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

meta = {"url":"https://samples.clarifai.com/metro-north.jpg"}
app.inputs.search_by_metadata(meta)

// Lookup images with this URL
client.searchInputs(SearchClause.matchImageURL(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg")))
    .getPage(1)
    .executeSync();


// Lookup images with this URL
ClarifaiSearchTerm *term = [ClarifaiSearchTerm searchInputsWithImageURL:@"https://samples.clarifai.com/metro-north.jpg"];

[app search:@[term] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "input":{
            "data": {
              "image": {
                "url": "https://samples.clarifai.com/metro-north.jpg"
              }
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches

Search By Concept And Predictions

You can combine a search to find inputs that have concepts you have supplied as well as predictions from your model.


app.inputs.search([
  // this is the predicted concept
  {
    concept: {
      name: 'cat'
    }
  },
  // this is the user-supplied concept
  {
    concept: {
      type: 'input',
      name: 'dog'
    }
  }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

from clarifai.rest import InputSearchTerm, OutputSearchTerm, SearchQueryBuilder

term1 = InputSearchTerm(concept='cat')
term2 = OutputSearchTerm(concept='dog', value=False)
query = SearchQueryBuilder()
query.add_term(term1)
query.add_term(term2)

app.inputs.search(query)

client.searchInputs()
    // Matches images we tagged as "cat", and that the API tagged as not having "dog"
    .ands(
        SearchClause.matchUserTaggedConcept(Concept.forName("cat")),
        SearchClause.matchConcept(Concept.forName("dog").withValue(false))
    )
    .getPage(1)
    .executeSync();

ClarifaiConcept *conceptFromGeneralModel = [[ClarifaiConcept alloc] initWithConceptName:@"fast"];
ClarifaiConcept *conceptFromTrainedCustomModel = [[ClarifaiConcept alloc] initWithConceptName:@"dog"];

ClarifaiSearchTerm *term1 = [ClarifaiSearchTerm searchByPredictedConcept:conceptFromGeneralModel];
ClarifaiSearchTerm *term2 = [ClarifaiSearchTerm searchByPredictedConcept:conceptFromTrainedCustomModel];

[_app search:@[term1, term2] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
-d '
{
  "query": {
    "ands": [
      {
        "output": {
          "data": {
            "concepts": [
              {
                "name": "fast"
              }
            ]
          }
        }
      },
      {
        "input": {
          "data": {
            "concepts": [
              {
                "name": "ferrari23",
                "value": true
              }
            ]
          }
        }
      }
    ]
  }
}'\
https://api.clarifai.com/v2/searches

Search ANDing

You can also combine searches using AND.


app.inputs.search([
  { input: { url: 'https://samples.clarifai.com/puppy.jpeg' } },
  { concept: { name: 'cat', type: 'input' } },
  { concept: { name: 'dog' } }
]).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);

from clarifai.rest import InputSearchTerm, OutputSearchTerm, SearchQueryBuilder

term1 = InputSearchTerm(concept='cat')
term2 = OutputSearchTerm(concept='dog', value=False)
term3 = OutputSearchTerm(url="https://samples.clarifai.com/metro-north.jpg")

query = SearchQueryBuilder()
query.add_term(term1)
query.add_term(term2)
query.add_term(term3)

app.inputs.search(query)

client.searchInputs()
    .ands(
        SearchClause.matchUserTaggedConcept(Concept.forName("cat")),
        SearchClause.matchConcept(Concept.forName("dog").withValue(false)),
        SearchClause.matchImageVisually(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg"))
    )
    .getPage(1)
    .executeSync();

//Search for inputs that are predicted as "fast" and visually similar to the given image.
ClarifaiConcept *conceptFromGeneralModel = [[ClarifaiConcept alloc] initWithConceptName:@"fast"];
ClarifaiSearchTerm *term1 = [ClarifaiSearchTerm searchByPredictedConcept:conceptFromGeneralModel];

ClarifaiSearchTerm *term2 = [ClarifaiSearchTerm searchVisuallyWithImageURL:@"https://samples.clarifai.com/metro-north.jpg"];

[_app search:@[term1, term2] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  // Print output of first search result.
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of input matching search query: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
-d '
{
    "query": {
        "ands": [
            {
                "output": {
                    "input":{
                        "data": {
                            "image": {
                                "url": "http://i.imgur.com/HEoT5xR.png"
                            }
                        }
                    }
                }
            },
            {
                "output": {
                    "data": {
                        "concepts": [
                            {"name":"fast", "value":true}
                        ]
                    }
                }
            }
        ]
    }
}'\
https://api.clarifai.com/v2/searches

Search By Geo Location

Geo search allows you to restrict your search results to a geographic area based on longitude and latitude points. There are two ways to provide the points: one point plus a radius, or two points that define a bounding box.

It is important to note that a geo-search acts as a filter and returns results ranked by any other provided search criteria, whether that is a visual search, concept search or something else. If no other criteria is provided, results will return in the order the inputs were created, NOT by their distance to center of the search area.

If you are providing one point and a radius, the radius can be in "mile", "kilometer", "degree", or "radian", marked by keywords withinMiles, withinKilometers, withinDegrees, withinRadians.

If you provide two points, a bounding box is drawn from the uppermost point to the lowermost point and from the leftmost point to the rightmost point.

Before you perform a geo-search, make sure you have added inputs with longitude and latitude points.

Add inputs with longitude and latitude points

Provide a geo point when adding an input. The geo point is a JSON object consisting of a longitude and a latitude in the GPS coordinate system (SRID 4326). At most one geo point can be associated with each input.

app.inputs.create({
  url: "https://samples.clarifai.com/puppy.jpeg",
  geo: { longitude: 116.2317, latitude: 39.5427},
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);
from clarifai.rest import Geo, GeoPoint
from clarifai.rest import ClarifaiApp

geo_p1 = Geo(geo_point=GeoPoint(116.2317,39.5427))

app.inputs.create_image_from_url(url="https://samples.clarifai.com/puppy.jpeg", geo=geo_p1)
client.addInputs()
    .plus(ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg"))
        .withGeo(116.2317, 39.5427))
    .executeSync();
ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
image.location = [[ClarifaiLocation alloc] initWithLatitude:39.5427 longitude:116.2317];

[_app addInputs:@[image] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
  NSLog(@"%@",inputs);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/dog.tiff",
            "allow_duplicate_url": true
          },
          "geo": {
            "geo_point": {
              "longitude": -30,
              "latitude": 40
            }
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs

Perform a search with one geo point and radius in kilometers

app.inputs.search({
  input: {
    geo: {
      longitude: 116.2317,
      latitude: 39.5427,
      type: 'withinKilometers',
      value: 1
    }
  }
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);
from clarifai.rest import GeoPoint, GeoLimit

geo_p = GeoPoint(116.2317, 39.5427)
geo_l = GeoLimit(limit_type='kilometer', limit_range=1)

imgs = app.inputs.search_by_geo(geo_point=geo_p, geo_limit=geo_l)
client.searchInputs(SearchClause.matchGeo(PointF.at(59F, 29.75F), Radius.of(500, Radius.Unit.KILOMETER)))
            .getPage(1)
            .executeSync();

ClarifaiLocation *loc = [[ClarifaiLocation alloc] initWithLatitude:39.5427 longitude:116.2317];
ClarifaiGeo *geoFilterKilos = [[ClarifaiGeo alloc] initWithLocation:loc radius:50.0 andRadiusUnit:ClarifaiRadiusUnitKilometers];
ClarifaiSearchTerm *term = [ClarifaiSearchTerm searchInputsWithGeoFilter:geoFilterKilos];

[_app search:@[term] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of predicted concept: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "input": {
            "data": {
              "image":{
                "url":"https://samples.clarifai.com/metro-north.jpeg"
              },
              "geo": {
                "geo_point": {
                  "longitude": -1,
                  "latitude": 1.5
                },
                "geo_limit": {
                  "type": "withinKilometers",
                  "value": 1
                }
              }
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches

Perform a search with two geo points

app.inputs.search({
  input: {
    geo: [{
      longitude: 116.2316,
      latitude: 39.5426
    }, {
      longitude: 116.2318,
      latitude: 39.5428
    }]
  }
}).then(
  function(response) {
    // do something with response
  },
  function(err) {
    // there was an error
  }
);
from clarifai.rest import GeoBox, GeoPoint, GeoLimit

p1 = GeoPoint(116.2316, 39.5426)
p2 = GeoPoint(116.2318, 39.5428)
box1 = GeoBox(point1=p1, point2=p2)

imgs = app.inputs.search_by_geo(geo_box=box1)
client.searchInputs(SearchClause.matchGeo(PointF.at(3F, 0F), PointF.at(70F, 30F)))
            .getPage(1)
            .executeSync();

ClarifaiLocation *startLoc = [[ClarifaiLocation alloc] initWithLatitude:50 longitude:58];
ClarifaiLocation *endLoc = [[ClarifaiLocation alloc] initWithLatitude:32 longitude:-30];
ClarifaiGeo *geoBox = [[ClarifaiGeo alloc] initWithGeoBoxFromStartLocation:startLoc toEndLocation:endLoc];
ClarifaiSearchTerm *term = [ClarifaiSearchTerm searchInputsWithGeoFilter:geoBox];

[_app search:@[term] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *results, NSError *error) {
  NSLog(@"inputID: %@", results[0].inputID);
  NSLog(@"URL: %@", results[0].mediaURL);
  NSLog(@"probability of predicted concept: %@", results[0].score);
}];

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "input": {
            "data": {
              "geo": {
                "geo_box": [
                  {  
                    "geo_point": {
                      "latitude": 35,
                      "longitude": -30
                    }
                  },
                  {
                    "geo_point": {
                      "latitude": 50,
                      "longitude": -35
                    }
                  }
                ]
              }
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches
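
As noted above, a geo filter is just one clause in the query's ands list, so it can be combined with any other search criteria in a single request. The following is a minimal sketch that restricts a concept search for "dog" to inputs within 10 kilometers of a point; the concept name and coordinates are illustrative:

curl -X POST \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "output": {
            "data": {
              "concepts": [
                {"name": "dog", "value": true}
              ]
            }
          }
        },
        {
          "input": {
            "data": {
              "geo": {
                "geo_point": {
                  "longitude": -30,
                  "latitude": 40
                },
                "geo_limit": {
                  "type": "withinKilometers",
                  "value": 10
                }
              }
            }
          }
        }
      ]
    }
  }'\
  https://api.clarifai.com/v2/searches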

Pagination

Many API calls are paginated. You can provide page and per_page params to the API. In the example below we list inputs, starting at page 2 and returning 20 results per page.


app.inputs.list({page: 2, perPage: 20});

app.inputs.get_by_page(page=2, per_page=20)

client.getInputs()
    .perPage(20) // OPTIONAL, to specify how many results should be on one page
    .getPage(2)
    .executeSync();

[app getInputsOnPage:2 pageSize:20 completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    NSLog(@"inputs: %@", inputs);
}];

curl -X GET \
  -H "Authorization: Bearer {access_token}" \
  "https://api.clarifai.com/v2/inputs?page=2&per_page=20"

Patching

We designed PATCH to work over multiple resources at the same time (bulk) and to be flexible enough to minimize round trips to the server. It may therefore look a little different from other PATCH implementations you have seen, but it's not complicated. All three supported actions overwrite by default, but have special behaviour for lists of objects (for example, lists of concepts).
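
As a rough sketch of the request shape, assuming you are patching concepts on inputs, a bulk PATCH sends the resources you want to modify together with a single action that applies to all of them. Here {input_id}, {another_input_id}, and the concept ids are placeholders, and {action} is one of merge, remove, or overwrite, described below:

curl -X PATCH \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "id": "{input_id}",
        "data": {
          "concepts": [
            {"id": "cat", "value": true}
          ]
        }
      },
      {
        "id": "{another_input_id}",
        "data": {
          "concepts": [
            {"id": "dog", "value": false}
          ]
        }
      }
    ],
    "action": "{action}"
  }'\
  https://api.clarifai.com/v2/inputs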

Merge

The merge action will overwrite a key:value with key:new_value, or append to an existing list of values, merging dictionaries that match on their id field. (An example merge request is shown after the cases below.)

In the following examples A is being patched into B to create the Result:


*Merge different key:values*
A = `{"a":[1,2,3]}`
B = `{"blah":true}`
Result = `{"blah":true, "a":[1,2,3]}`

*For id lists, merge will append*
A = `{"a":[{"id": 1}]}`
B = `{"a":[{"id": 2}]}`
Result = `{"a":[{"id": 2}, {"id":1}]}`

*Simple merge of key:values and within a list*
A = `{"a":[{"id": "1", "other":true}], "blah":1}`
B = `{"a":[{"id": "2"},{"id":"1", "other":false}]}`
Result = `{"a":[{"id": "2"},{"id": "1"}], "blah":1}`

*Different types should overwrite fine*
A = `{"a":[{"id": "1"}], "blah":1}`
B = `{"a":[{"id": "2"}], "blah":"string"}`
Result = `{"a":[{"id": "2"},{"id": "1"}], "blah":1}`

*Deep merge, notice the "id":"1" matches, so those dicts are merged in the list*
A = `{"a":[{"id": "1","hey":true}], "blah":1}`
B = `{"a":[{"id": "1","foo":"bar","hey":false},{"id":"2"}], "blah":"string"}`
Result = `{"a":[{"hey":true,"id": "1","foo":"bar"},{"id":"2"}], "blah":1}`

*For non-id lists, merge will append*
A = `{"a":[{"blah": "1"}], "blah":1}`
B = `{"a":[{"blah": "2"}], "blah":"string"}`
Result = `{"a":[{"blah": "2"}, {"blah":"1"}], "blah":1}`

*For non-id lists, merge will append; keys present only in A are added*
A = `{"a":[{"blah": "1"}], "blah":1, "dict":{"a":1,"b":2}}`
B = `{"a":[{"blah": "2"}], "blah":"string"}`
Result = `{"a":[{"blah": "2"}, {"blah":"1"}], "blah":1, "dict":{"a":1,"b":2}}`

*Simple overwrite root element*
A = `{"key1":true}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":true}`

*Overwrite a sub element*
A = `{"key1":{"key2":true}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key2":true, "key3":"value3"}}`

*Merge a sub element*
A = `{"key1":{"key2":{"key4":"value4"}}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key2":{"key4":"value4"}, "key3":"value3"}}`

*Merge multiple trees*
A = `{"key1":{"key2":{"key9":"value9"}, "key3":{"key4":"value4", "key10":[1,2,3]}}, "key6":{"key11":"value11"}}`
B = `{"key1":{"key2":"value2", "key3":{"key4":{"key5":"value5"}}}, "key6":{"key7":{"key8":"value8"}}}`
Result = `{"key1":{"key2":{"key9":"value9"}, "key3":{"key4":"value4", "key10":[1,2,3]}}, "key6":{"key7":{"key8":"value8"}, "key11":"value11"}}`

*Merge {} element will replace*
A = `{"key1":{"key2":{}}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key2":{}, "key3":"value3"}}`

*Merge a null element does nothing*
A = `{"key1":{"key2":null}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key2":"value2", "key3":"value3"}}`

*Merge a blank list [] will replace root element*
A = `{"key1":[]}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":[]}`

*Merge a blank list [] will replace single element*
A = `{"key1":{"key2":[]}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key2":[], "key3":"value3"}}`

*Merge a blank list [] will remove nested objects*
A = `{"key1":{"key2":[{"key3":"value3"}]}}`
B = `{"key1":{"key2":{"key3":"value3"}}}`
Result = `{"key1":{"key2":[{"key3":"value3"}]}}`

*Merge an existing list with some other struct*
A = `{"key1":{"key2":{"key3":[{"key4":"value4"}]}}}`
B = `{"key1":{"key2":[]}}`
Result = `{"key1":{"key2":{"key3":[{"key4":"value4"}]}}}`

Remove

The remove action will overwrite a key:value with key:new_value, or delete anything in a list that matches the ids of the provided values. (An example remove request is shown after the cases below.)

In the following examples A is being patched into B to create the Result:

*Remove from list*
A = `{"a":[{"id": "1"}], "blah":1}`
B = `{"a":[{"id": "2"},{"id": "3"}, {"id":"1"}], "blah":"string"}`
Result = `{"a":[{"id": "2"},{"id":"3"}], "blah":1}`

*For non-id lists, remove will append*
A = `{"a":[{"blah": "1"}], "blah":1}`
B = `{"a":[{"blah": "2"}], "blah":"string"}`
Result = `{"a":[{"blah": "2"}, {"blah":"1"}], "blah":1}`

*Empty out a nested dictionary*
A = `{"key1":{"key2":true}}`
B = `{"key1":{"key2":"value2"}}`
Result = `{"key1":{}}`

*Remove the root element, should be empty*
A = `{"key1":true}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{}`

*Remove a sub element*
A = `{"key1":{"key2":true}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key3":"value3"}}`

*Remove multiple sub elements*
A = `{"key1":{"key2":{"key3":true}, "key4":true}}`
B = `{"key1":{"key2":{"key3":{"key5":"value5"}}, "key4":{"key6":{"key7":"value7"}}}}`
Result = `{"key1":{"key2":{}}}`

*Remove one of the root elements if there are more than one*
A = `{"key1":true}`
B = `{"key1":{"key2":"value2", "key3":"value3"}, "key4":["a", "b", "c"]}`
Result = `{"key4":["a", "b", "c"]}`

*Remove with false should overwrite*
A = `{"key1":{"key2":false, "key3":true}, "key4":false}`
B = `{"key1":{"key2":"value2", "key3":"value3"}, "key4":[{"key5":"value5", "key6":"value6"}, {"key7": "value7"}]}`
Result = `{"key1":{"key2":false}, "key4":false}`

*Only objects with ids can be put into lists*
A = `{"key1":[{"key2":true}]}`
B = `{"key1":[{"key2":"value2"}, {"key3":"value3"}]}`
Result = `{}`

*Elements with {} should do nothing*
A = `{"key1":{}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key2":"value2", "key3":"value3"}}`

*Elements with null should do nothing*
A = `{"key1":{"key2":null}}`
B = `{"key1":{"key2":"value2", "key3":"value3"}}`
Result = `{"key1":{"key2":"value2", "key3":"value3"}}`

Overwrite

The overwrite action will overwrite a key:value with key:new_value, or overwrite a list of values with the new list of values. In most cases this behaves like the merge action. (An example overwrite request is shown after the cases below.)

In the following examples A is being patched into B to create the Result:

*Overwrite whole list*
A = `{"a":[{"id": "1"}], "blah":1}`
B = `{"a":[{"id": "2"}], "blah":"string"}`
Result = `{"a":[{"id": "1"}], "blah":1}`

*For non-id lists, overwrite will overwrite whole list*
A = `{"a":[{"blah": "1"}], "blah":1}`
B = `{"a":[{"blah": "2"}], "blah":"string"}`
Result = `{"a":[{"blah": "1"}], "blah":1}`

Supported Types

The API supports the following image formats:

  • JPEG
  • PNG
  • TIFF
  • BMP