Getting Started

The Clarifai API offers image and video recognition as a service. Whether you have one image or billions, you are only steps away from using artificial intelligence to 'see' what's inside your visual content.

If you haven't created an account, please do so before proceeding with this guide. We offer many plans, including a free plan that is perfect for experimenting. Please note that all API calls require an account.

All API access is over HTTPS via the https://api.clarifai.com domain. The relative path prefix /v1/ indicates that we are currently using version 1 of the API.

Applications

API calls are tied to an account and application. Applications allow you to segment your API usage as well as choose default settings. You can create as many applications as you want and can edit or delete them as you see fit. Each application has a unique Client ID and Client Secret. These are used for authentication. You can learn more about authentication below.

Create an Application

To create an application, head on over to the applications page and press the 'Create a New Application' button. At a minimum, you'll need to provide an application name. You may also set the default model and language. If you plan on using a language other than English, you must use the 'general-v1.3' model. You can learn more about models and languages in the tag guide below.

Edit an Application

If at any point you'd like to change the application name or default settings, you may do so by visiting the application page and pressing the 'Edit application' button.

Delete an Application

If you'd like to delete an application, you may do so at any time by visiting the application page and pressing the 'Delete application' button. You'll be asked to confirm your change. Please note that once you delete an application, we cannot recover it. Proceed with caution.

Authentication

Authentication to the API is handled using OAuth2 client credentials. Each application you create has a unique Client ID and Client Secret which you will use to exchange for an Access Token. You then use this Access Token to make authorized API calls.

The three main components of OAuth2 client credentials:

Component      Description
client_id      This identifies which application is trying to access the API. This is unique and generated once for each application in your account.
client_secret  This provides security when authorizing with the API. This is unique and generated once for each application in your account.
access_token   This is used to authorize your access to the API. Access tokens expire regularly and must be renewed on an ongoing basis.

For more information regarding OAuth2, please see the spec.

Retrieve an Access Token

  https://api.clarifai.com/v1/token

To retrieve an Access Token, send a POST request to https://api.clarifai.com/v1/token/ with your client_id and client_secret. You must also include grant_type=client_credentials:

curl -X POST "https://api.clarifai.com/v1/token/" \
    -d "client_id={client_id}" \
    -d "client_secret={client_secret}" \
    -d "grant_type=client_credentials"

If you prefer, the client_id and client_secret parameters can also be passed in the header:

curl -X POST "https://{client_id}:{client_secret}@api.clarifai.com/v1/token/" \
    -d "grant_type=client_credentials"

The JSON response will include your access_token. Please note the expires_in time. Access tokens expire regularly and must be renewed on an ongoing basis. You can renew by just retrieving a new Access Token as described above.

Response:
  {
    "access_token": "U5j3rECLLZ86pWRjK35g489QJ4zrQI",
    "expires_in": 172800,
    "scope": "api_access",
    "token_type": "Bearer"
  }
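
If you are scripting against the API directly, a minimal shell sketch of retrieving and reusing a token might look like the following (it assumes the jq command-line JSON processor is installed; the official client libraries listed below do this for you):

# Exchange client credentials for an access token and extract it with jq.
# {client_id} and {client_secret} are placeholders for your application's credentials.
ACCESS_TOKEN=$(curl -s -X POST "https://api.clarifai.com/v1/token/" \
    -d "client_id={client_id}" \
    -d "client_secret={client_secret}" \
    -d "grant_type=client_credentials" | jq -r '.access_token')

# Use the token in the Authorization header; re-run the exchange above
# whenever a request fails with TOKEN_EXPIRED.
curl "https://api.clarifai.com/v1/info" -H "Authorization: Bearer ${ACCESS_TOKEN}"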

You can now use the access_token value to authorize your API calls. This is achieved by using the Authorization header as described below:

"Authorization: Bearer {access_token}"

If you are making a GET request, you may also pass in the access_token as a parameter:

https://api.clarifai.com/v1/color/?access_token={access_token}

You have now learned how to make authorized API calls. If you'd like to dive in and start using the API to visually recognize your images and videos, skip ahead to the tag guide.

All of our official client libraries handle authentication for you. Links to all of our client libraries are listed below.

API Clients

If you prefer to use a client library to access the API, we offer official clients in various languages. There are also many other 'unofficial' clients built by the community. If you've built one, please let us know!

Official Clarifai Clients

Community Supported Clients

Requests and Responses

Requests

POST

POST requests are the preferred way to interact with the API. If you are sending image or video bytes in the request, you should send along all parameters as multipart-form.

GET

Some endpoints support GET requests as well and will be marked as such in this guide.

Headers

When using either POST or GET, the Authorization header must be provided with your access_token:

"Authorization: Bearer {access_token}"

If you are making a GET request, you may also pass in the access_token as a parameter:

https://api.clarifai.com/v1/color/?access_token={access_token}

Formats

URL

You may provide a publicly accessible url as the input format for an image or video. This URL is downloaded to our servers and then processed, therefore it must contain valid image or video data and be publicly accessible. If there is an error downloading the image or video, or the format is not correct, an error response will be returned.

Please note: the URL you provide must be URL-encoded. Learn more about URL encoding.
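
If you are building requests with curl, one way to avoid encoding the URL by hand is to let curl do it; a minimal sketch using curl's -G and --data-urlencode options (which encode the value and append it to the query string):

# -G sends a GET request; --data-urlencode URL-encodes the value before
# appending it as a query parameter.
curl -G "https://api.clarifai.com/v1/tag/" \
    --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"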

Authenticated URLs are supported in our enterprise tier.

Encoded Data

You may provide raw bytes as the input format for a compressed image or video with the encoded_data parameter. This is convenient for processing local files with the API by transferring image or video bytes directly in the request. The input is decoded as a string of bytes on the server side for the supported image and video formats. This input format is only supported by the POST method (GET is not supported), and the encoded_data should be sent via a multipart-form POST request.

Parameter     Description
url           A publicly accessible, URL-encoded string.
encoded_data  Raw bytes sent via a multipart-form POST request.

Supported Image and Video Formats

The API supports decoding a variety of image and video formats. If you find a format that is not supported by the API (response will likely return "Image could not be decoded or read"), please let us know and we will try to add support for it.

Mixing Data Types

The API does not currently support mixing images and videos within the same request.

Responses

Responses from the API are always in JSON format. Each response has the following top-level structure:

Response:
  {
    "status_code": "OK",
    "status_msg": "All data in request have completed.",
    "results": { ...something... }
  }

Each component of this response is described below:

Component    Description
status_code  Indicates the state in which the API completed the request. Each response will complete with an indicative HTTP status code in addition to this API status code. The complete list of status codes is described in the status code guide below.
status_msg   A description of the status code for this request. This can provide more detail when a request has an error.
meta         Extra information, such as the type of model used to process the request, is returned in the meta field.
results      The data sent back with the response. Each endpoint has different values which are described in more detail later in this guide.

Doc ID

Each image or video returned in the results field will contain an internal identifier called a docid. This is assigned to every image or video processed through the API. We recommend storing this docid in your application in order to handle advanced features of the API. For example, to provide feedback and improve the service, you must use the assigned docid. The docid is a string that is unique to a given image or video.

Please note: We recommend using docid_str instead of docid as some libraries have trouble parsing docid integers.
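
For example, a minimal sketch of capturing the docid_str after tagging an image (assuming the jq command-line JSON processor is available):

# Tag an image and capture the docid_str for later feedback calls.
DOCID=$(curl -s "https://api.clarifai.com/v1/tag/?url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}" | jq -r '.results[0].docid_str')
echo "store this with your image record: ${DOCID}"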

Local ID

You may also provide a local_id for each image or video in the request. This can be any string that provides a unique identifier for each image or video. This simplifies tracking requests and responses by matching up to an identifier that is important on the client side.
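
As a sketch, a request that supplies a local_id alongside the image might look like the example below; the exact parameter form shown here is an assumption based on the local_id field echoed back in responses, not something confirmed elsewhere in this guide:

# Hypothetical sketch: pass your own identifier with the request so it is
# echoed back in the response (parameter form assumed, not confirmed above).
curl "https://api.clarifai.com/v1/tag/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \
    -d "local_id=my-image-42" \
    -H "Authorization: Bearer {access_token}"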

Status Codes

Status codes indicate the state in which the API completed the request. Each response will complete with an HTTP status code.

Code  Message            Description
200   OK                 Success
200   PARTIAL_ERROR      Some images in request have failed. Please review the error messages per image
400   ALL_ERROR          All images in request have failed. Please review the error messages per image
401   TOKEN_APP_INVALID  The application for this access token is not valid. Please make sure you are using the correct Client ID and Client Secret
401   TOKEN_EXPIRED      Your access token has expired, you must generate a new one
401   TOKEN_INVALID      Your access token is not valid. Please make sure to use valid access tokens for an application in your account
401   TOKEN_NONE         Authentication credentials were not provided in request
401   TOKEN_NO_SCOPE     Your access token does not have the required scope to access resources

Limits

There are certain limits imposed on you when using the API. Some of these are technical in nature and others are based on your plan. If you need to increase your limits, you may do so by contacting support. Please familiarize yourself with the limits below.

Data Limits

Batch Operations

The API supports sending a batch of images or videos in a single request. The maximum number of items you can send in one API request is specified as max_batch_size for images and max_video_batch_size for videos. When batch requests are used, the order of the response matches the order of the request. If any image or video in a batch fails to process, the batch still completes with the partially successful status code 200 (PARTIAL_ERROR). Each image or video within the batch has its own status code and identifiers, so you can determine which items succeeded and which failed.
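
As a sketch, per-item statuses in a batch response can be inspected with jq (assumed to be installed):

# Send two URLs in one request, then print each item's status alongside its URL
# so failed items can be retried or logged individually.
curl -s "https://api.clarifai.com/v1/tag/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \
    --data-urlencode "url=https://samples.clarifai.com/wedding.jpg" \
    -H "Authorization: Bearer {access_token}" \
  | jq -r '.results[] | "\(.status_code)\t\(.url)"'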

Sizes

Regardless of the image or video format used in an API call, the size of the content must fall within the limits defined on the API Info page for the user accessing the service. If you are using any of the API clients, this resizing is handled for you for images; videos are not handled automatically. To resize input images yourself before sending them to the API:

  1. Request info as described on the API Info page.
  2. Resize such that the minimum dimension of the image is above min_image_size and below max_image_size.

For videos, a similar process applies using the video-specific values (a shell sketch of the image case follows the list below):

  1. Request info as described on the API Info page.
  2. Resize such that the minimum dimension of the video is above min_video_size and below max_video_size.
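
A minimal sketch of the image case, assuming jq and ImageMagick (identify/convert) are available; the video case is analogous using the *_video_* values:

# 1. Fetch the per-account limits from the info endpoint.
INFO=$(curl -s "https://api.clarifai.com/v1/info" -H "Authorization: Bearer {access_token}")
MIN=$(echo "$INFO" | jq -r '.results.min_image_size')
MAX=$(echo "$INFO" | jq -r '.results.max_image_size')

# 2. Check the image's minimum dimension and shrink it if it is too large.
#    ImageMagick's "WxH^" geometry preserves aspect ratio and makes the
#    smaller dimension equal to the given value.
W=$(identify -format "%w" my_image.jpeg)
H=$(identify -format "%h" my_image.jpeg)
SMALLEST=$(( W < H ? W : H ))
if [ "$SMALLEST" -gt "$MAX" ]; then
    convert my_image.jpeg -resize "${MAX}x${MAX}^" my_image_resized.jpeg
fi
# Images whose minimum dimension is below MIN would need to be upscaled the same way.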

Usage Quota

Each plan has a defined usage quota for access to the API. Once your usage exceeds this maximum, additional charges may apply. The usage of the API is monitored on a per unit basis. For example: if you provide a single image for tagging, that counts as one unit. If you request multiple operations for a single image, that counts as multiple units. Similarly, if multiple images are provided in a batch, then one unit per image is accumulated.

Video is charged according to how frequently summary information is returned: one unit equates to one frame per second of information in the result, so a 30-second video processed with a single operation accumulates roughly 30 units. With multiple operations or multiple videos in a batch, the usage is multiplied by the number of operations and the total is the sum of the units for each video in the batch, just as with images.

Rate Limits

In addition to your overall usage, there are rate limits in place to ensure fair use of our service. You can view these on your Account Usage page. If you exceed the limits defined there, you will be throttled to prevent overuse of the API and allow others to benefit from the service.

Tag

https://api.clarifai.com/v1/tag

The tag endpoint is used to tag the contents of your images or videos. Data is input into our system, processed with our deep learning platform, and a list of tags is returned. Typical processing times are in the milliseconds.

Requests

If you'd like to get tags for one image or video using a publicly accessible url, you may either send a GET or POST request.

curl "https://api.clarifai.com/v1/tag/?url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"
curl "https://api.clarifai.com/v1/tag/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"

You can also upload an image or video from your local filesystem. You must use a POST request for this.

curl "https://api.clarifai.com/v1/tag/" \
    -X POST -F "encoded_data=@/Users/USER/my_image.jpeg" \
    -H "Authorization: Bearer {access_token}"

If you'd like to tag more than one image or video at a time, you can do that as well using a POST request.

curl "https://api.clarifai.com/v1/tag/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg"  \
    --data-urlencode "url=https://samples.clarifai.com/wedding.jpg" \
    -H "Authorization: Bearer {access_token}"

Responses

A typical response for a successful tag request looks like the example below. The output results are stored in the results key as an array, one result per image or video. Each result has its own status_code and status_msg, which allows individual images or videos in a batch to succeed even if others fail.

For each image or video, a unique docid is produced and returned with the result. If a local_id was provided in the request, it will also be present in the response. Since this is the tag endpoint, the tagging results for each image or video are nested under the result key, then the tag key.

The output of our service is a list of classes and a corresponding list of probs. The classes are for the most part English and come from a large vocabulary. The probs (short for probabilities) indicate how strongly the model believes the corresponding tag is associated with the input data.

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "meta": {
      "tag": {
        "timestamp": 1451945197.398036,
        "model": "general-v1.3",
        "config": "34fb1111b4d5f67cf1b8665ebc603704"
      }
    },
    "results": [
      {
        "docid": 15512461224882631443,
        "url": "https://samples.clarifai.com/metro-north.jpg",
        "status_code": "OK",
        "status_msg": "OK",
        "local_id": "",
        "result": {
          "tag": {
            "concept_ids": [
              "ai_HLmqFqBf",
              "ai_fvlBqXZR",
              "ai_Xxjc3MhT",
              "ai_6kTjGfF6",
              "ai_RRXLczch",
              "ai_VRmbGVWh",
              "ai_SHNDcmJ3",
              "ai_jlb9q33b",
              "ai_46lGZ4Gm",
              "ai_tr0MBp64",
              "ai_l4WckcJN",
              "ai_2gkfMDsM",
              "ai_CpFBRWzD",
              "ai_786Zr311",
              "ai_6lhccv44",
              "ai_971KsJkn",
              "ai_WBQfVV0p",
              "ai_dSCKh8xv",
              "ai_TZ3C79C6",
              "ai_VSVscs9k"
            ],
            "classes": [
              "train",
              "railway",
              "transportation system",
              "station",
              "train",
              "travel",
              "tube",
              "commuter",
              "railway",
              "traffic",
              "blur",
              "platform",
              "urban",
              "no person",
              "business",
              "track",
              "city",
              "fast",
              "road",
              "terminal"
            ],
            "probs": [
              0.9989112019538879,
              0.9975532293319702,
              0.9959157705307007,
              0.9925730228424072,
              0.9925559759140015,
              0.9878921508789063,
              0.9816359281539917,
              0.9712483286857605,
              0.9690325260162354,
              0.9687051773071289,
              0.9667078256607056,
              0.9624242782592773,
              0.960752010345459,
              0.9586490392684937,
              0.9572030305862427,
              0.9494642019271851,
              0.940894365310669,
              0.9399334192276001,
              0.9312160611152649,
              0.9230834245681763
            ]
          }
        },
        "docid_str": "31fdb2316ff87fb5d747554ba5267313"
      }
    ]
  }
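
As a sketch, each class can be paired with its probability from a response like the one above using jq (assuming it is installed):

# Print each class with its probability for the first result.
curl -s "https://api.clarifai.com/v1/tag/?url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}" \
  | jq -r '.results[0].result.tag | [.classes, .probs] | transpose[] | "\(.[0])\t\(.[1])"'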

Models

When images or videos are run through the tag endpoint, they are tagged using a model. A model is a trained classifier that can recognize what's inside an image or video according to what it 'knows'. Different models are trained to 'know' different things. Running an image or video through different models can produce drastically different results.

If you'd like to get tags for an image or video using a different model, you can do so by passing in a model parameter. If you omit this parameter, the API will use the default model for your application. You can change this on the applications page.

curl "https://api.clarifai.com/v1/tag/?model=general-v1.3&url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"

If you are uploading an image or video from your local filesystem using a POST request, you must pass all parameters as -F form fields.

curl "https://api.clarifai.com/v1/tag/" \
    -F "model=general-v1.3" \
    -F "encoded_data=@/Users/USER/my_image.jpeg" \
    -H "Authorization: Bearer {access_token}"

We currently support these different models:

General

The 'General' model contains a wide range of tags across many different topics. In most cases, tags returned from the general model will sufficiently recognize what's inside your image or video.

Model: general-v1.3

Example tags: train, railway, transportation system, station, travel

curl "https://api.clarifai.com/v1/tag/?model=general-v1.3&url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"

NSFW

The 'Not Safe For Work' model analyzes images and videos and returns probability scores on the likelihood that the image or video contains pornography.

The response for NSFW returns probabilities for nsfw (Not Safe For Work) and sfw (Safe For Work) that sum to 1.0. Generally, if the nsfw probability is less than 0.15, it is most likely Safe For Work. If the nsfw probability is greater than 0.85, it is most likely Not Safe For Work.

Model: nsfw-v1.0

Example tags: SFW

curl "https://api.clarifai.com/v1/tag/?model=nsfw-v1.0&url=https://samples.clarifai.com/nsfw.jpg" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "meta": {
      "tag": {
        "timestamp": 1463065040.141801,
        "model": "nsfw-v1.0",
        "config": "72167b5392d7678ac47dabf4dfb24fbc"
      }
    },
    "results": [
      {
        "docid": 14634293643713452094,
        "url": "https://samples.clarifai.com/nsfw.jpg",
        "status_code": "OK",
        "status_msg": "OK",
        "local_id": "",
        "result": {
          "tag": {
            "classes": [
              "sfw",
              "nsfw"
            ],
            "probs": [
              0.9324242472648621,
              0.06757575273513794
            ]
          }
        },
        "docid_str": "6143ec1209029933cb1774887c4c183e"
      }
    ]
  }
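
A minimal sketch of applying the suggested thresholds to a response like the one above, assuming jq is available:

# Extract the probability for the "nsfw" class and bucket it using the
# suggested 0.15 / 0.85 cut-offs from this section.
NSFW=$(curl -s "https://api.clarifai.com/v1/tag/?model=nsfw-v1.0&url=https://samples.clarifai.com/nsfw.jpg" \
    -H "Authorization: Bearer {access_token}" \
  | jq -r '.results[0].result.tag | [.classes, .probs] | transpose[] | select(.[0] == "nsfw") | .[1]')

echo "$NSFW" | awk '{ if ($1 > 0.85) print "most likely not safe for work"; \
    else if ($1 < 0.15) print "most likely safe for work"; \
    else print "uncertain" }'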

Weddings

The 'Wedding' model 'knows' all about weddings including brides, grooms, dresses, flowers, etc.

Model: weddings-v1.0

Example tags: bride, groom, love, ceremony, first dance

curl "https://api.clarifai.com/v1/tag/?model=weddings-v1.0&url=https://samples.clarifai.com/wedding.jpg" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "meta": {
      "tag": {
        "timestamp": 1464278116.79131,
        "model": "weddings-v1.0",
        "config": "bb3948d3a928ed3bd76164d10f738b0e"
      }
    },
    "results": [
      {
        "docid": 16802754379932988827,
        "url": "https://samples.clarifai.com/wedding.jpg",
        "status_code": "OK",
        "status_msg": "OK",
        "local_id": "",
        "result": {
          "tag": {
            "classes": [
              "bride",
              "groom",
              "love",
              "ceremony",
              "first dance",
              "dress",
              "bride and groom",
              "church",
              "vows",
              "bouquet",
              "kiss",
              "wedding party",
              "reception",
              "romance",
              "just married",
              "dancing",
              "groomsmen",
              "vera wang",
              "wedding dresses",
              "romantic"
            ],
            "probs": [
              0.9775932431221008,
              0.9504517316818237,
              0.9231191873550415,
              0.8935598134994507,
              0.883521556854248,
              0.857530951499939,
              0.8522061109542847,
              0.8438937067985535,
              0.8247197270393372,
              0.8161381483078003,
              0.7881923913955688,
              0.7392973899841309,
              0.7287921905517578,
              0.6711450815200806,
              0.6284631490707397,
              0.6148084402084351,
              0.6124113202095032,
              0.5590707063674927,
              0.5457624197006226,
              0.5376051068305969
            ]
          }
        },
        "docid_str": "2ef2c97f0599c50be92f6015700b759b"
      }
    ]
  }

Travel

The 'Travel' model analyzes images and videos and returns probability scores on the likelihood that the image or video contains a recognized travel related category.

The current model is designed to identify specific features of residential, hotel and travel related properties.

Model: travel-v1.0

Example tags: Outdoor Pool, Wellness & Spa, Building, Kids Area, Garden

curl "https://api.clarifai.com/v1/tag/?model=travel-v1.0&url=https://samples.clarifai.com/travel.jpg" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "meta": {
      "tag": {
        "timestamp": 1459962071.155607,
        "model": "travel-v1.0",
        "config": "bf3d57e649814e2f2517cbf5a4ab8213"
      }
    },
    "results": [
      {
        "docid": 11771678958020756014,
        "url": "https://samples.clarifai.com/travel.jpg",
        "status_code": "OK",
        "status_msg": "OK",
        "local_id": "",
        "result": {
          "tag": {
            "classes": [
              "Outdoor Pool",
              "Wellness & Spa",
              "Building",
              "Kids Area",
              "Garden",
              "Sports & Activities",
              "Water Park",
              "Summer",
              "Hot Tub",
              "Bar",
              "Yoga",
              "Restaurant",
              "Bedroom",
              "Water Sports",
              "Indoor Swimming Pool",
              "Casino",
              "Terrace",
              "Communal Areas",
              "Bathroom",
              "Animals"
            ],
            "probs": [
              0.9832053780555725,
              0.629555344581604,
              0.3578774631023407,
              0.34257540106773376,
              0.22918906807899475,
              0.21644937992095947,
              0.1435081958770752,
              0.14164696633815765,
              0.10924047976732254,
              0.07004175335168839,
              0.06550466269254684,
              0.06409401446580887,
              0.059375178068876266,
              0.05148367956280708,
              0.045027755200862885,
              0.024884479120373726,
              0.024330025538802147,
              0.023548241704702377,
              0.022626161575317383,
              0.020881962031126022
            ]
          }
        },
        "docid_str": "811e52d05a72f088a35d67a4aec4822e"
      }
    ]
  }

Food

The 'Food' model analyzes images and videos and returns probability scores on the likelihood that the image or video contains recognized food ingredients and dishes.

The current model is designed to identify specific food items and visible ingredients.

Model: food-items-v1.0

Example tags: sauce, pasta, basil, penne, meat

curl "https://api.clarifai.com/v1/tag/?model=food-items-v1.0&url=https://samples.clarifai.com/food.jpg" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "meta": {
      "tag": {
        "timestamp": 1458139563.342688,
        "model": "food-items-v1.0",
        "config": "7e6dcf940ad82fed35ce9ca811ba5553"
      }
    },
    "results": [
      {
        "docid": 17816260044645749858,
        "url": "https://samples.clarifai.com/food.jpg",
        "status_code": "OK",
        "status_msg": "OK",
        "local_id": "",
        "result": {
          "tag": {
            "classes": [
              "sauce",
              "pasta",
              "basil",
              "penne",
              "meat",
              "beef",
              "spaghetti",
              "tomato",
              "cheese",
              "macaroni",
              "vegetable",
              "meat sauce",
              "sausage",
              "spaghetti bolognese",
              "tomato sauce",
              "pork",
              "pasta sauce",
              "garlic",
              "tagliatelle",
              "carbohydrate"
            ],
            "probs": [
              0.9986368417739868,
              0.9962599277496338,
              0.9794590473175049,
              0.975263237953186,
              0.9743865728378296,
              0.9702389240264893,
              0.9645417928695679,
              0.9477837085723877,
              0.8919730186462402,
              0.8738347291946411,
              0.8726477026939392,
              0.8595116138458252,
              0.6607301235198975,
              0.6438579559326172,
              0.6094018220901489,
              0.5601148009300232,
              0.4660542905330658,
              0.43789294362068176,
              0.4292607307434082,
              0.4183799624443054
            ]
          }
        },
        "docid_str": "263544ced03c3fc7f740121db3185462"
      }
    ]
  }

Languages

By default, this API call returns tags in English. You can change this default setting on the applications page or pass a language parameter with each call, like so:

curl "https://api.clarifai.com/v1/tag/?language=es&model=general-v1.3&url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "meta": {
      "tag": {
        "timestamp": 1464278199.287468,
        "model": "general-v1.3",
        "config": "34fb1111b4d5f67cf1b8665ebc603704"
      }
    },
    "results": [
      {
        "docid": 17763255747558799694,
        "url": "https://samples.clarifai.com/metro-north.jpg",
        "status_code": "OK",
        "status_msg": "OK",
        "local_id": "",
        "result": {
          "tag": {
            "concept_ids": [
              "ai_HLmqFqBf",
              "ai_fvlBqXZR",
              "ai_Xxjc3MhT",
              "ai_6kTjGfF6",
              "ai_RRXLczch",
              "ai_VRmbGVWh",
              "ai_SHNDcmJ3",
              "ai_jlb9q33b",
              "ai_46lGZ4Gm",
              "ai_tr0MBp64",
              "ai_l4WckcJN",
              "ai_2gkfMDsM",
              "ai_CpFBRWzD",
              "ai_786Zr311",
              "ai_6lhccv44",
              "ai_971KsJkn",
              "ai_WBQfVV0p",
              "ai_dSCKh8xv",
              "ai_TZ3C79C6",
              "ai_VSVscs9k"
            ],
            "classes": [
              "tren",
              "línea ferroviaria)",
              "transporte",
              "estación",
              "entrenar",
              "viajar",
              "tubo (tren)",
              "de cercanías",
              "vía férrea",
              "tráfico",
              "fuzz (representación)",
              "plataforma",
              "espacio urbano",
              "Ninguna persona",
              "negocio",
              "pista de carreras",
              "ciudad",
              "rápido",
              "carretera",
              "depot (estación)"
            ],
            "probs": [
              0.9989112019538879,
              0.9975532293319702,
              0.9959157705307007,
              0.9925730228424072,
              0.9925559759140015,
              0.9878921508789062,
              0.9816359281539917,
              0.9712483286857605,
              0.9690325260162354,
              0.9687051773071289,
              0.9667078256607056,
              0.9624242782592773,
              0.960752010345459,
              0.9586490392684937,
              0.9572030305862427,
              0.9494642019271851,
              0.940894365310669,
              0.9399334192276001,
              0.9312160611152649,
              0.9230834245681763
            ]
          }
        },
        "docid_str": "76961bb1ddae0e82f683c2fd17a8794e"
      }
    ]
  }

Important: If you use a language other than English, you must make sure the model you are using is general-v1.3.

Learn more about what languages the API supports.

Select Classes

If you'd like to get the probability of a certain tag or tags, you can specify them in the request using the select_classes parameter. Multiple tags should be comma-separated.

curl "https://api.clarifai.com/v1/tag/?select_classes=sky,snow&url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "meta": {
      "tag": {
        "timestamp": 1464278319.588369,
        "model": "general-v1.3",
        "config": "34fb1111b4d5f67cf1b8665ebc603704"
      }
    },
    "results": [
      {
        "docid": 17763255747558799694,
        "url": "https://samples.clarifai.com/metro-north.jpg",
        "status_code": "OK",
        "status_msg": "OK",
        "local_id": "",
        "result": {
          "tag": {
            "concept_ids": [
              "ai_lNsKfmXb",
              "ai_l09WQRHT"
            ],
            "classes": [
              "sky",
              "snow"
            ],
            "probs": [
              0.7910971641540527,
              0.06230001896619797
            ]
          }
        },
        "docid_str": "76961bb1ddae0e82f683c2fd17a8794e"
      }
    ]
  }

Feedback

https://api.clarifai.com/v1/feedback/

The feedback endpoint provides the ability to give feedback to the API about images and videos that were previously tagged. This is typically used to correct errors made by our deep learning platform. Each piece of feedback helps our system learn better. Please try and provide feedback whenever you see errors.

You must provide at least one parameter to identify the image or video you are providing feedback on. This can either be the docid that was returned in the original tag response or the URL of your image or video. The feedback endpoint also accepts a list of docids or a list of urls to provide feedback for multiple images and videos simultaneously.

We highly encourage integrating feedback into your application as this dramatically improves the system over time. You or users of your application can correct errors. For example, if the system returned cat for an image of a dog, you can send feedback with add_tags="dog", remove_tags="cat".
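
For example, a sketch of the dog/cat correction described above, keyed on a stored docid_str (the docid value is the placeholder already used elsewhere in this guide, and sending add_tags and remove_tags together in one request is assumed from the sentence above rather than confirmed):

# Hypothetical correction: replace the docid below with the docid_str you
# stored when the image was first tagged.
curl "https://api.clarifai.com/v1/feedback/" \
    -X POST -d "docids=78c742b9dee940c8cf2a06f860025141" \
    -d "add_tags=dog" \
    -d "remove_tags=cat" \
    -H "Authorization: Bearer {access_token}"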

The feedback endpoint does not count against your usage limits. Please use it as much as you can.

Please note: Do not abuse this endpoint. If you provide poor quality or incorrect feedback, your account may be put on hold or terminated as it can affect the service for other users.

Add Tags

If you'd like to add additional tags that are relevant to the given image or video, they can be provided with an add_tags parameter. add_tags accepts a comma-separated list of tags in UTF-8 encoding. You can provide a url or docids parameter to specify the image or video.

curl "https://api.clarifai.com/v1/feedback/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \
    -d "add_tags=train station,railway station"  \
    -H "Authorization: Bearer {access_token}"

Remove Tags

If you believe some tags are not relevant to the given image or video, you can provide feedback using the remove_tags parameter. remove_tags accepts a comma-separated list of tags in UTF-8 encoding. You can provide a url or docids parameter to specify the image or video.

curl "https://api.clarifai.com/v1/feedback/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \
    -d "remove_tags=sky,clean,red"  \
    -H "Authorization: Bearer {access_token}"

Similar

If there is a notion of similarity between images or videos, this can be fed back to the system by providing an input url or docids and a comma-separated list of urls or docids that are similar to the input.

curl "https://api.clarifai.com/v1/feedback/" \
    -X POST -d "docids=78c742b9dee940c8cf2a06f860025141" \
    -d "similar_docids=78c742b9dee940c8cf2a06f860025141,5849206sdee940c8cf2a06f8792384059"  \
    -H "Authorization: Bearer {access_token}"

Dissimilar

If there is a notion of dissimilarity between images or videos, this can be fed back to the system by providing an input url or docids and a comma-separated list of urls or docids that are dissimilar to the input.

curl "https://api.clarifai.com/v1/feedback/" \
    -X POST -d "docids=78c742b9dee940c8cf2a06f860025141" \
    -d "dissimilar_docids=acd57ec10abcc0f4507475827626785f"  \
    -H "Authorization: Bearer {access_token}"

Search Clicks

This is useful when showing search results in response to a query, to provide feedback that the search result was relevant to the query. The url or docid parameter identifies the image or video that was clicked, and the search_click parameter is a comma-separated list of search terms that generated the search result.

curl "https://api.clarifai.com/v1/feedback/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg"  \
    -d "search_click=train" \
    -H "Authorization: Bearer {access_token}"

Color

Please note that this endpoint is currently in beta testing and may change at any time.

https://api.clarifai.com/v1/color

The color endpoint is used to retrieve the dominant colors present in your images or videos. Color values are returned in the hex format. A density value is also returned to let you know how much of the color is present. In addition, colors are also mapped to their closest W3C counterparts.

The response for Color returns 1-8 colors with density values that sum to 1.0.

curl "https://api.clarifai.com/v1/color/?url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"
curl "https://api.clarifai.com/v1/color/" \
    -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "results": [
      {
        "docid": 15512461224882631443,
        "url": "https://samples.clarifai.com/metro-north.jpg",
        "docid_str": "31fdb2316ff87fb5d747554ba5267313",
        "colors": [
          {
            "w3c": {
              "hex": "#696969",
              "name": "DimGray"
            },
            "hex": "#513f2c",
            "density": 0.14725
          },
          {
            "w3c": {
              "hex": "#6495ed",
              "name": "CornflowerBlue"
            },
            "hex": "#7298e2",
            "density": 0.31575
          }
        ]
      }
    ]
  }
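
A sketch of summarizing the dominant colors from a response like the one above with jq (assumed to be installed):

# List each dominant color's W3C name, hex value and density for the first result.
curl -s "https://api.clarifai.com/v1/color/?url=https://samples.clarifai.com/metro-north.jpg" \
    -H "Authorization: Bearer {access_token}" \
  | jq -r '.results[0].colors[] | "\(.w3c.name)\t\(.hex)\t\(.density)"'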

Info

  https://api.clarifai.com/v1/info

The info endpoint returns the current API details as well as any usage limits your account has.

curl "https://api.clarifai.com/v1/info" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "results": {
      "max_image_size": 100000,
      "default_language": "en",
      "max_video_size": 100000,
      "max_image_bytes": 10485760,
      "min_image_size": 1,
      "default_model": "general-v1.3",
      "max_video_bytes": 104857600,
      "max_video_duration": 1800,
      "max_batch_size": 128,
      "max_video_batch_size": 1,
      "min_video_size": 1,
      "api_version": 0.1
    }
  }

Component             Description
default_language      The language returned when no language parameter is sent as part of the request. This can be set for each application on the application page.
default_model         The model used when no model parameter is sent as part of the request. This can be set for each application on the application page.
api_version           Echoes back the API version as set by the relative path prefix in the URL.
max_batch_size        The maximum number of images allowed in one batch request.
max_image_size        The maximum allowed image size (on the minimum dimension). Any images whose minimum dimension (width or height) is greater than this limit will not be processed.
min_image_size        The minimum allowed image size (on the minimum dimension). Any images whose minimum dimension (width or height) is less than this limit will not be processed.
max_image_bytes       The maximum allowed image size in bytes. Any images that exceed this limit will not be processed.
max_video_batch_size  The maximum number of videos allowed in one batch request.
max_video_size        The maximum allowed video size (on the minimum dimension). Any videos whose minimum dimension is greater than this limit will not be processed.
min_video_size        The minimum allowed video size (on the minimum dimension). Any videos whose minimum dimension is less than this limit will not be processed.
max_video_bytes       The maximum allowed video size in bytes. Any videos that exceed this limit will not be processed.
max_video_duration    The maximum allowed video duration in seconds. Any videos that exceed this limit will not be processed.

Languages

  https://api.clarifai.com/v1/info/languages

The info/languages endpoint returns all the languages that the tag API call supports.

Important: If you use a language other than English, you must make sure the model you are using is general-v1.3.

curl "https://api.clarifai.com/v1/info/languages" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "languages": {
      "en": "English (en)",
      "zh": "Chinese Simplified (zh)",
      "it": "Italian (it)",
      "ar": "Arabic (ar)",
      "es": "Spanish (es)",
      "ru": "Russian (ru)",
      "nl": "Dutch (nl)",
      "pt": "Portuguese (pt)",
      "no": "Norwegian (no)",
      "tr": "Turkish (tr)",
      "pa": "Punjabi (pa)",
      "pl": "Polish (pl)",
      "fr": "French (fr)",
      "bn": "Bengali (bn)",
      "de": "German (de)",
      "da": "Danish (da)",
      "hi": "Hindi (hi)",
      "fi": "Finnish (fi)",
      "hu": "Hungarian (hu)",
      "ja": "Japanese (ja)",
      "zh-TW": "Chinese Traditional (zh-TW)",
      "ko": "Korean (ko)",
      "sv": "Swedish (sv)"
    },
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. "
  }

Usage

  https://api.clarifai.com/v1/usage

The usage endpoint returns your API usage for the current month and hour.

curl "https://api.clarifai.com/v1/usage" \
    -H "Authorization: Bearer {access_token}"

Response:
  {
    "status_code": "OK",
    "status_msg": "All images in request have completed successfully. ",
    "results": {
      "user_throttles": [
        {
          "name": "hourly",
          "consumed": 0,
          "consumed_percentage": 0,
          "limit": 1000,
          "units": "per hour",
          "wait": 3.396084081
        },
        {
          "name": "monthly",
          "consumed": 2,
          "consumed_percentage": 0,
          "limit": 5000,
          "units": "per month",
          "wait": 452.3001357252901
        }
      ],
      "app_throttles": {}
    }
  }

Component            Description
name                 The current interval (hourly or monthly).
consumed             How many units you have currently consumed.
consumed_percentage  The percentage of consumed / limit.
limit                The maximum number of units you can consume per month or hour. This can be changed by modifying your current plan.
units                A short sentence similar to name.
wait                 Time in seconds until you should make a new request. If you're under the limit for the current interval, it's a suggestion to evenly space out requests over the interval. If you're over the limit, it's the required wait time till the limit resets.
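
A minimal sketch of honoring the wait value between requests, assuming jq is available:

# Read the hourly throttle's suggested wait and sleep for that long before the
# next call (GNU sleep accepts fractional seconds).
WAIT=$(curl -s "https://api.clarifai.com/v1/usage" \
    -H "Authorization: Bearer {access_token}" \
  | jq -r '.results.user_throttles[] | select(.name == "hourly") | .wait')
sleep "$WAIT"
# ...make the next API call here...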