We work every day to improve features of our platform and build more public models for developers to use. Here’s what we’ve been up to!
- Platform Mobile SDK (Limited Preview): We have released Clarifai’s Mobile SDK, which enables machine learning directly on your device, bypassing the traditional requirements of internet connectivity and massive computing power. To gain access, please fill out the form on our Mobile SDK page. You will need a Clarifai account to request access to the SDK.
- Platform Model Evaluation (Beta): We just added a Model Evaluation tool to Custom Training! This feature lets you test the performance of your custom-trained model before using it in production. The tool is currently available on the Preview UI only. Learn more about model evaluation in our documentation guide!
- Platform API Keys: Developers can now authorize their API calls through API keys. These keys support finer-grained scopes, enabling developers to create a "predict-only" or "search-only" key, restricting unauthorized API calls and making their applications more secure. Keys can be accessed from the Developer Hub, and more details can be found in our Guide. We also wrote a blog post about why we introduced API keys to our platform!
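As a minimal sketch of what a keyed request looks like, here is a predict call authorized with an API key, assuming the v2 endpoint URL and the `Authorization: Key ...` header scheme documented in the Guide of this era (the key and model ID below are hypothetical placeholders):

```python
import json
import urllib.request

API_KEY = "a1b2c3d4e5f6"      # hypothetical key; create yours in the Developer Hub
MODEL_ID = "general-v1.3"     # hypothetical model ID for illustration

# Request body for a single image input.
payload = {"inputs": [{"data": {"image": {"url": "https://example.com/photo.jpg"}}}]}

# API keys are sent in an Authorization header with the "Key" scheme.
req = urllib.request.Request(
    url=f"https://api.clarifai.com/v2/models/{MODEL_ID}/outputs",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Key {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# Sending is omitted here; urllib.request.urlopen(req) would issue the call.
```

A key scoped to predict-only would succeed on this endpoint but be rejected on search endpoints, which is the point of the finer-grained scopes.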
- Platform Predict Parameters: We are enabling developers to customize their predict requests to receive exactly what they need in the response. We have introduced three capabilities:
- Maximum Concepts: allows users to customize the number of concepts they receive back in the response.
- Minimum Value: allows users to specify the minimum prediction value of a concept to be shown in the response.
- Select Concepts: allows users to specify exactly which concepts they want to see in the response.
Details on how to use these features can be found in our Guide.
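The three capabilities above can be sketched as request-body fragments. The field names (`max_concepts`, `min_value`, `select_concepts`) and their location under `model.output_info.output_config` follow the Guide of this era and should be treated as assumptions; consult the Guide for which parameters may be combined in a single request:

```python
# One output_config fragment per capability.
max_concepts_config = {"max_concepts": 3}                       # Maximum Concepts
min_value_config = {"min_value": 0.9}                           # Minimum Value
select_concepts_config = {                                      # Select Concepts
    "select_concepts": [{"name": "dog"}, {"name": "cat"}]
}

def predict_body(image_url, output_config):
    """Assemble a v2 predict request body carrying the given output_config."""
    return {
        "inputs": [{"data": {"image": {"url": image_url}}}],
        "model": {"output_info": {"output_config": output_config}},
    }

# e.g. ask for at most 3 concepts back for this image:
body = predict_body("https://example.com/dog.jpg", max_concepts_config)
```

With `max_concepts_config`, the response would contain at most three concepts; with `min_value_config`, only concepts predicted at 0.9 or higher.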
- Platform Video Support in v2 API: now in private beta; developers can request access to the API, which allows them to make predict calls with videos as inputs.
- Platform Geo Search: allows developers to add location metadata (longitude, latitude) to inputs and perform a search within a bounding geographic region. See our docs for full details.
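A minimal sketch of the geo search flow follows. The field names (`geo`, `geo_point`, `geo_box`) and the two-corner bounding-box shape are assumptions based on the docs of this era, and the coordinates and URL are illustrative only:

```python
# 1) Attach location metadata to an input when adding it.
add_input_body = {
    "inputs": [{
        "data": {
            "image": {"url": "https://example.com/eiffel.jpg"},
            "geo": {"geo_point": {"longitude": 2.29, "latitude": 48.86}},
        }
    }]
}

# 2) Later, search for inputs inside a bounding geographic region,
#    defined here by two opposite corners of the box.
search_body = {
    "query": {"ands": [{
        "input": {"data": {"geo": {"geo_box": [
            {"geo_point": {"longitude": 2.20, "latitude": 48.80}},
            {"geo_point": {"longitude": 2.40, "latitude": 48.90}},
        ]}}}
    }]}
}
```

Any input whose stored geo point falls inside that box would match the search.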
- Models Focus Model: launched a new model that analyzes an image and returns 1) the overall focus value (the probability that there is an in-focus region within the image), and 2) a bounding box and focus density for every in-focus region within the image.
- Models Demographics Model: launched a new model that analyzes images and returns information on age, gender, and multicultural appearance for each detected face, based on facial characteristics.
- Models Logo Model: launched a new model that analyzes images and returns probability scores on the likelihood that the media contains the logos of over 500 recognized brand names.
- Models Model Gallery: introduced a new gallery to showcase all of our visual recognition models. You’ll find information about each of our models, view code documentation, and try them out through our demo.
- Platform Multi-language Support in v2 API: all of the languages that were available in our v1 API are now available in our v2 API! We support 22 languages other than English for our Predict calls.
- Models Face Detection Model: launched a new model that returns the probability that an image contains faces, as well as bounding box location coordinates.
- Models Apparel Model: this model understands various fashion and accessory items and is best for identifying clothing against a white backdrop, like in your favorite e-commerce stores.
- Models Celebrity Model: recognizes a wide assortment of famous people and public figures.
- Platform Custom Training (GA): finalized Custom Training and fixed bugs reported by testers.
- Platform Visual Search (GA): finalized Visual Search and fixed bugs reported by testers.
- Platform Custom Metadata: allows developers to add any custom information (for example, price or SKU) to data inputs. This custom information is also fully searchable, just like your images!
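As a sketch of how that works in practice, the fragments below attach custom metadata to an input and then search on it. The `metadata` field location and the `query`/`ands` search shape follow the docs of this era and should be treated as assumptions; the price and SKU values are made up:

```python
# 1) Add an input with arbitrary custom metadata alongside the image.
add_input_body = {
    "inputs": [{
        "data": {
            "image": {"url": "https://example.com/shoe.jpg"},
            "metadata": {"price": 49.99, "sku": "SHOE-123"},  # any custom fields
        }
    }]
}

# 2) The metadata is fully searchable: match inputs by a metadata field.
search_body = {
    "query": {"ands": [
        {"input": {"data": {"metadata": {"sku": "SHOE-123"}}}}
    ]}
}
```

This is what lets an e-commerce app round-trip its own identifiers (price, SKU) through the platform without a separate lookup table.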
- Platform Custom Training (Beta): allowed concepts to be added to or removed from models after creation, and allowed models to be created without providing a list of concepts.
- Platform Custom Training (Alpha): allows developers to build a visual recognition model in a matter of seconds using only a handful of data examples. Developers can tailor our visual recognition technology for their specific needs, with a few clicks.
- Platform Visual Search (Alpha): Visual Search lets developers easily perform search by tag, search by image, and search by a combination of images and tags.
- Models Food Model: you can start building incredible (and tasty) apps that recognize over a thousand types of food, down to the ingredient level!
- Platform Upgraded Demo: we launched a new demo to give people an easy and eye-pleasing way to test the tags on any image or video!
- Forevery Forevery integration with Google Drive: sync your photos stored on Google Drive with the Forevery app, so you can search and view all your photos in one place!
- Models Travel Model: Our new Travel image recognition model automatically identifies travel-related concepts in pictures and video and can be used to build and improve apps in the travel, leisure, and hospitality industries.
- Forevery Forevery integration with Dropbox: sync your photos stored in Dropbox with the Forevery app, and add our image recognition capabilities to your personal photos!
Many awesome features added prior to May 2016 haven’t been logged here.