Get started

Vidrovr provides full API solutions to embed video AI into your platform and products. In this tutorial we will detail two core components of our API: how to send us your video and how to get the video AI results back. Let’s take a quick tour of how to get your first video running in the Vidrovr machine learning pipeline.

Getting your API key

Before you get started, you need to obtain a private API key from our system.

  • You can receive a login to our platform and an API key directly by reaching out to our customer support team, or create an account for yourself on our dashboard page.
  • If you already have an account, you can log in to the Vidrovr dashboard. The API key will be displayed on the homepage.
  • If you have lost your API key or your login credentials, feel free to email us at contact@vidrovr.com to retrieve them.

Upload content

Now, select a file that you’d like to test. We accept all major video file containers and encodings; you can see the full list on the video upload explainer API page.

Upload with the RESTful API
You can use either your login credentials or your API key to start the upload. Here’s a sample using your login credentials.

curl -F upload_type='video' -F id=<USERNAME> -F password=<PASSWORD> \
     -F filename=<FILENAME> -F data=@video_file.mp4 \
     https://platform.vidrovr.com/upload/uploader
                

Sample response

{
  "msg":"you just have to play nice",
  "id":"c0e4641ed361f76e57bc679f383d9b49315ff97b-f889-4f3e-9efb-eafb7f00d43d",
  "resp":"upload_success"
}
                

Inside of this response:

  • id refers to a unique video asset id for the video that was uploaded to the Vidrovr system. This id can be used for metadata and asset retrieval across the whole Vidrovr platform, and it will not change.

Get results

After you upload your content, Vidrovr’s backend engine will start processing your video. Processing time varies with the size of your file; in general, a 30-minute, standard hi-res (1080p) video takes us roughly 10 minutes to prepare.

To get the metadata that we generated for your video, you can send a GET request to the URL below:

https://platform.vidrovr.com/public/api/v01/get_metadata?id=<ID>&api_key=<API-KEY>
                

In this API call, api_key is the private API key for your account from Step 1, and id is the video asset id returned after a successful content upload via our upload API in Step 2.
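If you are calling this endpoint from code, the query string can be assembled like this. The helper name is ours, not part of the Vidrovr API; only the endpoint path and the two parameters come from this page.

```python
from urllib.parse import urlencode

# Endpoint path taken from the docs above; the helper itself is illustrative.
BASE = "https://platform.vidrovr.com/public/api/v01/get_metadata"

def build_metadata_url(asset_id: str, api_key: str) -> str:
    """Fill in the id and api_key query parameters, URL-encoding both."""
    return f"{BASE}?{urlencode({'id': asset_id, 'api_key': api_key})}"

print(build_metadata_url("YOUR_ASSET_ID", "YOUR_API_KEY"))
```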

If the call returns a success, congratulations! You have just received a detailed understanding of what appears in your video, in the form of Vidrovr metadata.

Sample output

{
  "other_metadata":[...],
  "audio_words":[...],
  "name":"nat_geo_sharks_zone1.mp4",
  "tags":[...],
  "hashtags":[...],
  "scenes":[...],
  "id":"...",
  "keyphrases":[...],
  "on_screen_text":[...],
  "person_identification":[...],
  "creation_date":1500229993885,
  "thumbnail":"..."
}
                

Key properties in the response

Property | Type | Description
audio_words | array | The transcript generated from the audio within the video. Each detected word comes with a confidence score and a time of appearance.
name | string | The original file name provided during upload.
tags | array | A list of recognized concepts from your video. Each entry has a tags property (the name of the tag), a start timecode, an end timecode, and a confidence.
scenes | array | A list of recognized scenes from your video. Each entry has a scenes property (the name of the scene), a start timecode, an end timecode, and a confidence.
on_screen_text | array | A list of words that appear on screen. In addition to the standard start and end timecodes, each object includes a bounding box giving the word's position on a 640 by 360 resolution frame: x (x axis), y (y axis), w (width), h (height).
person_identification | array | A list of recognized people from your video. Each entry has a person_name property (the name of the person), a start timecode, an end timecode, and a confidence.
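To make the table concrete, here is a small, illustrative sketch of walking those properties in Python. The field names are taken from the sample responses on this page; real responses may carry more fields.

```python
# Flatten the transcript into (word, start, end) tuples.
def words_with_times(metadata: dict) -> list:
    return [(w["word"], w["start"], w["end"])
            for w in metadata.get("audio_words", [])]

# Collect (text, x, y, w, h) boxes from on_screen_text
# (coordinates are on a 640x360 frame, per the table above).
def onscreen_boxes(metadata: dict) -> list:
    return [(t["ocr_string"], t["x"], t["y"], t["w"], t["h"])
            for t in metadata.get("on_screen_text", [])]

sample = {
    "audio_words": [{"start": 300.0, "end": 300.0, "word": "I"}],
    "on_screen_text": [{"ocr_string": "H", "start": 1753.0, "end": 2122.0,
                        "x": 358, "y": 320, "w": 13, "h": 11}],
}
print(words_with_times(sample))  # [('I', 300.0, 300.0)]
print(onscreen_boxes(sample))    # [('H', 358, 320, 13, 11)]
```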

More api calls

Our full list of available API calls can be found on the Vidrovr API page.

On this site, we also go into detail around some of the concepts covered on this page.

  • Upload Content provides the details of how to use our RESTful upload method, HTTP Live Streaming (HLS) upload, and webhooks interface to upload content.
  • Get Results offers explanations of ways to get results back from the system. These methods include:
    get_metadata, the essential method to get full metadata for a given video id.
    get_video_list, which provides a list of the videos in your account in chronological order.
    search, which offers a way to find relevant videos in your account based on their metadata.

Use our dashboard

Vidrovr also provides a free dashboard with a user interface for uploading content; inspecting, searching through, and editing metadata; and configuring third-party connections and custom model tasks.
For more information and instructions on how to use the dashboard, please follow this link.

Understand your api limits

Your API calls are subject to operation limits, which can be seen in the dashboard. Feel free to reach out to us if you have exceeded your quota.

Uploading

Getting started with RESTful uploading

The easiest way to get a video or audio file into Vidrovr is to use our uploader endpoint. Simply curl a file there and it will be processed into the system. Please note that all parameters are passed as form data parameters, and we currently accept audio and video as upload types. Our uploading infrastructure supports the following formats:

  • Smooth Streaming using an mp4 container to house H.264 video and AAC audio
  • MPEG-DASH using an fmp4 container to house H.264 video and AAC audio
  • XDCAM using MXF container using MPEG-2 video and PCM audio
  • MP4 container with H.264 video and AAC or MP3 audio
  • WebM container with VP9 video and Vorbis audio
  • WebM container with VP8 video and Vorbis audio
  • FLV container with H.264 video and AAC or MP3 audio
  • MPG container with MPEG-2 video and MP2 audio
  • MP3 container with MP3 audio
  • MP4 container with AAC audio
  • OGG container with Vorbis or FLAC audio
  • OGA container with FLAC audio
  • FLAC container with FLAC audio
  • WAV container with PCM audio
  • GIF
  • AVI
  • Vob
  • WMV
  • MPG
  • MOV (except ProRes encoding)
  • HLS using an MPEG-2 TS container to house H.264 video and AAC or MP3 audio

Here is an example shell request:

curl -X POST "https://dev.vidrovr.com/upload/uploader" \
     -F "id=example_user" \
     -F "password=$password" \
     -F "filename=FileName" \
     -F "data=@$data" \
     -F "api_key=2d68d9e17625bc233c1db9f8d5b427a0"
                

Example of a successful response:

{
  "msg":"you just have to play nice",
  "id":"c0e4641ed361f76e57bc679f383d9b49315ff97b-f889-4f3e-9efb-eafb7f00d43d",
  "resp":"upload_success"
}
                

Note: The id refers to a unique video asset id for the video that was uploaded to the Vidrovr system. This id can be used for metadata and asset retrieval across the whole Vidrovr platform, and it will not change.
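If you would rather not shell out to curl, the same multipart/form-data body can be built with nothing but the Python standard library. This is a sketch, not an official client: the encoder below is generic, and only the field names mirror the curl example above.

```python
import uuid

def encode_multipart(fields: dict, file_field: str, filename: str, file_bytes: bytes):
    """Hand-roll a multipart/form-data body; returns (body, content_type)."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"',
                  "", str(value)]
    lines += [f"--{boundary}",
              f'Content-Disposition: form-data; name="{file_field}"; filename="{filename}"',
              "Content-Type: application/octet-stream", ""]
    body = ("\r\n".join(lines) + "\r\n").encode() + file_bytes \
           + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

fields = {"id": "example_user", "password": "YOUR_PASSWORD",
          "filename": "video_file.mp4", "api_key": "YOUR_API_KEY"}
body, ctype = encode_multipart(fields, "data", "video_file.mp4", b"...file bytes...")
# POST `body` to https://platform.vidrovr.com/upload/uploader via
# urllib.request.Request(url, data=body, headers={"Content-Type": ctype})
```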

Getting started with webhooks uploading

Vidrovr has implemented a webhooks interface for programmatically managing the upload and retrieval of metadata from our system in an asynchronous way.

If after reading this you are still having difficulties getting it to work, feel free to reach out to the team and they will be more than willing to help! The best way to do this is to email support@vidrovr.com

So here goes – hang on this should not be a bumpy ride :-)

General info

The webhook upload serves as an endpoint for programmatic uploading, where notification requests are made by the Vidrovr service once an asset has completed processing and all of the metadata associated with a video has been generated.

The method takes a json data body and parses it in order to understand the requested parameters.

Please note, this method does not require any specific file to be explicitly uploaded to a Vidrovr endpoint; simply point Vidrovr to the correct publicly available download_url and we will take care of the download for you. As such, the file you are pointing to needs to be accessible from the general web.

The overall webhook flow looks like this:


Webhook API Flow

The first POST request sent in the flow provides Vidrovr with the information necessary to pull down your asset and feed it into the processing engine.
What you will need is:

  1. API-KEY
  2. The video to be publicly available and accessible for download. If you don't want the video to be publicly available, this is not a problem; we can work with you to whitelist our infrastructure for your video. Just reach out to support@vidrovr.com.

Please note you will need to provide part of your request information as a query parameter and part as a json body object. The query parameter portion will contain the api_key:

https://platform.vidrovr.com/public2/async/v01/webhooks/upload_request?api_key=<API-KEY>

You also need to provide data as part of the POST request in the form of a json data object:

{
  "name":"TEST",
  "download_url":"http://s3.amazonaws.com/vidrovr-test-bucket/vid_xsmall.mp4",
  "notification_receipt_url":"http://callback_receipt_url.com",
  "notification_complete_url":"http://callback_complete_url.com"
}
                

There are four parameters:

Parameter | Description | Required
name | The name of the file you are uploading; this will be stored in our infrastructure and used as the name of the video moving forward. | False
download_url | The publicly available url for the video file. | True
notification_receipt_url | The url Vidrovr should notify once the video has been successfully uploaded to our platform. | False
notification_complete_url | The url Vidrovr should notify once the video has been successfully processed by the system and the metadata has been generated. | False
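The JSON body described in the table can be assembled and sanity-checked in a few lines. The helper and its validation are ours, not part of the Vidrovr API; only the four field names come from the table above.

```python
import json

def make_upload_body(download_url, name=None, receipt_url=None, complete_url=None):
    """Build the webhook upload_request body; only download_url is required."""
    if not download_url:
        raise ValueError("download_url is required")
    body = {"download_url": download_url}
    if name:
        body["name"] = name
    if receipt_url:
        body["notification_receipt_url"] = receipt_url
    if complete_url:
        body["notification_complete_url"] = complete_url
    return json.dumps(body)

payload = make_upload_body(
    "http://s3.amazonaws.com/vidrovr-test-bucket/vid_xsmall.mp4",
    name="TEST",
)
print(payload)
```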

Here is an example curl request to get you started:

curl -X POST -i 'https://platform.vidrovr.com/public2/async/v01/webhooks/upload_request?api_key=<API-KEY>'
     --data '{
        "name":"TEST",
        "download_url":"http://s3.amazonaws.com/vidrovr-test-bucket/vid_xsmall.mp4",
        "notification_receipt_url":"http://callback_receipt_url.com",
        "notification_complete_url":"http://callback_complete_url.com"
      }'
                

If everything works successfully you should receive a 200 response code with a complete message.

You will later receive a receipt notification once the asset has been successfully uploaded to the system (our server will send a message to notification_receipt_url). This will include the id of the asset:

{
  "received": true,
  "begun_processing": true,
  "name": "TEST",
  "url": "http://s3.amazonaws.com/vidrovr-test-bucket/vid_xsmall.mp4",
  "metadata_url": "",
  "id_asset": "60a2a3abea9e49ba2c3b01da30a03eb421951ccf-ed1b-46e7-bb21-55ffa8e02459"
}
                

The url parameter is the video source url where the asset was pulled from.

Once the video asset is processed, our server will send a message to notification_complete_url. It will include the metadata url as part of the object, in the metadata_url field:

{
  "received": true,
  "begun_processing": true,
  "name": "TEST",
  "url": "http://s3.amazonaws.com/vidrovr-test-bucket/vid_xsmall.mp4",
  "metadata_url": "https://production.vidrovr.com/public/api/v01/get_metadata?id=60a2a3ccea9e493a2c3b01da20a03eb421951ccf-ed1b-46e7-bb21-55ffa8e02459&api_key=<API-KEY>&confidence_scores=true&encoding=ascii",
  "id_asset": "60a2a3abea9e49ba2c3b01da30a03eb421951ccf-ed1b-46e7-bb21-55ffa8e02459"
}
                

And this completes the journey. Hopefully you followed along and it helped to better understand how to use Vidrovr’s webhooks interface. If you are still having difficulties feel free to shoot us a message at support@vidrovr.com
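The receiving side of the two callbacks above can be sketched with the standard library alone. This is a hypothetical receiver: the payload fields (id_asset, metadata_url) come from the sample bodies in this walkthrough, while the port and handler wiring are assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_notification(raw: bytes) -> dict:
    """Extract the interesting fields from a Vidrovr callback body."""
    payload = json.loads(raw)
    metadata_url = payload.get("metadata_url") or None  # empty string on receipt
    return {
        "id_asset": payload.get("id_asset"),
        "metadata_url": metadata_url,
        "complete": metadata_url is not None,
    }

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        info = parse_notification(self.rfile.read(length))
        print("asset:", info["id_asset"], "processing complete:", info["complete"])
        self.send_response(200)
        self.end_headers()

# To actually listen for callbacks (blocks forever):
# HTTPServer(("", 8080), NotificationHandler).serve_forever()
```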

Getting started with metadata retrieval and search

Let’s start off by saying that search is not metadata. Most companies approach the video search problem as: extract_metadata -> elasticsearch -> search.

At Vidrovr we believe this is fundamentally the wrong way to think about this problem, but in this walkthrough I will not dive into the details. This being said, we do provide our users with metadata; in fact, we provide loads of it, and we don’t sell it piecemeal! You will get everything we have.

To retrieve information about your data, you will need your API-KEY for most methods.

Getting a list of processed videos (get_video_list)

First things first: we need to find which videos we have processed. To do this, we simply call get_video_list. The only required parameter is the API-KEY.

An example request is:

GET /public/api/v01/get_video_list?api_key=<API-KEY>
HTTP/1.1
Content-Type: multipart/form-data; charset=utf-8; boundary=__X_PAW_BOUNDARY__
Host: platform.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest

And an example response would look like this:

[
  {
    "failed": false,
    "name": "test_webhook_video",
    "id_asset": "dcbce7c1dcdd8cda3ef5ff3f726dba23a2600596-6a2e-46c6-9277-317fed72503c",
    "creation_date": 1546980743013
  }
]
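Since creation_date is an epoch timestamp in milliseconds, a common first step client-side is converting it and ordering the list. A small sketch (the response shape is copied from the sample above; the helpers are illustrative):

```python
from datetime import datetime, timezone

videos = [
    {"failed": False, "name": "test_webhook_video",
     "id_asset": "dcbce7c1dcdd8cda3ef5ff3f726dba23a2600596-6a2e-46c6-9277-317fed72503c",
     "creation_date": 1546980743013},
]

def created_at(video: dict) -> datetime:
    """creation_date is epoch milliseconds, hence the / 1000."""
    return datetime.fromtimestamp(video["creation_date"] / 1000, tz=timezone.utc)

def sort_by_date(videos: list) -> list:
    """Oldest first, matching the chronological order the API describes."""
    return sorted(videos, key=lambda v: v["creation_date"])

for v in sort_by_date(videos):
    print(v["name"], created_at(v).isoformat())
```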

Filter videos by metadata (filter_videos_by)

The previous method gives you a list of all the videos we have processed, but you can filter that list by the presence or absence of a certain type of metadata. For example, you can get a list of videos with OCR data in them. The first thing you will need is the API-KEY. You will also need to pass a string parameter has_metadata with either a true or false value, to filter based on the presence or absence of metadata respectively. The last parameter needed is a comma-separated string filters specifying the types of metadata to use for filtering. Possible options are ‘ocr’, ‘person’, ‘tag’.
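Those three parameters can be turned into a query string like so. Note that urlencode percent-encodes the comma in filters (ocr%2Ctag), which is the standard encoding for a query value; the snippet itself is illustrative.

```python
from urllib.parse import urlencode

params = {
    "api_key": "YOUR_API_KEY",
    "has_metadata": "true",   # or "false" to invert the filter
    "filters": "ocr,tag",     # comma-separated: ocr, person, tag
}
query = urlencode(params)
url = "https://platform.vidrovr.com/public/api/v01/filter_videos_by?" + query
print(url)
```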

An example request is:

GET /public/api/v01/filter_videos_by?api_key=2d68d9e17625bc233c1db9f8d5b427a0&has_metadata=True&filters=ocr,tag HTTP/1.1
Content-Type: multipart/form-data; charset=utf-8; boundary=__X_PAW_BOUNDARY__
Host: frontend-staging.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest

And an example response would look like this:

{
  "ocr": [
    {
      "service_name": "vidrovr_uploader_service",
      "service_relative_file_location": "dan4/6abe00dbf5d50ec534a0eba7496a12d5879bb75c-b56c-4cba-94c4-530814c52264.mp4",
      "finished_all_processing": false,
      "id": 127298,
      "imported": -1,
      "file_location": "/home/ubuntu/uploaded_content/dan4/6abe00dbf5d50ec534a0eba7496a12d5879bb75c-b56c-4cba-94c4-530814c52264.mp4",
      "id_asset": "6abe00dbf5d50ec534a0eba7496a12d5879bb75c-b56c-4cba-94c4-530814c52264",
      "cc_extraction": -1,
      "failed": false,
      "shot_detection": -1,
      "output_message": "queued",
      "audio_diarization": -1,
      "ocr": -1,
      "thumbnail": "thumb.png",
      "service_mount_pnt": "/home/ubuntu/uploaded_content/",
      "shot_feature_extraction": -1,
      "ner_extraction": -1,
      "face_feature_extraction": -1,
      "owner_user_id": "458461eb5ed8840c8df9b38621d941471dbb51c5-9049-4284-990f-ee12e7e0c43c",
      "face_detection": -1,
      "creation_date": 1548273035365,
      "cc_asr_alignment": -1,
      "name": "test_webhook_video",
      "in_db": true,
      "transcription": -1,
      "speaker_identification": -1
    }
  ]
}

Getting metadata (get_metadata)

Once you have an asset id (id_asset) from the get_video_list request, get_metadata retrieves the metadata for a processed object. You will need the returned id parameter that is fed back either from the RESTful uploader upload/uploader or the webhooks uploader webhooks/upload_request.

Note: id == id_asset in get_video_list. This may change in the future, but for now this is how it is.

An example request is below:

GET /public/api/v01/get_metadata?api_key=<API-KEY>&id=<ID> HTTP/1.1
Host: platform.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest

And here is a truncated example response:

{
  "other_metadata": [],
  "audio_words": [
    {
      "start": 300.0,
      "end": 300.0,
      "word": "I"
    }
  ],
  "name": "nat_geo_sharks_zone1.mp4",
  "tags": [
    {
      "start": 108.0,
      "end": 263.0,
      "tags": "Bird Strike"
    }
  ],
  "hashtags": [],
  "scenes": [
    {
      "start": 0.0,
      "scene_tags": "Fishing Sports",
      "end": 108.0
    }
  ],
  "id": "8366a4497658ab14fd9234874ccbb0a4369ad47e-ff2e-493b-8b3f-60efffb116a3",
  "on_screen_text": [
    {
      "ocr_string": "H",
      "end": 2122.0,
      "h": 11,
      "start": 1753.0,
      "w": 13,
      "y": 320,
      "x": 358
    }
  ],
  "person_identification": [],
  "creation_date": 1500229993885,
  "thumbnail": "http://dev.vidrovr.com/public/api/v01/get_video_thumbnail/2d68d9e17625bc233c1db9f8d5b427a0/thumbnail/asset/8366a4497658ab14fd9234874ccbb0a4369ad47e-ff2e-493b-8b3f-60efffb116a3/0/0"
}

Note: other_metadata will contain information from your custom detectors.

Searching for data

Vidrovr has spent many years working on video search and retrieval technologies. Our search leverages various types of inference models and probabilistic graphs to intelligently expand a search, based on what people are inputting, by mapping it to Vidrovr’s internal knowledge graph.

That being said, the search method abstracts all of this logic away and returns results plus a neighborhood of our knowledge graph. The only required parameters are the API-KEY and the query.

Here is an example query:

GET /public/api/v01/search?api_key=<API-KEY>&query=west%20coast HTTP/1.1
Host: platform.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest

with a response that looks like:

{
  "ranking": {
    "persons_similarity": [
      {
        "person": "Tony_West",
        "word_match_score": 0.2509476309226933,
        "similar_people": {
          "video_count": null,
          "clip_count": null,
          "wiki_url": "https://en.wikipedia.org/wiki/Tony_West",
          "data_type": "CO_OCCURENCE",
          "similar_people": [
            {
              "score": 0.0,
              "name": "Claire Foy",
              "rank": 1
            }
          ],
          "mysql_id": 3386,
          "thumbnail": null
        }
      }
    ],
    "tag_similarity": [
      {
        "embedding_word": "town",
        "query_word": "west",
        "similar_tags": {
          "video_count": null,
          "clip_count": null,
          "embedding_rep": null,
          "data_type": "SEMANTIC_GLOVE",
          "similar_tags": [
            {
              "score": 0.8609558763002203,
              "name": "town",
              "rank": 1
            }
          ],
          "tag_category": null
        },
        "cosine_distance": 0.0006745313173820669
      },
      {
        "embedding_word": "coast",
        "query_word": "coast",
        "similar_tags": {
          "video_count": null,
          "clip_count": null,
          "embedding_rep": null,
          "data_type": "SEMANTIC_GLOVE",
          "similar_tags": [
            {
              "score": 0.6666745620989242,
              "name": "coast",
              "rank": 1
            }
          ],
          "tag_category": null
        },
        "cosine_distance": 0.0008288436350588459
      }
    ]
  },
  "results": [
    {
      "key_tags": [],
      "name": "nat_geo_sharks_zone1.mp4",
      "tags": [
        {
          "frame_end": 263.0,
          "frame_start": 108.0,
          "tag": "Bird Strike"
        }
      ],
      "hashtags": [],
      "scenes": [
        {
          "frame_end": 108.0,
          "frame_start": 0.0,
          "scene": "Fishing Sports"
        }
      ],
      "creation_date": 1500229993885,
      "on_screen_text": [
        {
          "frame_end": 2122.0,
          "text": "H",
          "w": 13,
          "x": 358,
          "y": 320,
          "frame_start": 1753.0,
          "h": 11
        }
      ],
      "score": 1.747759,
      "person_identification": [],
      "key_people": [],
      "audio_transcript": [
        {
          "transcript": " cruise business west coast no   service the kids call andy SU stop the sharks close to the bus andy is using the 6 camera virtual reality the capture of 360 degree view         scientist to great white males reach sexual maturity with a broken toe by 11 and a half by 13 feet links great white teenagers                   now find the great white mothers if he finds large females ground open check up on those big sharks on the bottom   once the camera settles into position indian video technician matt hutchings start to see some of the larger great white hello  "
        }
      ],
      "key_hashtags": [],
      "id": "8366a4497658ab14fd9234874ccbb0a4369ad47e-ff2e-493b-8b3f-60efffb116a3"
    }
  ]
}

The main query results are returned as part of the results array, whereas the knowledge graph (which we use to rank results) is returned in ranking. As part of the knowledge graph we return similar people and visual tag similarities to the query.
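If you only need a flat, score-ordered list out of the response, the results array can be reduced client-side. The shapes below are copied from the sample response; the helper itself is illustrative.

```python
def top_results(response: dict) -> list:
    """Return (name, score) pairs from the results array, best first."""
    return sorted(
        ((r["name"], r["score"]) for r in response.get("results", [])),
        key=lambda pair: pair[1],
        reverse=True,
    )

sample = {"results": [{"name": "nat_geo_sharks_zone1.mp4", "score": 1.747759}]}
print(top_results(sample))  # [('nat_geo_sharks_zone1.mp4', 1.747759)]
```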

If you are still having difficulties feel free to shoot us a message at support@vidrovr.com

Getting started with custom detectors

Over the last few years you have probably heard about artificial intelligence (AI) and machine learning (ML). AI and ML can be hard to implement, because you need lots of labeled data (for example, images with captions) for these solutions to learn from. Without properly annotated data, it's almost impossible to get meaningful value from these solutions. Thankfully, Vidrovr has developed a framework for folks who want to find concepts in their video libraries but don't have the labeled data to start training their own AI; we call this solution custom detectors. Before we dive in, I want to give you a little background on what exactly these detectors try to solve and how you can use them.

Effectively, custom detectors are concept classifiers. They learn what things like dogs, cats, rockets, and galaxies look like, and are then applied in our video processing engine to enable you to find anything and everything you are looking for.

The way you use it is simple: you type in what you want to find and Vidrovr figures it out. Behind the scenes Vidrovr uses its proprietary data and data collection algorithms to understand what your query means, and then uses our labeled data to train the custom detectors for you. Examples of the types of content we'll collect can be seen below; whether you want to find a 'hot dog' or a 'dog', Vidrovr will use the information it already knows, plus new information, to find examples for you.

[Image: collected examples for 'hot dog' vs. 'dog']

If you already have labeled examples of your own, we can use those to help train your custom detectors, but it's not necessary.

All this being said, learning any ML model is a statistics problem, so if our algorithms cannot find what you are looking for, or if the things you want to find are too generic, the detectors we build for you won't be great. Every time a detector is trained you will receive a performance score for each category (class) you want, and you will be able to select the categories you would like to apply to your set of videos sitting in Vidrovr.

Currently, we limit our custom detectors to 100 categories per detector.

If you are having difficulties feel free to shoot us a message at support@vidrovr.com

Without further ado let’s dive in and learn a bit about Vidrovr’s Custom Detectors.

Overview

Let us first begin by creating our first detector. For this example, let us create a detector for a hot dog vs. anything else; let’s say a puppy.

[Image: hot dog vs. puppy]



Setup a Custom Detector

The API method to begin this process is called create_detector.

Once again, to start this process we don’t need any labeled images to tell our system what the categories are. You simply need to send a POST request to our system with the following parameters:

Parameter | Description | Required
name | The name you wish to assign to the detector | yes
categories | A comma-separated list of the categories you would like to train a detector for | yes
api_key | The API-KEY required for your account | yes

Here is the request:

POST /public/api/v01/custom_detector/create_detector?api_key=<API-KEY>&name=test1&categories=hot%20dog,cat HTTP/1.1
    Host: platform.vidrovr.com
    Connection: close
    User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest
    Content-Length: 0

And if your request is a success then you should see this in the response.

{
  "status": 200,
  "added": true,
  "id_asset": "08bbd7aea3ec4505b0b2cd2af5bbf1c5"
}

Nice Job! This means you have created your first Custom Detector.

While the custom detector is being created, the detector has not yet been trained and cannot be applied to videos.

If you really want to use your own images/videos of hot dogs (who can blame you), this is your chance to upload them. We have a method for that: upload_image_custom_detector.

Note: If you don't want our system to build your model from our internal data stores, you need to provide at least 100 images/videos; otherwise our system will augment your data.

Here is what you need to upload an image/video:

Parameter | Description | Required
name | The name assigned to the detector | yes
id_asset | The unique id associated with the detector | yes
api_key | The API-KEY required for your account | yes
image | The image (sent as multipart form data) | yes
keyword | The category associated with this image | yes

And here is an example request:

POST /api/v01/custom_detector/upload_image_custom_detector?api_key=<API-KEY>&id_asset=08bbd7aea3ec4505b0b2cd2af5bbf1c5&keyword=hot%20dog HTTP/1.1
    Content-Type: multipart/form-data; charset=utf-8; boundary=__X_PAW_BOUNDARY__
    Host: platform.vidrovr.com
    Connection: close
    User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.1) GCDHTTPRequest
    Content-Length: 2432854

    --__X_PAW_BOUNDARY__
    Content-Disposition: form-data; name="image"; filename="magic_hot_dog.jpg"
    Content-Type: image/jpeg

and if it’s successful you should get:

{
  "status": 200,
  "added": true
}

That’s it! Now you have a detector that’s ready to go with your own data!

Training a Detector

So at this point you have a Custom Detector and maybe a bunch of uploaded images. Now all you need to do is start training the detector and sit back.

To do this you need to toggle the training state of the detector, either on or off. This is just a switch, so if you are not sure about the state of your detector, send a request to our get_detector endpoint. The request is a POST.

Parameter | Description | Required
api_key | The API-KEY required for your account | yes
id_asset | The unique id associated with the detector | yes

POST /api/v01/custom_detector/toggle_training?api_key=<API-KEY>&id_asset=08bbd7aea3ec4505b0b2cd2af5bbf1c5 HTTP/1.1
    Host: platform.vidrovr.com
    Connection: close
    User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest
    Content-Length: 0

with a response:

{
  "model": "08bbd7aea3ec4505b0b2cd2af5bbf1c5",
  "training": true
}

Note: When training is complete you will be notified via an email to your account's email address, so you don't need to keep pinging this endpoint.

Applying a Trained Detector

Once you have your detector trained, all you have to do is apply it, and every video you then upload to Vidrovr will have this custom detector run on it. We'll find all the hot dogs in your videos!

Here is how you apply a detector to videos in your account. There is one method for this:
apply_custom_detector

You will have the option of applying this detector to new videos, old videos or both. These are the options that you can set inside of the apply_targets parameter in the POST request.

And all you will need to pass in are the following:

Parameter | Description | Required
model_id_asset | Identifier for the custom detector, returned by create_detector | yes
apply_targets | Which videos to apply the detector to; one of ["new", "old", "both"]. Note: this parameter is required and the call will error if no option is passed | yes
api_key | The API-KEY required for your account | yes
black_list | The parameter you would use to black-list classes; a comma-separated string, e.g. "cat, dog" | no
performance_threshold | Black-lists all classes below a certain accuracy | no
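The query string for this call can be assembled from the table like so. The values are illustrative, and the two optional parameters can simply be omitted:

```python
from urllib.parse import urlencode

params = {
    "api_key": "YOUR_API_KEY",
    "model_id_asset": "08bbd7aea3ec4505b0b2cd2af5bbf1c5",  # from create_detector
    "apply_targets": "both",       # "new", "old", or "both"
    "black_list": "cat,dog",       # optional, comma-separated classes to skip
    "performance_threshold": 0.6,  # optional, drops low-accuracy classes
}
query = urlencode(params)
print("/api/v01/custom_detector/apply_custom_detector?" + query)
```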

Here is a request:

POST /api/v01/custom_detector/apply_custom_detector?api_key=<API-KEY>&model_id_asset=08bbd7aea3ec4505b0b2cd2af5bbf1c5&apply_targets=both&black_list=cat,%20dog&performance_threshold=0.6 HTTP/1.1
    Host: platform.vidrovr.com
    Connection: close
    User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest
    Content-Length: 0
    

THE END

That’s it – from now on you will be detecting when hot dogs or cats appear on screen in a video you upload!


If you are still having difficulties feel free to shoot us a message at support@vidrovr.com

Use Vidrovr Dashboard

The Vidrovr dashboard provides an easier way to manage your account, locate your videos, and set up custom detectors. You can start using the dashboard here.

Get your login credentials
Our sales team sends login credentials via email. If you have lost them, want to request a new account, or want to change your password, please contact us via email at contact@vidrovr.com.

Set up third-party connections

Vidrovr supports integrations with the following third-party video inventories.

  1. Amazon AWS
  2. JWPlayer
  3. Brightcove
  4. YouTube channel

Note: Currently, Vidrovr only supports one third party connection per account.

Amazon AWS
Currently, we only support public buckets on AWS. To set up a public bucket on Amazon S3, please follow these instructions.

  1. Now, open the Vidrovr dashboard
  2. On the home page, the second card in the middle will help you setup your third party connection if you haven’t done this before. Simply hover on the get started dropdown button, and select AWS S3 from the dropdown.
  3. In the modal, set up the name of your public AWS S3 bucket, and check the terms and conditions, then hit the Next button.

JWPlayer

  1. To connect to JW Player, simply navigate to the home page of the Vidrovr dashboard, open the third party dropdown and select JW Player.
  2. Then inside of the modal, you will need to fill in a key and a client secret which can both be found from your JW Player dashboard. You will also need a Cloud Player Library Url which can be found on the Tools page in the JW Player dashboard.

Brightcove

  1. To connect to a Brightcove account, simply navigate to the home page of the Vidrovr dashboard, open the third party dropdown and select Brightcove.
  2. Then inside of the modal, you will need to fill in a client id, client secret, account id and player id, which can all be found in your Brightcove dashboard.

YouTube channel

  1. To connect to a YouTube channel, simply navigate to the home page of the Vidrovr dashboard, open the third party dropdown and select YouTube.
  2. Then inside of the modal, you will need to fill in your API key and channel id.