We start by analyzing your representative sample data and delivering a proof of concept based on your goals and requirements. Together, we define and refine the annotation specifications, which determine the number and types of annotations for each image, frame, or other data type. We also structure the classification ontology to capture the appropriate object relationships.
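For illustration, such an ontology can be expressed as a small data structure mapping label classes to their attributes and parent classes. The sketch below is a hypothetical example; the class names, attributes, and hierarchy are placeholders, not a real client ontology.

```python
# A minimal sketch of a classification ontology with a parent/child
# hierarchy; every class name and attribute here is hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LabelClass:
    name: str                                        # e.g. "vehicle", "car"
    parent: Optional[str] = None                     # parent class, if any
    attributes: list = field(default_factory=list)   # per-object attributes

ONTOLOGY = [
    LabelClass("vehicle"),
    LabelClass("car", parent="vehicle", attributes=["occluded", "parked"]),
    LabelClass("truck", parent="vehicle", attributes=["occluded"]),
    LabelClass("pedestrian", attributes=["occluded", "crossing"]),
]

def children_of(parent: str) -> list:
    """Return the names of all classes directly under `parent`."""
    return [c.name for c in ONTOLOGY if c.parent == parent]

print(children_of("vehicle"))  # ['car', 'truck']
```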
After we determine the best segmentation types and ontology, we build out several candidate annotation workflows. We consult with our annotators and run A/B tests to find the most intuitive way to label objects, ensuring annotation efficiency and accuracy.
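As a rough sketch of what such an A/B test can look like, two workflows can be compared with a two-proportion z-test on task accuracy. All counts below are illustrative, not real project numbers.

```python
# A minimal sketch of an A/B comparison between two annotation workflows,
# using a two-proportion z-test on accuracy; all counts are illustrative.
from math import sqrt

def two_proportion_z(correct_a, total_a, correct_b, total_b):
    """z-statistic for the difference between two accuracy rates."""
    p_a, p_b = correct_a / total_a, correct_b / total_b
    p_pool = (correct_a + correct_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Workflow A: 940/1000 tasks correct; Workflow B: 910/1000 tasks correct.
z = two_proportion_z(940, 1000, 910, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at p < 0.05
```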
Our solutions engineers design and deploy the final annotation task workflows. We combine human annotation with machine learning-augmented software platforms, reducing annotation time by up to 90%. Our project managers ensure that batches of data are delivered on time, on budget, and to your quality requirements, at scales of up to millions of data points.
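One common form of ML augmentation is pre-labeling: a model proposes annotations and human annotators only verify or correct them. The sketch below assumes a hypothetical `model` object with a `predict()` method; the confidence threshold is illustrative.

```python
# A minimal sketch of ML-assisted pre-labeling: the model proposes labels,
# and only low-confidence proposals are routed to a human annotator.
# `model` and its predict() interface are hypothetical stand-ins.
CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff

def prelabel(image, model):
    # Assumed output: [{"label": str, "box": [x1, y1, x2, y2], "score": float}]
    proposals = model.predict(image)
    auto_accepted = [p for p in proposals if p["score"] >= CONFIDENCE_THRESHOLD]
    needs_review = [p for p in proposals if p["score"] < CONFIDENCE_THRESHOLD]
    return auto_accepted, needs_review
```

Routing only uncertain proposals to humans is where most of the time savings come from: annotators correct a minority of labels instead of drawing every one from scratch.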
Concurrently, we recruit the appropriate number of annotators for the project from our global and US-based workforce. We train and qualify annotators against ground truth annotations before enrolling them in the project.
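Qualification against ground truth can be scored, for example, by comparing a candidate's bounding boxes to reference boxes with intersection-over-union (IoU); the thresholds below are illustrative, not our actual qualification bar.

```python
# A minimal sketch of qualifying an annotator against ground truth boxes
# using intersection-over-union (IoU); both thresholds are illustrative.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def qualifies(candidate_boxes, ground_truth_boxes, min_iou=0.7, min_pass=0.9):
    """Annotator qualifies if enough of their boxes match ground truth."""
    hits = sum(any(iou(c, g) >= min_iou for g in ground_truth_boxes)
               for c in candidate_boxes)
    return hits / len(candidate_boxes) >= min_pass

print(qualifies([(0, 0, 10, 10)], [(1, 1, 10, 10)]))  # True: IoU is 0.81
```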
As annotators submit work, automated worker scores are sent to our Quality Assurance team. We address quality issues as soon as they are found to prevent costly delays. Submitted work is reviewed, and clients can accept annotations or provide feedback.
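One simple form such automated scoring can take is a rolling accuracy over each worker's recently reviewed tasks, with low scorers flagged for QA attention. The window size and threshold below are hypothetical.

```python
# A minimal sketch of automated worker scoring: rolling accuracy over the
# most recently reviewed tasks, flagged below a hypothetical threshold.
from collections import deque

class WorkerScore:
    def __init__(self, window=100, flag_below=0.95):
        self.recent = deque(maxlen=window)  # 1 = accepted, 0 = rejected
        self.flag_below = flag_below

    def record(self, accepted):
        self.recent.append(1 if accepted else 0)

    @property
    def accuracy(self):
        return sum(self.recent) / len(self.recent) if self.recent else 1.0

    def needs_qa_review(self):
        return self.accuracy < self.flag_below

score = WorkerScore()
for accepted in [True] * 90 + [False] * 10:
    score.record(accepted)
print(score.accuracy, score.needs_qa_review())  # 0.9 True
```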
With any machine learning project, we know that experimentation is required to achieve the most accurate model outputs. As batches of labels (typically a hundred thousand at a time) are delivered and fed into the model, data labeling requirements can change based on what the model outputs reveal. Our project managers are always ready to revisit your annotation guidelines with you so that models hit quality requirements across different annotation types and edge cases.