March 29, 2026 · By Ivan Pasichnyk

Data Labeling Pricing Guide 2026: How Much Does Annotation Cost?

"How much will it cost to label our data?" — the first question every ML team asks. The answer depends on annotation type, complexity, volume, and quality requirements. Here are real numbers from production projects, not vendor marketing pages.

Pricing Models: Per-Image vs Per-Hour vs Per-Annotation

Data labeling vendors use three main pricing models. Each has trade-offs:

| Model | Best For | Risk |
|---|---|---|
| Per-image | Consistent complexity (e.g., all street scenes with similar object count) | Vendor may rush complex images, or you overpay for simple ones |
| Per-hour | Variable complexity, new annotation types, exploratory projects | Less predictable total cost — but you only pay for actual work |
| Per-annotation | Simple tasks with known object counts (e.g., exactly 5 bounding boxes per image) | Edge cases and difficult images get the same price as easy ones |

Our recommendation: Start with per-hour pricing on your first project. It's the most transparent — you see exactly how long tasks take and can estimate future costs. Switch to per-image once you have baseline time-per-image data.

Hourly rates by region

If you choose per-hour pricing, the rate varies significantly depending on where the annotation team is located and their level of specialization:

Real Pricing by Annotation Type

These ranges are based on real production projects with professional annotation teams — not crowdsourced platforms where quality varies widely.

| Annotation Type | Price Range | What Drives Cost |
|---|---|---|
| Bounding boxes | $0.02–0.10 per box | Number of objects per image, occlusion, classification complexity |
| Image classification | $0.01–0.05 per image | Number of categories, ambiguity between classes |
| Polygon / instance segmentation | $0.20–1.50 per object | Object shape complexity, number of vertices, overlapping objects |
| Semantic segmentation (pixel-level) | $0.50–3.00 per image | Number of classes, image resolution, required precision |
| Video annotation (per-frame tracking) | $0.03–0.15 per keyframe | Keyframe frequency, number of tracked objects; interpolation between keyframes reduces cost |
| Multi-attribute classification | $0.05–0.15 per object | Number of attributes (age, gender, clothing, etc.) |
| Named entity recognition (NER) | $0.02–0.08 per entity | Number of entity types, text length, domain complexity (medical/legal NER costs 2–3x more) |
| Text classification | $0.01–0.05 per document | Document length, number of categories, whether multi-label |
| 3D point cloud (LiDAR) | $1.00–6.00 per frame | Number of objects, 3D cuboid vs segmentation, scene density |
| Audio transcription | $0.50–2.00 per minute | Number of speakers, background noise, technical vocabulary |

Pricing deep dive: bounding boxes

Bounding boxes are the most common annotation type, but the price range is wide because complexity varies enormously. A retail product image with 3 items on a white background costs $0.02-0.03 per box. A dense urban street scene with 40+ vehicles, pedestrians, and traffic signs costs $0.08-0.10 per box because of occlusion, small objects, and classification decisions at every box.

For projects with 10+ objects per image, per-image pricing often makes more sense than per-box pricing. A typical street scene with 20 bounding boxes priced per-box at $0.05 would cost $1.00 per image. The same image priced per-image might cost $0.60-0.80 because the annotator's context-switching overhead is lower when doing all boxes in one pass.
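The comparison above can be sketched in a few lines. The per-box rate and box count come from the text; the per-image rate uses the midpoint of the quoted $0.60–0.80 range as an assumption:

```python
# Hypothetical comparison of per-box vs per-image pricing for a dense street scene.
boxes_per_image = 20
per_box_rate = 0.05      # $ per bounding box (from the example above)
per_image_rate = 0.70    # $ per image; assumed midpoint of the $0.60-0.80 range

per_box_total = boxes_per_image * per_box_rate  # cost if billed per box
savings = per_box_total - per_image_rate        # what per-image pricing saves here

print(f"per-box:   ${per_box_total:.2f}/image")
print(f"per-image: ${per_image_rate:.2f}/image (saves ${savings:.2f})")
```

The crossover point shifts with object density, which is why a pilot batch that measures your actual objects-per-image average matters before picking a model.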

Pricing deep dive: segmentation

Segmentation is where costs escalate fastest. The difference between a simple polygon (10-15 vertices for a car outline) and a precise polygon (80-100 vertices for a tree canopy) can be 5x in annotation time. Semantic segmentation — where every pixel in the image must be classified — is the most expensive annotation type for images because nothing can be left unlabeled.

On a recent telecom infrastructure project, we labeled 6,500 images with pixel-level segmentation at approximately $0.78 per image. The relatively low per-image cost was possible because the scenes were consistent (overhead aerial views) and we used SAM-assisted pre-labeling to speed up the polygon creation step.

Pricing deep dive: video annotation

Video annotation pricing is often misunderstood. The key concept is keyframe annotation with interpolation. You do not pay for every frame — an annotator labels keyframes (typically every 5-30 frames), and the annotation tool interpolates the object positions between keyframes. This means a 30-second clip at 30fps (900 frames) might only require annotation on 30-90 keyframes.

The cost per keyframe depends on the number of tracked objects. Tracking 3 objects per keyframe costs roughly $0.03-0.05 per keyframe. Tracking 15+ objects with frequent occlusion and re-identification pushes it to $0.10-0.15 per keyframe. For our sports video annotation project, we process over 2,000 hours of footage per month — the volume and consistency of the content keeps per-keyframe costs at the lower end of the range.
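The keyframe arithmetic above can be made concrete. The clip length, frame rate, and cost range come from the text; the 15-frame keyframe interval is an assumed mid-range value:

```python
# Keyframe cost estimate for the 30-second clip described above.
fps, seconds = 30, 30
total_frames = fps * seconds          # 900 frames in the clip
keyframe_interval = 15                # assumed: label every 15th frame (mid-range of 5-30)
keyframes = total_frames // keyframe_interval
cost_low, cost_high = 0.03, 0.15      # $ per keyframe, from the range above

print(f"{total_frames} frames -> {keyframes} keyframes")
print(f"estimated cost: ${keyframes * cost_low:.2f}-{keyframes * cost_high:.2f} per clip")
```

Doubling the keyframe interval roughly halves the cost, which is why agreeing on an interpolation tolerance with your vendor up front is worth the conversation.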

What Makes Annotation Expensive (or Cheap)

Factors that increase cost

Factors that decrease cost

Not sure which annotation type you need? Read our Semantic vs Instance Segmentation guide — choosing the right method before you get quotes can save 2-5x on your annotation budget.

In-House vs Outsourced: Cost Comparison

One of the biggest decisions affecting your annotation budget is whether to build an in-house team or outsource to a dedicated vendor. Here is how the costs compare:

In-house annotation costs

Hiring and training your own annotators gives you full control but comes with significant overhead:

All-in cost for an in-house annotator in the US: $18-25/hour including overhead. In Eastern Europe: $8-15/hour. These numbers include salary, management, tooling, and workspace — not just the annotator's wage.

Outsourced annotation costs

Working with a dedicated annotation vendor shifts the overhead to them. You pay per-image, per-hour, or per-annotation, and the vendor handles hiring, training, tooling, and QA. The trade-off: you have less direct control, and communication adds a layer of latency.

For most teams, outsourcing makes sense for projects under 50,000 images or when annotation is not a continuous, daily operation. For teams that label data every day as part of their core product loop, building in-house capacity eventually becomes more cost-effective — but the breakeven point is higher than most people think (usually 3-5 full-time annotators working continuously).

The Pilot Batch: How to Test Before You Commit

Never sign a large contract without a pilot batch first. Here's the standard approach:

  1. Prepare 100-500 representative images — include your hardest cases, not just easy ones
  2. Write annotation guidelines — or ask your vendor to help draft them
  3. Run the pilot — typically takes 3-7 days
  4. Review quality — check edge cases specifically, not random samples
  5. Measure time per image — this gives you cost predictability for production batches

Red flag: If a vendor won't do a pilot batch or insists on a large minimum commitment before you've seen their work — walk away. Any confident team will let you test first.

Want a real estimate for your project? Send us a sample of 10-20 images and your annotation requirements — we'll give you a detailed quote within 24 hours. Book a free call or email us directly.

Hidden Costs to Watch For

The quoted price per annotation is never the full story. Here are the costs that catch teams off guard:

How to Reduce Annotation Costs Without Sacrificing Quality

There are practical ways to bring costs down that do not involve finding a cheaper vendor:

How to Budget: A Quick Formula

For planning purposes:

  1. Estimate your total images/frames
  2. Determine the annotation type (bbox, polygon, segmentation)
  3. Estimate objects per image (average)
  4. Multiply: images × objects × per-annotation cost
  5. Add 20-30% buffer for revisions, edge cases, and guideline iterations
  6. Add guideline development cost — typically 8-20 hours at your engineer's rate for the first project, less for subsequent ones

Example: 5,000 images × 8 objects/image × $0.05/bbox = $2,000 base. With 25% buffer = ~$2,500 total budget. This gives your CFO a number while leaving room for reality.
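The formula can be wrapped in a small helper for quick what-if runs. The buffer and guideline-hour defaults mirror the ranges above; `engineer_rate` is a placeholder you would fill in yourself:

```python
def annotation_budget(images, objects_per_image, cost_per_annotation,
                      buffer=0.25, guideline_hours=12, engineer_rate=0.0):
    """Rough planning estimate following the steps above.

    buffer, guideline_hours, and engineer_rate are assumptions for
    illustration, not vendor quotes.
    """
    base = images * objects_per_image * cost_per_annotation
    total = base * (1 + buffer) + guideline_hours * engineer_rate
    return base, total

# The worked example from the text: 5,000 images, 8 objects/image, $0.05/bbox.
base, total = annotation_budget(5_000, 8, 0.05)
print(f"base=${base:,.0f}  total=${total:,.0f}")
```

Re-running with your pilot-batch numbers (actual objects per image, actual per-annotation cost) turns this from a guess into a defensible budget line.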

Budget examples by project type

| Project Type | Volume | Estimated Budget |
|---|---|---|
| Object detection prototype | 1,000 images, ~10 bbox/image | $500–800 |
| Production object detection | 50,000 images, ~15 bbox/image | $25,000–50,000 |
| Semantic segmentation (small) | 2,000 images, 5 classes | $2,000–5,000 |
| Video tracking project | 100 hours, 5 objects tracked | $8,000–15,000 |
| NER for NLP model | 10,000 documents, ~8 entities/doc | $2,000–6,000 |

These are rough estimates for professional annotation with QA review included. Crowdsourced platforms may quote lower, but factor in the cost of your engineering team's time reviewing and correcting lower-quality output.

Bottom Line

Data labeling is not a commodity, and the cheapest option almost never delivers the best cost-per-usable-label. Run a pilot batch before committing, account for the hidden costs above, and compare vendors on the cost per usable label rather than the sticker price per annotation.

