Data Labeling Pricing Guide 2026: How Much Does Annotation Cost?
Pricing Models: Per-Image vs Per-Hour vs Per-Annotation
Data labeling vendors use three main pricing models. Each has trade-offs:
| Model | Best For | Risk |
|---|---|---|
| Per-image | Consistent complexity (e.g., all street scenes with similar object count) | Vendor may rush complex images or you overpay for simple ones |
| Per-hour | Variable complexity, new annotation types, exploratory projects | Less predictable total cost — but you only pay for actual work |
| Per-annotation | Simple tasks with known object counts (e.g., exactly 5 bounding boxes per image) | Edge cases and difficult images get the same price as easy ones |
Our recommendation: Start with per-hour pricing on your first project. It's the most transparent — you see exactly how long tasks take and can estimate future costs. Switch to per-image once you have baseline time-per-image data.
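If you want to sanity-check the switch, the conversion is simple arithmetic. Here is a minimal sketch in Python, with every number a hypothetical placeholder rather than a real rate:

```python
# Convert observed per-hour data into a per-image price.
# All figures are hypothetical placeholders, not actual vendor rates.
hourly_rate = 7.0          # $/hour rate from your per-hour contract
seconds_per_image = 95     # average measured during the per-hour phase
qa_overhead = 0.15         # extra fraction of time spent on review passes

images_per_hour = 3600 / (seconds_per_image * (1 + qa_overhead))
per_image_price = hourly_rate / images_per_hour
print(f"~{images_per_hour:.0f} images/hour -> ${per_image_price:.3f}/image")
```

Running this with the placeholder numbers gives roughly $0.21 per image; the point is that once you have measured time per image, the per-image quote stops being a guess.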
Hourly rates by region
If you choose per-hour pricing, the rate varies significantly depending on where the annotation team is located and their level of specialization:
- Crowdsourcing platforms (MTurk, Toloka): $2-8/hour equivalent. Low cost, but quality control is your problem. Expect 15-30% error rates without robust QA pipelines.
- Offshore dedicated teams (Southeast Asia, Africa): $4-10/hour. Better consistency than crowdsourcing, but communication overhead can add hidden costs. Works well for high-volume, straightforward tasks.
- Eastern European dedicated teams: $5-15/hour. Strong technical literacy, good English communication, reliable quality. This is the range where most professional annotation vendors operate, including our team at WeLabelData.
- US/Western European teams: $15-35/hour. Justified for specialized domains (medical, legal, defense) where domain expertise is critical and data cannot leave certain jurisdictions.
Real Pricing by Annotation Type
These ranges are based on real production projects with professional annotation teams — not crowdsourced platforms where quality varies widely.
| Annotation Type | Price Range | What Drives Cost |
|---|---|---|
| Bounding boxes | $0.02–0.10 per box | Number of objects per image, occlusion, classification complexity |
| Image classification | $0.01–0.05 per image | Number of categories, ambiguity between classes |
| Polygon / instance segmentation | $0.20–1.50 per object | Object shape complexity, number of vertices, overlapping objects |
| Semantic segmentation (pixel-level) | $0.50–3.00 per image | Number of classes, image resolution, required precision |
| Video annotation (per-frame tracking) | $0.03–0.15 per keyframe | Keyframe frequency, number of tracked objects, interpolation between keyframes reduces cost |
| Multi-attribute classification | $0.05–0.15 per object | Number of attributes (age, gender, clothing, etc.) |
| Named entity recognition (NER) | $0.02–0.08 per entity | Number of entity types, text length, domain complexity (medical/legal NER costs 2-3x more) |
| Text classification | $0.01–0.05 per document | Document length, number of categories, whether multi-label |
| 3D point cloud (LiDAR) | $1.00–6.00 per frame | Number of objects, 3D cuboid vs segmentation, scene density |
| Audio transcription | $0.50–2.00 per minute | Number of speakers, background noise, technical vocabulary |
Pricing deep dive: bounding boxes
Bounding boxes are the most common annotation type, but the price range is wide because complexity varies enormously. A retail product image with 3 items on a white background costs $0.02-0.03 per box. A dense urban street scene with 40+ vehicles, pedestrians, and traffic signs costs $0.08-0.10 per box because of occlusion, small objects, and a classification decision required at every box.
For projects with 10+ objects per image, per-image pricing often makes more sense than per-box pricing. A typical street scene with 20 bounding boxes priced per-box at $0.05 would cost $1.00 per image. The same image priced per-image might cost $0.60-0.80 because the annotator's context-switching overhead is lower when doing all boxes in one pass.
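To see where the crossover sits for your own numbers, a few lines of Python are enough. The rates below are illustrative assumptions, not quotes:

```python
# Where does flat per-image pricing beat per-box pricing?
# Rates are illustrative assumptions; substitute your actual quotes.
per_box_rate = 0.05      # $ per bounding box
per_image_rate = 0.70    # $ flat per image (mid of the $0.60-0.80 example)

for objects in (5, 10, 14, 20, 40):
    per_box_total = objects * per_box_rate
    cheaper = "per-image" if per_image_rate < per_box_total else "per-box"
    print(f"{objects:>2} objects/image: per-box ${per_box_total:.2f} "
          f"vs per-image ${per_image_rate:.2f} -> {cheaper} is cheaper")
```

With these assumed rates the two prices meet at 14 objects per image, consistent with the 10+ objects rule of thumb above.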
Pricing deep dive: segmentation
Segmentation is where costs escalate fastest. The difference between a simple polygon (10-15 vertices for a car outline) and a precise polygon (80-100 vertices for a tree canopy) can mean a 5x difference in annotation time. Semantic segmentation, where every pixel in the image must be classified, is the most expensive annotation type for images because nothing can be left unlabeled.
On a recent telecom infrastructure project, we labeled 6,500 images with pixel-level segmentation at approximately $0.78 per image. The relatively low per-image cost was possible because the scenes were consistent (overhead aerial views) and we used SAM-assisted pre-labeling to speed up the polygon creation step.
Pricing deep dive: video annotation
Video annotation pricing is often misunderstood. The key concept is keyframe annotation with interpolation. You do not pay for every frame: an annotator labels keyframes (typically every 5-30 frames), and the annotation tool interpolates the object positions between keyframes. This means a 30-second clip at 30fps (900 frames) might only require annotation on 30-180 keyframes.
The cost per keyframe depends on the number of tracked objects. Tracking 3 objects per keyframe costs roughly $0.03-0.05 per keyframe. Tracking 15+ objects with frequent occlusion and re-identification pushes it to $0.10-0.15 per keyframe. For our sports video annotation project, we process over 2,000 hours of footage per month — the volume and consistency of the content keeps per-keyframe costs at the lower end of the range.
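As a worked example of the keyframe math, here is a short sketch; the clip length, keyframe interval, and rate are all assumptions you should replace with your own:

```python
# Keyframe-and-interpolation cost estimate for one video clip.
# Clip stats and rate are assumptions; plug in your own values.
clip_seconds = 30
fps = 30
keyframe_interval = 10    # annotate every 10th frame, interpolate the rest
cost_per_keyframe = 0.05  # $ per keyframe at a low tracked-object count

total_frames = clip_seconds * fps               # 900 frames
keyframes = total_frames // keyframe_interval   # 90 keyframes
clip_cost = keyframes * cost_per_keyframe
print(f"{total_frames} frames -> {keyframes} keyframes -> ${clip_cost:.2f}")
```

Under these assumptions the 900-frame clip costs $4.50 to annotate, not the $45 it would cost at the same rate if every frame were labeled by hand.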
What Makes Annotation Expensive (or Cheap)
Factors that increase cost
- Dense scenes — 50+ objects per image vs 5 objects per image can mean 10x the annotation time
- Ambiguous edge cases — "is this a truck or a van?" requires guidelines, discussion, and sometimes multiple review rounds
- Pixel-level precision — semantic segmentation costs 5-10x more than bounding boxes on the same image
- Multiple annotation types — bounding box + classification + attributes on the same image compounds the work
- Small batches — 100 images cost more per-image than 10,000 because of setup and guideline development overhead
Factors that decrease cost
- Consistent image type — same camera angle, same objects, same scene type = faster annotation
- Good annotation guidelines — clear, visual instructions with edge case examples reduce rework by 20-40%
- Pre-labeling — using model predictions as a starting point, with human review and correction
- Volume — larger batches amortize setup costs and let annotators build speed through repetition
- Ongoing partnership — annotators who know your domain get faster over time without losing quality
Not sure which annotation type you need? Read our Semantic vs Instance Segmentation guide. Choosing the right method before you get quotes can make a 2-5x difference in your annotation budget.
In-House vs Outsourced: Cost Comparison
One of the biggest decisions affecting your annotation budget is whether to build an in-house team or outsource to a dedicated vendor. Here is how the costs compare:
In-house annotation costs
Hiring and training your own annotators gives you full control but comes with significant overhead:
- Recruitment and training: expect 2-4 weeks before a new annotator reaches production speed. Training costs include the trainer's time, annotation guideline development, and the annotator's ramp-up period where output quality and speed are below target.
- Tooling: annotation platform licensing or self-hosting costs. Read our CVAT vs Labelbox vs Label Studio comparison to understand the options. Free tools like CVAT still require DevOps time to maintain.
- Management overhead: QA review, performance tracking, scheduling, and handling turnover. A team of 5 annotators typically needs 0.5-1 FTE for management and QA.
- Utilization risk: if your annotation volume is uneven (large batch this month, nothing next month), you are paying idle salaries during downtime.
All-in cost for an in-house annotator in the US: $18-25/hour including overhead. In Eastern Europe: $8-15/hour. These numbers include salary, management, tooling, and workspace — not just the annotator's wage.
Outsourced annotation costs
Working with a dedicated annotation vendor shifts the overhead to them. You pay per-image, per-hour, or per-annotation, and the vendor handles hiring, training, tooling, and QA. The trade-off: you have less direct control, and communication adds a layer of latency.
For most teams, outsourcing makes sense for projects under 50,000 images or when annotation is not a continuous, daily operation. For teams that label data every day as part of their core product loop, building in-house capacity eventually becomes more cost-effective — but the breakeven point is higher than most people think (usually 3-5 full-time annotators working continuously).
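A rough way to pressure-test the breakeven claim is to compare your in-house effective rate against a vendor quote. The sketch below uses invented figures; the structure, not the numbers, is the point:

```python
# In-house vs vendor: effective cost per productive hour.
# Every figure is an invented assumption for illustration, not a quote.
annotator_wage = 10.0      # $/hour base wage
overhead_multiplier = 1.5  # management, QA, tooling, workspace, benefits
utilization = 0.75         # share of paid hours that produce annotations

in_house_rate = annotator_wage * overhead_multiplier / utilization
vendor_rate = 12.0         # $/hour quoted by a dedicated vendor

print(f"In-house: ${in_house_rate:.2f}/productive hour")
print(f"Vendor:   ${vendor_rate:.2f}/hour")
print("In-house wins" if in_house_rate < vendor_rate else "Vendor wins")
```

Utilization is usually the deciding variable: with uneven volume it drops well below 75%, which is exactly the idle-salary risk described above.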
The Pilot Batch: How to Test Before You Commit
Never sign a large contract without a pilot batch first. Here's the standard approach:
- Prepare 100-500 representative images — include your hardest cases, not just easy ones
- Write annotation guidelines — or ask your vendor to help draft them
- Run the pilot — typically takes 3-7 days
- Review quality — check edge cases specifically, not random samples
- Measure time per image — this gives you cost predictability for production batches (a projection sketch follows this list)
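Once the pilot is done, projecting production cost is a one-liner. The timings below are invented pilot data; swap in your own measurements:

```python
# Project production cost from pilot-batch timings.
# Pilot timings and rate are invented for illustration.
pilot_seconds = [62, 75, 58, 90, 140, 66, 71]  # seconds per pilot image
hourly_rate = 7.0                              # $/hour agreed for the pilot
production_images = 50_000

mean_seconds = sum(pilot_seconds) / len(pilot_seconds)
cost_per_image = hourly_rate * mean_seconds / 3600
total = cost_per_image * production_images
print(f"~{mean_seconds:.0f} s/image -> ${cost_per_image:.3f}/image "
      f"-> ${total:,.0f} for {production_images:,} images")
```

Note the 140-second outlier: hard images drag the mean up, which is why the pilot should include your hardest cases.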
Red flag: If a vendor won't do a pilot batch or insists on a large minimum commitment before you've seen their work — walk away. Any confident team will let you test first.
Want a real estimate for your project? Send us a sample of 10-20 images and your annotation requirements — we'll give you a detailed quote within 24 hours. Book a free call or email us directly.
Hidden Costs to Watch For
The quoted price per annotation is never the full story. Here are the costs that catch teams off guard:
- Revision rounds — some vendors charge extra for corrections. Others include 1-2 rounds in the base price. Ask upfront. On complex projects, expect at least 2 revision cycles on the first batch as you calibrate edge cases with the annotation team.
- Guideline development — writing clear annotation instructions takes time. A thorough annotation guideline document for a moderately complex project takes 8-20 hours to create and includes visual examples, edge case definitions, and decision trees for ambiguous situations. Some vendors help with this; others expect you to deliver perfect guidelines on day one.
- Format conversion — if your vendor delivers in CVAT format but your pipeline needs COCO, who converts? This should be included. Custom export scripts can take a developer 2-8 hours to write and validate.
- Project management — dedicated PM vs ticket system makes a big difference in communication speed and quality feedback loops. A dedicated PM adds 10-15% to project cost but saves significantly more in reduced miscommunication and faster iteration.
- Quality rework — the cheapest per-unit price means nothing if 30% of annotations need fixing. Calculate cost per usable annotation, not cost per annotation (see the sketch after this list). A vendor at $0.05/box with 5% error rate is cheaper than a vendor at $0.03/box with 25% error rate once you account for review and correction time.
- Onboarding time — every new vendor or annotator team needs time to learn your domain. The first batch will always be slower and lower quality than subsequent ones. Factor in 1-2 weeks of calibration before judging a vendor's true production speed.
- Data preparation — cleaning your raw data, removing duplicates, converting file formats, and organizing datasets into annotation-ready batches takes time. This is your cost, not the vendor's, and it is often underestimated.
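For the quality-rework point, the math is worth writing down once. A minimal sketch, assuming your own reviewers fix each bad annotation at a known cost:

```python
# Cost per USABLE annotation, assuming your team fixes each bad one.
# The fix cost is an assumption; measure your actual review time.
def cost_per_usable(unit_price, error_rate, fix_cost):
    """Effective $ per correct annotation after in-house correction."""
    return unit_price + error_rate * fix_cost

FIX_COST = 0.15  # $ of reviewer time per erroneous box (assumed)
print(f"Cheap vendor ($0.03, 25% errors):  "
      f"${cost_per_usable(0.03, 0.25, FIX_COST):.4f}/usable box")
print(f"Quality vendor ($0.05, 5% errors): "
      f"${cost_per_usable(0.05, 0.05, FIX_COST):.4f}/usable box")
```

With a $0.15 fix cost the "cheap" vendor comes out at $0.0675 per usable box against $0.0575 for the quality vendor, matching the comparison above.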
How to Reduce Annotation Costs Without Sacrificing Quality
There are practical ways to bring costs down that do not involve finding a cheaper vendor:
- Use pre-labeling aggressively. Run your current model on unlabeled data and use its predictions as starting annotations. Human annotators then correct rather than create from scratch. This typically cuts annotation time by 30-50% for bounding boxes and 40-60% for segmentation tasks.
- Invest in annotation guidelines upfront. Spending 2 days writing excellent guidelines with visual examples for every edge case saves weeks of rework later. The best guidelines include "do this / not this" side-by-side image comparisons.
- Use active learning to prioritize. Instead of labeling your entire dataset, use model uncertainty to identify which images will improve your model the most. Label those first (a minimal prioritization sketch follows this list). Many teams find that labeling 20-30% of their data with active learning produces a model nearly as good as labeling 100%.
- Standardize your data pipeline. Consistent image resolutions, file naming conventions, and batch sizes reduce overhead for both you and your annotation team. Small inefficiencies compound quickly across thousands of images.
- Build a long-term vendor relationship. An annotation team that has worked on 10 batches of your data is 2-3x faster than a new team on batch one, with higher quality. The learning curve matters.
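For the active-learning item, the simplest prioritization signal is prediction entropy. A minimal sketch, assuming you already have per-image class probabilities from your current model (the data here is made up):

```python
# Uncertainty sampling: queue the images the model is least sure about.
# Predictions below are made-up softmax outputs for illustration.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

predictions = {
    "img_001.jpg": [0.98, 0.01, 0.01],  # confident -> label last
    "img_002.jpg": [0.40, 0.35, 0.25],  # uncertain -> label first
    "img_003.jpg": [0.70, 0.20, 0.10],
}

queue = sorted(predictions, key=lambda k: entropy(predictions[k]), reverse=True)
print("Labeling priority:", queue)
```

This prints img_002 first: the flatter the probability distribution, the more the model stands to learn from a human label on that image.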
How to Budget: A Quick Formula
For planning purposes:
- Estimate your total images/frames
- Determine the annotation type (bbox, polygon, segmentation)
- Estimate objects per image (average)
- Multiply: images × objects × per-annotation cost
- Add 20-30% buffer for revisions, edge cases, and guideline iterations
- Add guideline development cost — typically 8-20 hours at your engineer's rate for the first project, less for subsequent ones
Example: 5,000 images × 8 objects/image × $0.05/bbox = $2,000 base. With 25% buffer = ~$2,500 total budget. This gives your CFO a number while leaving room for reality.
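The same formula works as a reusable helper, including the guideline step from the list above (the guideline hours and engineer rate are assumed values):

```python
# Budget formula from the list above, with assumed guideline-effort inputs.
def annotation_budget(images, objects_per_image, unit_cost,
                      buffer=0.25, guideline_hours=12, engineer_rate=60.0):
    base = images * objects_per_image * unit_cost
    guidelines = guideline_hours * engineer_rate  # first project only
    return base * (1 + buffer) + guidelines

# The 5,000-image example from the text, now with guideline development
print(f"${annotation_budget(5_000, 8, 0.05):,.0f}")  # -> $3,220
```

The extra ~$720 over the $2,500 figure above is the guideline development line item, which the quick example left out.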
Budget examples by project type
| Project Type | Volume | Estimated Budget |
|---|---|---|
| Object detection prototype | 1,000 images, ~10 bbox/image | $500–800 |
| Production object detection | 50,000 images, ~15 bbox/image | $25,000–50,000 |
| Semantic segmentation (small) | 2,000 images, 5 classes | $2,000–5,000 |
| Video tracking project | 100 hours, 5 objects tracked | $8,000–15,000 |
| NER for NLP model | 10,000 documents, ~8 entities/doc | $2,000–6,000 |
These are rough estimates for professional annotation with QA review included. Crowdsourced platforms may quote lower, but factor in the cost of your engineering team's time reviewing and correcting lower-quality output.
Bottom Line
Data labeling is not a commodity. The cheapest option almost never delivers the best cost-per-usable-label. Focus on:
- Getting a pilot batch before committing
- Measuring time per image to predict production costs
- Working with a dedicated team that learns your domain — see how this works in practice
- Calculating total cost including rework, not just unit price
- Using pre-labeling and active learning to reduce the volume of manual annotation needed
- Investing in annotation guidelines that prevent expensive rework cycles