Automatic Horse Measurement on Mobile with LiDAR, ARKit and AI

15 Min Read • Jul 6, 2025


Alex Tudose

Technical Director


Technology Convergence

The convergence of three advanced technologies creates an unprecedented opportunity to revolutionize equine measurement. LiDAR sensors provide millimeter-accurate depth sensing, capturing detailed 3D representations of horses in real-time. ARKit's spatial computing capabilities establish precise coordinate systems and track device movement, ensuring measurements remain accurate even as the operator moves around the animal. Machine learning algorithms automatically identify anatomical landmarks like the withers point, eliminating human interpretation errors and ensuring consistent measurement standards.

Understanding Horse Measurement Fundamentals

Key Measurement Points and Technical Challenges

Horse measurement centers on three critical dimensions: withers height (ground to highest point between shoulder blades), body length (shoulder to buttock) and girth circumference (around the barrel behind the front legs). The withers point presents the primary technical challenge for automated systems: this bony prominence varies significantly between horses, with some having sharp, well-defined ridges while others presenting broad, muscular areas.
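As a note on units: withers height is conventionally reported in hands, where one hand equals 4 inches (10.16 cm) by definition. A minimal, hypothetical helper for converting a metric height into hands and remaining inches might look like this (the function name and rounding behavior are illustrative assumptions, not from the app):

```swift
// Convert a withers height in meters to whole hands plus remaining inches.
// 1 hand = 4 inches, 1 inch = 25.4 mm (both exact, by definition).
func handsAndInches(fromMeters meters: Double) -> (hands: Int, inches: Double) {
    let totalInches = meters / 0.0254            // meters → inches
    let hands = Int(totalInches / 4.0)           // whole hands
    let inches = totalInches - Double(hands) * 4.0
    return (hands, inches)
}

// e.g. a 1.55 m horse is 15 hands and roughly 1 inch, written "15.1 hh"
let h = handsAndInches(fromMeters: 1.55)
```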

Traditional Methods and Development Opportunities

Conventional measurement using measuring sticks and tape measures suffers from inherent limitations that create clear opportunities for mobile app development. Physical tools require horses to stand perfectly still on level ground, often necessitating multiple attempts as animals move. Manual measurement introduces standard deviations of 2-4 cm between different measurers and 1-3 cm variations even when the same person repeats measurements. Environmental factors like uneven terrain, lighting conditions and horse behavior further compromise reliability.

Accuracy Requirements for App Development

The technical specifications for a successful horse measurement app are well-defined by industry needs. Competition organizations require accuracy within ±1.27 cm (0.5 inches) for height classifications. Insurance documentation typically requires ±1.27 cm for height and ±5 cm for length measurements. These stringent requirements, combined with the 2-4 cm accuracy gap in traditional methods, create a clear market opportunity for mobile solutions that can deliver consistent, repeatable measurements while eliminating the safety concerns and time constraints of manual approaches.
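To make the ±1.27 cm requirement concrete, an app could flag whether a session's repeated readings agree closely enough to trust. This is a hypothetical sketch, not the app's actual validation logic; the function name and the mean-deviation criterion are assumptions:

```swift
// Check whether repeated height readings (in cm) all fall within
// ±tolerance of their mean — e.g. the ±1.27 cm competition requirement.
func readingsWithinTolerance(_ readings: [Double], tolerance: Double = 1.27) -> Bool {
    guard !readings.isEmpty else { return false }
    let mean = readings.reduce(0, +) / Double(readings.count)
    return readings.allSatisfy { abs($0 - mean) <= tolerance }
}
```

A spread like [152.0, 152.8, 151.9] passes, while the 2-4 cm spread typical of manual measurement would fail this check.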

Implementation Architecture

System Design and Detection Pipeline

The horse measurement application uses a modular architecture that separates computer vision, augmented reality and measurement logic into independent components. The system continuously analyzes camera frames to detect horses and automatically measure height when anatomical landmarks are identified. The architecture follows MVVM patterns with SwiftUI's reactive properties to propagate detection events through the component hierarchy.

The HorseDetectionManager coordinates the detection pipeline, while ARViewContainer handles spatial computing and 3D measurements. This separation enables independent testing and maintains clear responsibilities between vision processing and spatial calculations.
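The separation of responsibilities can be sketched roughly as follows. The real app binds detection state through SwiftUI's reactive properties; here a plain callback stands in for that binding so the sketch stays framework-free, and the `onDetectionChanged` hook is an assumption for illustration:

```swift
// Simplified sketch: the detection manager owns vision state and notifies
// the AR/measurement layer only when that state actually changes.
final class HorseDetectionManager {
    private(set) var isHorseDetected = false
    var onDetectionChanged: ((Bool) -> Void)?   // assumed hook, stands in for SwiftUI binding

    func update(horseVisible: Bool) {
        guard horseVisible != isHorseDetected else { return }  // suppress duplicate events
        isHorseDetected = horseVisible
        onDetectionChanged?(isHorseDetected)    // propagate to the AR container
    }
}
```

Because the manager only emits on state transitions, the AR layer can safely start or stop its more expensive keypoint processing on each event.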

ARKit Integration and Spatial Computing

The ARViewContainer implements world tracking with horizontal plane detection and conditional LiDAR mesh reconstruction. The AR configuration maintains spatial coordinate systems automatically as users move around horses.

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]

if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) && isLiDARActive {
    configuration.sceneReconstruction = .mesh
    configuration.frameSemantics.insert(.sceneDepth)
}

The measurement system performs raycast queries from detected anatomical landmarks to establish 3D positions, providing centimeter-level precision when combined with LiDAR depth data.
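Under the hood, turning a detected 2D landmark plus a depth value into a 3D position is a pinhole-camera unprojection; ARKit supplies the real inputs via `ARCamera.intrinsics` and its raycast and scene-depth APIs. The sketch below shows only the math, with hypothetical intrinsic values:

```swift
// Unproject a pixel (u, v) with depth d (meters) into camera space using
// pinhole intrinsics: focal lengths (fx, fy) and principal point (cx, cy).
func unproject(u: Double, v: Double, depth: Double,
               fx: Double, fy: Double, cx: Double, cy: Double) -> (x: Double, y: Double, z: Double) {
    let x = (u - cx) * depth / fx   // horizontal offset scaled by depth
    let y = (v - cy) * depth / fy   // vertical offset scaled by depth
    return (x, y, depth)
}

// A withers keypoint 100 px right of the principal point, 2 m away,
// sits about 0.2 m to the right of the optical axis (fx = 1000 assumed).
let p = unproject(u: 1060, v: 540, depth: 2.0, fx: 1000, fy: 1000, cx: 960, cy: 540)
```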

Two-Stage Detection Pipeline

The application uses a two-stage detection approach for performance optimization. Stage 1 employs YOLOv11s running every 0.5 seconds for general horse detection across 80 COCO object classes. This serves as a gatekeeper, reducing CPU usage by approximately 50% when no horses are present.

Stage 2 activates a custom YOLO model for anatomical keypoint detection, running at 30fps when horses are detected. This model identifies critical measurement points including withers and front leg positions.

private func addPoints(on frame: ARFrame) {
    // Stage 1: periodic horse detection (gatekeeper)
    if Date().timeIntervalSince(lastHorseDetectionTime) > horseDetectionCooldown {
        horseDetectionManager.detectHorse(in: frame.capturedImage)
        lastHorseDetectionTime = Date()   // reset the cooldown window
    }
    // …
    // Stage 2: keypoint detection when a horse is present
    if horseDetectionManager.isHorseDetected {
        yoloModel.handle(preprocessedImageBuffer)
    }
}

Computer Vision and Measurement Algorithm

The vision pipeline handles device orientation and image preprocessing to ensure consistent model input. CoreML integration leverages Apple's Vision framework for efficient model execution on the Neural Engine when available.

The measurement algorithm combines computer vision outputs with spatial raycast queries to establish 3D coordinates. When anatomical landmarks are detected, the system performs ground detection using multi-directional sampling to account for horse stance variations.

// Ground detection with lateral offset sampling.
// camRight is the camera's right axis in world space; sampling 10 cm to
// either side accounts for stance variation and uneven footing.
let lateralOffsets: [SIMD3<Float>] = [
    .zero,               // directly below the landmark
    camRight * 0.10,     // 10 cm to the right
    camRight * -0.10     // 10 cm to the left
]

var bestGround: SIMD3<Float>?
for offset in lateralOffsets {
    if let point = raycastDown(from: surfacePoint + offset),
       point.y < (bestGround?.y ?? Float.greatestFiniteMagnitude) {
        bestGround = point   // keep the lowest hit as the ground estimate
    }
}
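Once the lowest ground hit is selected, the height itself is just the vertical gap between the withers landmark and that ground point. A minimal, framework-free sketch of that final step (the function name is illustrative, and optional Floats stand in for raycast results that may miss):

```swift
// Withers height = vertical distance between the withers point's y value
// and the lowest successful ground hit; nil if every ground sample missed.
func withersHeight(withersY: Float, groundHits: [Float?]) -> Float? {
    guard let lowest = groundHits.compactMap({ $0 }).min() else { return nil }
    return withersY - lowest
}
```

Taking the minimum rather than the first hit keeps a raised hoof or a tuft of grass from inflating the measured height.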

LiDAR Enhancement and Validation

When LiDAR is available, the system searches mesh vertices within a 15cm radius of detected landmarks for enhanced precision. This hybrid approach maintains functionality on non-LiDAR devices while offering improved accuracy on supported hardware.
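The vertex search reduces to a nearest-neighbor query with a distance cutoff. A simplified sketch follows, with plain labeled tuples standing in for ARKit's mesh buffers; a linear scan is adequate for the vertex counts near a single landmark, and the names are illustrative:

```swift
typealias Point3 = (x: Float, y: Float, z: Float)

// Return the mesh vertex nearest to `landmark`, but only if it lies
// within `radius` meters (e.g. 0.15 for the 15 cm search described above).
func nearestVertex(to landmark: Point3, in vertices: [Point3], radius: Float) -> Point3? {
    let r2 = radius * radius
    var best: (point: Point3, distSq: Float)?
    for v in vertices {
        let dx = v.x - landmark.x, dy = v.y - landmark.y, dz = v.z - landmark.z
        let d2 = dx*dx + dy*dy + dz*dz        // squared distance avoids sqrt per vertex
        if d2 <= r2, d2 < (best?.distSq ?? .greatestFiniteMagnitude) {
            best = (v, d2)
        }
    }
    return best?.point
}
```

Returning nil when nothing falls inside the radius lets the caller fall back to the plain raycast result on sparse meshes.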

The architecture includes validation at multiple layers: Vision framework error handling, spatial measurement validation for geometric consistency and UI state management to prevent invalid operations. This approach ensures reliable operation and clear user feedback when edge cases occur.

Complete On-Device Processing Architecture

The application implements 100% on-device processing for all computer vision and measurement calculations. This architectural decision eliminates network dependencies, ensures consistent performance regardless of connectivity and maintains user privacy by keeping all image data on the device.

CoreML model execution leverages Apple's Neural Engine when available, providing hardware-accelerated inference for both horse detection and keypoint estimation models. The YOLOv11s model (18.2MB) and custom horse pose model execute with typical inference times of 20-30ms on modern devices.

// On-device model configuration for optimal performance
do {
    let config = MLModelConfiguration()
    config.computeUnits = .all   // allow CPU, GPU and Neural Engine
    let mlModel = try MLModel(contentsOf: modelURL, configuration: config)
    horseDetectionModel = try VNCoreMLModel(for: mlModel)
} catch {
    print("Model loading failed: \(error.localizedDescription)")
}

Memory Management and Performance Optimization

The system employs careful memory management to handle continuous image processing without leaks. CVPixelBuffer objects are processed directly from ARKit camera frames, minimizing allocations and copies and keeping reference-counting and autorelease overhead low (Swift uses automatic reference counting rather than garbage collection). The two-stage detection pipeline significantly reduces average CPU utilization by activating resource-intensive processing only when necessary.


Alternative Approach: 3D Scanning

Research-Based 3D Scanning Methodology

A recent study by Matsuura et al. (2021) demonstrates a fundamentally different approach to horse measurement using traditional 3D scanning technology. Their methodology employs an iPad Pro with specialized scanning software (Scandy Pro) to capture complete 3D models of horses, followed by manual measurement extraction using desktop CAD software. While this approach achieves high accuracy (correlation coefficients of 0.856-0.998 for most measurements), it represents a significantly more labor-intensive workflow compared to AI-powered mobile measurement.

Time-Intensive Multi-Stage Process

The traditional 3D scanning approach requires approximately 15 minutes per horse, involving multiple distinct phases that make it impractical for routine use. The operator must physically walk around each horse for 1-2 minutes to capture whole-body scans, followed by additional 20-second scans of individual limbs to ensure adequate detail capture. The study protocol requires 2-5 scanning attempts per horse to account for animal movement and scanning quality variations, significantly extending the total time investment.

Traditional 3D Scanning Workflow:
1. Initial setup and horse positioning (2-3 minutes)
2. Whole-body scanning while walking around horse (1-2 minutes)
3. Individual limb scanning (20 seconds × multiple limbs)
4. Quality assessment and re-scanning (2-5 additional attempts)
5. Post-processing in desktop software (5-10 minutes)
6. Manual landmark identification and measurement (3-5 minutes)
Total: ~15-25 minutes per horse

Post-Processing Dependencies and Workflow Limitations

The research methodology demonstrates significant post-processing requirements that limit practical deployment. Captured 3D models must be manually cropped to remove background elements using specialized software (CloudCompare), then imported into CAD software (Fusion 360) for measurement extraction. Operators must manually identify anatomical landmarks on 3D models and draw measurement lines using cross-sectional analysis tools, introducing both time delays and potential human error in landmark identification.

Accuracy vs. Practicality Trade-offs

While the study achieves relative errors of -1.37% to 6.25%, several factors limit practical adoption in field environments. The methodology requires trained operators familiar with 3D scanning techniques and access to desktop computers with specialized CAD software. The researchers note particular challenges with movement artifacts requiring multiple scan attempts, issues that become more pronounced in real-world equine environments.

This approach, while scientifically rigorous, highlights the compelling advantages of AI-powered mobile measurement: eliminating post-processing delays, reducing operator skill requirements, enabling field deployment and providing immediate results suitable for practical equine management applications.

Conclusion

Tapptitude's strategic technical decisions each address a core constraint: LiDAR eliminates the need for physical contact with potentially skittish animals, ARKit maintains spatial accuracy without requiring controlled environments and AI removes subjective interpretation from anatomical landmark identification. The result is a measurement system that is faster while being accessible through devices that practitioners already carry. This convergence transforms what was once a time-consuming, potentially dangerous and often inconsistent process into a quick, safe and standardized procedure that can be performed anywhere.

This focus on real-world usability demonstrates how thoughtful technical implementation makes advanced measurement capabilities accessible to practitioners who need reliable results without disrupting existing workflows.


Ready to Transform Your AR Experience?

Have questions or need guidance? Our experts are standing by to help you integrate Machine Learning, ARKit and LiDAR. Click the button below to get in touch and take your application’s AI capabilities to the next level.

Contact Us
Alex Tudose

Technical Director

Senior iOS Developer with more than 5 years of experience in developing native mobile apps. I always aim to combine fantastic user experiences, with solid design principles, and strong development solutions. When I'm not coding, I'm most likely outdoors, riding my bike.