Machine learning isn't magic—it's a contract. In this episode, we build the mental model iOS developers need to ship Core ML features that don't fail silently in production.
🧠 The Simple Mental Model
If you only remember one sentence from this episode, make it this: Machine learning is a function learned from examples.
Unlike traditional programming where you write the rules, ML systems learn patterns from data. This shifts your role from writing logic to managing data and contracts.
- Training is like compiling—it's the process of learning the function from labeled examples.
- Inference is like running—it's using that trained function to make predictions on-device.
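The mental model can be sketched in types. This is an illustrative sketch, not real API—`isSpamRuleBased` and `LearnedClassifier` are hypothetical names:

```swift
// Traditional programming: you hand-write the mapping from input to output.
func isSpamRuleBased(_ text: String) -> Bool {
    return text.contains("FREE") // an explicit, brittle rule you wrote
}

// Machine learning: training *learns* the mapping from labeled examples.
// Inference is just calling the resulting function on-device.
typealias LearnedClassifier = (String) -> (label: String, confidence: Float)
```

Once you see the model as nothing more than a learned `(Input) -> Output` function, the rest of the episode is about protecting that function's inputs and outputs.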
⚙️ The #1 Production Failure
⚠️ WARNING: Preprocessing Mismatch
The most common reason on-device ML fails isn't a "bad model." It's a mismatch between how the model was trained and how the app prepares data.
The Silent Failure: If your app resizes, crops, or normalizes an image differently than the training pipeline, the model will return plausible-looking but incorrect results without ever crashing.
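A concrete sketch of how the mismatch looks in code, for a single channel of a single pixel. The ImageNet-style statistics (mean 0.485, std 0.229) are an illustrative assumption, not taken from any specific model:

```swift
// The same raw pixel value, normalized two different ways.
let pixel: Float = 128

// Training pipeline: scale to [0, 1], then standardize with per-channel stats.
let trainedInput = (pixel / 255.0 - 0.485) / 0.229

// App pipeline (wrong): scale to [-1, 1] instead.
let shippedInput = pixel / 127.5 - 1.0

// Both are perfectly "plausible" floats, so nothing crashes—
// but the model was trained on one distribution and is fed another.
```

No error is thrown anywhere in this code path, which is exactly why the failure is silent: the only symptom is degraded predictions.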
To avoid this, treat your model as a versioned protocol that includes:
- The model weights (.mlmodel file)
- Preprocessing parameters (mean, std, crop settings)
- Label mapping (what index 17 actually means)
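One way to make that contract concrete is to ship it as data next to the weights. A sketch with hypothetical names—`ModelContract` and its fields are illustrative, not an Apple API:

```swift
// Versioned contract shipped alongside the .mlmodel file (e.g. as JSON).
struct ModelContract: Codable {
    let modelVersion: String   // ties these parameters to a specific set of weights
    let inputSide: Int         // e.g. 224 for a 224×224 center crop
    let mean: [Float]          // per-channel normalization mean used in training
    let std: [Float]           // per-channel normalization std used in training
    let labels: [String]       // index → label; labels[17] is what index 17 actually means
}
```

Because the app decodes its preprocessing parameters from the same artifact the training pipeline wrote, the two sides can't drift apart silently—a version mismatch becomes an explicit, detectable error.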
🔍 Implementation Patterns
Vision + Core ML Classification
Using the Vision framework is the recommended way to handle image-based models, as it manages much of the preprocessing for you.
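A minimal sketch of that flow, assuming a bundled classifier whose Xcode-generated class is the hypothetical `MyClassifier`:

```swift
import CoreML
import Vision

// Classify a CGImage with a bundled Core ML model via Vision.
func classify(_ image: CGImage, completion: @escaping (String?, Float) -> Void) {
    guard let coreMLModel = try? MyClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil, 0)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision resizes and converts pixels per the model's input description.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier, top?.confidence ?? 0)
    }
    // Still your responsibility: pick the crop strategy that matches training.
    request.imageCropAndScaleOption = .centerCrop
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Note that even with Vision doing the heavy lifting, `imageCropAndScaleOption` is part of the preprocessing contract—choosing `.centerCrop` vs. `.scaleFill` is exactly the kind of silent mismatch discussed above.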
For a complete implementation guide on using Vision with Core ML, refer to Apple's official documentation:
Classifying Images with Core ML and Vision
✨ This Week's Checklist
- ✓ Read the Spec: Open any .mlmodel in Xcode and treat the input/output descriptions like an API contract.
- ✓ Write a Tripwire: Create one "golden path" test with a known input and expected output range.
- ✓ Profile Early: Measure inference time on a mid-range device before committing to a specific architecture.
- ✓ Design for Uncertainty: Decide how the UI behaves when the model returns low-confidence results.
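The tripwire item can be sketched as an XCTest. `Classifier`, `Prediction`, and `goldenImage(named:)` are hypothetical stand-ins for your own pipeline wrapper:

```swift
import XCTest

struct Prediction { let label: String; let confidence: Float }

final class GoldenPathTests: XCTestCase {
    func testGoldenImageStaysStable() throws {
        let classifier = Classifier()                   // hypothetical pipeline wrapper
        let image = try goldenImage(named: "retriever") // known photo in the test bundle
        let result = try classifier.predict(image)

        // Pin the label exactly, but the confidence only to a range—
        // scores drift slightly across model and OS updates.
        XCTAssertEqual(result.label, "golden_retriever")
        XCTAssertGreaterThan(result.confidence, 0.8)
    }
}
```

A single test like this catches the whole class of silent failures at once: if preprocessing, weights, or label mapping drift, the known input stops producing the known output.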
🎯 Key Takeaways
1. ML is a function: it's a learned mapping from inputs to outputs, not a mystical "brain."
2. The pipeline is the product: Core ML is just the runtime; your success depends on the data and preprocessing.
3. Version the contract: always bundle preprocessing parameters and label mappings with your model weights.
4. Test for stability: use golden-path tests to ensure your pipeline stays consistent across model updates.
About Sandboxed
Sandboxed is a podcast for iOS developers who want to add AI and machine learning features to their apps—without needing a PhD in ML.
Each episode, we take one practical ML topic—like Vision, Core ML, or Apple Intelligence—and walk through how it actually works on iOS, what you can build with it, and how to ship it this week.
If you want to build smarter iOS apps with on-device AI, subscribe to stay ahead of the curve.
Ready to dive deeper?
Next episode, we're mapping the ML landscape for iOS developers: Cloud vs. On-Device. We'll make the decision feel concrete, not ideological.