# CheapRetouch
A privacy-first iOS photo editor for removing unwanted elements from your photos — powered by on-device machine learning.


## Features
### 🧑 Person Removal
Tap on any person in your photo to instantly remove them. The app uses Apple's Vision framework to generate precise segmentation masks, then fills the removed area seamlessly.
### 📦 Object Removal
Remove unwanted foreground objects with a single tap. When automatic detection isn't possible, use the smart brush tool with edge-aware refinement for manual selection.
### ⚡ Wire & Line Removal
Easily remove power lines, cables, and other thin linear objects. The app detects contours and automatically selects wire-like shapes, or you can trace them manually with the line brush.
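
The contour pass can be expressed directly in Vision. Below is a minimal sketch of selecting wire-like shapes, assuming a simple elongation test on each contour's bounding box; the thresholds are illustrative, not the app's actual tuning.

```swift
import Vision

// Sketch: detect contours, then keep only long, thin ones as wire candidates.
// The 8:1 elongation threshold is an illustrative assumption.
func detectWireCandidates(in image: CGImage) throws -> [VNContour] {
    let request = VNDetectContoursRequest()
    request.contrastAdjustment = 2.0       // boost faint lines against the sky
    request.maximumImageDimension = 1024   // downsample large photos for speed

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else { return [] }
    return observation.topLevelContours.filter { contour in
        let box = contour.normalizedPath.boundingBox
        let thin = min(box.width, box.height)
        let long = max(box.width, box.height)
        return thin > 0 && long / thin > 8   // long and thin → likely a wire
    }
}
```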
## How It Works
CheapRetouch combines Apple's Vision framework for intelligent object detection with an AI-powered inpainting engine:
### Object Detection (Vision Framework)
- **`VNGenerateForegroundInstanceMaskRequest`** — Generates pixel-accurate masks for people and salient foreground objects
- **`VNDetectContoursRequest`** — Detects edges and contours for wire/line detection
- **Tap-based selection** — Simply tap on what you want to remove (see the sketch below)
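
A hedged sketch of that flow, assuming the simplest case: mapping the tap to a specific instance (by reading the observation's `instanceMask` buffer at the tapped pixel) is elided, and every detected foreground instance is masked instead.

```swift
import Vision

// Sketch: produce a soft mask covering the detected foreground instances.
// A real tap flow would read observation.instanceMask at the tapped pixel
// to pick a single instance; all instances are masked here for brevity.
func foregroundMask(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else { return nil }
    // Scaled to the input image's size, ready for the inpainting stage.
    return try observation.generateScaledMaskForImage(
        forInstances: observation.allInstances,
        from: handler
    )
}
```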
### AI-Powered Inpainting (LaMa Model)
The app uses **LaMa (Large Mask Inpainting)**, a state-of-the-art deep learning model optimized for removing objects from images:
- **Model**: `LaMaFP16_512.mlpackage` — A Core ML-optimized neural network running entirely on-device (loading and inference are sketched after the technical details below)
- **Architecture**: Fourier convolutions that capture both local textures and global image structure
- **Processing**: Runs on the Neural Engine (ANE) for fast, efficient inference
- **Quality**: Produces natural-looking results even for large masked areas
**Technical Details:**
- Input resolution: 512×512 pixels (the app automatically crops and scales around masked regions)
- Quantization: FP16 for optimal balance of quality and performance
- Fallback: Metal-accelerated exemplar-based inpainting when needed
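
A minimal loading-and-inference sketch with Core ML. The feature names (`image`, `mask`, `output`) are assumptions for illustration; the real names are defined by the model's compiled interface, and in practice the model would be loaded once and cached rather than reloaded per call.

```swift
import CoreML

// Sketch: run one 512×512 inpainting pass on the bundled LaMa model.
// Feature names "image"/"mask"/"output" are illustrative assumptions.
func inpaint(image: CVPixelBuffer, mask: CVPixelBuffer) throws -> CVPixelBuffer? {
    let config = MLModelConfiguration()
    config.computeUnits = .all   // lets Core ML schedule onto the Neural Engine

    // The .mlpackage compiles into an .mlmodelc inside the app bundle.
    guard let url = Bundle.main.url(forResource: "LaMaFP16_512",
                                    withExtension: "mlmodelc") else { return nil }
    let model = try MLModel(contentsOf: url, configuration: config)

    let inputs = try MLDictionaryFeatureProvider(dictionary: [
        "image": MLFeatureValue(pixelBuffer: image),  // 512×512 RGB crop
        "mask":  MLFeatureValue(pixelBuffer: mask),   // 512×512 mask, 1 = inpaint
    ])
    let output = try model.prediction(from: inputs)
    return output.featureValue(for: "output")?.imageBufferValue
}
```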
### Processing Pipeline
```
1. User taps object → Vision generates mask
2. Mask is dilated and feathered for smooth edges
3. Region is cropped and scaled to 512×512
4. LaMa model inpaints the masked area
5. Result is composited back into original image
```
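
Step 2 maps naturally onto Core Image: a morphological maximum grows the mask, and a Gaussian blur feathers its edge. A sketch with illustrative radii:

```swift
import CoreImage

// Sketch of pipeline step 2: dilate the mask, then feather its edge.
// The radii are illustrative, not the app's actual tuning.
func refineMask(_ mask: CIImage) -> CIImage? {
    // Dilate so the inpaint region fully covers the removed object.
    guard let dilate = CIFilter(name: "CIMorphologyMaximum") else { return nil }
    dilate.setValue(mask, forKey: kCIInputImageKey)
    dilate.setValue(6.0, forKey: kCIInputRadiusKey)

    // Feather the hard edge so the composite blends smoothly.
    guard let feather = CIFilter(name: "CIGaussianBlur") else { return nil }
    feather.setValue(dilate.outputImage, forKey: kCIInputImageKey)
    feather.setValue(3.0, forKey: kCIInputRadiusKey)

    // The blur expands the extent; crop back to the original mask's extent.
    return feather.outputImage?.cropped(to: mask.extent)
}
```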
## Privacy
🔒 **100% On-Device Processing**
- No photos leave your device
- No cloud services or network calls
- No analytics or telemetry
- Photo library access via the secure PHPicker (see the sketch below)
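
The last point is what keeps the permission surface small: PHPicker runs out of process, so the app receives only the photos the user explicitly hands it and never needs blanket library access. A minimal setup sketch:

```swift
import PhotosUI

// Sketch: PHPicker configuration. The picker runs out of process, so no
// photo-library permission prompt is required.
func makePicker(delegate: PHPickerViewControllerDelegate) -> PHPickerViewController {
    var config = PHPickerConfiguration()   // no PHAsset access requested
    config.filter = .images                // photos only
    config.selectionLimit = 1              // one photo per edit session
    let picker = PHPickerViewController(configuration: config)
    picker.delegate = delegate
    return picker
}
```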
## Technical Stack
| Component | Technology |
|-----------|------------|
| UI | SwiftUI + UIKit |
| Object Detection | Vision Framework |
| ML Inference | Core ML (Neural Engine) |
| GPU Processing | Metal |
| Image Pipeline | Core Image |
| Fallback Processing | Accelerate/vImage |
## Requirements
- iOS 17.0 or later
- iPhone or iPad with A14 chip or newer (for optimal Neural Engine performance)
## Performance
| Operation | Target Time | Device |
|-----------|-------------|--------|
| Preview inpaint | < 300ms | iPhone 12+ |
| Full resolution (12MP) | < 4 seconds | iPhone 12+ |
| Full resolution (48MP) | < 12 seconds | iPhone 15 Pro+ |
## Non-Destructive Editing
All edits are stored as an operation stack — your original photos are never modified. Full undo/redo support included.
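
A hedged sketch of the idea, with hypothetical types (the app's real operation model may differ): each removal is pushed as a value onto an undo stack, and the original pixels are never overwritten.

```swift
import CoreGraphics

// Hypothetical operation type: stands in for whatever the app records per edit.
struct EditOperation {
    let mask: CGImage    // region the user removed
    let patch: CGImage   // inpainted pixels for that region
}

// Minimal operation stack with undo/redo; the original image is untouched.
final class EditStack {
    private var undoStack: [EditOperation] = []
    private var redoStack: [EditOperation] = []

    func push(_ op: EditOperation) {
        undoStack.append(op)
        redoStack.removeAll()   // a new edit invalidates the redo history
    }

    func undo() -> EditOperation? {
        guard let op = undoStack.popLast() else { return nil }
        redoStack.append(op)
        return op
    }

    func redo() -> EditOperation? {
        guard let op = redoStack.popLast() else { return nil }
        undoStack.append(op)
        return op
    }
}
```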
## License
MIT License — see [LICENSE](LICENSE) for details.