Compare commits

4 Commits

Author SHA1 Message Date
c3f4126296 Update BrushCanvasView and CanvasView, add screenshots 2026-01-25 10:46:17 -05:00
9f35ea751e Fix brush tool not working after undo
- Fixed dimension mismatch between mask and display image after undo
- Mask was being created at original image size, but displayImage is at
  preview scale after undo/redo (renderPreview scales images > 2048px)
- Now create mask at actual displayImage dimensions, ensuring mask and
  image sizes match for inpainting
- Also fixed edge refinement gradient to recompute when image changes
2026-01-24 23:26:52 -05:00
83baff5efb docs: Add comprehensive README with app features and LaMa ML model details 2026-01-24 15:14:48 -05:00
eb047e27b8 Add LaMa Core ML model for AI-powered inpainting
- Add LaMaFP16_512.mlpackage (~90MB) for high-quality object removal
- Add LaMaInpainter.swift wrapper with image preprocessing and merging
- Modify InpaintEngine to use LaMa first, gradient fill as fallback
- Fix brush mask size (use scale 1.0 instead of screen scale)
- Fix LaMa output size (use scale 1.0 in merge function)
- Add model loading wait with 5 second timeout

The LaMa model provides significantly better inpainting quality compared
to the gradient fill method, especially for complex backgrounds.
2026-01-24 14:31:54 -05:00
51 changed files with 133253 additions and 893 deletions
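
For readers skimming the diffs, the core of the fix in commit 9f35ea751e is easy to miss inside the large BrushCanvasView diff below, so here it is in isolation. This is an illustrative sketch assuming UIKit; only `displayImage` and the scale-1.0 renderer format come from the diff, the other names are hypothetical.

import UIKit

// Size the brush mask to the CGImage actually being displayed, at scale 1.0,
// so mask and image dimensions match even after undo/redo swaps in a
// preview-scaled image.
func makeBrushMask(for displayImage: CGImage,
                   strokes: [[CGPoint]],
                   brushSize: CGFloat) -> CGImage? {
    let size = CGSize(width: displayImage.width, height: displayImage.height)
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1.0  // avoid the 2x/3x screen scale inflating the bitmap
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let mask = renderer.image { ctx in
        UIColor.black.setFill()   // black = keep
        ctx.fill(CGRect(origin: .zero, size: size))
        UIColor.white.setStroke() // white = inpaint
        for stroke in strokes where stroke.count > 1 {
            let path = UIBezierPath()
            path.move(to: stroke[0])
            stroke.dropFirst().forEach { path.addLine(to: $0) }
            path.lineWidth = brushSize
            path.lineCapStyle = .round
            path.lineJoinStyle = .round
            path.stroke()
        }
    }
    return mask.cgImage
}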

.gitignore vendored
View File

@@ -54,3 +54,4 @@ logs/
# Temporary files
*.tmp
*.temp
inpaint-ios-reference/

View File

@@ -252,7 +252,7 @@
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
ASSETCATALOG_COMPILER_GLOBAL_ACCENT_COLOR_NAME = AccentColor;
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
CURRENT_PROJECT_VERSION = 3;
DEVELOPMENT_TEAM = 7X85543FQQ;
ENABLE_PREVIEWS = YES;
GENERATE_INFOPLIST_FILE = YES;
@@ -292,7 +292,7 @@
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
ASSETCATALOG_COMPILER_GLOBAL_ACCENT_COLOR_NAME = AccentColor;
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
CURRENT_PROJECT_VERSION = 3;
DEVELOPMENT_TEAM = 7X85543FQQ;
ENABLE_PREVIEWS = YES;
GENERATE_INFOPLIST_FILE = YES;

View File

@@ -1,6 +1,33 @@
{
"colors" : [
{
"color" : {
"color-space" : "srgb",
"components" : {
"alpha" : "1.000",
"blue" : "0.851",
"green" : "0.459",
"red" : "0.239"
}
},
"idiom" : "universal"
},
{
"appearances" : [
{
"appearance" : "luminosity",
"value" : "dark"
}
],
"color" : {
"color-space" : "srgb",
"components" : {
"alpha" : "1.000",
"blue" : "0.918",
"green" : "0.557",
"red" : "0.341"
}
},
"idiom" : "universal"
}
],

Binary files not shown (19 images added; sizes: 16, 426, 20, 20, 26, 29, 33, 37, 2.1, 3.0, 4.1, 6.1, 7.0, 7.6, 7.8, 9.6, 11, 11, 13 KiB).

View File

@@ -1,35 +1 @@
{
"images" : [
{
"idiom" : "universal",
"platform" : "ios",
"size" : "1024x1024"
},
{
"appearances" : [
{
"appearance" : "luminosity",
"value" : "dark"
}
],
"idiom" : "universal",
"platform" : "ios",
"size" : "1024x1024"
},
{
"appearances" : [
{
"appearance" : "luminosity",
"value" : "tinted"
}
],
"idiom" : "universal",
"platform" : "ios",
"size" : "1024x1024"
}
],
"info" : {
"author" : "xcode",
"version" : 1
}
}
{"images":[{"size":"60x60","expected-size":"180","filename":"180.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"3x"},{"size":"40x40","expected-size":"80","filename":"80.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"2x"},{"size":"40x40","expected-size":"120","filename":"120.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"3x"},{"size":"60x60","expected-size":"120","filename":"120.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"2x"},{"size":"57x57","expected-size":"57","filename":"57.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"1x"},{"size":"29x29","expected-size":"58","filename":"58.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"2x"},{"size":"29x29","expected-size":"29","filename":"29.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"1x"},{"size":"29x29","expected-size":"87","filename":"87.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"3x"},{"size":"57x57","expected-size":"114","filename":"114.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"2x"},{"size":"20x20","expected-size":"40","filename":"40.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"2x"},{"size":"20x20","expected-size":"60","filename":"60.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"iphone","scale":"3x"},{"size":"1024x1024","filename":"1024.png","expected-size":"1024","idiom":"ios-marketing","folder":"Assets.xcassets/AppIcon.appiconset/","scale":"1x"},{"size":"40x40","expected-size":"80","filename":"80.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"2x"},{"size":"72x72","expected-size":"72","filename":"72.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"1x"},{"size":"76x76","expected-size":"152","filename":"152.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"2x"},{"size":"50x50","expected-size":"100","filename":"100.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"2x"},{"size":"29x29","expected-size":"58","filename":"58.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"2x"},{"size":"76x76","expected-size":"76","filename":"76.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"1x"},{"size":"29x29","expected-size":"29","filename":"29.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"1x"},{"size":"50x50","expected-size":"50","filename":"50.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"1x"},{"size":"72x72","expected-size":"144","filename":"144.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"2x"},{"size":"40x40","expected-size":"40","filename":"40.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"1x"},{"size":"83.5x83.5","expected-size":"167","filename":"167.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"2x"},{"size":"20x20","expected-size":"20","filename":"20.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"1x"},{"size":"20x20","expected-size":"40","filename":"40.png","folder":"Assets.xcassets/AppIcon.appiconset/","idiom":"ipad","scale":"2x"}]}

View File

@@ -1,6 +0,0 @@
{
"info" : {
"author" : "xcode",
"version" : 1
}
}

View File

@@ -2,7 +2,8 @@
// BrushCanvasView.swift
// CheapRetouch
//
// Canvas overlay for brush-based manual selection.
// Canvas overlay for brush-based manual selection with support for
// single-finger drawing and two-finger pan/zoom.
//
import SwiftUI
@@ -12,44 +13,74 @@ struct BrushCanvasView: View {
@Bindable var viewModel: EditorViewModel
let imageSize: CGSize
let displayedImageFrame: CGRect
@Binding var scale: CGFloat
@Binding var lastScale: CGFloat
@Binding var offset: CGSize
@Binding var lastOffset: CGSize
let minScale: CGFloat
let maxScale: CGFloat
let clampOffset: () -> Void
@State private var currentStroke: [CGPoint] = []
@State private var allStrokes: [[CGPoint]] = []
@State private var isErasing = false
@State private var currentStroke: [CGPoint] = []
@State private var currentTouchLocation: CGPoint?
@State private var gradientImage: EdgeRefinement.GradientImage?
/// The effective brush size on screen, scaled by zoom level
private var scaledBrushSize: CGFloat {
viewModel.brushSize * scale
}
var body: some View {
Canvas { context, size in
// Draw all completed strokes
for stroke in allStrokes {
drawStroke(stroke, in: &context, color: isErasing ? .black : .white)
}
// Draw current stroke
if !currentStroke.isEmpty {
drawStroke(currentStroke, in: &context, color: isErasing ? .black : .white)
}
// Draw brush preview circle at current touch location
if let location = currentTouchLocation {
let previewRect = CGRect(
x: location.x - viewModel.brushSize / 2,
y: location.y - viewModel.brushSize / 2,
width: viewModel.brushSize,
height: viewModel.brushSize
)
context.stroke(
Path(ellipseIn: previewRect),
with: .color(.white.opacity(0.8)),
lineWidth: 2
GeometryReader { geometry in
ZStack {
// UIKit gesture handler for proper single/two-finger differentiation
// This is at the bottom of the ZStack but receives all touches
BrushGestureView(
displayedImageFrame: displayedImageFrame,
scale: $scale,
lastScale: $lastScale,
offset: $offset,
lastOffset: $lastOffset,
minScale: minScale,
maxScale: maxScale,
currentStroke: $currentStroke,
allStrokes: $allStrokes,
currentTouchLocation: $currentTouchLocation,
clampOffset: clampOffset
)
.frame(width: geometry.size.width, height: geometry.size.height)
// SwiftUI Canvas for rendering strokes (on top for visibility)
Canvas { context, size in
// Draw all completed strokes
for stroke in allStrokes {
drawStroke(stroke, in: &context, color: .white)
}
// Draw current stroke
if !currentStroke.isEmpty {
drawStroke(currentStroke, in: &context, color: .white)
}
// Draw brush preview circle at current touch location
if let location = currentTouchLocation {
let previewRect = CGRect(
x: location.x - scaledBrushSize / 2,
y: location.y - scaledBrushSize / 2,
width: scaledBrushSize,
height: scaledBrushSize
)
context.stroke(
Path(ellipseIn: previewRect),
with: .color(.white.opacity(0.8)),
lineWidth: 2
)
}
}
.allowsHitTesting(false) // Let the gesture view handle all touches
}
}
.gesture(drawingGesture)
.overlay(alignment: .bottom) {
brushControls
}
.onAppear {
// Precompute gradient for edge refinement if enabled
if viewModel.useEdgeRefinement, let image = viewModel.displayImage {
@@ -61,6 +92,32 @@ struct BrushCanvasView: View {
computeGradientAsync(from: image)
}
}
.onChange(of: viewModel.editedImage) { _, _ in
// Recompute gradient when image changes (e.g., after undo/redo)
if viewModel.useEdgeRefinement, let image = viewModel.displayImage {
computeGradientAsync(from: image)
} else {
gradientImage = nil
}
}
.onChange(of: allStrokes.count) { _, newCount in
viewModel.brushStrokesCount = newCount
}
.onChange(of: viewModel.triggerClearBrushStrokes) { _, shouldClear in
if shouldClear {
allStrokes.removeAll()
currentStroke.removeAll()
viewModel.triggerClearBrushStrokes = false
}
}
.onChange(of: viewModel.triggerApplyBrushMask) { _, shouldApply in
if shouldApply {
Task {
await applyBrushMask()
}
viewModel.triggerApplyBrushMask = false
}
}
}
private func computeGradientAsync(from image: CGImage) {
@@ -95,94 +152,40 @@ struct BrushCanvasView: View {
path,
with: .color(color.opacity(0.7)),
style: StrokeStyle(
lineWidth: viewModel.brushSize,
lineWidth: scaledBrushSize,
lineCap: .round,
lineJoin: .round
)
)
}
private var drawingGesture: some Gesture {
DragGesture(minimumDistance: 0)
.onChanged { value in
let point = value.location
currentTouchLocation = point
// Only add points within the image bounds
if displayedImageFrame.contains(point) {
currentStroke.append(point)
}
}
.onEnded { _ in
currentTouchLocation = nil
if !currentStroke.isEmpty {
allStrokes.append(currentStroke)
currentStroke = []
}
}
}
private var brushControls: some View {
HStack(spacing: 16) {
// Erase toggle
Button {
isErasing.toggle()
} label: {
Image(systemName: isErasing ? "eraser.fill" : "eraser")
.font(.title2)
.frame(width: 44, height: 44)
.background(isErasing ? Color.accentColor : Color.clear)
.clipShape(Circle())
}
.accessibilityLabel(isErasing ? "Eraser active" : "Switch to eraser")
// Clear all
Button {
allStrokes.removeAll()
currentStroke.removeAll()
} label: {
Image(systemName: "trash")
.font(.title2)
.frame(width: 44, height: 44)
}
.disabled(allStrokes.isEmpty)
.accessibilityLabel("Clear all strokes")
Spacer()
// Done button
Button {
Task {
await applyBrushMask()
}
} label: {
Text("Done")
.font(.headline)
.padding(.horizontal, 20)
.padding(.vertical, 10)
}
.buttonStyle(.borderedProminent)
.disabled(allStrokes.isEmpty)
}
.padding()
.background(.ultraThinMaterial)
}
private func applyBrushMask() async {
DebugLogger.action("BrushCanvasView.applyBrushMask called")
guard !allStrokes.isEmpty else {
DebugLogger.log("No strokes to apply")
let hasStrokes = !allStrokes.isEmpty
let hasPendingMask = viewModel.pendingRefineMask != nil
guard hasStrokes || hasPendingMask else {
DebugLogger.log("No strokes or pending mask to apply")
return
}
DebugLogger.log("Stroke count: \(allStrokes.count), total points: \(allStrokes.reduce(0) { $0 + $1.count })")
DebugLogger.log("Image size: \(imageSize), displayed frame: \(displayedImageFrame)")
guard let displayImage = viewModel.displayImage else {
DebugLogger.error("No display image available")
return
}
let actualImageSize = CGSize(width: displayImage.width, height: displayImage.height)
let scaleX = imageSize.width / displayedImageFrame.width
let scaleY = imageSize.height / displayedImageFrame.height
DebugLogger.log("Stroke count: \(allStrokes.count), total points: \(allStrokes.reduce(0) { $0 + $1.count })")
DebugLogger.log("Original image size: \(imageSize)")
DebugLogger.log("Actual display image size: \(actualImageSize)")
DebugLogger.log("Displayed frame: \(displayedImageFrame)")
DebugLogger.log("Has pending mask: \(hasPendingMask)")
let scaleX = actualImageSize.width / displayedImageFrame.width
let scaleY = actualImageSize.height / displayedImageFrame.height
DebugLogger.log("Scale factors: X=\(scaleX), Y=\(scaleY)")
// Convert all strokes to image coordinates
var imageCoordStrokes: [[CGPoint]] = []
for stroke in allStrokes {
let imageStroke = stroke.map { point in
@@ -194,7 +197,6 @@ struct BrushCanvasView: View {
imageCoordStrokes.append(imageStroke)
}
// Apply edge refinement if enabled and gradient is available
if viewModel.useEdgeRefinement, let gradient = gradientImage {
imageCoordStrokes = imageCoordStrokes.map { stroke in
EdgeRefinement.refineSelectionToEdges(
@@ -205,14 +207,17 @@ struct BrushCanvasView: View {
}
}
// Create mask image from strokes
let renderer = UIGraphicsImageRenderer(size: imageSize)
let format = UIGraphicsImageRendererFormat()
format.scale = 1.0
let renderer = UIGraphicsImageRenderer(size: actualImageSize, format: format)
let maskImage = renderer.image { ctx in
// Fill with black (not masked)
UIColor.black.setFill()
ctx.fill(CGRect(origin: .zero, size: imageSize))
ctx.fill(CGRect(origin: .zero, size: actualImageSize))
if let pendingMask = viewModel.pendingRefineMask {
ctx.cgContext.draw(pendingMask, in: CGRect(origin: .zero, size: actualImageSize))
}
// Draw strokes in white (masked areas)
UIColor.white.setStroke()
for stroke in imageCoordStrokes {
@@ -234,165 +239,288 @@ struct BrushCanvasView: View {
if let cgImage = maskImage.cgImage {
DebugLogger.imageInfo("Created brush mask", image: cgImage)
viewModel.pendingRefineMask = nil
viewModel.maskPreview = nil
await viewModel.applyBrushMask(cgImage)
} else {
DebugLogger.error("Failed to create CGImage from brush mask")
}
// Clear strokes after applying
allStrokes.removeAll()
DebugLogger.log("Strokes cleared")
}
}
// MARK: - Line Brush View for Wire Tool
// MARK: - UIKit Gesture Handler
struct LineBrushView: View {
@Bindable var viewModel: EditorViewModel
let imageSize: CGSize
/// A UIViewRepresentable that handles touch gestures with proper single vs two-finger differentiation.
/// - Single finger: Drawing brush strokes
/// - Two fingers: Pan (when zoomed) and pinch to zoom
struct BrushGestureView: UIViewRepresentable {
let displayedImageFrame: CGRect
@Binding var scale: CGFloat
@Binding var lastScale: CGFloat
@Binding var offset: CGSize
@Binding var lastOffset: CGSize
let minScale: CGFloat
let maxScale: CGFloat
@Binding var currentStroke: [CGPoint]
@Binding var allStrokes: [[CGPoint]]
@Binding var currentTouchLocation: CGPoint?
let clampOffset: () -> Void
@State private var linePoints: [CGPoint] = []
func makeUIView(context: Context) -> BrushGestureUIView {
let view = BrushGestureUIView()
view.backgroundColor = .clear
view.isMultipleTouchEnabled = true
view.coordinator = context.coordinator
return view
}
var body: some View {
Canvas { context, size in
guard linePoints.count >= 2 else { return }
func updateUIView(_ uiView: BrushGestureUIView, context: Context) {
context.coordinator.displayedImageFrame = displayedImageFrame
context.coordinator.scale = $scale
context.coordinator.lastScale = $lastScale
context.coordinator.offset = $offset
context.coordinator.lastOffset = $lastOffset
context.coordinator.minScale = minScale
context.coordinator.maxScale = maxScale
context.coordinator.currentStroke = $currentStroke
context.coordinator.allStrokes = $allStrokes
context.coordinator.currentTouchLocation = $currentTouchLocation
context.coordinator.clampOffset = clampOffset
}
var path = Path()
path.move(to: linePoints[0])
func makeCoordinator() -> Coordinator {
Coordinator(
displayedImageFrame: displayedImageFrame,
scale: $scale,
lastScale: $lastScale,
offset: $offset,
lastOffset: $lastOffset,
minScale: minScale,
maxScale: maxScale,
currentStroke: $currentStroke,
allStrokes: $allStrokes,
currentTouchLocation: $currentTouchLocation,
clampOffset: clampOffset
)
}
for point in linePoints.dropFirst() {
path.addLine(to: point)
class Coordinator: NSObject {
var displayedImageFrame: CGRect
var scale: Binding<CGFloat>
var lastScale: Binding<CGFloat>
var offset: Binding<CGSize>
var lastOffset: Binding<CGSize>
var minScale: CGFloat
var maxScale: CGFloat
var currentStroke: Binding<[CGPoint]>
var allStrokes: Binding<[[CGPoint]]>
var currentTouchLocation: Binding<CGPoint?>
var clampOffset: () -> Void
// Track if we're in drawing mode (single finger) or navigation mode (two fingers)
var isDrawing = false
var initialPinchScale: CGFloat = 1.0
init(
displayedImageFrame: CGRect,
scale: Binding<CGFloat>,
lastScale: Binding<CGFloat>,
offset: Binding<CGSize>,
lastOffset: Binding<CGSize>,
minScale: CGFloat,
maxScale: CGFloat,
currentStroke: Binding<[CGPoint]>,
allStrokes: Binding<[[CGPoint]]>,
currentTouchLocation: Binding<CGPoint?>,
clampOffset: @escaping () -> Void
) {
self.displayedImageFrame = displayedImageFrame
self.scale = scale
self.lastScale = lastScale
self.offset = offset
self.lastOffset = lastOffset
self.minScale = minScale
self.maxScale = maxScale
self.currentStroke = currentStroke
self.allStrokes = allStrokes
self.currentTouchLocation = currentTouchLocation
self.clampOffset = clampOffset
}
}
}
/// Custom UIView that handles touch events directly for precise control over single vs multi-touch.
class BrushGestureUIView: UIView {
var coordinator: BrushGestureView.Coordinator?
// Track active touches ourselves for reliability
private var activeTouches: [UITouch] = []
private var isDrawing = false
private var isNavigating = false
private var initialPinchDistance: CGFloat = 0
private var initialPinchScale: CGFloat = 1.0
private var lastPanLocation: CGPoint = .zero
private func activeTouchCount() -> Int {
return activeTouches.count
}
private func getActiveTouchLocations() -> [CGPoint] {
return activeTouches.map { $0.location(in: self) }
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
guard let coordinator = coordinator else { return }
// Add new touches to our tracking
for touch in touches {
if !activeTouches.contains(touch) {
activeTouches.append(touch)
}
}
let touchCount = activeTouchCount()
if touchCount == 1 {
// Single finger - start drawing
let point = activeTouches[0].location(in: self)
isDrawing = true
isNavigating = false
coordinator.currentTouchLocation.wrappedValue = point
if coordinator.displayedImageFrame.contains(point) {
coordinator.currentStroke.wrappedValue.append(point)
}
} else if touchCount >= 2 {
// Two or more fingers - switch to navigation mode
if isDrawing {
// Was drawing, cancel current stroke
coordinator.currentStroke.wrappedValue.removeAll()
coordinator.currentTouchLocation.wrappedValue = nil
isDrawing = false
}
context.stroke(
path,
with: .color(.white.opacity(0.7)),
style: StrokeStyle(
lineWidth: viewModel.wireWidth,
lineCap: .round,
lineJoin: .round
)
// Start navigation
let locations = getActiveTouchLocations()
let p1 = locations[0]
let p2 = locations[1]
initialPinchDistance = hypot(p2.x - p1.x, p2.y - p1.y)
initialPinchScale = coordinator.scale.wrappedValue
lastPanLocation = CGPoint(x: (p1.x + p2.x) / 2, y: (p1.y + p2.y) / 2)
isNavigating = true
}
}
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
guard let coordinator = coordinator else { return }
let touchCount = activeTouchCount()
if touchCount == 1 && isDrawing {
// Single finger - continue drawing
let point = activeTouches[0].location(in: self)
coordinator.currentTouchLocation.wrappedValue = point
if coordinator.displayedImageFrame.contains(point) {
coordinator.currentStroke.wrappedValue.append(point)
}
} else if touchCount >= 2 && isNavigating {
// Two fingers - pan and zoom
let locations = getActiveTouchLocations()
let p1 = locations[0]
let p2 = locations[1]
// Handle pinch zoom
if initialPinchDistance > 0 {
let currentDistance = hypot(p2.x - p1.x, p2.y - p1.y)
let scaleFactor = currentDistance / initialPinchDistance
let newScale = initialPinchScale * scaleFactor
coordinator.scale.wrappedValue = min(max(newScale, coordinator.minScale), coordinator.maxScale)
}
// Handle pan
let currentCenter = CGPoint(x: (p1.x + p2.x) / 2, y: (p1.y + p2.y) / 2)
let deltaX = currentCenter.x - lastPanLocation.x
let deltaY = currentCenter.y - lastPanLocation.y
coordinator.offset.wrappedValue = CGSize(
width: coordinator.offset.wrappedValue.width + deltaX,
height: coordinator.offset.wrappedValue.height + deltaY
)
}
.gesture(lineDrawingGesture)
.overlay(alignment: .bottom) {
lineControls
lastPanLocation = currentCenter
}
}
private var lineDrawingGesture: some Gesture {
DragGesture(minimumDistance: 0)
.onChanged { value in
let point = value.location
if displayedImageFrame.contains(point) {
// For line brush, we sample less frequently for smoother lines
if linePoints.isEmpty || distance(from: linePoints.last!, to: point) > 5 {
linePoints.append(point)
}
}
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
guard let coordinator = coordinator else { return }
// Remove ended touches from tracking
activeTouches.removeAll { touches.contains($0) }
let remainingCount = activeTouchCount()
if isDrawing && remainingCount == 0 {
// Finish drawing stroke
coordinator.currentTouchLocation.wrappedValue = nil
if !coordinator.currentStroke.wrappedValue.isEmpty {
coordinator.allStrokes.wrappedValue.append(coordinator.currentStroke.wrappedValue)
coordinator.currentStroke.wrappedValue = []
}
.onEnded { _ in
// Line complete, ready to apply
}
}
private var lineControls: some View {
HStack(spacing: 16) {
// Clear
Button {
linePoints.removeAll()
} label: {
Image(systemName: "trash")
.font(.title2)
.frame(width: 44, height: 44)
}
.disabled(linePoints.isEmpty)
.accessibilityLabel("Clear line")
Spacer()
// Cancel
Button {
linePoints.removeAll()
viewModel.selectedTool = .wire
} label: {
Text("Cancel")
.font(.headline)
.padding(.horizontal, 16)
.padding(.vertical, 10)
}
.buttonStyle(.bordered)
// Done button
Button {
Task {
await applyLineMask()
}
} label: {
Text("Remove Line")
.font(.headline)
.padding(.horizontal, 16)
.padding(.vertical, 10)
}
.buttonStyle(.borderedProminent)
.disabled(linePoints.count < 2)
}
.padding()
.background(.ultraThinMaterial)
}
private func applyLineMask() async {
guard linePoints.count >= 2 else { return }
let renderer = UIGraphicsImageRenderer(size: imageSize)
let maskImage = renderer.image { ctx in
UIColor.black.setFill()
ctx.fill(CGRect(origin: .zero, size: imageSize))
UIColor.white.setStroke()
let scaleX = imageSize.width / displayedImageFrame.width
let scaleY = imageSize.height / displayedImageFrame.height
let path = UIBezierPath()
let firstPoint = CGPoint(
x: (linePoints[0].x - displayedImageFrame.minX) * scaleX,
y: (linePoints[0].y - displayedImageFrame.minY) * scaleY
)
path.move(to: firstPoint)
for point in linePoints.dropFirst() {
let scaledPoint = CGPoint(
x: (point.x - displayedImageFrame.minX) * scaleX,
y: (point.y - displayedImageFrame.minY) * scaleY
)
path.addLine(to: scaledPoint)
}
path.lineWidth = viewModel.wireWidth * scaleX
path.lineCapStyle = .round
path.lineJoinStyle = .round
path.stroke()
isDrawing = false
}
if let cgImage = maskImage.cgImage {
await viewModel.applyBrushMask(cgImage)
if isNavigating && remainingCount < 2 {
// End navigation
coordinator.lastScale.wrappedValue = coordinator.scale.wrappedValue
coordinator.lastOffset.wrappedValue = coordinator.offset.wrappedValue
withAnimation(.spring(duration: 0.3)) {
coordinator.clampOffset()
}
isNavigating = false
// Don't start drawing with remaining finger - wait for fresh touch
}
linePoints.removeAll()
}
private func distance(from p1: CGPoint, to p2: CGPoint) -> CGFloat {
sqrt(pow(p2.x - p1.x, 2) + pow(p2.y - p1.y, 2))
override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
guard let coordinator = coordinator else { return }
// Remove cancelled touches
activeTouches.removeAll { touches.contains($0) }
// Cancel any ongoing gesture
coordinator.currentTouchLocation.wrappedValue = nil
coordinator.currentStroke.wrappedValue.removeAll()
if isNavigating {
coordinator.lastScale.wrappedValue = coordinator.scale.wrappedValue
coordinator.lastOffset.wrappedValue = coordinator.offset.wrappedValue
coordinator.clampOffset()
}
isDrawing = false
isNavigating = false
}
}
#Preview {
@Previewable @State var scale: CGFloat = 1.0
@Previewable @State var lastScale: CGFloat = 1.0
@Previewable @State var offset: CGSize = .zero
@Previewable @State var lastOffset: CGSize = .zero
let viewModel = EditorViewModel()
return BrushCanvasView(
viewModel: viewModel,
imageSize: CGSize(width: 1000, height: 1000),
displayedImageFrame: CGRect(x: 0, y: 0, width: 300, height: 300)
displayedImageFrame: CGRect(x: 0, y: 0, width: 300, height: 300),
scale: $scale,
lastScale: $lastScale,
offset: $offset,
lastOffset: $lastOffset,
minScale: 1.0,
maxScale: 10.0,
clampOffset: {}
)
}

View File

@@ -8,19 +8,6 @@
import SwiftUI
import UIKit
// MARK: - Conditional View Modifier
extension View {
@ViewBuilder
func `if`<Content: View>(_ condition: Bool, transform: (Self) -> Content) -> some View {
if condition {
transform(self)
} else {
self
}
}
}
struct CanvasView: View {
@Bindable var viewModel: EditorViewModel
@@ -64,39 +51,41 @@ struct CanvasView: View {
BrushCanvasView(
viewModel: viewModel,
imageSize: viewModel.imageSize,
displayedImageFrame: displayedImageFrame(in: geometry.size)
displayedImageFrame: displayedImageFrame(in: geometry.size),
scale: $scale,
lastScale: $lastScale,
offset: $offset,
lastOffset: $lastOffset,
minScale: minScale,
maxScale: maxScale,
clampOffset: { clampOffset(in: geometry.size) }
)
}
// Line brush path overlay
if viewModel.selectedTool == .wire && viewModel.isLineBrushMode && !viewModel.lineBrushPath.isEmpty {
LineBrushPathView(
path: viewModel.lineBrushPath,
lineWidth: viewModel.wireWidth,
imageSize: viewModel.imageSize,
displayedFrame: displayedImageFrame(in: geometry.size)
)
}
}
}
.contentShape(Rectangle())
.gesture(tapGesture(in: geometry))
.gesture(magnificationGesture(in: geometry))
.simultaneousGesture(dragGesture(in: geometry))
.simultaneousGesture(longPressGesture)
// Only attach line brush gesture when in line brush mode
.if(viewModel.selectedTool == .wire && viewModel.isLineBrushMode) { view in
view.gesture(lineBrushGesture(in: geometry))
}
.onTapGesture(count: 2) {
doubleTapZoom()
}
.applyGestures(
isBrushMode: viewModel.selectedTool == .brush,
tapGesture: tapGesture(in: geometry),
magnificationGesture: magnificationGesture(in: geometry),
dragGesture: combinedDragGesture(in: geometry),
longPressGesture: longPressGesture,
doubleTapAction: doubleTapZoom
)
.onAppear {
viewSize = geometry.size
}
.onChange(of: geometry.size) { _, newSize in
viewSize = newSize
}
.onChange(of: viewModel.triggerResetZoom) { _, _ in
// Reset zoom and offset when a new image is loaded
scale = 1.0
lastScale = 1.0
offset = .zero
lastOffset = .zero
}
}
.clipped()
}
@@ -158,10 +147,8 @@ struct CanvasView: View {
DebugLogger.log("Tap ignored - brush tool selected")
return
}
// Skip tap if in line brush mode
if viewModel.selectedTool == .wire && viewModel.isLineBrushMode {
DebugLogger.log("Tap ignored - line brush mode")
guard viewModel.selectedTool != .move else {
DebugLogger.log("Tap ignored - move tool selected")
return
}
@@ -173,41 +160,13 @@ struct CanvasView: View {
}
}
private func lineBrushGesture(in geometry: GeometryProxy) -> some Gesture {
DragGesture(minimumDistance: 0)
private func combinedDragGesture(in geometry: GeometryProxy) -> some Gesture {
DragGesture(minimumDistance: 1)
.onChanged { value in
// Only activate for wire tool in line brush mode
guard viewModel.selectedTool == .wire,
viewModel.isLineBrushMode else { return }
guard !viewModel.isProcessing,
!viewModel.showingMaskConfirmation else { return }
let imagePoint = convertViewPointToImagePoint(value.location, in: geometry.size)
viewModel.addLineBrushPoint(imagePoint)
}
}
private func magnificationGesture(in geometry: GeometryProxy) -> some Gesture {
MagnificationGesture()
.onChanged { value in
let newScale = lastScale * value
scale = min(max(newScale, minScale), maxScale)
}
.onEnded { _ in
lastScale = scale
withAnimation(.spring(duration: 0.3)) {
clampOffset(in: geometry.size)
}
}
}
private func dragGesture(in geometry: GeometryProxy) -> some Gesture {
DragGesture()
.onChanged { value in
// Don't pan when brush tool is selected - let brush drawing take priority
guard viewModel.selectedTool != .brush else { return }
// Allow panning for person, object, and move tools (not brush - uses two-finger pan)
guard viewModel.selectedTool == .person || viewModel.selectedTool == .object || viewModel.selectedTool == .move else { return }
// Pan mode: only when zoomed in
if scale > 1.0 {
offset = CGSize(
width: lastOffset.width + value.translation.width,
@@ -216,8 +175,8 @@ struct CanvasView: View {
}
}
.onEnded { _ in
// Don't update offset if brush tool is selected
guard viewModel.selectedTool != .brush else { return }
// Allow panning for person, object, and move tools (not brush - uses two-finger pan)
guard viewModel.selectedTool == .person || viewModel.selectedTool == .object || viewModel.selectedTool == .move else { return }
lastOffset = offset
withAnimation(.spring(duration: 0.3)) {
@@ -226,6 +185,25 @@ struct CanvasView: View {
}
}
private func magnificationGesture(in geometry: GeometryProxy) -> some Gesture {
MagnificationGesture()
.onChanged { value in
// Allow zooming for person, object, and move tools (brush uses UIKit pinch gesture)
guard viewModel.selectedTool != .brush else { return }
let newScale = lastScale * value
scale = min(max(newScale, minScale), maxScale)
}
.onEnded { _ in
guard viewModel.selectedTool != .brush else { return }
lastScale = scale
withAnimation(.spring(duration: 0.3)) {
clampOffset(in: geometry.size)
}
}
}
private var longPressGesture: some Gesture {
LongPressGesture(minimumDuration: 0.3)
.onEnded { _ in
@@ -239,6 +217,7 @@ struct CanvasView: View {
}
private func doubleTapZoom() {
// Allow double-tap zoom for all tools
withAnimation(.spring(duration: 0.3)) {
if scale > 1.0 {
scale = 1.0
@@ -338,41 +317,34 @@ struct CanvasView: View {
}
}
// MARK: - Line Brush Path View
// MARK: - Conditional Gesture Application
struct LineBrushPathView: View {
let path: [CGPoint]
let lineWidth: CGFloat
let imageSize: CGSize
let displayedFrame: CGRect
var body: some View {
Canvas { context, size in
guard path.count >= 2 else { return }
let scaledPath = path.map { point -> CGPoint in
let normalizedX = point.x / imageSize.width
let normalizedY = point.y / imageSize.height
return CGPoint(
x: displayedFrame.origin.x + normalizedX * displayedFrame.width,
y: displayedFrame.origin.y + normalizedY * displayedFrame.height
)
}
var strokePath = Path()
strokePath.move(to: scaledPath[0])
for point in scaledPath.dropFirst() {
strokePath.addLine(to: point)
}
context.stroke(
strokePath,
with: .color(.red.opacity(0.7)),
lineWidth: lineWidth * (displayedFrame.width / imageSize.width)
)
extension View {
/// Applies gestures conditionally based on whether brush mode is active.
/// In brush mode, all SwiftUI gestures are disabled to allow UIKit touch handling in BrushCanvasView.
@ViewBuilder
func applyGestures<T: Gesture, M: Gesture, D: Gesture, L: Gesture>(
isBrushMode: Bool,
tapGesture: T,
magnificationGesture: M,
dragGesture: D,
longPressGesture: L,
doubleTapAction: @escaping () -> Void
) -> some View {
if isBrushMode {
// In brush mode, don't attach any gestures - let BrushCanvasView handle everything
self
} else {
// In other modes, attach all gestures
self
.gesture(tapGesture)
.gesture(magnificationGesture)
.simultaneousGesture(dragGesture)
.simultaneousGesture(longPressGesture)
.onTapGesture(count: 2) {
doubleTapAction()
}
}
.allowsHitTesting(false)
.accessibilityHidden(true)
}
}

View File

@@ -28,7 +28,11 @@ final class EditorViewModel {
var selectedTool: EditTool = .person
var brushSize: CGFloat = 20
var featherAmount: CGFloat = 4
var wireWidth: CGFloat = 6
// Brush tool state
var brushStrokesCount = 0
var triggerClearBrushStrokes = false
var triggerApplyBrushMask = false
var isProcessing = false
var processingMessage = ""
@@ -36,19 +40,17 @@ final class EditorViewModel {
var showingMaskConfirmation = false
var detectedPeopleCount = 0
var showSelectAllPeople = false
var isLineBrushMode = false
var lineBrushPath: [CGPoint] = []
var useHighContrastMask = false
var useEdgeRefinement = true
var pendingRefineMask: CGImage?
var isLowConfidenceMask = false
var triggerResetZoom = false
private(set) var project: Project?
// MARK: - Services
private let maskingService = MaskingService()
private let contourService = ContourService()
private let inpaintEngine = InpaintEngine()
private let imagePipeline = ImagePipeline()
@@ -92,6 +94,7 @@ final class EditorViewModel {
pendingRefineMask = nil
showSelectAllPeople = false
detectedPeopleCount = 0
triggerResetZoom.toggle()
// Check image size and warn if large
let pixelCount = cgImage.width * cgImage.height
@@ -153,14 +156,13 @@ final class EditorViewModel {
DebugLogger.processing("Starting object detection")
try await handleObjectTap(at: point, in: image)
case .wire:
processingMessage = "Detecting wire..."
DebugLogger.processing("Starting wire detection")
try await handleWireTap(at: point, in: image)
case .brush:
DebugLogger.log("Brush tool - tap ignored (handled by canvas)")
break
case .move:
DebugLogger.log("Move tool - tap ignored")
break
}
} catch {
DebugLogger.error("handleTap failed", error: error)
@@ -278,163 +280,6 @@ final class EditorViewModel {
DebugLogger.state("Object mask preview set, showing confirmation")
}
private func handleWireTap(at point: CGPoint, in image: CGImage) async throws {
// If in line brush mode, don't process taps
if isLineBrushMode {
return
}
let contours = try await contourService.detectContours(in: image)
let bestContour = await contourService.findBestWireContour(
at: point,
from: contours,
imageSize: CGSize(width: image.width, height: image.height)
)
guard let contour = bestContour else {
errorMessage = "No lines detected. Tap 'Line Brush' to draw along the wire."
return
}
let mask = await contourService.contourToMask(
contour,
width: Int(wireWidth),
imageSize: CGSize(width: image.width, height: image.height)
)
guard let mask = mask else {
errorMessage = "Failed to create mask from contour"
return
}
maskPreview = mask
showingMaskConfirmation = true
}
// MARK: - Line Brush Mode
func toggleLineBrushMode() {
isLineBrushMode.toggle()
if !isLineBrushMode {
lineBrushPath.removeAll()
}
}
func addLineBrushPoint(_ point: CGPoint) {
lineBrushPath.append(point)
}
func finishLineBrush() async {
DebugLogger.action("finishLineBrush called, path points: \(lineBrushPath.count)")
guard let image = displayImage, lineBrushPath.count >= 2 else {
DebugLogger.log("Not enough points or no image, clearing path")
lineBrushPath.removeAll()
return
}
DebugLogger.imageInfo("Source image for line brush", image: image)
isProcessing = true
processingMessage = "Creating line mask..."
// Generate mask from line brush path
let mask = createLineBrushMask(
path: lineBrushPath,
width: Int(wireWidth),
imageSize: CGSize(width: image.width, height: image.height)
)
lineBrushPath.removeAll()
guard let mask = mask else {
DebugLogger.error("Failed to create mask from line brush")
errorMessage = "Failed to create mask from line brush"
isProcessing = false
processingMessage = ""
return
}
DebugLogger.imageInfo("Line brush mask created", image: mask)
maskPreview = mask
showingMaskConfirmation = true
isLineBrushMode = false
isProcessing = false
processingMessage = ""
}
private func createLineBrushMask(path: [CGPoint], width: Int, imageSize: CGSize) -> CGImage? {
let intWidth = Int(imageSize.width)
let intHeight = Int(imageSize.height)
guard let context = CGContext(
data: nil,
width: intWidth,
height: intHeight,
bitsPerComponent: 8,
bytesPerRow: intWidth,
space: CGColorSpaceCreateDeviceGray(),
bitmapInfo: CGImageAlphaInfo.none.rawValue
) else { return nil }
context.setFillColor(gray: 0, alpha: 1)
context.fill(CGRect(x: 0, y: 0, width: intWidth, height: intHeight))
context.setStrokeColor(gray: 1, alpha: 1)
context.setLineWidth(CGFloat(width))
context.setLineCap(.round)
context.setLineJoin(.round)
// Use Catmull-Rom spline for smooth path
if path.count >= 4 {
let smoothedPath = catmullRomSpline(points: path, segments: 10)
context.move(to: CGPoint(x: smoothedPath[0].x, y: imageSize.height - smoothedPath[0].y))
for point in smoothedPath.dropFirst() {
context.addLine(to: CGPoint(x: point.x, y: imageSize.height - point.y))
}
} else {
context.move(to: CGPoint(x: path[0].x, y: imageSize.height - path[0].y))
for point in path.dropFirst() {
context.addLine(to: CGPoint(x: point.x, y: imageSize.height - point.y))
}
}
context.strokePath()
return context.makeImage()
}
private func catmullRomSpline(points: [CGPoint], segments: Int) -> [CGPoint] {
guard points.count >= 4 else { return points }
var result: [CGPoint] = []
for i in 0..<(points.count - 1) {
let p0 = points[max(0, i - 1)]
let p1 = points[i]
let p2 = points[min(points.count - 1, i + 1)]
let p3 = points[min(points.count - 1, i + 2)]
for t in 0..<segments {
let t0 = CGFloat(t) / CGFloat(segments)
let t2 = t0 * t0
let t3 = t2 * t0
let x = 0.5 * ((2 * p1.x) +
(-p0.x + p2.x) * t0 +
(2 * p0.x - 5 * p1.x + 4 * p2.x - p3.x) * t2 +
(-p0.x + 3 * p1.x - 3 * p2.x + p3.x) * t3)
let y = 0.5 * ((2 * p1.y) +
(-p0.y + p2.y) * t0 +
(2 * p0.y - 5 * p1.y + 4 * p2.y - p3.y) * t2 +
(-p0.y + 3 * p1.y - 3 * p2.y + p3.y) * t3)
result.append(CGPoint(x: x, y: y))
}
}
result.append(points.last!)
return result
}
// MARK: - Mask Confirmation
func confirmMask() async {
@@ -496,8 +341,8 @@ final class EditorViewModel {
func refineWithBrush() {
// Save current mask for refinement and switch to brush tool
// Keep maskPreview visible so user can see what they're refining
pendingRefineMask = maskPreview
maskPreview = nil
showingMaskConfirmation = false
showSelectAllPeople = false
selectedTool = .brush
@@ -555,8 +400,8 @@ final class EditorViewModel {
switch selectedTool {
case .person: return .person
case .object: return .object
case .wire: return .wire
case .brush: return .brush
case .move: return .move
}
}
}

View File

@@ -95,6 +95,9 @@ struct PhotoEditorView: View {
let uiImage = UIImage(data: data) {
viewModel.loadImage(uiImage, localIdentifier: localIdentifier)
}
// Reset selection so the same image can be selected again
selectedItem = nil
}
}
.photosPicker(isPresented: $isShowingPicker, selection: $selectedItem, matching: .images)
@@ -198,35 +201,25 @@ struct PhotoEditorView: View {
} label: {
Label("Cancel", systemImage: "xmark")
.font(.headline)
.padding(.horizontal, 12)
.padding(.vertical, 10)
.frame(maxWidth: .infinity)
.padding(.vertical, 12)
}
.buttonStyle(.bordered)
.tint(.secondary)
.accessibilityLabel("Cancel mask selection")
Button {
viewModel.refineWithBrush()
} label: {
Label("Refine", systemImage: "paintbrush")
.font(.headline)
.padding(.horizontal, 12)
.padding(.vertical, 10)
}
.buttonStyle(.bordered)
.accessibilityLabel("Refine selection with brush")
.accessibilityHint("Switch to brush tool to adjust the selection")
Button {
Task {
await viewModel.confirmMask()
}
} label: {
Label("Remove", systemImage: "checkmark")
Label("Remove", systemImage: "trash")
.font(.headline)
.padding(.horizontal, 12)
.padding(.vertical, 10)
.frame(maxWidth: .infinity)
.padding(.vertical, 12)
}
.buttonStyle(.borderedProminent)
.tint(.red)
.accessibilityLabel("Confirm and remove selected area")
}
}

View File

@@ -11,8 +11,8 @@ import UIKit
enum EditTool: String, CaseIterable, Identifiable {
case person = "Person"
case object = "Object"
case wire = "Wire"
case brush = "Brush"
case move = "Move/Zoom"
var id: String { rawValue }
@@ -20,8 +20,8 @@ enum EditTool: String, CaseIterable, Identifiable {
switch self {
case .person: return "person.fill"
case .object: return "circle.dashed"
case .wire: return "line.diagonal"
case .brush: return "paintbrush.fill"
case .move: return "arrow.up.and.down.and.arrow.left.and.right"
}
}
@@ -29,8 +29,8 @@ enum EditTool: String, CaseIterable, Identifiable {
switch self {
case .person: return "Tap to remove people"
case .object: return "Tap to remove objects"
case .wire: return "Tap to remove wires"
case .brush: return "Paint to select areas"
case .move: return "Pinch to zoom, drag to move"
}
}
}
@@ -46,7 +46,7 @@ struct ToolbarView: View {
Divider()
// Inspector panel (contextual)
if viewModel.selectedTool == .brush || viewModel.selectedTool == .wire || viewModel.selectedTool == .person || viewModel.selectedTool == .object {
if viewModel.selectedTool == .brush || viewModel.selectedTool == .person || viewModel.selectedTool == .object {
inspectorPanel
.transition(reduceMotion ? .opacity : .move(edge: .bottom).combined(with: .opacity))
}
@@ -178,66 +178,33 @@ struct ToolbarView: View {
.accessibilityHint("When enabled, brush strokes snap to nearby edges for cleaner selections")
}
if viewModel.selectedTool == .wire {
// Line brush toggle
HStack {
Text("Mode")
.font(.subheadline)
// Brush tool action buttons
if viewModel.selectedTool == .brush {
HStack(spacing: 12) {
// Clear button
Button {
viewModel.triggerClearBrushStrokes = true
} label: {
Label("Clear", systemImage: "trash")
.font(.subheadline)
}
.buttonStyle(.bordered)
.disabled(viewModel.brushStrokesCount == 0)
.accessibilityLabel("Clear all brush strokes")
Spacer()
// Remove button
Button {
viewModel.toggleLineBrushMode()
viewModel.triggerApplyBrushMask = true
} label: {
HStack(spacing: 4) {
Image(systemName: viewModel.isLineBrushMode ? "scribble" : "hand.tap")
Text(viewModel.isLineBrushMode ? "Line Brush" : "Tap to Detect")
.font(.caption)
}
.padding(.horizontal, 10)
.padding(.vertical, 6)
.background(
RoundedRectangle(cornerRadius: 8)
.fill(viewModel.isLineBrushMode ? Color.accentColor : Color(.tertiarySystemFill))
)
.foregroundStyle(viewModel.isLineBrushMode ? .white : .primary)
}
.accessibilityLabel(viewModel.isLineBrushMode ? "Line brush mode active" : "Tap to detect mode active")
.accessibilityHint("Double tap to toggle between line brush and tap detection modes")
}
if viewModel.isLineBrushMode && !viewModel.lineBrushPath.isEmpty {
Button {
Task {
await viewModel.finishLineBrush()
}
} label: {
Label("Apply Line", systemImage: "checkmark.circle.fill")
Label("Remove", systemImage: "trash")
.font(.subheadline)
.frame(maxWidth: .infinity)
}
.buttonStyle(.borderedProminent)
.accessibilityLabel("Apply line brush selection")
}
HStack {
Text("Line Width")
.font(.subheadline)
Spacer()
Text("\(Int(viewModel.wireWidth))px")
.font(.subheadline)
.foregroundStyle(.secondary)
.monospacedDigit()
}
HStack {
Slider(value: $viewModel.wireWidth, in: 2...20, step: 1)
.accessibilityLabel("Wire width")
.accessibilityValue("\(Int(viewModel.wireWidth)) pixels")
Stepper("", value: $viewModel.wireWidth, in: 2...20, step: 1)
.labelsHidden()
.accessibilityLabel("Wire width stepper")
.accessibilityValue("\(Int(viewModel.wireWidth)) pixels")
.tint(.red)
.disabled(viewModel.brushStrokesCount == 0)
.accessibilityLabel("Remove selected area")
}
}

View File

@@ -10,8 +10,8 @@ import Foundation
enum ToolType: String, Codable {
case person
case object
case wire
case brush
case move
}
enum EditOperation: Codable {

View File

@@ -10,7 +10,7 @@ import CoreGraphics
import UIKit
import Accelerate
struct MaskData {
struct MaskData: Sendable {
let width: Int
let height: Int
let data: Data
@@ -78,32 +78,36 @@ struct MaskData {
var sourceArray = [UInt8](data)
var destinationArray = [UInt8](repeating: 0, count: count)
var sourceBuffer = vImage_Buffer(
data: &sourceArray,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width
)
var destinationBuffer = vImage_Buffer(
data: &destinationArray,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width
)
let kernelSize = pixels * 2 + 1
let kernel = [UInt8](repeating: 255, count: kernelSize * kernelSize)
let error = vImageDilate_Planar8(
&sourceBuffer,
&destinationBuffer,
0, 0,
kernel,
vImagePixelCount(kernelSize),
vImagePixelCount(kernelSize),
vImage_Flags(kvImageNoFlags)
)
let error = sourceArray.withUnsafeMutableBufferPointer { sourcePtr -> vImage_Error in
destinationArray.withUnsafeMutableBufferPointer { destPtr -> vImage_Error in
var sourceBuffer = vImage_Buffer(
data: sourcePtr.baseAddress,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width
)
var destinationBuffer = vImage_Buffer(
data: destPtr.baseAddress,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width
)
return vImageDilate_Planar8(
&sourceBuffer,
&destinationBuffer,
0, 0,
kernel,
vImagePixelCount(kernelSize),
vImagePixelCount(kernelSize),
vImage_Flags(kvImageNoFlags)
)
}
}
guard error == kvImageNoError else { return nil }
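
A note on the hunk above: passing `&sourceArray` directly into `vImage_Buffer(data:)` produces a pointer that Swift only guarantees valid for the duration of that one initializer call, so reusing the stored pointer in the later `vImageDilate_Planar8` call is not guaranteed safe. The rewrite scopes the vImage work inside `withUnsafeMutableBufferPointer`, where the base addresses stay valid. A minimal standalone illustration of the same pattern (editor's sketch, not from this diff):

import Accelerate

var pixels = [UInt8](repeating: 0, count: 8 * 8)
pixels.withUnsafeMutableBufferPointer { buf in
    // baseAddress is guaranteed valid for the whole closure body,
    // so any vImage calls that use `buffer` belong in here.
    var buffer = vImage_Buffer(data: buf.baseAddress,
                               height: 8,
                               width: 8,
                               rowBytes: 8)
    _ = buffer
}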

View File

@@ -1,235 +0,0 @@
//
// ContourService.swift
// CheapRetouch
//
// Service for detecting and scoring wire/line contours.
//
import Foundation
import Vision
import CoreGraphics
import UIKit
actor ContourService {
struct ScoredContour {
let contour: VNContour
let score: Float
}
private let proximityWeight: Float = 0.3
private let aspectWeight: Float = 0.3
private let straightnessWeight: Float = 0.2
private let lengthWeight: Float = 0.2
private let minimumScore: Float = 0.3
func detectContours(in image: CGImage) async throws -> [VNContour] {
let request = VNDetectContoursRequest()
request.contrastAdjustment = 1.0
request.detectsDarkOnLight = true
let handler = VNImageRequestHandler(cgImage: image, options: [:])
try handler.perform([request])
guard let results = request.results?.first else {
return []
}
return collectAllContours(from: results)
}
func findBestWireContour(at point: CGPoint, from contours: [VNContour], imageSize: CGSize) -> VNContour? {
let normalizedPoint = CGPoint(
x: point.x / imageSize.width,
y: 1.0 - point.y / imageSize.height
)
let scoredContours = contours.compactMap { contour -> ScoredContour? in
let score = scoreContour(contour, relativeTo: normalizedPoint)
guard score >= minimumScore else { return nil }
return ScoredContour(contour: contour, score: score)
}
return scoredContours.max(by: { $0.score < $1.score })?.contour
}
func scoreContour(_ contour: VNContour, relativeTo point: CGPoint) -> Float {
let points = contour.normalizedPoints
guard points.count >= 2 else { return 0 }
let proximityScore = calculateProximityScore(points: points, to: point)
let aspectScore = calculateAspectScore(points: points)
let straightnessScore = calculateStraightnessScore(points: points)
let lengthScore = calculateLengthScore(points: points)
return proximityScore * proximityWeight +
aspectScore * aspectWeight +
straightnessScore * straightnessWeight +
lengthScore * lengthWeight
}
func contourToMask(_ contour: VNContour, width: Int, imageSize: CGSize) -> CGImage? {
let maskWidth = Int(imageSize.width)
let maskHeight = Int(imageSize.height)
guard let context = CGContext(
data: nil,
width: maskWidth,
height: maskHeight,
bitsPerComponent: 8,
bytesPerRow: maskWidth,
space: CGColorSpaceCreateDeviceGray(),
bitmapInfo: CGImageAlphaInfo.none.rawValue
) else {
return nil
}
context.setFillColor(gray: 0, alpha: 1)
context.fill(CGRect(x: 0, y: 0, width: maskWidth, height: maskHeight))
context.setStrokeColor(gray: 1, alpha: 1)
context.setLineWidth(CGFloat(width))
context.setLineCap(.round)
context.setLineJoin(.round)
let points = contour.normalizedPoints
guard let firstPoint = points.first else { return nil }
context.beginPath()
context.move(to: CGPoint(
x: CGFloat(firstPoint.x) * imageSize.width,
y: CGFloat(firstPoint.y) * imageSize.height
))
for point in points.dropFirst() {
context.addLine(to: CGPoint(
x: CGFloat(point.x) * imageSize.width,
y: CGFloat(point.y) * imageSize.height
))
}
context.strokePath()
return context.makeImage()
}
// MARK: - Private Scoring Methods
private func calculateProximityScore(points: [SIMD2<Float>], to target: CGPoint) -> Float {
var minDistance: Float = .greatestFiniteMagnitude
for point in points {
let dx = Float(target.x) - point.x
let dy = Float(target.y) - point.y
let distance = sqrt(dx * dx + dy * dy)
minDistance = min(minDistance, distance)
}
// Score is highest at distance 0 and falls to 0 once distance exceeds 0.2
return max(0, 1.0 - minDistance * 5)
}
private func calculateAspectScore(points: [SIMD2<Float>]) -> Float {
guard points.count >= 2 else { return 0 }
var minX: Float = .greatestFiniteMagnitude
var maxX: Float = -.greatestFiniteMagnitude
var minY: Float = .greatestFiniteMagnitude
var maxY: Float = -.greatestFiniteMagnitude
for point in points {
minX = min(minX, point.x)
maxX = max(maxX, point.x)
minY = min(minY, point.y)
maxY = max(maxY, point.y)
}
let width = maxX - minX
let height = maxY - minY
let length = calculatePathLength(points: points)
let boundingDiagonal = sqrt(width * width + height * height)
guard boundingDiagonal > 0 else { return 0 }
// Wire-like shapes have length close to their bounding diagonal
// and small perpendicular extent
let minDimension = min(width, height)
let maxDimension = max(width, height)
guard maxDimension > 0 else { return 0 }
let aspectRatio = minDimension / maxDimension
// Low aspect ratio (thin) scores high
return max(0, 1.0 - aspectRatio * 2)
}
private func calculateStraightnessScore(points: [SIMD2<Float>]) -> Float {
guard points.count >= 3 else { return 1.0 }
var totalAngleChange: Float = 0
for i in 1..<(points.count - 1) {
let prev = points[i - 1]
let curr = points[i]
let next = points[i + 1]
let v1 = SIMD2<Float>(curr.x - prev.x, curr.y - prev.y)
let v2 = SIMD2<Float>(next.x - curr.x, next.y - curr.y)
let len1 = sqrt(v1.x * v1.x + v1.y * v1.y)
let len2 = sqrt(v2.x * v2.x + v2.y * v2.y)
guard len1 > 0, len2 > 0 else { continue }
let dot = (v1.x * v2.x + v1.y * v2.y) / (len1 * len2)
let angle = acos(min(1, max(-1, dot)))
totalAngleChange += angle
}
let averageAngleChange = totalAngleChange / Float(points.count - 2)
// Straight lines have low angle change
return max(0, 1.0 - averageAngleChange / .pi)
}
private func calculateLengthScore(points: [SIMD2<Float>]) -> Float {
let length = calculatePathLength(points: points)
// Longer contours score higher, normalized to typical wire length
return min(1.0, length * 2)
}
private func calculatePathLength(points: [SIMD2<Float>]) -> Float {
var length: Float = 0
for i in 1..<points.count {
let dx = points[i].x - points[i - 1].x
let dy = points[i].y - points[i - 1].y
length += sqrt(dx * dx + dy * dy)
}
return length
}
private func collectAllContours(from observation: VNContoursObservation) -> [VNContour] {
var contours: [VNContour] = []
func collect(_ contour: VNContour) {
contours.append(contour)
for i in 0..<contour.childContourCount {
if let child = try? contour.childContour(at: i) {
collect(child)
}
}
}
for i in 0..<observation.contourCount {
if let contour = try? observation.contour(at: i) {
collect(contour)
}
}
return contours
}
}

View File

@@ -74,8 +74,19 @@ actor InpaintEngine {
throw InpaintError.memoryPressure
}
// Try LaMa AI inpainting first for best quality
do {
DebugLogger.log("Attempting LaMa AI inpainting...")
let lamaResult = try await lamaInpainter.inpaint(image: image, mask: mask)
DebugLogger.imageInfo("Inpaint result (LaMa)", image: lamaResult)
return lamaResult
} catch {
DebugLogger.log("LaMa failed: \(error.localizedDescription), falling back to gradient fill")
}
// Fallback to gradient-based Metal inpainting
if isMetalAvailable {
DebugLogger.log("Using Metal for inpainting")
DebugLogger.log("Using Metal gradient fill for inpainting")
return try await inpaintWithMetal(image: image, mask: mask, isPreview: false)
} else {
DebugLogger.log("Using Accelerate fallback for inpainting")
@@ -83,6 +94,19 @@ actor InpaintEngine {
}
}
/// LaMa inpainter for AI-powered inpainting (cached on first use)
private var _lamaInpainter: LaMaInpainter?
private var lamaInpainter: LaMaInpainter {
get async {
if let existing = _lamaInpainter {
return existing
}
let inpainter = await MainActor.run { LaMaInpainter() }
_lamaInpainter = inpainter
return inpainter
}
}
func inpaintPreview(image: CGImage, mask: CGImage) async throws -> CGImage {
// Scale down for preview if needed
let scaledImage: CGImage

View File

@@ -0,0 +1,18 @@
{
"fileFormatVersion": "1.0.0",
"itemInfoEntries": {
"834F1D4D-C413-4927-9314-FF1187E2F6B4": {
"author": "com.apple.CoreML",
"description": "CoreML Model Weights",
"name": "weights",
"path": "com.apple.CoreML/weights"
},
"DFC457AE-DC53-4BC5-B66A-1A6B1CB59064": {
"author": "com.apple.CoreML",
"description": "CoreML Model Specification",
"name": "model.mlmodel",
"path": "com.apple.CoreML/model.mlmodel"
}
},
"rootModelIdentifier": "DFC457AE-DC53-4BC5-B66A-1A6B1CB59064"
}

View File

@@ -0,0 +1,346 @@
//
// LaMaInpainter.swift
// CheapRetouch
//
// LaMa (Large Mask Inpainting) Core ML wrapper for AI-powered object removal.
// Based on https://github.com/wudijimao/Inpaint-iOS
//
import UIKit
import CoreML
/// LaMa-based inpainting using Core ML
/// Provides high-quality AI-powered inpainting for object removal
final class LaMaInpainter {
/// Fixed input size for the LaMa model
private let modelSize: Int = 512
/// The Core ML model instance
private var model: LaMaFP16_512?
/// Configuration for the model
private let config: MLModelConfiguration
/// Work queue for inference
private let workQueue = DispatchQueue(label: "com.cheapretouch.lama", qos: .userInitiated)
/// CI context for image operations
private let ciContext = CIContext()
init() {
config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU
preloadModel()
}
/// Preload the model in the background
private func preloadModel() {
workQueue.async { [weak self] in
guard let self = self else { return }
do {
self.model = try LaMaFP16_512(configuration: self.config)
DebugLogger.log("LaMa model loaded successfully")
} catch {
DebugLogger.error("Failed to load LaMa model: \(error)")
}
}
}
/// Check if the model is ready
var isModelReady: Bool {
return model != nil
}
/// Inpaint the masked region of an image using LaMa
/// - Parameters:
/// - image: The source CGImage
/// - mask: The mask CGImage (white = areas to inpaint)
/// - Returns: The inpainted CGImage
func inpaint(image: CGImage, mask: CGImage) async throws -> CGImage {
// Wait for model to be ready (with timeout)
let maxWaitTime: TimeInterval = 5.0
let startTime = Date()
while model == nil {
if -startTime.timeIntervalSinceNow > maxWaitTime {
throw LaMaError.modelNotLoaded
}
try await Task.sleep(nanoseconds: 100_000_000) // 100ms
}
guard let model = model else {
throw LaMaError.modelNotLoaded
}
let originalSize = CGSize(width: image.width, height: image.height)
return try await withCheckedThrowingContinuation { continuation in
workQueue.async {
do {
let result = try self.performInpainting(
model: model,
image: image,
mask: mask,
originalSize: originalSize
)
continuation.resume(returning: result)
} catch {
continuation.resume(throwing: error)
}
}
}
}
/// Perform the actual inpainting operation
private func performInpainting(
model: LaMaFP16_512,
image: CGImage,
mask: CGImage,
originalSize: CGSize
) throws -> CGImage {
DebugLogger.processing("LaMa inpainting started")
let startTime = Date()
// Find the bounding box of the mask to crop efficiently
let maskBounds = findMaskBounds(mask: mask)
DebugLogger.log("Mask bounds: \(maskBounds)")
// Calculate the crop region (expand to square and add padding)
let cropRegion = calculateCropRegion(
maskBounds: maskBounds,
imageSize: originalSize,
targetSize: modelSize
)
DebugLogger.log("Crop region: \(cropRegion)")
// Crop the image and mask to the region
guard let croppedImage = cropImage(image, to: cropRegion),
let croppedMask = cropImage(mask, to: cropRegion) else {
throw LaMaError.imageProcessingFailed
}
// Resize to model input size
guard let resizedImage = resizeImage(croppedImage, to: CGSize(width: modelSize, height: modelSize)),
let resizedMask = resizeImage(croppedMask, to: CGSize(width: modelSize, height: modelSize)) else {
throw LaMaError.imageProcessingFailed
}
// Convert to pixel buffers
guard let imageBuffer = createPixelBuffer(from: resizedImage, format: kCVPixelFormatType_32ARGB),
let maskBuffer = createGrayscalePixelBuffer(from: resizedMask) else {
throw LaMaError.bufferCreationFailed
}
// Run inference
DebugLogger.log("Running LaMa inference...")
let output = try model.prediction(image: imageBuffer, mask: maskBuffer)
// Convert output to CGImage
guard let outputImage = cgImageFromPixelBuffer(output.output) else {
throw LaMaError.outputConversionFailed
}
// Resize output back to crop region size
guard let resizedOutput = resizeImage(outputImage, to: cropRegion.size) else {
throw LaMaError.imageProcessingFailed
}
// Merge back into original image
let finalImage = mergeIntoOriginal(
original: image,
inpainted: resizedOutput,
at: cropRegion.origin
)
let elapsed = -startTime.timeIntervalSinceNow
DebugLogger.processing("LaMa inpainting completed in \(String(format: "%.2f", elapsed))s")
return finalImage
}
// MARK: - Helper Methods
/// Find the bounding box of white pixels in the mask
private func findMaskBounds(mask: CGImage) -> CGRect {
let width = mask.width
let height = mask.height
guard let data = mask.dataProvider?.data,
let bytes = CFDataGetBytePtr(data) else {
return CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height))
}
var minX = width, minY = height, maxX = 0, maxY = 0
let bytesPerPixel = mask.bitsPerPixel / 8
let bytesPerRow = mask.bytesPerRow
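// Assumption: the first channel of each pixel carries the mask value,
// which holds for the single-channel grayscale masks produced upstream.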
for y in 0..<height {
for x in 0..<width {
let offset = y * bytesPerRow + x * bytesPerPixel
let value = bytes[offset]
if value > 127 {
minX = min(minX, x)
minY = min(minY, y)
maxX = max(maxX, x)
maxY = max(maxY, y)
}
}
}
if minX > maxX || minY > maxY {
// No white pixels found, return full image
return CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height))
}
return CGRect(
x: CGFloat(minX),
y: CGFloat(minY),
width: CGFloat(maxX - minX + 1),
height: CGFloat(maxY - minY + 1)
)
}
/// Calculate the crop region for the mask area
private func calculateCropRegion(maskBounds: CGRect, imageSize: CGSize, targetSize: Int) -> CGRect {
// Add 20% padding around the mask
let padding = max(maskBounds.width, maskBounds.height) * 0.2
var region = maskBounds.insetBy(dx: -padding, dy: -padding)
// Make it square (use the larger dimension)
let maxSide = max(region.width, region.height)
let centerX = region.midX
let centerY = region.midY
region = CGRect(
x: centerX - maxSide / 2,
y: centerY - maxSide / 2,
width: maxSide,
height: maxSide
)
// Ensure minimum size matches model size
if region.width < CGFloat(targetSize) {
let diff = CGFloat(targetSize) - region.width
region = region.insetBy(dx: -diff / 2, dy: -diff / 2)
}
// Clamp to image bounds
region.origin.x = max(0, min(region.origin.x, imageSize.width - region.width))
region.origin.y = max(0, min(region.origin.y, imageSize.height - region.height))
// Ensure we don't exceed image bounds
if region.maxX > imageSize.width {
region.origin.x = imageSize.width - region.width
}
if region.maxY > imageSize.height {
region.origin.y = imageSize.height - region.height
}
// Final clamp if region is larger than image
region.origin.x = max(0, region.origin.x)
region.origin.y = max(0, region.origin.y)
region.size.width = min(region.width, imageSize.width)
region.size.height = min(region.height, imageSize.height)
return CGRect(
x: floor(region.origin.x),
y: floor(region.origin.y),
width: ceil(region.width),
height: ceil(region.height)
)
}
/// Crop an image to the specified region
private func cropImage(_ image: CGImage, to rect: CGRect) -> CGImage? {
return image.cropping(to: rect)
}
/// Resize an image to the specified size
private func resizeImage(_ image: CGImage, to size: CGSize) -> CGImage? {
let ciImage = CIImage(cgImage: image)
let scaleX = size.width / CGFloat(image.width)
let scaleY = size.height / CGFloat(image.height)
let scaled = ciImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
return ciContext.createCGImage(scaled, from: scaled.extent)
}
/// Create an ARGB pixel buffer from a CGImage
private func createPixelBuffer(from image: CGImage, format: OSType) -> CVPixelBuffer? {
do {
let feature = try MLFeatureValue(
cgImage: image,
pixelsWide: modelSize,
pixelsHigh: modelSize,
pixelFormatType: format
)
return feature.imageBufferValue
} catch {
DebugLogger.error("Failed to create pixel buffer: \(error)")
return nil
}
}
/// Create a grayscale pixel buffer from a CGImage
private func createGrayscalePixelBuffer(from image: CGImage) -> CVPixelBuffer? {
do {
let feature = try MLFeatureValue(
cgImage: image,
pixelsWide: modelSize,
pixelsHigh: modelSize,
pixelFormatType: kCVPixelFormatType_OneComponent8
)
return feature.imageBufferValue
} catch {
DebugLogger.error("Failed to create grayscale buffer: \(error)")
return nil
}
}
/// Convert a CVPixelBuffer to CGImage
private func cgImageFromPixelBuffer(_ buffer: CVPixelBuffer) -> CGImage? {
let ciImage = CIImage(cvPixelBuffer: buffer)
return ciContext.createCGImage(ciImage, from: ciImage.extent)
}
/// Merge the inpainted region back into the original image
private func mergeIntoOriginal(original: CGImage, inpainted: CGImage, at position: CGPoint) -> CGImage {
let size = CGSize(width: original.width, height: original.height)
let inpaintedSize = CGSize(width: inpainted.width, height: inpainted.height)
// Use scale 1.0 to match actual pixel size (not screen scale)
let format = UIGraphicsImageRendererFormat()
format.scale = 1.0
let renderer = UIGraphicsImageRenderer(size: size, format: format)
let resultImage = renderer.image { context in
// Draw original
UIImage(cgImage: original).draw(at: .zero)
// Draw inpainted region on top
UIImage(cgImage: inpainted).draw(in: CGRect(origin: position, size: inpaintedSize))
}
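// UIGraphicsImageRenderer output is always CGImage-backed, so the
// force unwrap below cannot fail in practice.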
return resultImage.cgImage!
}
}
// MARK: - Errors
enum LaMaError: Error, LocalizedError {
case modelNotLoaded
case imageProcessingFailed
case bufferCreationFailed
case outputConversionFailed
var errorDescription: String? {
switch self {
case .modelNotLoaded:
return "LaMa model is not loaded"
case .imageProcessingFailed:
return "Failed to process image for inpainting"
case .bufferCreationFailed:
return "Failed to create pixel buffer"
case .outputConversionFailed:
return "Failed to convert model output to image"
}
}
}

View File

@@ -71,12 +71,17 @@ actor ImagePipeline {
case .inpaint:
if let maskOp = pendingMask {
let maskData = MaskData(
width: maskOp.maskWidth,
height: maskOp.maskHeight,
data: maskOp.maskData
)
if let mask = maskData.toCGImage() {
// Create mask data and convert to CGImage on MainActor for Swift 6 compatibility
let mask = await MainActor.run {
let maskData = MaskData(
width: maskOp.maskWidth,
height: maskOp.maskHeight,
data: maskOp.maskData
)
return maskData.toCGImage()
}
if let mask = mask {
// Scale mask if needed
let scaledMask: CGImage
if scaleFactor != 1.0 {

BIN
CheapRetouch/appstore.jpeg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 87 KiB

BIN
CheapRetouch/appstore.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 426 KiB

BIN
CheapRetouch/playstore.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 160 KiB

View File

@@ -0,0 +1,91 @@
# CheapRetouch
A privacy-first iOS photo editor for removing unwanted elements from your photos — powered by on-device machine learning.
![Platform](https://img.shields.io/badge/Platform-iOS%2017.0+-blue)
![Swift](https://img.shields.io/badge/Swift-5.9-orange)
## Features
### 🧑 Person Removal
Tap on any person in your photo to instantly remove them. The app uses Apple's Vision framework to generate precise segmentation masks, then fills the removed area seamlessly.
### 📦 Object Removal
Remove unwanted foreground objects with a single tap. When automatic detection isn't possible, use the smart brush tool with edge-aware refinement for manual selection.
### ⚡ Wire & Line Removal
Easily remove power lines, cables, and other thin linear objects. The app detects contours and automatically selects wire-like shapes, or you can trace them manually with the line brush.
## How It Works
CheapRetouch combines Apple's Vision framework for intelligent object detection with an AI-powered inpainting engine:
### Object Detection (Vision Framework)
- **`VNGenerateForegroundInstanceMaskRequest`** — Generates pixel-accurate masks for people and salient foreground objects (see the sketch after this list)
- **`VNDetectContoursRequest`** — Detects edges and contours for wire/line detection
- **Tap-based selection** — Simply tap on what you want to remove
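For orientation, here is a minimal sketch of the mask-generation step using the Vision request named above. The helper name and error handling are illustrative, not the app's actual code; only the Vision APIs themselves are real (iOS 17+).
```swift
import Vision

/// Illustrative helper: generate a foreground mask for an image using
/// VNGenerateForegroundInstanceMaskRequest. Requires iOS 17 or later.
func foregroundMask(for cgImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else {
        return nil // nothing salient detected
    }
    // Produce a mask covering all detected instances, scaled to the input image.
    return try observation.generateScaledMaskForImage(
        forInstances: observation.allInstances,
        from: handler
    )
}
```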
### AI-Powered Inpainting (LaMa Model)
The app uses **LaMa (Large Mask Inpainting)**, a state-of-the-art deep learning model optimized for removing objects from images:
- **Model**: `LaMaFP16_512.mlpackage` — A Core ML-optimized neural network running entirely on-device
- **Architecture**: Fourier convolutions that capture both local textures and global image structure
- **Processing**: Runs entirely on-device via Core ML for fast, efficient inference (this build requests CPU and GPU compute units)
- **Quality**: Produces natural-looking results even for large masked areas
**Technical Details:**
- Input resolution: 512×512 pixels (automatically crops and scales around masked regions)
- Quantization: FP16 for optimal balance of quality and performance
- Fallback: Metal-accelerated gradient fill inpainting when the model is unavailable
### Processing Pipeline
```
1. User taps object → Vision generates mask
2. Mask is dilated and feathered for smooth edges
3. Region is cropped and scaled to 512×512
4. LaMa model inpaints the masked area
5. Result is composited back into original image
```
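A hedged end-to-end sketch of that pipeline in Swift, reusing the `foregroundMask` helper sketched earlier and the `LaMaInpainter` wrapper from this change set; `cgImage(from:)` and `dilateAndFeather` are hypothetical helpers standing in for the buffer conversion and step 2:
```swift
/// Illustrative end-to-end flow. Only LaMaInpainter.inpaint(image:mask:)
/// comes from this change set; the other helpers are hypothetical.
func removeTappedObject(from image: CGImage) async throws -> CGImage {
    // 1. Vision generates the mask (see the sketch above).
    guard let maskBuffer = try foregroundMask(for: image),
          let rawMask = cgImage(from: maskBuffer) else { // hypothetical conversion helper
        return image
    }
    // 2. Dilate and feather for smooth edges (hypothetical helper).
    let mask = dilateAndFeather(rawMask, radius: 8)
    // 3-5. LaMaInpainter crops to 512x512, runs the model, and merges the result back.
    return try await LaMaInpainter().inpaint(image: image, mask: mask)
}
```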
## Privacy
🔒 **100% On-Device Processing**
- No photos leave your device
- No cloud services or network calls
- No analytics or telemetry
- Photo library access via secure PHPicker
## Technical Stack
| Component | Technology |
|-----------|------------|
| UI | SwiftUI + UIKit |
| Object Detection | Vision Framework |
| ML Inference | Core ML (on-device) |
| GPU Processing | Metal |
| Image Pipeline | Core Image |
| Fallback Processing | Accelerate/vImage |
## Requirements
- iOS 17.0 or later
- iPhone or iPad with A14 chip or newer (for optimal Neural Engine performance)
## Performance
| Operation | Target Time | Device |
|-----------|-------------|--------|
| Preview inpaint | < 300ms | iPhone 12+ |
| Full resolution (12MP) | < 4 seconds | iPhone 12+ |
| Full resolution (48MP) | < 12 seconds | iPhone 15 Pro+ |
## Non-Destructive Editing
All edits are stored as an operation stack — your original photos are never modified. Full undo/redo support included.
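As a rough illustration of the idea (the app's actual operation types are not shown in this diff), a generic undo/redo stack might look like:
```swift
/// Minimal sketch of a non-destructive edit stack; Op stands in for the
/// app's real operation type, which this diff does not show.
struct OperationStack<Op> {
    private(set) var applied: [Op] = []
    private var undone: [Op] = []

    mutating func push(_ op: Op) {
        applied.append(op)
        undone.removeAll() // a new edit invalidates the redo history
    }

    @discardableResult
    mutating func undo() -> Op? {
        guard let op = applied.popLast() else { return nil }
        undone.append(op)
        return op
    }

    @discardableResult
    mutating func redo() -> Op? {
        guard let op = undone.popLast() else { return nil }
        applied.append(op)
        return op
    }
}
```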
## License
MIT License — see [LICENSE](LICENSE) for details.

BIN
screenshots/iPad/1.PNG Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.6 MiB

BIN
screenshots/iPad/2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.8 MiB

BIN
screenshots/iPad/3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.8 MiB

BIN
screenshots/iPhone/1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 876 KiB

BIN
screenshots/iPhone/2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 937 KiB

BIN
screenshots/iPhone/3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 940 KiB

BIN
screenshots/iPhone/4.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.2 MiB

BIN
screenshots/iPhone/5.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

BIN
screenshots/iPhone/6.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB