
At-Table

Bridging the gap between Deaf and Hearing conversations through local mesh networking and calm design.

At-Table is an iOS accessibility application that enables seamless, simultaneous conversations between multiple deaf and hearing individuals. Devices form a full mesh network and connect automatically, with no Wi-Fi routers or cellular data required, creating a local "table" of communication.

The app prioritizes a stress-free environment for deaf users, using calming visuals to counteract the anxiety often associated with following rapid-fire hearing conversations.

🌟 Key Features

Core Communication

  • Full Mesh Networking: Devices join a peer-to-peer full mesh using Apple's Multipeer Connectivity. No manual pairing or internet connection is required to chat (a connection sketch follows this list).
  • Real-Time Transcription (Hearing Role): Hearing users speak naturally. The app transcribes speech in real time and automatically posts the text as a message to the group once the speaker pauses (see the pause-detection sketch below).
  • Text & Quick Replies (Deaf Role): Deaf users can type messages or use one-tap "Quick Reply" chips (Yes, No, Hold on, Thanks) for rapid interaction.
  • Role-Based UI: Distinct interfaces tailored to "Deaf/HoH" and "Hearing" users, each optimized for that role's communication needs.
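
A minimal sketch of what the automatic mesh bootstrap could look like with Multipeer Connectivity. The `MeshManager` name, the `"at-table"` service type, and the accept-every-invitation policy are illustrative assumptions, not the app's actual code:

```swift
import MultipeerConnectivity

// Hypothetical mesh bootstrap: advertise and browse simultaneously so peers
// find each other without manual pairing.
final class MeshManager: NSObject {
    private let peerID: MCPeerID
    private let session: MCSession
    private let advertiser: MCNearbyServiceAdvertiser
    private let browser: MCNearbyServiceBrowser

    init(displayName: String) {
        peerID = MCPeerID(displayName: displayName)
        session = MCSession(peer: peerID, securityIdentity: nil,
                            encryptionPreference: .required)
        advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil,
                                               serviceType: "at-table")
        browser = MCNearbyServiceBrowser(peer: peerID, serviceType: "at-table")
        super.init()
        advertiser.delegate = self
        browser.delegate = self
        // A real implementation would also set session.delegate to receive data.
    }

    func start() {
        advertiser.startAdvertisingPeer()   // be discoverable
        browser.startBrowsingForPeers()     // discover others at the same time
    }
}

extension MeshManager: MCNearbyServiceAdvertiserDelegate {
    // Accept every invitation: the "table" is open to any nearby peer.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }
}

extension MeshManager: MCNearbyServiceBrowserDelegate {
    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
                 withDiscoveryInfo info: [String: String]?) {
        // Inviting unconditionally can collide when both sides invite at once;
        // see the leader/follower sketch under "How It Works".
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}
}
```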
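
The pause-based finalization could be as simple as restarting a short silence timer on every partial recognition result. The `PauseFinalizer` name and the 1.2 s threshold are assumptions for illustration:

```swift
import Speech

// Hypothetical pause detector: each partial result resets the timer; when the
// timer fires, the speaker has paused and the transcript is posted as a message.
final class PauseFinalizer {
    private var silenceTimer: Timer?
    private var latestTranscript = ""
    var onFinalized: ((String) -> Void)?

    func handle(result: SFSpeechRecognitionResult) {
        latestTranscript = result.bestTranscription.formattedString
        silenceTimer?.invalidate()
        silenceTimer = Timer.scheduledTimer(withTimeInterval: 1.2, repeats: false) { [weak self] _ in
            guard let self = self, !self.latestTranscript.isEmpty else { return }
            self.onFinalized?(self.latestTranscript)   // post as a group message
            self.latestTranscript = ""
        }
    }
}
```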

Visual Design & Atmosphere

  • Calm ASMR Aesthetic: The background features a slow-moving, blurred mesh of "Liquid Glass" orbs. This visual ASMR is designed to induce a calm mood, easing the cognitive load and stress deaf users experience while following hearing conversations.
  • Personalized Auras: Users select a "Neon Aura" color during onboarding. This color identifies them across the mesh and dynamically influences the background animation on their device.
  • Live Transcript Streaming: A hearing user's speech appears as "partial" streaming text on receiving devices before becoming a finalized message, so readers can start reading before the sentence is complete (a sketch of the payload follows this list).
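
One plausible wire format for the partial/final distinction, sketched below; the `TranscriptPacket` name and its fields are assumptions, not the app's actual protocol:

```swift
import Foundation
import MultipeerConnectivity

// Hypothetical payload: partials overwrite the live bubble for the same
// utterance; the final packet freezes it into the transcript.
struct TranscriptPacket: Codable {
    let speakerName: String
    let utteranceID: UUID   // partials and the final share one ID
    let text: String
    let isFinal: Bool
}

func broadcast(_ packet: TranscriptPacket, over session: MCSession) throws {
    let data = try JSONEncoder().encode(packet)
    // Dropped partials are harmless (the next one replaces them), so
    // .unreliable suits partials while finals warrant .reliable delivery.
    try session.send(data, toPeers: session.connectedPeers,
                     with: packet.isFinal ? .reliable : .unreliable)
}
```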

Technical Resilience

  • Ghost Filtering: Stale peer signals ("ghosts") are filtered out before they can clog the network (a pruning sketch follows this list).
  • Smart Recovery: "Identity thrashing" prevention and auto-reconnection logic handle interruptions and app backgrounding seamlessly.
  • Hybrid Speech Recognition: Server-based speech recognition provides high accuracy when online, with a seamless fallback to on-device recognition when offline (see the sketch below).
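
The README doesn't detail the ghost-filtering logic; a common approach is to track when each peer was last heard from and prune entries that go quiet. The `GhostFilter` name and the 30 s window below are assumed values:

```swift
import Foundation
import MultipeerConnectivity

// Hypothetical ghost filter: stamp every signal from a peer, then periodically
// drop peers that have been silent past the staleness window.
final class GhostFilter {
    private var lastSeen: [MCPeerID: Date] = [:]
    private let staleAfter: TimeInterval = 30

    func touch(_ peer: MCPeerID) {
        lastSeen[peer] = Date()
    }

    /// Returns the peers that should now be treated as gone.
    func pruneStalePeers() -> [MCPeerID] {
        let cutoff = Date().addingTimeInterval(-staleAfter)
        let ghosts = lastSeen.filter { $0.value < cutoff }.map(\.key)
        ghosts.forEach { lastSeen.removeValue(forKey: $0) }
        return ghosts
    }
}
```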
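
The online/offline split maps naturally onto the Speech framework's on-device flag; in this sketch the `isOnline` parameter is assumed to come from a reachability check elsewhere in the app:

```swift
import Speech

// Minimal sketch of hybrid recognition: prefer server-based recognition for
// accuracy, but force on-device mode when the network is unavailable.
func makeRecognitionRequest(isOnline: Bool) -> SFSpeechAudioBufferRecognitionRequest {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true   // enables live streaming text
    if #available(iOS 13.0, *) {
        // true = audio never leaves the device; false = server recognition allowed.
        // A real implementation should also check
        // SFSpeechRecognizer.supportsOnDeviceRecognition first.
        request.requiresOnDeviceRecognition = !isOnline
    }
    return request
}
```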

📱 How It Works

  1. Onboarding: Users enter a display name, select their role (Deaf/HoH or Hearing), and choose an identifying color.
  2. Discovery: The app automatically advertises and browses for nearby devices running At-Table.
  3. Connection: Devices negotiate a connection automatically, using a leader/follower rule to prevent invitation collision loops (a sketch follows this list).
  4. The Conversation:
    • Hearing users simply hold their phone; the microphone listens for speech, visualizes it, and broadcasts it.
    • Deaf users read incoming bubbles and participate via text.
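
The README doesn't spell out the leader/follower algorithm; a common pattern is a deterministic tie-break so that exactly one side of each pair sends the invitation. The sketch below assumes that approach, and that display names are unique (e.g. suffixed with a random token):

```swift
import MultipeerConnectivity

// Assumed tie-break: the lexicographically smaller display name acts as
// "leader" and invites; the other side just waits to be invited. Without
// a rule like this, both peers can invite each other and loop forever.
func shouldInvite(local: MCPeerID, remote: MCPeerID) -> Bool {
    local.displayName < remote.displayName
}

// In the browser delegate's foundPeer callback:
// if shouldInvite(local: myPeerID, remote: foundPeerID) {
//     browser.invitePeer(foundPeerID, to: session, withContext: nil, timeout: 10)
// } // otherwise do nothing and wait for the leader's invitation
```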

🔒 Privacy

  • Transient Data: Messages and transcriptions are ephemeral. They exist only for the duration of the session and are never stored in a persistent database or uploaded to a cloud server; the one exception is the temporary audio buffer sent to Apple for transcription when server-based recognition is in use.
  • Local Connectivity: Chat data flows directly between devices over peer-to-peer Wi-Fi and Bluetooth.