Hey all,
Louis here. Another intense 10 days. Here's what changed.
Check out what's new 👇
On macOS 13+, you can now select, copy, and interact with text directly on your timeline screenshots — exactly like in Preview or Safari. Powered by Apple's VisionKit, it works alongside our accessibility tree + OCR pipeline: native text selection where Apple supports it, our own engine everywhere else.
Try it: open the timeline, hover any screenshot, and select text.
[VISUAL PLACEHOLDER — Live Text demo GIF]
Screenpipe now detects when you're on battery and auto-adjusts capture to save power — longer battery life without losing important data. Toggle it in recording settings.
[VISUAL PLACEHOLDER — Battery Saver settings screenshot]
Connect your ChatGPT account via OAuth — no API keys, just log in. We also added Gemini 3 Flash and Gemini 3 Pro (1M context window) to the cloud model lineup.
Pro users now get 5x more daily AI queries, and lightweight models cost less quota than expensive ones — so you can chat more.
[VISUAL PLACEHOLDER — AI provider picker screenshot]
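For the curious: here's a minimal sketch of how weighted quota accounting like this can work. The model names echo the lineup above, but the weights and function are made up for illustration, not screenpipe's actual pricing.

```python
# Illustrative only: weighted quota accounting.
# Weights are invented for this sketch, not screenpipe's real costs.
QUOTA_WEIGHTS = {
    "gemini-3-flash": 1,  # lightweight model: cheap per query
    "gemini-3-pro": 5,    # heavyweight model: burns more quota
}
DAILY_QUOTA = 1000  # Pro tier, per the 5x increase

def queries_remaining(used: dict[str, int]) -> int:
    """Subtract weighted usage from the daily allowance."""
    spent = sum(QUOTA_WEIGHTS[model] * n for model, n in used.items())
    return DAILY_QUOTA - spent

queries_remaining({"gemini-3-flash": 100, "gemini-3-pro": 20})
# → 800
```

The upshot: 100 Flash queries plus 20 Pro queries only spend 200 of your 1,000-query budget.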
@mention filters now actually work — type @audio to search only voice/meetings, @screen for screen text, or @AppName to scope to a specific app. Every Pi search call respects your filters.
Also new: click images to view full-size, scroll-to-bottom button for long conversations, speaker filter popover, and smarter idle suggestions that refresh based on your activity.
[VISUAL PLACEHOLDER — Chat @mention filters screenshot]
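If you're wondering what happens under the hood, here's a rough sketch of translating @mentions into search filters. The parameter names (`content_type`, `app_name`) and the function itself are assumptions for illustration, not Pi's actual implementation.

```python
# Illustrative sketch: split @mention tokens out of a chat query
# and map them to search filters. Parameter names are assumed.
def parse_mentions(query: str) -> tuple[str, dict]:
    filters: dict = {}
    words: list[str] = []
    for token in query.split():
        if token == "@audio":
            filters["content_type"] = "audio"  # voice & meetings only
        elif token == "@screen":
            filters["content_type"] = "ocr"    # screen text only
        elif token.startswith("@") and len(token) > 1:
            filters["app_name"] = token[1:]    # scope to one app
        else:
            words.append(token)
    return " ".join(words), filters

parse_mentions("@audio standup notes from Monday")
# → ("standup notes from Monday", {"content_type": "audio"})
```

Every search call then carries those filters along, so results stay scoped until you drop the @mention.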
New monitor selection in settings. It uses stable ID matching, so reconnecting a monitor doesn't break your filter, and a "Record all monitors" toggle covers the default.
[VISUAL PLACEHOLDER — Monitor selection screenshot]
- Transcription engines grouped into Cloud / Offline / Other — easier to pick
- "Transcription dictionary" → "Custom vocabulary", with friendlier copy
- Bulk vocabulary import — paste a comma or newline-separated list
[VISUAL PLACEHOLDER — Settings dropdown screenshot]
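The bulk import behaves roughly like this sketch: accept a comma- or newline-separated paste, trim whitespace, and drop empties and duplicates. The function name and dedup details are illustrative, not the shipped implementation.

```python
import re

# Illustrative parser for the bulk vocabulary import described above.
def parse_vocabulary(raw: str) -> list[str]:
    terms = re.split(r"[,\n]", raw)          # split on commas or newlines
    seen: set[str] = set()
    result: list[str] = []
    for term in (t.strip() for t in terms):
        if term and term.lower() not in seen:  # skip blanks and repeats
            seen.add(term.lower())
            result.append(term)
    return result

parse_vocabulary("Screenpipe, OCR\nVAD, ocr,  ")
# → ["Screenpipe", "OCR", "VAD"]
```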
- Skip OCR when accessibility tree has text — less CPU
- Content dedup + higher visual change threshold — less CPU
- Fixed memory leak from missing autorelease pools on macOS
- Pipes page loads 10x faster
- Pi starts in 2s instead of 15s
- Fixed DB pool exhaustion and server hangs
- Skip capture when screen is locked
- Batch transcription mode now default for new users
- Windows: full accessibility tree for Electron apps (Discord: 1→661 nodes), taskbar visibility, app icons, deep-link flood protection
- macOS: hardened runtime, M1 stability, calendar permissions for macOS 14+
- Linux: accessibility tree capture via AT-SPI2 and evdev
- Audio shortcuts now actually toggle capture
- Text selection no longer blocked by URL overlays
- Settings crash from unknown AI provider types fixed
- Pi no longer restarts excessively on preset changes
- Timeline shows batch-transcribed audio in real time
- OCR bounding boxes normalized on Windows/Linux
- UTF-8 panics fixed across multiple components
- Vietnamese OCR language added 🇻🇳
- Encrypted pipe sync — sync pipe configs across devices, end-to-end encrypted
- 5x daily AI quota — raised from 200 to 1,000 queries/day
- New MCP tools for developers building on screenpipe: activity-summary, search-elements, frame-context
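Since MCP tools are invoked via JSON-RPC's tools/call method, calling one of the new tools looks roughly like this sketch. The argument names here (start) are assumptions; check the schema each tool reports via tools/list for the real parameters.

```python
import json

# Build a JSON-RPC 2.0 "tools/call" request per the MCP spec.
# Tool argument names are illustrative, not the published schema.
def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(payload)

req = mcp_tool_call("activity-summary", {"start": "2025-01-01T09:00:00Z"})
```

Your MCP client (Claude Desktop, Cursor, etc.) normally builds this envelope for you; the sketch just shows what goes over the wire.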
- Native Live Text for search highlights + click-to-copy URLs
- Make pipes easier to use, more reliable, and more valuable
- Encrypted archive of your data
- SOC 2 compliance
- Windows code signing improvements
- More team features
⭐️ Download screenpipe
⬇️ Update screenpipe
Questions? Reply to this email or join our Discord.
— Louis