| name | tags | description |
|---|---|---|
| plant-seed | | Plant a seed - context-based instant capture with optional depth |

Plant ideas you want to tend - instant capture from context, with optional enrichment.
```bash
# Create a new worktree and branch from within the current git directory.
ga() {
  if [[ -z "$1" ]]; then
    echo "Usage: ga [branch name]"
    return 1  # return, not exit: exit would kill the calling shell when sourced
  fi
  local branch="$1"
  local base="$(basename "$PWD")"
  local path="../${base}--${branch}"
  # Assumed completion of the truncated excerpt: create the worktree on a new
  # branch alongside the current checkout, per the comment and computed path.
  git worktree add -b "$branch" "$path"
}
```
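For example, running it from inside a checkout (branch name hypothetical):

```bash
# From ~/code/myrepo this creates ../myrepo--fix-login on a new branch "fix-login"
ga fix-login
```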
It turns out that macOS Tahoe can generate and use Secure Enclave-backed SSH keys! This replaces projects like https://github.com/maxgoedjen/secretive.

There is a shared library, /usr/lib/ssh-keychain.dylib, that has traditionally been used to add smartcard support to ssh by implementing the PKCS11Provider interface. Recently, however, it has also started implementing SecurityKeyProvider, which supports loading keys directly from the Secure Enclave! SecurityKeyProvider is what is normally used to talk to FIDO2 devices (e.g. libfido2 can be used to talk to your YubiKey), but you can now use it to talk to your Secure Enclave instead!
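A minimal sketch of how that might look in practice, assuming the enclave exposes a P-256 key (hence the `ecdsa-sk` type):

```bash
# Generate a key pair whose private half is created inside the Secure Enclave
ssh-keygen -t ecdsa-sk -w /usr/lib/ssh-keychain.dylib

# Point ssh at the same provider when authenticating...
ssh -o SecurityKeyProvider=/usr/lib/ssh-keychain.dylib user@host

# ...or persist it in ~/.ssh/config:
#   Host *
#     SecurityKeyProvider /usr/lib/ssh-keychain.dylib
```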
```python
# /// script
# dependencies = [ "transformers", "accelerate" ]
# ///
# run on 2x H200 rented from primeintellect.ai
import gc
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
```
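For reference, the `# /// script` header is PEP 723 inline metadata, so a PEP 723-aware runner can resolve the dependencies on the fly (`script.py` is a hypothetical filename here):

```bash
# uv reads the inline metadata and installs transformers/accelerate into an ephemeral env
uv run script.py
```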
```python
import anthropic
import os
import sys
from termcolor import colored
from dotenv import load_dotenv


class ClaudeAgent:
    def __init__(self, api_key=None, model="claude-3-7-sonnet-20250219", max_tokens=4000):
        """Initialize the Claude agent with API key and model."""
        # Assumed completion of the truncated excerpt: load the key from the
        # environment (or a .env file, given the dotenv import) and build the client.
        load_dotenv()
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        self.client = anthropic.Anthropic(api_key=self.api_key)
        self.model = model
        self.max_tokens = max_tokens
```
```bash
#!/usr/bin/env bash

# Default values for percentages
DEFAULT_WIRED_LIMIT_PERCENT=85
DEFAULT_WIRED_LWM_PERCENT=75

# Read input parameters or use default values
WIRED_LIMIT_PERCENT=${1:-$DEFAULT_WIRED_LIMIT_PERCENT}
WIRED_LWM_PERCENT=${2:-$DEFAULT_WIRED_LWM_PERCENT}
```
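The excerpt stops before the percentages are applied; a plausible continuation, assuming the script targets the Apple Silicon `iogpu` wired-memory sysctls, might look like:

```bash
# hw.memsize reports total physical memory in bytes; convert to MB
TOTAL_MEM_MB=$(($(sysctl -n hw.memsize) / 1024 / 1024))

# Turn the percentages into absolute MB values
WIRED_LIMIT_MB=$((TOTAL_MEM_MB * WIRED_LIMIT_PERCENT / 100))
WIRED_LWM_MB=$((TOTAL_MEM_MB * WIRED_LWM_PERCENT / 100))

# Raise the GPU wired-memory ceiling and low-water mark (requires root)
sudo sysctl -w iogpu.wired_limit_mb=$WIRED_LIMIT_MB
sudo sysctl -w iogpu.wired_lwm_mb=$WIRED_LWM_MB
```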
I've been using llama.cpp on Apple Silicon for months now, and my brother, Chimezie, has been nudging me to give MLX a go. I finally set aside time today to get started, with an eventual goal of adding support for MLX model loading and usage in OgbujiPT. I've been warned it's rough around the edges, but it's been stimulating to play with. I thought I'd capture some of my notes, including some pitfalls I ran into, which might help anyone else trying to get into MLX in its current state.

As a quick bit of background, I'll mention that MLX is very interesting because, honestly, Apple has the most coherently engineered consumer and small-business-level hardware for AI workloads, with Apple Silicon and its unified memory. The news lately is all about Apple's AI fumbles, but I suspect their clever plan is to empower a community of developers to take the arrows in their back and build things out for them. The MLX
see https://pnpm.io/installation

```bash
$ pnpm --version
```

Note: corepack will NOT be distributed with Node.js v25 and later; see https://nodejs.org/docs/latest-v24.x/api/corepack.html
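Given corepack's removal, one route that doesn't depend on it is the standalone installer from the page linked above:

```bash
# Official standalone install script; does not require Node.js or corepack
curl -fsSL https://get.pnpm.io/install.sh | sh -

# Confirm the install
pnpm --version
```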
```bash
#!/bin/bash

# This forces Arena into full screen mode on startup; set back to 3 to reset.
# Note that if you go into the Arena "Graphics" preference panel, it will reset
# all of these and you will need to run these commands again.
defaults write com.wizards.mtga "Screenmanager Fullscreen mode" -integer 0
defaults write com.wizards.mtga "Screenmanager Resolution Use Native" -integer 0

# You can also replace the long complicated integer bit with any other scaled
# 16:9 resolution your system supports (see the sketch below).
```
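The resolution lines that last comment refers to aren't in the excerpt; presumably they are the standard Unity `Screenmanager` width/height keys, e.g. (values hypothetical):

```bash
# Assumed companion keys: pin the window to a specific 16:9 resolution
defaults write com.wizards.mtga "Screenmanager Resolution Width" -integer 2560
defaults write com.wizards.mtga "Screenmanager Resolution Height" -integer 1440
```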
Strongly inspired by https://gist.github.com/heymonkeyriot/9a2f429caff5c091d5429666fa080403.
On Ubuntu:

```bash
sudo apt install python3 python3-pip
```