AmitGazal / xcode-cleanup.sh
Created August 25, 2025 06:48
Bash script to free up disk space by cleaning Xcode’s DerivedData, Archives, Device Support, Simulators, and Documentation caches.
#!/bin/bash
echo "Cleaning up Xcode files…"
# Show current heavy folders
du -sh ~/Library/Developer/Xcode/DerivedData \
       ~/Library/Developer/Xcode/Archives \
       ~/Library/Developer/Xcode/iOS\ DeviceSupport \
       ~/Library/Developer/CoreSimulator/Devices \
       ~/Library/Developer/Xcode/DocumentationCache 2>/dev/null || true
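The snippet above only reports sizes. A minimal sketch of the actual cleanup step the description promises, assuming the default Xcode paths (the `clean_dir` helper is illustrative, not from the gist):

```shell
#!/bin/bash
# Remove a cache directory if it exists, reporting its size first.
# Xcode regenerates DerivedData and DeviceSupport on demand.
clean_dir() {
  local target="$1"
  if [ -d "$target" ]; then
    du -sh "$target"
    rm -rf "$target"
    echo "Removed $target"
  fi
}

clean_dir ~/Library/Developer/Xcode/DerivedData
clean_dir ~/Library/Developer/Xcode/Archives
```

For simulators, `xcrun simctl delete unavailable` is the safer route than deleting `CoreSimulator/Devices` by hand, since it only removes devices for runtimes that are no longer installed.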
sayakpaul / grade_images_with_gemini.py
Last active October 8, 2025 21:21
Shows how to use Gemini 2.0 Flash to grade images on multiple aspects, like accuracy to the prompt and emotional and thematic response.
from google import genai
from google.genai import types
import typing_extensions as typing
from PIL import Image
import requests
import io
import json
import os
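One way to make the grades machine-readable is to define a typed schema and ask the model to reply in JSON. A minimal sketch of that idea (aspect names and the helper are illustrative assumptions, not the gist's actual code; stdlib `typing.TypedDict` stands in for the gist's `typing_extensions` import):

```python
from typing import TypedDict


class Grade(TypedDict):
    # aspect names chosen for illustration; the gist grades things like
    # "accuracy to prompt" and "emotional and thematic response"
    accuracy_to_prompt: int
    emotional_thematic_response: int
    overall: int


def build_grading_prompt(image_prompt: str) -> str:
    # enumerate the schema's fields so the model's JSON matches Grade,
    # and the reply can be parsed with json.loads
    aspects = ", ".join(Grade.__annotations__)
    return (
        f"Grade the image on a 1-5 scale for each aspect ({aspects}), "
        f"given the original prompt: {image_prompt!r}. Respond as JSON."
    )
```

The resulting string would be passed alongside the image in the `contents` of a `generate_content` call; the exact request shape depends on the `google-genai` client version.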
# /// script
# dependencies = [
#     "atproto",
# ]
# ///
from atproto import Client
import getpass
import time
sayakpaul / aot_compile_with_int8_quant.py
Last active June 3, 2025 16:09
Shows how to AoT compile the Flux.1 Dev Transformer with int8 quant and perform inference.
import torch
from diffusers import FluxTransformer2DModel
import torch.utils.benchmark as benchmark
from torchao.quantization import quantize_, int8_weight_only
from torchao.utils import unwrap_tensor_subclass
import torch._inductor
torch._inductor.config.mixed_mm_choice = "triton"
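torchao's `int8_weight_only` stores each weight as an int8 value plus a floating-point scale. A dependency-free sketch of that symmetric scheme, for intuition only (torchao's real implementation is per-channel and fused into the matmul):

```python
def quantize_row_int8(row):
    # symmetric quantization: w ~= q * scale, with q clamped to [-127, 127]
    max_abs = max((abs(w) for w in row), default=0.0) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in row]
    return q, scale


def dequantize_row(q, scale):
    # recover approximate float weights from the int8 codes
    return [v * scale for v in q]
```

Because only the weights are quantized, activations stay in the original dtype and the dequantize step happens on the fly during inference.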
linoytsaban / flux_with_cfg
Last active December 9, 2024 06:26
Flux with CFG and negative prompts
# download FluxCFGPipeline
!wget https://raw.githubusercontent.com/linoytsaban/diffusers/refs/heads/dreambooth-lora-flux-exploration/examples/community/pipeline_flux_with_cfg.py
# load pipeline
import diffusers
import torch
from pipeline_flux_with_cfg import FluxCFGPipeline
pipe = FluxCFGPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
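The pipeline's core trick is true classifier-free guidance (Flux.1-dev normally relies on distilled guidance, which cannot use a negative prompt). The combination step can be sketched dependency-free:

```python
def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    # classifier-free guidance: push the prediction away from the
    # unconditional / negative-prompt branch
    #   out = uncond + scale * (cond - uncond)
    return [
        u + guidance_scale * (c - u)
        for u, c in zip(noise_uncond, noise_cond)
    ]
```

With `guidance_scale == 1.0` this reduces to the conditional prediction; larger scales amplify the difference between the positive and negative branches at each denoising step.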
sayakpaul / run_flux_with_limited_resources.md
Last active September 15, 2025 15:03
This document lists resources showing how to run Black Forest Labs' Flux with Diffusers under limited resources.
pcuenca / openelm-coreml.py
Created April 30, 2024 09:55
Convert OpenELM to Core ML (float32)
import argparse
import numpy as np
import torch
import torch.nn as nn
import coremltools as ct
from transformers import AutoTokenizer, AutoModelForCausalLM
# When using float16, all predicted logits are 0. To be debugged.
compute_precision = ct.precision.FLOAT32
compute_units = ct.ComputeUnit.CPU_ONLY
Artefact2 / README.md
Last active November 28, 2025 02:29
GGUF quantizations overview

Which GGUF is right for me? (Opinionated)

Good question! I am collecting human data on how quantization affects outputs. See here for more information: ggml-org/llama.cpp#5962

In the meantime, use the largest that fully fits in your GPU. If you can comfortably fit Q4_K_S, try using a model with more parameters.
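Whether a quant "fully fits" can be ballparked from its bits per weight. A rough sketch (the ~4.5 bpw figure for Q4_K_S and the fixed overhead are ballpark assumptions, not measurements):

```python
def gguf_size_gib(n_params: float, bits_per_weight: float) -> float:
    # file size ~= parameters * bits per weight; ignores GGUF metadata overhead
    return n_params * bits_per_weight / 8 / 2**30


def fits_in_vram(n_params, bits_per_weight, vram_gib, overhead_gib=1.5):
    # assumption: ~1.5 GiB headroom for KV cache and compute buffers,
    # which in practice grows with context length
    return gguf_size_gib(n_params, bits_per_weight) + overhead_gib <= vram_gib
```

For example, a 7B model at Q4_K_S (~4.5 bpw) comes to roughly 3.7 GiB, leaving room to spare on an 8 GiB card, which is exactly the situation where trying a larger model instead makes sense.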

llama.cpp feature matrix

See the wiki upstream: https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix

takanotaiga / i2t.py
Last active September 26, 2024 16:51
i2t ros2
# Copyright 2023 Taiga Takano
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
eavae / convert_diffusers_lora_to_sd_webui.py
Created August 23, 2023 07:45
A script to help you convert a diffusers LoRA to the sd-webui format
from pathlib import Path
from diffusers import StableDiffusionXLPipeline
import torch
from safetensors.torch import save_file
# text_encoder.text_model.encoder.layers.0.self_attn.k_proj.lora_linear_layer.down.weight
# lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight
# 1. text_encoder -> lora_te, text_encoder_2 -> lora_te2
# 2. map
# 3. .weight -> 2 .alpha -> 1 and replace . -> _
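The renaming rules in the comments above can be sketched as a single function. This is my reading of the example key pair, not the gist's actual code; it covers only the `lora_linear_layer` down/up weight keys of the form shown:

```python
def diffusers_key_to_webui(key: str) -> str:
    # 1. text_encoder -> lora_te, text_encoder_2 -> lora_te2
    # 2. underscore-join the module path
    # 3. lora_linear_layer.{down,up}.weight -> lora_{down,up}.weight
    for direction in ("down", "up"):
        suffix = f".lora_linear_layer.{direction}.weight"
        if key.endswith(suffix):
            base = key[: -len(suffix)]
            base = base.replace("text_encoder_2.", "lora_te2.", 1)
            base = base.replace("text_encoder.", "lora_te.", 1)
            return base.replace(".", "_") + f".lora_{direction}.weight"
    return key  # alpha keys and anything unrecognized pass through unchanged
```

Applied to the example in the comments, `text_encoder.text_model.encoder.layers.0.self_attn.k_proj.lora_linear_layer.down.weight` maps to `lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight`.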