```js
if (window.location.origin !== "https://www.instagram.com") {
  window.alert(
    "Hey! You need to be on the Instagram site before you run the code. I'm taking you there now, but you'll have to paste the code into the console again.",
  );
  window.location.href = "https://www.instagram.com";
  console.clear();
}

// Send requests with the logged-in session's cookies.
const fetchOptions = {
  credentials: "include",
};
```
Companion prompts for the video: OpenClaw after 50 days: 20 real workflows (honest review)
These are the actual prompts I use for each use case shown in the video. Copy-paste them into your agent and adjust for your setup. Most will work as-is or the agent will ask you clarifying questions.
Each prompt describes the intent clearly enough that the agent can figure out the implementation details. You don't need to hand-hold it through every step.
My setup: OpenClaw running on a VPS, Discord as primary interface (separate channels per workflow), Obsidian for notes (markdown-first), Coolify for self-hosted services.
(Inspired by https://medium.com/@icanhazedit/clean-up-unused-github-rpositories-c2549294ee45#.3hwv4nxv5)
- Open every to-be-deleted GitHub repository in a new tab (use the mouse's middle click or Ctrl + Click): https://github.com/username?tab=repositories
- Use OneTab (https://chrome.google.com/webstore/detail/onetab/chphlpgkkbolifaimnlloiipkdnihall) to collapse them into a list.
- Save that list to a file.
- The list should end up as one "your_username/repo_name" per line. Use a regex search (Sublime Text can help): search for ` \|.*` and replace with nothing to drop the page titles, then remove the `https://github.com/` prefix from each line.
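The regex cleanup step above can also be scripted instead of done in an editor. Here is a minimal Python sketch; the exact OneTab export line format (`URL | page title`) and the example repository names are assumptions:

```python
import re

def clean_onetab_lines(lines):
    """Turn OneTab export lines like
    'https://github.com/username/repo_name | page title'
    into 'username/repo_name' entries."""
    repos = []
    for line in lines:
        # Drop everything from ' | ' onward (the page title) --
        # the same effect as replacing the regex ' \|.*' with nothing.
        url = re.sub(r" \|.*", "", line).strip()
        # Keep only GitHub URLs and strip the prefix,
        # leaving 'username/repo_name'.
        if url.startswith("https://github.com/"):
            repos.append(url[len("https://github.com/"):])
    return repos

lines = ["https://github.com/alice/old-project | alice/old-project: a demo"]
print(clean_onetab_lines(lines))  # ['alice/old-project']
```

Non-GitHub lines are silently skipped, so a mixed OneTab export is safe to feed in.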
Nowadays, many people use Large Language Models (LLMs) for programming. While I can understand that these tools are convenient (and sometimes there's also pressure from management), I personally don't want to use these tools and this article attempts to capture some of my reasons why. This is not intended to convince others to stop using LLMs but to explain why I think that working with LLMs isn't worth it for me.
One reason I don't want my code to be written (or co-authored) by AI tools is licensing. When I write code, it is undoubtedly my own. With code written by an LLM, it's a bit more complicated. At least at the time of writing, I am not convinced that generating code with an LLM would always result in me fully "owning" the code in terms of copyright/licensing. What's worse, especially when generating bigger snippets of code, it is possible for an LLM to reproduce code from other people that may be licensed in a way I don't want. When an LLM generates code for me,
```js
const axios = require('axios');
const crypto = require('crypto');
const fs = require('fs');
const FormData = require('form-data');

class Nanana {
  constructor() {
    this.baseUrl = 'https://nanana.app';
    // TempMailScraper is defined elsewhere in the original script.
    this.tempMail = new TempMailScraper();
    this.sessionToken = '';
  }
}
```
- A fast, simple method to render sky color using gradients maps [[Abad06]]
- A Framework for the Experimental Comparison of Solar and Skydome Illumination [[Kider14]]
- A Method for Modeling Clouds based on Atmospheric Fluid Dynamics [[Miyazaki01]]
- A Physically-Based Night Sky Model [[Jensen01]]
```python
import fitz  # PyMuPDF
import os
import tkinter as tk
from tkinter import messagebox
from PIL import Image, ImageTk
from tqdm import tqdm
import numpy as np

PDF_PATH = "kanji_poster.pdf"
OUTPUT_FOLDER = "kanji_images"
```
```yaml
blueprint:
  name: Leak detection & notifier (Custom text + option to repeat alarm)
  description: >
    Sends notification when moisture is detected.
    Optional auto-clear notification and repeating alarm until cleared.
  domain: automation
  input:
    notify_device:
      name: Notify device
```