| allowed-tools | description | argument-hint | model |
|---|---|---|---|
| | Run codex with GPT-5-high | your prompt text | haiku |

Run the following command exactly. Don't modify it in any way.
```sh
# Uninstall OrbStack first.
# If you install OrbStack, it uses the same commands as Docker Desktop and replaces the docker symlinks,
# so after uninstalling it the docker commands don't work until you restore the symlinks.

# Remove the old OrbStack links
sudo rm -f /usr/local/bin/docker /usr/local/bin/docker-compose /usr/local/bin/docker-buildx /usr/local/bin/docker-credential-desktop

# Create new links to Docker Desktop
sudo ln -s /Applications/Docker.app/Contents/Resources/bin/docker /usr/local/bin/docker
```
```ts
#!/usr/bin/env bun
import { existsSync, mkdirSync, writeFileSync } from "fs";
import { resolve } from "path";

// Define types based on the JSON structure
type Citation = {
  url: string;
  uuid: string;
  title: string;
};
```
# Cursor's Memory Bank

I am Cursor, an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional.

**Operational success hinges on meticulous planning, precise execution, and self-validation of every task. Adherence to instructions and avoiding extraneous changes are paramount.**

## Memory Bank Structure

The Memory Bank consists of required core files and optional context files, all in Markdown format, located within the `.cursor/rules/memory_bank/` directory. Files build upon each other in a clear hierarchy:
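The "read ALL memory bank files at the start of EVERY task" rule above can be sketched as a small loader. This is an illustrative sketch, not part of the prompt: only the `.cursor/rules/memory_bank/` path comes from the text above, while the function name and return shape are assumptions.

```python
from pathlib import Path

def load_memory_bank(root: str = ".cursor/rules/memory_bank") -> dict[str, str]:
    """Read every Markdown file in the memory bank, keyed by filename.

    Hypothetical helper: the directory path comes from the prompt above;
    the function name and return shape are illustrative assumptions.
    """
    bank = {}
    for path in sorted(Path(root).glob("*.md")):
        bank[path.name] = path.read_text(encoding="utf-8")
    return bank
```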
This repository contains a disciplined, evidence-first prompting framework designed to elevate an Agentic AI from a simple command executor to an Autonomous Principal Engineer.
The philosophy is simple: Autonomy through discipline. Trust through verification.
This framework is not just a collection of prompts; it is a complete operational system for managing AI agents. It enforces a rigorous workflow of reconnaissance, planning, safe execution, and self-improvement, ensuring every action the agent takes is deliberate, verifiable, and aligned with senior engineering best practices.
I also maintain Claude Code prompts for your reference: https://gist.github.com/aashari/1c38e8c7766b5ba81c3a0d4d124a2f58
```diff
diff --git a/assets/javascript/site.js b/assets/javascript/site.js
index a34accb4..32ded7dc 100644
--- a/assets/javascript/site.js
+++ b/assets/javascript/site.js
@@ -3,3 +3,5 @@
 import 'htmx.org';
 import './htmx';
 import './alpine';
+import 'flowbite';
+import 'flowbite/dist/datepicker';
```
```python
import json

import requests
from bs4 import BeautifulSoup
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from langchain.utilities import DuckDuckGoSearchAPIWrapper

RESULTS_PER_QUESTION = 3
```
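The `RESULTS_PER_QUESTION` constant caps how many search hits are collected per sub-question. The fan-out step could be sketched as below; the `collect_urls` helper is an assumption for illustration, not the project's actual code, and `search_fn` stands in for `DuckDuckGoSearchAPIWrapper.results`, whose hits are dicts containing a `"link"` key.

```python
RESULTS_PER_QUESTION = 3

def collect_urls(search_fn, questions, k=RESULTS_PER_QUESTION):
    """Fan out each question to the search function and gather unique URLs.

    `search_fn(query, k)` is assumed to return dicts with a "link" key,
    matching the result shape of DuckDuckGoSearchAPIWrapper.results().
    """
    seen, urls = set(), []
    for question in questions:
        for hit in search_fn(question, k):
            link = hit.get("link")
            if link and link not in seen:
                seen.add(link)
                urls.append(link)
    return urls
```

Deduplicating across questions matters because related sub-questions often surface the same pages, and scraping each URL once keeps the downstream chain cheaper.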
```python
import pandas as pd
from pandasai import PandasAI

# Sample DataFrame
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})
```
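With the DataFrame defined, PandasAI answers natural-language questions about it. For comparison, here is the kind of question you might pose ("which five countries are happiest?") answered directly with plain pandas; the query itself is illustrative, not taken from the PandasAI docs.

```python
import pandas as pd

df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})

# The question PandasAI would receive in natural language,
# answered here directly with pandas.
top5 = df.nlargest(5, "happiness_index")["country"].tolist()
print(top5)  # ['Canada', 'Australia', 'United Kingdom', 'Germany', 'United States']
```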
Code for an article about LangChain structured outputs.
Article: https://horosin.com/extracting-pdf-and-generating-json-data-with-gpts-langchain-and-nodejs
In this blog post, I will share how to use LangChain, a flexible framework for building AI-driven applications, to extract and generate structured JSON data with GPTs and Node.js. I'll provide code snippets and concise instructions to help you set up and run the project.