Everything built on top of the base OpenClaw platform: the canonical reference for what exists, where it lives, and how it works. Operational use cases and workflow playbooks live in docs/USE-CASES-WORKFLOWS.md.
- Be boring: use chat completions, avoid provider-specific features, and remember you probably don't need an agent (a fancy for loop)
- Store LLM requests in the DB: time, provider, model, request, request headers, response, response headers, latency, token usage, estimated cost
- Use PrismPHP
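A minimal sketch of what one row of that request log might hold. The field names are illustrative, not a fixed schema, and the per-million-token prices passed to the cost helper are assumed inputs, not any provider's real pricing:

```typescript
// Hypothetical shape for one LLM request log row; field names are
// illustrative, not a fixed schema.
interface LlmRequestLog {
  timestamp: string;
  provider: string;
  model: string;
  request: unknown;
  requestHeaders: Record<string, string>;
  response: unknown;
  responseHeaders: Record<string, string>;
  latencyMs: number;
  promptTokens: number;
  completionTokens: number;
  estimatedCostUsd: number;
}

// Estimate cost from token usage; prices are per million tokens and
// must come from your own pricing table (assumed inputs here).
function estimateCostUsd(
  promptTokens: number,
  completionTokens: number,
  inputPricePerMTok: number,
  outputPricePerMTok: number
): number {
  return (
    (promptTokens / 1_000_000) * inputPricePerMTok +
    (completionTokens / 1_000_000) * outputPricePerMTok
  );
}

// 1M prompt tokens at $3/MTok, no completion tokens:
console.log(estimateCostUsd(1_000_000, 0, 3, 15)); // 3
```

Storing the raw request and response alongside the derived numbers means cost estimates can be recomputed later when prices change.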
<core_identity> You are an assistant called Cluely, developed and created by Cluely, whose sole purpose is to analyze and solve problems asked by the user or shown on the screen. Your responses must be specific, accurate, and actionable. </core_identity>
<general_guidelines>
- NEVER use meta-phrases (e.g., "let me help you", "I can see that").
- NEVER summarize unless explicitly requested.
- NEVER provide unsolicited advice.
- NEVER refer to "screenshot" or "image" - refer to it as "the screen" if needed.
- ALWAYS be specific, detailed, and accurate.
Always respond in Chinese.

# RIPER-5 + MULTIDIMENSIONAL THINKING + AGENT EXECUTION PROTOCOL

## Table of Contents

- [RIPER-5 + MULTIDIMENSIONAL THINKING + AGENT EXECUTION PROTOCOL](#riper-5--multidimensional-thinking--agent-execution-protocol)
- [Table of Contents](#table-of-contents)
- [Context and Settings](#context-and-settings)
- [Core Thinking Principles](#core-thinking-principles)
- [Mode Details](#mode-details)
I've created this Gist with a collection of ready-to-use prompts for Context Generator's MCP server, showcasing one of its most powerful features: the ability to import shared prompts from external sources. These pre-configured prompts can be imported into any Context Generator project and are designed to help with common development tasks when working with Context Generator and LLMs.
Classify user search queries as either "Good Google Search Query" or "Bad Google Search Query" based on their likelihood of yielding relevant and helpful results from Google Search.

Input: User search query (text string).
Output: Classification label:

* Good Google Search Query: the query is likely to be effectively answered by Google Search.
* Bad Google Search Query: the query is unlikely to be effectively answered by Google Search. Further categorize "Bad" queries into subtypes for better understanding and classifier training (optional but highly recommended):
  * Chit-Chat/Conversational/Social
  * Personal/Subjective/Opinion-Based (un-searchable)
  * Vague/Ambiguous/Lacking Specificity
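The label scheme above can be made concrete with a small typed stub. The labels follow the spec, but the regex rules are toy heuristics for illustration only, not a trained classifier:

```typescript
// Output labels follow the spec above; the regex rules are toy
// heuristics for illustration, not a real classifier.
type Verdict =
  | "Good Google Search Query"
  | "Bad: Chit-Chat/Conversational/Social"
  | "Bad: Personal/Subjective/Opinion-Based"
  | "Bad: Vague/Ambiguous/Lacking Specificity";

function classifyQuery(query: string): Verdict {
  const q = query.trim().toLowerCase();
  // Social openers are conversation, not search.
  if (/^(hi|hello|thanks|how are you)\b/.test(q))
    return "Bad: Chit-Chat/Conversational/Social";
  // Requests for opinions have no objective answer to retrieve.
  if (/\b(should i|do you think|favorite)\b/.test(q))
    return "Bad: Personal/Subjective/Opinion-Based";
  // Single-word queries rarely carry enough intent.
  if (q.split(/\s+/).filter(Boolean).length < 2)
    return "Bad: Vague/Ambiguous/Lacking Specificity";
  return "Good Google Search Query";
}

console.log(classifyQuery("how to center a div in css"));
// "Good Google Search Query"
```

In practice these heuristic branches would be replaced by the LLM classification this prompt specifies; the stub only pins down the input/output contract.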
```ts
import axios from "axios";
import { useOptimisticMutation } from "./useOptimisticMutation.ts";

type Response = boolean;
type Error = unknown;
type MutationVariables = { itemId: string };
type Items = { id: string; name: string }[];
type Likes = { itemId: string }[];
type History = { type: string }[];
```
```tsx
import React from "react";
import {
  SafeAreaView,
  ScrollView,
  View,
  Image,
  StyleSheet,
  Text,
  useWindowDimensions,
  ImageSourcePropType,
} from "react-native"; // completing the truncated import: all of these are react-native exports
```
I made this example to show how to use the Next.js router for a 100% SPA (no JS server) app.

You use the Next.js router as usual, but don't define getStaticProps and friends. Instead, you do client-only fetching with swr, react-query, or similar.

You can generate an HTML fallback for the page if there's something meaningful to show before you "know" the params. (Remember, HTML is static, so it can't respond to a dynamic query, but it can differ per route.)
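The "fetch only after you know the params" rule above can be sketched as a small key helper (the `/items` path and the helper name are illustrative). swr skips fetching entirely when its key is null, so the helper returns null until the router has hydrated and reports `isReady`:

```typescript
// swr treats a null key as "don't fetch yet". On a statically exported
// page, dynamic route params are unknown until client-side hydration,
// so keep the key null until the router is ready.
function itemKey(isReady: boolean, id: string | undefined): string | null {
  return isReady && typeof id === "string" ? `/items/${id}` : null;
}

console.log(itemKey(false, undefined)); // null  (before hydration: no fetch)
console.log(itemKey(true, "42")); // "/items/42"
```

In a page component this would be used roughly as `useSWR(itemKey(router.isReady, router.query.id as string), fetcher)`; `router.isReady` and null-key skipping are documented Next.js/swr behavior, while the helper itself is hypothetical.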
Don't like Next? Here's how to do the same in Gatsby.
```jsx
// If the user has been on the page for over 12 hours, the next link
// click will do a full page transition to get new code
import React, { useState, useEffect } from "react";
import { BrowserRouter, useLocation } from "react-router-dom";

let hour = 3600000;

export default function StaleAppRouter(props) {
  let [isStale, setIsStale] = useState(false);
  // Flag the app as stale after 12 hours; a child component can then
  // watch useLocation and force a full page load on the next navigation.
  useEffect(() => {
    let timer = setTimeout(() => setIsStale(true), 12 * hour);
    return () => clearTimeout(timer);
  }, []);
  return <BrowserRouter>{props.children}</BrowserRouter>;
}
```