dbt-cloud-repo/ # github.com/myorg/dbt-cloud-repo
├── .github/
│ └── workflows/
│ └── dbt_cloud_deploy.yml # dbt Cloud CI/CD
Comparison of five Claude Code sessions that received the same prompt, with varying skill configurations and prompt refinements.
All sessions received essentially the same base prompt: create a demo Dagster project with Fivetran → dbt → Snowflake → PowerBI, Alteryx, Domo (migrating off) → Census/Fivetran Activations, with event-driven sensors and observe/orchestrate modes.
skills-10 received an enhanced prompt with additional explicit instructions: "Make sure any component that connects to an external system is using a state-backed component, uses a local cache and writes a set of mock assets using that cache, and that when it executes it logs a sample message and metadata instead of connecting to the external system. When modifying a component that exists, ALWAYS subclass, do not create a custom component."
| Aspect | Project 1 (testing-new-skills) | Project 2 (testing-new-skills-2) |
|---|---|---|
| dbt project location | Inside `src/.../defs/dbt_project/` | Top-level `dbt_project/` |
| dbt mart models | account_360, pipeline_summary, lead_conversion_funnel | fct_sales_pipeline, dim_account_health, fct_lead_conversion |
Analysis Date: March 2, 2026
Between Dagster 1.6 and the current release (1.12.17, as of February 27, 2026), the documentation has undergone a fundamental transformation. This is not just a cosmetic refresh but a philosophical and structural overhaul, reflecting Dagster's evolution from a flexible orchestration framework into a highly opinionated data platform with prescribed workflows, new abstractions (Components, the dg CLI), and a dramatically narrower "happy path" for new users.
```python
# ---------------------------------------------------------------------------
# 1. Upstream asset — plain @asset
# ---------------------------------------------------------------------------
@dg.asset(
    group_name="pipeline",
    kinds={"python"},
    tags={"domain": "orders"},
)
def raw_orders(context: dg.OpExecutionContext) -> list[dict]:
```
```python
#!/usr/bin/env python3
# /// script
# dependencies = [
#     "requests<3",
# ]
# ///
import argparse
import sys
import requests
```
Creating a new persistent deployment requires coordinating two systems: the Dagster+ API and your Kubernetes agent configuration.
The deployment must exist in Dagster+ as a logical entity. This is done via the GraphQL API:
The request can be made with an HTTP client such as `httpx`.

This document explains how to limit concurrent executions of individual assets using Dagster's pool-based concurrency system.
By default, Dagster allows unlimited concurrent executions of the same asset across multiple runs. This can cause issues when:
- An asset writes to a shared resource (database table, file, API)
- An asset has rate limits or resource constraints
- You want to prevent duplicate work
```python
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "requests",
# ]
# ///
"""
Query Dagster's GraphQL API to get monthly credits (ops executed + asset materializations).
"""
```