This guide maps file types found across the repos to practical deployment, schema checks, data checks, unit tests, and integration tests. It is written for Windows + PowerShell workflows.
- Run fast static checks (lint/schema/type) on every PR.
- Run data and integration checks on merge/main or release branches.
- Keep unit tests isolated with mocks/stubs for network, DB, and cloud calls.
- Use plan/preview first for infrastructure changes.
- Fail fast in CI using non-zero exit codes.
| File Type(s) | How to Deploy | Verify Schema | Verify Data | Unit Test | Integration Test | Key Tools |
|---|---|---|---|---|---|---|
| .yml, .yaml | Apply via CI pipelines and environment-specific variable groups | Lint structure and required keys | Validate resolved values/paths/secret references | N/A (config-only) | Execute pipeline template validation/dry-run in test project | yamllint, yq, PowerShell key assertions |
| .json, .platform, .sample | Package with app artifact or infra payloads | Parse + JSON Schema validation | Assert required fields are non-empty and valid | N/A (config-only) | Startup smoke test loads config without errors | jq, ajv-cli, ConvertFrom-Json |
| .env | Do not deploy raw files; map to secret stores and inject at runtime | Validate required key set and formats | Reject placeholders (CHANGEME, blank values) | N/A | Start app/container using injected secrets in test env | dotenv-linter, PowerShell regex checks |
| .sql | Versioned migrations via SQL deployment stage | SQL lint + parse checks | Seeded DB assertions (row counts, keys, null rules) | Test SQL wrappers and query builders; use pgTAP for PostgreSQL assertions | Run migration + rollback test against ephemeral DB | sqlfluff, Invoke-Sqlcmd, tSQLt/pgTAP (where applicable) |
| .ps1 | Deploy signed scripts/modules; run via controlled automation accounts | Static script analysis and metadata checks | Validate generated outputs and side effects in temp paths | Pester tests with mocked cmdlets | End-to-end runbook execution in test subscription/resource group | PSScriptAnalyzer, Pester |
| .kql | Deploy query/dashboard artifacts via workspace APIs | Parse/compile query text | Validate query outputs on seeded test datasets | Test KQL construction helpers in host language | Execute KQL against test Eventhouse/Kusto cluster | Fabric/ADX APIs, PowerShell wrappers |
| .graphql | Deploy schema/query artifacts with backend release | GraphQL schema lint/introspection | Validate result shape and required fields | Test resolver/query helper functions | Contract test against test GraphQL endpoint | graphql-cli, spectaql/schema tooling |
| .py | Build wheel/container; deploy via service release pipeline | Type/lint/import checks | Fixture-based output assertions and contract tests | pytest with mocks for DB/HTTP/filesystem and architecture boundary tests | pytest -m integration against test DB/services, including Cosmos DB emulator/test account | pytest, pytest-mock, unittest.mock, ruff, mypy, pytest-archon |
| .ts, .tsx, .js, .jsx | Build static bundle or container and promote through envs | Lint + type checks | Validate API contract fixtures and component states | Unit/component tests with mocked modules/network | UI/API integration in test environment | eslint, tsc --noEmit, vitest/jest, playwright/cypress |
| .java, .cs | Build artifacts and deploy with service pipeline/container image | Compile + static analysis | Serialization and mapping checks on fixtures | JUnit/xUnit tests with mock frameworks | Service integration using ephemeral dependencies | maven/gradle, dotnet test, Mockito, Moq, Testcontainers |
| .csv, .parquet, .geojson | Publish to Lakehouse/warehouse versioned paths | Enforce schema contract (columns/types/order) | Data quality checks (nulls, uniqueness, range checks, geometry validity) | Unit test transform functions with tiny fixtures | Execute ingest pipeline on representative sample partition | pandera, great_expectations, duckdb, pyarrow, ogrinfo |
| .ipynb | Run parameterized notebooks in orchestrated jobs | Validate notebook metadata/kernel/cell structure | Assert output tables/metrics meet thresholds | Unit-test extracted Python modules used by notebooks | Execute notebooks end-to-end on test data | papermill, nbqa, jupyter nbconvert --execute, pytest |
| .pdf, .png, .zip | Publish as immutable release artifacts | Manifest/signature/hash checks | Size/hash/allowlist validation | N/A (binary) | Artifact download/open/readability smoke tests | Get-FileHash, Expand-Archive, release artifact checks |
| .pbir, .pbism, .tmdl, .pbix | Deploy semantic/report artifacts via Fabric/Power BI deployment pipeline | Validate model metadata, bindings, and model rules via notebooks | Refresh datasets and verify key measures/queries and memory profile | Test model scripts/rules where supported | End-to-end refresh + report render smoke tests + notebook analyzers | Fabric/Power BI REST APIs, pbi-tools, XMLA scripts, service notebooks |
| .bicep (if used) | Run plan-like preview, then staged deploy | Compile and lint templates | Assert planned resources (SKU, region, tags, names) | Module-level assertions/scripts | Deploy to ephemeral RG/subscription and run smoke tests | bicep build, az deployment what-if, checkov |
| .tf, .tfvars (if used) | Plan in PR, apply in gated release | Format/validate/providers/modules checks | Inspect plan for policy/tag/naming compliance | Module tests/Terratest where implemented | Apply to test workspace and verify provisioned resources | terraform fmt -check, terraform validate, terraform plan, tfsec, checkov |
| .md, .txt (repo content) | Publish as docs artifacts/site content | Markdown and link linting | Check generated docs links and anchors | Spelling and glossary validation | Docs site build and link crawl in preview env | markdownlint-cli2, lychee, cspell |
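The config rows above call for asserting that required fields are present and non-empty. A minimal Python sketch of such a check, assuming hypothetical key names (`apiBaseUrl`, `environment`, `logLevel`) that would come from your own config contract:

```python
import json

# Hypothetical required keys for a settings file; adjust to your contract.
REQUIRED_KEYS = ["apiBaseUrl", "environment", "logLevel"]

def check_config(text: str) -> list[str]:
    """Return a list of contract violations for a JSON config document."""
    errors = []
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for key in REQUIRED_KEYS:
        value = data.get(key)
        if value is None:
            errors.append(f"missing required key: {key}")
        elif isinstance(value, str) and not value.strip():
            errors.append(f"empty value for key: {key}")
    return errors

good = '{"apiBaseUrl": "https://example.test", "environment": "dev", "logLevel": "info"}'
bad = '{"apiBaseUrl": "", "environment": "dev"}'
print(check_config(good))  # []
print(check_config(bad))   # flags the empty and missing keys
```

In CI, exiting non-zero when the returned list is non-empty gives the fail-fast behavior described in the principles above.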
- Use `yamllint` for all `.yml`/`.yaml` files before pipeline execution.
- Recommended command: `yamllint .`
- Use `jq` to parse and validate `.json` quickly in CI.
- Recommended commands:
  - `jq -e . path/to/file.json`
  - `jq -e '.requiredKey != null' path/to/file.json`
- Use `pytest` for Python unit and integration layers.
- Use `pytest-mock` or `unittest.mock` to isolate side effects.
- Typical split:
  - `pytest -m "not integration" -q`
  - `pytest -m integration -q`
- Use architecture tests to enforce layering boundaries (for example, `api -> services -> data`, but not `api -> data`).
- Keep boundary rules in test code and fail PRs when forbidden imports or dependency directions are introduced.
- Complement `pytest-archon` with tools like `import-linter`/`deptry` when you need package-level dependency governance.
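Boundary rules like these can live directly in test code. A minimal stdlib sketch of the idea (not the `pytest-archon` API) that scans source for forbidden cross-layer imports, assuming hypothetical layer names `api` and `data`:

```python
import ast

# Hypothetical layering rule: "api" may import "services", but never "data".
FORBIDDEN = {"api": {"data"}}

def forbidden_imports(module_layer: str, source: str) -> list[str]:
    """Return names imported by `source` that the given layer must not use."""
    banned = FORBIDDEN.get(module_layer, set())
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        hits += [n for n in names if n.split(".")[0] in banned]
    return hits

# A pytest-style assertion over a snippet from the (hypothetical) api layer:
snippet = "import services\nfrom data import repository\n"
print(forbidden_imports("api", snippet))  # ["data"]
```

A real suite would walk the package tree and assert the result is empty for every module, failing the PR otherwise.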
- Use `cspell` to catch misspellings in docs, configs, and user-facing messages.
- Maintain a project dictionary for domain-specific terms to minimize false positives.
- Recommended commands:
  - `cspell "**/*.{md,txt,yml,yaml,json,ts,tsx,py}" --no-progress`
  - `cspell "**/*" --config .cspell.json`
- Use preview/plan checks before any deploy step.
- Bicep example:
  - `bicep build .\infra\main.bicep`
  - `az deployment group what-if --resource-group <rg> --template-file .\infra\main.bicep`
- Terraform example:
  - `terraform fmt -check`
  - `terraform validate`
  - `terraform plan -out tfplan`

Use Power BI/Fabric service notebooks to validate semantic model quality after deploying `.pbir`, `.pbism`, `.tmdl`, or `.pbix` changes.
- Open sample notebooks from the semantic model entry points in Power BI service.
- Run Best Practice Analyzer notebook to evaluate model design/performance rules.
- Run Model Memory Analyzer notebook to inspect table/column/partition memory usage.
- Record key findings (rule violations, high-memory objects) as release gates.
- Re-run after fixes and compare notebook outputs to ensure regression-free improvements.
- Model quality rules: Performance, DAX Expressions, Error Prevention, Maintenance, and Formatting categories.
- Memory hotspots: oversized tables/columns, relationships, hierarchies, partitions.
- Data correctness: sample DAX/query outputs for critical measures and dimensions after refresh.
- User must have Build permission on the semantic model.
- Workspace must be on Fabric capacity, and the user should have the Contributor role for notebook creation.
- Avoid semantic model names ending with trailing whitespace (known notebook execution limitation).
- Use deployment pipeline stage order: deploy model -> refresh -> run notebook analyzers -> evaluate thresholds.
- Persist notebook outputs (JSON/CSV summary) to artifacts and fail CI/CD when configured thresholds are exceeded.
- Keep baseline snapshots for BPA violations and memory metrics to detect regressions over time.
Reference: https://learn.microsoft.com/en-us/power-bi/transform-model/service-notebooks
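To fail CI/CD on thresholds as suggested above, a small gate script can read the persisted notebook summary and report violations. A hedged sketch, assuming a hypothetical summary schema (`bpa_error_count`, `model_size_mb`) rather than the actual notebook output format:

```python
import json

# Hypothetical thresholds and field names; illustrative only, not the real
# analyzer notebook output schema.
THRESHOLDS = {"bpa_error_count": 0, "model_size_mb": 1024}

def evaluate_gate(summary: dict) -> list[str]:
    """Return threshold violations found in a notebook output summary."""
    failures = []
    for metric, limit in THRESHOLDS.items():
        value = summary.get(metric)
        if value is None:
            failures.append(f"{metric}: missing from summary")
        elif value > limit:
            failures.append(f"{metric}: {value} exceeds limit {limit}")
    return failures

# In CI, a wrapper would load the persisted artifact and exit non-zero on any
# violation, e.g. sys.exit(1 if evaluate_gate(json.loads(text)) else 0).
report = json.loads('{"bpa_error_count": 2, "model_size_mb": 512}')
print(evaluate_gate(report))  # ["bpa_error_count: 2 exceeds limit 0"]
```

Keeping the thresholds in config alongside the baseline snapshots makes regressions against the baseline easy to diff.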
- Install and enable `pgTAP` in test databases, then run SQL-based unit tests as part of migration validation.
- Use `pg_prove` in CI to execute tests under `tests/sql/` (or your chosen folder).
- Typical checks: table existence, constraint behavior, function outputs, view correctness, and permission grants.
- Suggested flow: apply migrations -> run `pgTAP` tests -> run app integration tests -> optional rollback check.
- Unit tests: mock Cosmos DB SDK calls (`ContainerProxy`, `query_items`, `upsert_item`) with `pytest-mock`.
- Integration tests: run against the Cosmos DB Emulator (local) or a dedicated non-prod account/container.
- Validate partition key behavior, RU-sensitive query paths, consistency expectations, and TTL/index policy behavior.
- Add idempotent setup/teardown fixtures and isolate data by test run id to avoid cross-test contamination.
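A minimal sketch of the unit-test side, assuming a hypothetical repository helper and using `unittest.mock` to stand in for the Cosmos SDK container so no network call is made:

```python
from unittest.mock import MagicMock

# Hypothetical repository function wrapping a Cosmos SDK container; in real
# code this would receive an azure.cosmos ContainerProxy.
def active_order_ids(container, customer_id: str) -> list[str]:
    query = "SELECT c.id FROM c WHERE c.customerId = @cid AND c.status = 'active'"
    items = container.query_items(
        query=query,
        parameters=[{"name": "@cid", "value": customer_id}],
        enable_cross_partition_query=False,
    )
    return [item["id"] for item in items]

# Unit test: a MagicMock stands in for the SDK, so no network is touched.
def test_active_order_ids():
    container = MagicMock()
    container.query_items.return_value = [{"id": "o1"}, {"id": "o2"}]
    assert active_order_ids(container, "c42") == ["o1", "o2"]
    # The partition-scoped query should be issued exactly once.
    container.query_items.assert_called_once()

test_active_order_ids()
```

The same helper is then exercised unmocked in the `pytest -m integration` layer against the emulator or test account.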
- Validate workspace role assignments (Viewer, Contributor, Member, Admin) for expected access paths.
- Validate semantic model Build permission explicitly for notebook-based model verification flows.
- Add negative tests (forbidden actions) to confirm least-privilege is enforced.
- Test permission propagation after deployment: artifact access, refresh ability, notebook creation, and API execution.
- In CI, run a post-deploy smoke suite using a restricted service principal identity in addition to elevated identities.
- `.py`, `.ps1`, `.sql`: `snake_case` file names with action-oriented intent (for example, `load_transactions.py`, `create_views.sql`).
- `.ts`, `.tsx`, `.js`, `.jsx`: app conventions (`camelCase` utilities, `PascalCase` React components) kept consistent per repo.
- `.yml`/`.yaml`, `.json`: environment- and purpose-qualified names (for example, `pipeline.pr.yml`, `settings.prod.json`).
- `.ipynb`: include purpose and stage in the name (for example, `qa_evaluation_notebook.ipynb`, `ingest_stage01.ipynb`).
- `.pbir`, `.pbism`, `.tmdl`, `.pbix`: include subject area + environment suffix where relevant to avoid deployment ambiguity.
- Keep naming rules documented in a single source (README/CONTRIBUTING) and enforce with lint checks in CI.
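Enforcing the snake_case rule in CI can be as small as a regex check. A sketch covering only the `.py`/`.ps1`/`.sql` rule above; other extensions pass through to their own rules:

```python
import re

# Hypothetical rule mirroring the conventions above; extend per repo.
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*\.(py|ps1|sql)$")

def violates_naming(filename: str) -> bool:
    """True when a .py/.ps1/.sql file name is not snake_case."""
    if filename.rsplit(".", 1)[-1] not in {"py", "ps1", "sql"}:
        return False  # other extensions are governed by separate rules
    return SNAKE_CASE.match(filename) is None

print(violates_naming("load_transactions.py"))  # False: compliant
print(violates_naming("LoadTransactions.py"))   # True: PascalCase rejected
```

A CI step would walk the repo, collect violations, and exit non-zero when any are found.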
- `lint-yaml-config` – run `yamllint` for pipeline/config files.
- `validate-json-contracts` – run `jq` parse checks + JSON Schema validation.
- `check-env-contract` – ensure required `.env` keys exist and are non-placeholder.
- `sql-lint-migration-smoke` – lint SQL and run migration smoke tests.
- `powershell-unit-tests` – run `Pester` with mocked external commands.
- `python-unit-tests` – run `pytest` unit tests + coverage.
- `python-integration-tests` – run `pytest -m integration` against test dependencies.
- `frontend-quality-gate` – lint, typecheck, and unit tests for TS/JS.
- `data-contract-quality` – schema + quality checks for CSV/Parquet/GeoJSON.
- `notebook-smoke` – execute selected notebooks with `papermill`.
- `iac-bicep-validate` – `bicep build` + `az deployment what-if`.
- `iac-terraform-validate` – `terraform fmt/validate/plan`.
- `postgres-pgtap-tests` – run `pg_prove` test packs after migration in ephemeral PostgreSQL.
- `cosmos-integration-tests` – run integration tests against the Cosmos emulator/test account with isolated containers.
- `fabric-permission-smoke` – verify Build/workspace role permissions with positive and negative checks.
- `python-architecture-boundaries` – enforce import/layer boundaries via `pytest-archon` (and optional `import-linter`).
- `spelling-quality-gate` – run `cspell` on docs/config/code comments.
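The `check-env-contract` job can be sketched in a few lines of Python, assuming illustrative key names (`DATABASE_URL`, `API_KEY`) and placeholder markers:

```python
# Minimal sketch of the check-env-contract idea: required keys must exist and
# must not carry placeholder values. Key names here are illustrative only.
REQUIRED = {"DATABASE_URL", "API_KEY"}
PLACEHOLDERS = {"", "CHANGEME", "TODO"}

def env_violations(text: str) -> list[str]:
    """Parse KEY=VALUE lines and report missing or placeholder keys."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    problems = []
    for key in sorted(REQUIRED):
        if key not in values:
            problems.append(f"{key}: missing")
        elif values[key] in PLACEHOLDERS:
            problems.append(f"{key}: placeholder value")
    return problems

sample = "DATABASE_URL=postgres://test\nAPI_KEY=CHANGEME\n"
print(env_violations(sample))  # ["API_KEY: placeholder value"]
```

Because the raw file never deploys, this check runs against the repo's `.env` templates while real values are injected from the secret store at runtime.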
This reference maps each tool/API mentioned in file-type-deploy-verify-test-guide.md to official docs/help, CLI/API references, GitHub repositories, and Microsoft Learn pages where available.
- For PowerShell-native commands, use `Get-Help <Command> -Full` for offline/local help.
- For CI/CD, pin tool versions in lockfiles or pipeline setup steps to avoid drift.
- For Fabric/Power BI verification, combine REST/XMLA checks with service notebook analyzers for semantic model quality gates.