If you've ever seen Kysely return the error below, this post is for you.
☠️ Error: corrupted migrations: previously executed migration
Traditional migration tools like golang-migrate and Kysely's built-in migrator use a timestamp-based approach with significant limitations that can cause deployment failures and operational headaches. They record progress in a single bookkeeping table (kysely_migration for Kysely, schema_migrations for golang-migrate) and assume migrations run in strictly increasing version order, rather than tracking an independent history of what was applied. In golang-migrate's case, this means:
- The database stores only one number (e.g., 20240112120100)
- Migrations are applied sequentially by increasing version number
- Once at version X, all migrations < X are assumed to have been applied
Consider this common scenario:
- Developer A creates migration `20240112_1200_add_users.sql` on their feature branch
- Developer B creates migration `20240112_1201_add_products.sql` on their feature branch
- Developer B merges first, and production runs the 12:01pm migration
- Developer A merges second, but their 12:00pm migration will be skipped because the database is already at version 12:01pm
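The failure mode can be sketched in a few lines of TypeScript. This toy `applyPending` function mimics the single-version bookkeeping described above; it is an illustration, not the actual kysely or golang-migrate code:

```typescript
// Toy model of single-version migration tracking (illustrative only,
// not the real kysely or golang-migrate implementation).
type Migration = { version: string; name: string };

// The database remembers just one value: the highest applied version,
// like golang-migrate's schema_migrations table.
function applyPending(
  dbVersion: string,
  directory: Migration[],
): { applied: string[]; newVersion: string } {
  const applied: string[] = [];
  let current = dbVersion;
  const sorted = [...directory].sort((a, b) => a.version.localeCompare(b.version));
  for (const m of sorted) {
    // Anything at or below the recorded version is assumed already applied.
    if (m.version <= current) continue;
    applied.push(m.name);
    current = m.version;
  }
  return { applied, newVersion: current };
}

// Developer B's 12:01 migration merges and deploys first:
const afterB = applyPending("0", [{ version: "20240112_1201", name: "add_products" }]);
// Developer A's 12:00 migration merges later and is silently skipped:
const afterA = applyPending(afterB.newVersion, [
  { version: "20240112_1200", name: "add_users" },
  { version: "20240112_1201", name: "add_products" },
]);
console.log(afterA.applied); // [] -- add_users never runs
```

No error is raised: from the tool's perspective, everything below the current version has already happened.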
When out-of-order migrations occur, your options are all problematic:
- Rename the migration to a later timestamp (breaks reproducibility)
  - In Kysely, this will cause the aforementioned "corrupted migrations" error.
- Force the version back using `migrate force` (risky in production)
- Manually apply the skipped migration (error-prone, breaks automation)
These failure modes are structural:
- Migration order depends on arbitrary timestamps from developers' machines
- Parallel development leads to unpredictable deployment sequences
- No built-in mechanism to detect or prevent out-of-order issues until deployment fails
Atlas addresses these fundamental problems through two different approaches: improved versioned migrations and declarative schema management.
Atlas uses an `atlas.sum` file, a checksum manifest of the migration directory, that:
- Creates automatic merge conflicts when teams add migrations in parallel
- Forces developers to resolve ordering before merging
- Ensures linear, deterministic migration history
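For example, an `atlas.sum` file records a checksum of the whole directory followed by one line per migration file; the hashes below are made-up placeholders:

```
h1:DIRECTORY_CHECKSUM_PLACEHOLDER=
20240112_1200_add_users.sql h1:FILE_CHECKSUM_PLACEHOLDER=
20240112_1201_add_products.sql h1:FILE_CHECKSUM_PLACEHOLDER=
```

When two branches each add a migration, both modify the top-level checksum and append their own entry, so git surfaces a merge conflict instead of silently interleaving files.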
Atlas also provides tooling for when strict ordering gets in the way:
- Development: the `--exec-order non-linear` flag for local flexibility
- Production: `atlas migrate rebase` to safely reorder migrations
- CI integration: automatic validation that migration directories are replayable
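As a sketch of the commands involved (directory paths, URLs, and the version argument are placeholders; consult `atlas migrate --help` for the exact flags in your version):

```shell
# Local development: tolerate out-of-order files while branches are in flight
atlas migrate apply --dir file://migrations \
  -u "postgres://localhost:5432/dev" --exec-order non-linear

# Before merging: move your migration to the head of the directory
atlas migrate rebase 20240112120000 --dir file://migrations

# CI: verify the directory and atlas.sum are consistent and replayable
atlas migrate validate --dir file://migrations
```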
Instead of writing sequential migration files, you can:
- Define desired state in Atlas HCL
- Atlas calculates the diff automatically
- Atlas generates safe migrations with built-in testing
This completely eliminates the timestamp problem because there's no concept of migration ordering - just current state → desired state.
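To make the idea concrete, here is a toy TypeScript sketch of state diffing. The `Schema` shape and `diff` function are illustrative inventions, not Atlas's actual diff engine:

```typescript
// Toy sketch of declarative diffing: compare desired state with current
// state and emit a migration plan. Illustrative only, not Atlas's engine.
type Schema = Record<string, string[]>; // table name -> column names

function diff(current: Schema, desired: Schema): string[] {
  const plan: string[] = [];
  for (const [table, cols] of Object.entries(desired)) {
    if (!(table in current)) {
      plan.push(`CREATE TABLE ${table} (${cols.join(", ")})`);
    } else {
      for (const col of cols) {
        if (!current[table].includes(col)) {
          plan.push(`ALTER TABLE ${table} ADD COLUMN ${col}`);
        }
      }
    }
  }
  for (const table of Object.keys(current)) {
    if (!(table in desired)) plan.push(`DROP TABLE ${table}`);
  }
  return plan;
}

// The plan depends only on the two states; there are no version numbers,
// so there is nothing to apply out of order.
console.log(
  diff(
    { users: ["id", "email"] },
    { users: ["id", "email", "full_name"], products: ["id", "sku"] },
  ),
);
```

Two developers who merge in either order converge on the same desired state, because the plan is recomputed from scratch each time.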
Here's a simple e-commerce schema showcasing Atlas features:

```hcl
schema "public" {
  comment = "E-commerce application schema"
}

table "users" {
  schema  = schema.public
  comment = "User accounts for the platform"

  column "id" {
    type    = uuid
    default = sql("gen_random_uuid()")
    comment = "Unique user identifier"
  }
  column "email" {
    type = varchar(255)
    null = false
  }
  column "full_name" {
    type = text
    null = false
  }
  column "created_at" {
    type    = timestamptz
    default = sql("now()")
  }
  column "active_period" {
    type    = tstzrange # PostgreSQL range type (Pro feature)
    comment = "Period when user account is active"
  }

  primary_key {
    columns = [column.id]
  }
  unique "unique_email" {
    columns = [column.email]
  }
  index "idx_users_created" {
    columns = [column.created_at]
    type    = BTREE
  }
  check "valid_email" {
    expr = "email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$'"
  }
}

table "products" {
  schema  = schema.public
  comment = "Product catalog"

  column "id" {
    type    = uuid
    default = sql("gen_random_uuid()")
  }
  column "sku" {
    type = varchar(50)
    null = false
  }
  column "name" {
    type = text
    null = false
  }
  column "price" {
    type = decimal(10,2)
    null = false
  }
  column "discounted_price" {
    type    = decimal(10,2)
    as      = "price * 0.9" # Generated column
    as_type = STORED
  }
  column "stock_quantity" {
    type    = integer
    default = 0
  }

  primary_key {
    columns = [column.id]
  }
  unique "unique_sku" {
    columns = [column.sku]
  }
  index "idx_products_name" {
    type = GIN # Full-text search; GIN needs a tsvector (or pg_trgm) operator class
    on {
      expr = "to_tsvector('english', name)"
    }
  }
  check "positive_price" {
    expr = "price > 0"
  }
  check "valid_stock" {
    expr = "stock_quantity >= 0"
  }
}

table "orders" {
  schema  = schema.public
  comment = "Customer orders"

  column "id" {
    type    = uuid
    default = sql("gen_random_uuid()")
  }
  column "user_id" {
    type = uuid
    null = false
  }
  column "product_id" {
    type = uuid
    null = false
  }
  column "quantity" {
    type = integer
    null = false
  }
  column "unit_price" {
    # Captured at order time: a generated column cannot run a subquery
    # against the products table.
    type = decimal(10,2)
    null = false
  }
  column "order_date" {
    type    = timestamptz
    default = sql("now()")
  }
  column "status" {
    type    = varchar(20)
    default = "pending"
  }
  column "total_amount" {
    type    = decimal(10,2)
    as      = "quantity * unit_price"
    as_type = STORED # PostgreSQL supports only STORED generated columns
  }

  primary_key {
    columns = [column.id]
  }
  foreign_key "fk_user" {
    columns     = [column.user_id]
    ref_columns = [table.users.column.id]
    on_delete   = CASCADE
    on_update   = CASCADE
  }
  foreign_key "fk_product" {
    columns     = [column.product_id]
    ref_columns = [table.products.column.id]
    on_delete   = RESTRICT
    on_update   = CASCADE
  }
  index "idx_orders_user_date" {
    columns = [column.user_id, column.order_date]
    type    = BTREE
  }
  check "positive_quantity" {
    expr = "quantity > 0"
  }
  check "valid_status" {
    expr = "status IN ('pending', 'processing', 'shipped', 'delivered', 'cancelled')"
  }
}
```

This schema demonstrates:
- UUID primary keys with auto-generation
- Range types for time periods (Pro feature)
- Generated columns (STORED, the only kind PostgreSQL supports)
- Check constraints for data validation
- Foreign keys with different cascade behaviors
- Multiple index types (BTREE, GIN)
- Unique constraints on business keys
- Comments for documentation
- Complex expressions in constraints
Atlas will automatically handle creating, modifying, or dropping these objects based on your desired state, eliminating migration ordering issues entirely.
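Applying the schema is a single declarative command. As a sketch (the connection URL and file name are placeholders):

```shell
# Preview the plan without touching the database
atlas schema apply \
  -u "postgres://app:secret@localhost:5432/shop?sslmode=disable" \
  --to file://schema.hcl \
  --dry-run

# Apply for real once the plan looks right
atlas schema apply \
  -u "postgres://app:secret@localhost:5432/shop?sslmode=disable" \
  --to file://schema.hcl
```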
Beyond tables, Atlas understands advanced database objects:
- Views, Materialized Views
- Triggers, Functions, Procedures
- Partitions, Policies
- Composite/Range Types (`tstzrange`)
- Foreign Servers, Extensions
On the operations side, it provides:
- Pre-migration linting and analysis
- Dry-run capabilities
- Schema drift detection
- Automatic rollback planning
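These safeguards are all CLI-driven. A hedged sketch, with placeholder URLs and directories:

```shell
# Lint the newest migration for destructive or unsafe changes
atlas migrate lint --dir file://migrations \
  --dev-url "docker://postgres/16/dev" --latest 1

# Detect drift between the live database and the declared schema
atlas schema diff \
  --from "postgres://app:secret@localhost:5432/shop?sslmode=disable" \
  --to file://schema.hcl \
  --dev-url "docker://postgres/16/dev"
```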
And for developer experience:
- Schema as code with proper version control
- Variable injection for environment-specific configs
- Reusable schema components
- Editor support (VSCode, JetBrains, Neovim)
Plans:
- Starter (Free): Basic features, limited databases
- Pro ($9/user/month): Full features, all databases, business support
- Enterprise: Custom pricing, unlimited seats, SLA

Pipeline add-ons:
- $59/project/month: 2 target databases included
- Additional databases can be added
- $39/database/month: Changelog, drift detection, alerts
For teams that can afford the Pro tier ($9/user/month + optional pipeline costs), Atlas provides:
- Immediate ROI through prevented deployment failures
- Developer productivity with declarative schemas
- Operational safety with built-in testing and linting
- Future-proofing with advanced database features
The cost of a single production migration failure (rollback, hotfix, downtime) often exceeds the annual Atlas subscription cost. For teams doing serious database development, Atlas transforms schema management from a liability into a predictable, safe process.
Teams can adopt Atlas incrementally:
- Start with Atlas for new projects
- Import existing schemas into Atlas HCL
- Use Atlas to generate future migrations
- Gradually move to fully declarative workflow
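As a rough sketch of those steps (URLs, directories, and migration names are placeholders):

```shell
# Import the existing schema into Atlas HCL
atlas schema inspect -u "postgres://app:secret@localhost:5432/shop" > schema.hcl

# Generate the next migration by diffing against the declared state
atlas migrate diff add_feature --dir file://migrations \
  --to file://schema.hcl --dev-url "docker://postgres/16/dev"

# Apply pending migrations as usual while you transition
atlas migrate apply --dir file://migrations \
  -u "postgres://app:secret@localhost:5432/shop"
```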
The investment in Atlas pays for itself through reduced operational incidents, faster development cycles, and eliminated "migration surprise" deployments.