ESCAPER is a pragmatic, ECS-flavored way to structure software—especially interactive apps, simulations, and data-oriented workloads.
It stands for:
- E — Entity: a stable identifier (an “ID”) for a thing in your world.
- S — System: logic that runs over matching data (usually in batches).
- C — Component: plain data attached to entities (no behavior required).
- A — App: the orchestration layer that wires everything together.
- P — Plugin: a reusable bundle of systems/resources/events/config.
- E — Event: typed messages for decoupled communication.
- R — Resource: shared state/config that isn’t naturally “per-entity”.
In Bevy terms: entities + components + systems are the core ECS; App and Plugin are composition; Event and Resource are structured communication and shared state.
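To make “stable identifier” concrete: one common representation of an entity ID is an index plus a generation counter, so a recycled slot never aliases a stale handle. Here is a minimal hand-rolled sketch of that idea (illustrative names, not Bevy’s actual internals):

```rust
// Minimal generational-index allocator: an "entity" is an index plus a
// generation, so a despawned handle stops validating even if its slot is reused.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct Entity {
    index: u32,
    generation: u32,
}

#[derive(Default)]
struct Entities {
    generations: Vec<u32>, // current generation per slot
    free: Vec<u32>,        // recycled slot indices
}

impl Entities {
    fn spawn(&mut self) -> Entity {
        if let Some(index) = self.free.pop() {
            // Reuse a slot at its current (already-bumped) generation.
            Entity { index, generation: self.generations[index as usize] }
        } else {
            self.generations.push(0);
            Entity { index: self.generations.len() as u32 - 1, generation: 0 }
        }
    }

    fn despawn(&mut self, e: Entity) {
        if self.is_alive(e) {
            self.generations[e.index as usize] += 1; // invalidate old handles
            self.free.push(e.index);
        }
    }

    fn is_alive(&self, e: Entity) -> bool {
        self.generations.get(e.index as usize) == Some(&e.generation)
    }
}
```

The payoff: systems can hold an `Entity` across frames and cheaply check whether it still refers to the same logical thing.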
Components let you express feature slices as data. Systems become small, focused transforms over that data. Plugins group related slices without forcing an inheritance tree or giant “manager” types.
Example: movement via Position + Velocity + Time
```rust
use bevy::prelude::*;

#[derive(Component, Copy, Clone, Debug)]
struct Position(Vec3);

#[derive(Component, Copy, Clone, Debug)]
struct Velocity(Vec3);

fn movement_system(mut q: Query<(&mut Position, &Velocity)>, time: Res<Time>) {
    let dt = time.delta_seconds();
    for (mut pos, vel) in &mut q {
        pos.0 += vel.0 * dt;
    }
}
```

Key point: the movement system doesn’t “own” entities. It simply operates on any entity that has the relevant components.
Entities scale by adding/removing components, not by growing a deep type hierarchy. That makes it cheap to introduce new behavior:
- add `Health` to make something damageable
- add `AIController` to make something think
- remove `Velocity` to “freeze” without special-case logic
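Stripped of any engine, “behavior selected by data presence” can be sketched with one plain map per component type (the names here are hypothetical, not an API): the movement pass only touches entities present in both maps, so removing a velocity entry freezes an entity with no special-case logic.

```rust
use std::collections::HashMap;

type Entity = u32;

// One map per component type: presence of data selects behavior.
#[derive(Default)]
struct World {
    positions: HashMap<Entity, f32>,
    velocities: HashMap<Entity, f32>,
}

// The "system": iterate only entities that have BOTH components.
fn movement(world: &mut World, dt: f32) {
    for (e, v) in &world.velocities {
        if let Some(p) = world.positions.get_mut(e) {
            *p += v * dt;
        }
    }
}
```

An entity with a position but no velocity is simply never visited; deleting a component is the whole “freeze” feature.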
Example: spawning an entity and evolving it
```rust
use bevy::prelude::*;

#[derive(Component)]
struct Health(f32);

fn setup(mut commands: Commands) {
    let e = commands
        .spawn((
            Position(Vec3::ZERO),
            Velocity(Vec3::new(1.0, 0.0, 0.0)),
        ))
        .id();

    // Later you can “upgrade” the same entity with new capabilities:
    commands.entity(e).insert(Health(100.0));
}
```

Resources are great for true shared state: configuration, caches, global clocks, RNG seeds, asset registries, etc.
The main design pressure: if the state is conceptually per-instance, prefer a component. Resources are powerful but can turn into implicit global coupling if overused.
Example: shared gravity config as a resource
```rust
use bevy::prelude::*;

#[derive(Resource)]
struct Gravity(Vec3);

fn apply_gravity(mut q: Query<&mut Velocity>, gravity: Res<Gravity>, time: Res<Time>) {
    let dt = time.delta_seconds();
    for mut v in &mut q {
        v.0 += gravity.0 * dt;
    }
}
```

Events provide typed, buffered communication between systems. This keeps systems independent: producers don’t need to know who consumes an event (or whether anyone does).
Example: collision pipeline (detect → emit event → respond)
```rust
use bevy::prelude::*;

#[derive(Component, Copy, Clone)]
struct Collider {
    radius: f32,
}

#[derive(Event, Copy, Clone, Debug)]
struct CollisionEvent {
    a: Entity,
    b: Entity,
}

fn collision_detection_system(
    mut events: EventWriter<CollisionEvent>,
    q: Query<(Entity, &Position, &Collider)>,
) {
    // Naive O(n²) all-pairs check for clarity (fine for small counts).
    let bodies: Vec<(Entity, Vec3, f32)> = q
        .iter()
        .map(|(e, p, c)| (e, p.0, c.radius))
        .collect();

    for i in 0..bodies.len() {
        for j in (i + 1)..bodies.len() {
            let (a, pa, ra) = bodies[i];
            let (b, pb, rb) = bodies[j];
            let r = ra + rb;
            if pa.distance_squared(pb) <= r * r {
                events.send(CollisionEvent { a, b });
            }
        }
    }
}

fn collision_response_system(mut events: EventReader<CollisionEvent>) {
    for ev in events.read() {
        info!("collision: {:?} <-> {:?}", ev.a, ev.b);
    }
}
```

Notes:
- Events aren’t “async I/O”; they’re buffered messages within your app’s update loop.
- You can control ordering when you want deterministic pipelines.
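The “buffered, not async” point can be made concrete with a toy double-buffered queue, the general shape behind frame-scoped event systems (this is a sketch of the idea, not Bevy’s actual implementation, which also tracks per-reader cursors):

```rust
// Toy double-buffered event queue: events written this frame stay readable
// through the next frame's rotation, then are dropped.
struct Events<T> {
    current: Vec<T>,  // written this frame
    previous: Vec<T>, // written last frame
}

impl<T> Events<T> {
    fn new() -> Self {
        Self { current: Vec::new(), previous: Vec::new() }
    }

    fn send(&mut self, ev: T) {
        self.current.push(ev);
    }

    // Readers see the last two frames' worth of events, oldest first.
    fn read(&self) -> impl Iterator<Item = &T> {
        self.previous.iter().chain(self.current.iter())
    }

    // Called once per frame by the schedule: rotate the buffers.
    fn update(&mut self) {
        self.previous = std::mem::take(&mut self.current);
    }
}
```

The two-buffer rotation is what gives consumers a full frame to observe an event before it disappears, without any queue growing unboundedly.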
Your App is the integration point: register resources, events, systems, schedules, and plugins. Plugins make it easy to ship features as a unit.
Example: a plugin that bundles physics-ish behavior
```rust
use bevy::prelude::*;

pub struct PhysicsPlugin;

impl Plugin for PhysicsPlugin {
    fn build(&self, app: &mut App) {
        app.init_resource::<Gravity>()
            .add_event::<CollisionEvent>()
            .add_systems(Update, apply_gravity)
            .add_systems(Update, movement_system.after(apply_gravity))
            .add_systems(Update, collision_detection_system)
            .add_systems(Update, collision_response_system.after(collision_detection_system));
    }
}

impl Default for Gravity {
    fn default() -> Self {
        Gravity(Vec3::new(0.0, -9.81, 0.0))
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(PhysicsPlugin)
        .add_systems(Startup, setup)
        .run();
}
```

This scales well: each plugin owns its little slice; the app just composes slices.
A major perk of ECS-style design is that the scheduler can run systems in parallel when their data access doesn’t conflict. Data access patterns are explicit:
- `Query<&T>` is read access
- `Query<&mut T>` / `ResMut<T>` is write access
That explicitness enables real concurrency while staying maintainable.
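The scheduler’s rule of thumb can be pictured with a hand-rolled conflict check (a sketch of the principle, not Bevy’s actual algorithm): two systems may run in parallel only if no component is written by one and touched by the other.

```rust
#[derive(Copy, Clone, PartialEq, Eq)]
enum Access {
    Read,
    Write,
}

// Two systems conflict if they touch the same component and at least one
// of the touches is a write; read/read pairs are always safe to parallelize.
fn conflicts(a: &[(&str, Access)], b: &[(&str, Access)]) -> bool {
    a.iter().any(|(ca, ma)| {
        b.iter()
            .any(|(cb, mb)| ca == cb && (*ma == Access::Write || *mb == Access::Write))
    })
}
```

Because every system declares its access up front in its signature, this check needs no knowledge of what the systems actually do.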
A common performance win is struct-of-arrays (SoA) layout: keep each component type contiguous so tight loops are cache-friendly and easier to vectorize.
Many ECS engines already store components in SoA-like layouts internally, grouped by archetype. You usually don’t need to hand-roll storage. But for hotspot kernels (custom broadphases, particle sims, ML feature transforms), a purpose-built SoA can still be valuable.
Example: a manual SoA for a specialized kernel
```rust
use bevy::math::Vec3;

struct PlanarStorage {
    positions: Vec<Vec3>,
    velocities: Vec<Vec3>,
    radii: Vec<f32>,
}

// Tight math kernel over the SoA; often easy for the compiler to optimize.
fn integrate(storage: &mut PlanarStorage, dt: f32) {
    for i in 0..storage.positions.len() {
        storage.positions[i] += storage.velocities[i] * dt;
    }
}
```

Guideline: keep ECS as the source of truth, and use SoA caches as derived data when profiling says it matters.
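One way to sketch that guideline (illustrative types, not a library API): keep per-entity records as the authoritative store, rebuild a flat cache from them only when the source changed, and point the hot kernel at the cache.

```rust
// Authoritative per-entity store: any layout, owned by the "ECS side".
struct Body {
    x: f32,
    vx: f32,
}

// Derived SoA cache, rebuilt only when the source data changed.
#[derive(Default)]
struct SoaCache {
    xs: Vec<f32>,
    vxs: Vec<f32>,
    dirty: bool,
}

impl SoaCache {
    fn refresh(&mut self, bodies: &[Body]) {
        if self.dirty || self.xs.len() != bodies.len() {
            self.xs = bodies.iter().map(|b| b.x).collect();
            self.vxs = bodies.iter().map(|b| b.vx).collect();
            self.dirty = false;
        }
    }
}

// Hot kernel runs over the contiguous, cache-friendly arrays.
fn integrate_cache(cache: &mut SoaCache, dt: f32) {
    for i in 0..cache.xs.len() {
        cache.xs[i] += cache.vxs[i] * dt;
    }
}
```

The `dirty` flag is the contract: anything that mutates the authoritative store sets it, and the cache is never trusted as a second source of truth.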
ESCAPER maps cleanly onto simulation-heavy ML workflows:
- Entities: agents, sensors, objects, cameras, labels, annotations
- Components: state/features (pose, velocity, class id, material, noise params)
- Systems: step dynamics, render, domain randomization, label extraction
- Events: “frame captured”, “episode ended”, “anomaly detected”
- Resources: seed control, dataset config, global environment params
- Plugins: swappable environments, logging/export backends, instrumentation
This gives you:
- composable experiments (swap plugins, keep core app)
- fast iteration (add components to add features)
- clearer profiling (systems are natural measurement boundaries)
DI excels at wiring long-lived services (I/O, databases, network clients) and enforcing boundaries in classic “object graph” architectures. ESCAPER shifts the center of gravity:
- behavior is selected by data presence (components), not by constructor wiring
- coupling is reduced via queries + events, not service locators
- concurrency becomes more natural because access patterns are explicit
In practice, many teams do a hybrid:
- use ESCAPER (ECS) for the hot inner loop and dynamic world state
- use DI (or simple constructors) for outer-shell services and integration code
ESCAPER is a useful mental model for building complex systems that stay modular under change: compose behavior with components, express logic as systems, integrate with apps/plugins, communicate with events, and reserve resources for truly shared state.
If you design the data well, the architecture tends to “click” into place—and performance follows naturally.