A deep technical guide to how Apache Pekko actors work under the hood.
This document peels back the layers of the actor system -- starting from what you write as a user, then following your message all the way down to the thread pool, revealing the engineering decisions and trade-offs at each level. Every code snippet references the actual Pekko source.
- Part I: What You See
- Part II: The Runtime Engine
- Part III: Supervision and Lifecycle
- Part IV: Safety by Design
- Part V: Typed Actors
- Part VI: Reflections
Let's start with what you write as a user. Here is the simplest possible Pekko actor:
import org.apache.pekko.actor._
// Define the actor
class Greeter extends Actor {
  def receive: Receive = {
    case name: String =>
      println(s"Hello, $name!")
      sender() ! s"Hi back, $name"
  }
}
// Create the system and actor
val system = ActorSystem("example")
val greeter = system.actorOf(Props[Greeter](), "greeter")
// Send a message
greeter ! "World"

This looks simple. But between greeter ! "World" and println("Hello, World!")
lies a sophisticated pipeline involving lock-free queues, atomic bit
manipulation, memory barriers, and thread pool scheduling -- all happening in
microseconds.
Let's trace that message from start to finish.
Here's the complete journey of greeter ! "World", shown as a sequence:
Your Code Framework Internals
───────── ───────────────────
greeter ! "World"
│
▼
ActorRef.!(msg) LocalActorRef delegates to ActorCell
│ ┌─────────────────────────────────┐
▼ │ ActorRef.scala:429-430 │
actorCell.sendMessage() │ actorCell.sendMessage(msg, s) │
│ └─────────────────────────────────┘
▼
Wrap in Envelope Envelope("World", sender)
│ ┌─────────────────────────────────┐
▼ │ dungeon/Dispatch.scala:167 │
dispatcher.dispatch() │ dispatcher.dispatch(this, env) │
│ └─────────────────────────────────┘
▼
┌─────────────┐
│ Enqueue │ mbox.enqueue(self, envelope)
│ + Schedule │ registerForExecution(mbox)
└──────┬──────┘
│
═══════╪═══════════ Thread boundary ═══════════════
│
▼
ForkJoinPool picks up Mailbox.run() executes
the Mailbox as a task ┌─────────────────────────────────┐
│ │ Mailbox.scala:228 │
▼ │ processAllSystemMessages() │
Process system msgs first │ processMailbox() │
│ └─────────────────────────────────┘
▼
ActorCell.invoke() Unwraps Envelope, calls your code
│ ┌─────────────────────────────────┐
▼ │ ActorCell.scala:548-589 │
actor.aroundReceive() │ receiveMessage(msg) │
│ └─────────────────────────────────┘
▼
Your receive matches case "World" => println(...)
Two key properties:
- Sending is non-blocking. The ! operator just enqueues and returns. Your calling thread never waits.
- Processing is single-threaded per actor. The mailbox ensures only one thread runs your receive at a time. No locks needed in your actor code.
Now let's look at how each layer works.
The actor system is built from four interlocking pieces:
┌──────────────────────────────────────────────────────────────────┐
│ ActorSystem │
│ │
│ ┌──────────┐ ┌──────────────┐ ┌────────────────────┐ │
│ │ ActorRef │───>│ ActorCell │───>│ Mailbox │ │
│ │ │ │ │ │ │ │
│ │ Immutable│ │ ┌──────────┐ │ │ ┌────────────────┐ │ │
│ │ handle │ │ │ Actor │ │ │ │ Message Queue │ │ │
│ │ │ │ │ (your │ │ │ ├────────────────┤ │ │
│ │ The only │ │ │ code) │ │ │ │ System Msgs │ │ │
│ │ thing │ │ └──────────┘ │ │ │ (linked list) │ │ │
│ │ users │ │ │ │ └────────────────┘ │ │
│ │ touch │ │ + behavior │ │ │ │
│ │ │ │ stack │ │ Also a Runnable │ │
│ │ │ │ + current │ │ on the thread pool │ │
│ │ │ │ message │ │ │ │
│ └──────────┘ └──────────────┘ └────────┬───────────┘ │
│ │ │ │
│ │ ┌──────▼───────┐ │
│ │ │ Dispatcher │ │
│ │ │ (thread │ │
│ │ │ pool) │ │
│ │ └──────────────┘ │
│ Mixes in "dungeon" │
│ modules: │
│ ├── Dispatch │
│ ├── Children │
│ ├── DeathWatch │
│ ├── FaultHandling │
│ └── ReceiveTimeout │
└──────────────────────────────────────────────────────────────────┘
The source code for ActorCell shows the composition:
// ActorCell.scala:419-431
private[pekko] class ActorCell(
    val system: ActorSystemImpl,
    val self: InternalActorRef,
    _initialProps: Props,
    val dispatcher: MessageDispatcher,
    val parent: InternalActorRef)
    extends AbstractActor.ActorContext
    with Cell
    with dungeon.ReceiveTimeout
    with dungeon.Children
    with dungeon.Dispatch
    with dungeon.DeathWatch
    with dungeon.FaultHandling {

  private[this] var _actor: Actor = _
  var currentMessage: Envelope = _
  private var behaviorStack: List[Actor.Receive] = emptyBehaviorStack

Each dungeon module owns exactly one concern:
| Module | File | What it does |
|---|---|---|
| Dispatch | dungeon/Dispatch.scala | Sends messages, manages the mailbox reference |
| Children | dungeon/Children.scala | Creates and tracks child actors |
| DeathWatch | dungeon/DeathWatch.scala | Implements watch()/unwatch() |
| FaultHandling | dungeon/FaultHandling.scala | Handles failures, restarts, supervision |
| ReceiveTimeout | dungeon/ReceiveTimeout.scala | Fires a message if the actor is idle too long |
This is clean separation of concerns via Scala trait mixin -- each module can be
understood independently, but they all share the ActorCell state.
This is perhaps the most important design decision in the entire actor system.
The Mailbox class serves three distinct roles simultaneously:
┌─────────────────────────────────────┐
│ Mailbox │
│ │
│ Role 1: MESSAGE QUEUE │
│ ├── enqueue(msg) / dequeue() │
│ └── holds pending messages │
│ │
│ Role 2: THREAD POOL TASK │
│ ├── extends ForkJoinTask[Unit] │
│ ├── extends Runnable │
│ └── submitted to executor directly │
│ │
│ Role 3: MEMORY BARRIER │
│ ├── volatile status field │
│ ├── CAS operations form │
│ │ happens-before edges │
│ └── protects all ActorCell state │
└─────────────────────────────────────┘
// Mailbox.scala:68-71
private[pekko] abstract class Mailbox(val messageQueue: MessageQueue)
extends ForkJoinTask[Unit]
with SystemMessageQueue
with Runnable {

Messages sent to an actor are enqueued in the mailbox's messageQueue. When the
mailbox gets its turn on a thread, it dequeues and processes them.
Most frameworks have separate "scheduler" and "mailbox" objects. Here they are unified. The mailbox is the object submitted to the executor:
// Dispatcher.scala:71-75
protected[pekko] def dispatch(receiver: ActorCell, invocation: Envelope): Unit = {
  val mbox = receiver.mailbox
  mbox.enqueue(receiver.self, invocation) // add message to queue
  registerForExecution(mbox, ...)         // submit mailbox to thread pool
}

Extending ForkJoinTask directly (rather than wrapping it in one) avoids an
allocation per schedule. This is why actors are so cheap: no per-actor thread,
no per-actor scheduler entry, just one object serving triple duty.
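The triple-duty trick can be sketched with a toy mailbox. All names here (ToyMailbox, enqueueAndSchedule) are hypothetical, not Pekko's, but the shape is the same: the queue object itself is the Runnable handed to the executor, so scheduling allocates no wrapper task.

```scala
import java.util.concurrent.{ConcurrentLinkedQueue, Executor}
import java.util.concurrent.atomic.AtomicBoolean

// Toy sketch (hypothetical names): the mailbox IS the Runnable.
final class ToyMailbox(executor: Executor) extends Runnable {
  private val queue = new ConcurrentLinkedQueue[String]()
  private val scheduled = new AtomicBoolean(false)
  val processed = new ConcurrentLinkedQueue[String]()

  def enqueueAndSchedule(msg: String): Unit = {
    queue.add(msg)
    // Submit only if not already scheduled: at most one thread
    // runs this mailbox at a time.
    if (scheduled.compareAndSet(false, true)) executor.execute(this)
  }

  def run(): Unit = {
    var msg = queue.poll()
    while (msg != null) { processed.add(msg); msg = queue.poll() }
    scheduled.set(false)
    // Re-check: a message may have raced in after the drain but
    // before the flag was flipped back.
    if (!queue.isEmpty && scheduled.compareAndSet(false, true))
      executor.execute(this)
  }
}
```

With a same-thread executor the behavior is deterministic; with a real pool, the CAS on `scheduled` is what enforces single-threaded processing per mailbox.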
Here's the surprising part. ActorCell's mutable fields (_actor,
behaviorStack, currentMessage) are not volatile:
// ActorCell.scala:411-412
// vars don't need volatile since it's protected with the mailbox status
// Make sure that they are not read/written outside of a message processing
How does this work? The mailbox's setAsScheduled() and setAsIdle() are CAS
operations on a volatile status field. In the Java Memory Model, these create
happens-before edges:
Thread A (processes message N) Thread B (processes message N+1)
────────────────────────────── ─────────────────────────────────
writes to behaviorStack, _actor
│
setAsIdle() ── volatile write ──────> setAsScheduled() ── volatile read
│
reads behaviorStack, _actor
(sees Thread A's writes)
Because only one thread processes a mailbox at a time, and thread hand-off
always goes through the mailbox CAS, all writes from message N are guaranteed
visible when message N+1 runs -- even on a different thread. The cost of this
invariant: you must never read or write ActorCell fields outside of
invoke() / systemInvoke().
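A minimal sketch of the same protocol, with hypothetical names: a plain (non-volatile) field is safe because every hand-off between threads goes through a CAS and a volatile write on a status field. The sketch only demonstrates the structure of the discipline, not the memory-model effect itself.

```scala
import java.util.concurrent.atomic.AtomicInteger

// Sketch (hypothetical names): `state` is deliberately NOT volatile.
// It is only touched by the thread that currently owns the cell, and
// ownership moves through the volatile `status`, whose write/read pair
// forms the happens-before edge.
final class GuardedCell {
  private var state: Int = 0                // plain field, not volatile
  private val status = new AtomicInteger(0) // 0 = idle, 1 = scheduled

  // Claim exclusive ownership, as registerForExecution would.
  def trySchedule(): Boolean = status.compareAndSet(0, 1)

  // Mutate while owning; release with a volatile write, publishing `state`.
  def process(f: Int => Int): Int = {
    require(status.get == 1, "must own the cell before touching state")
    state = f(state)
    val result = state
    status.set(0) // volatile write: the next owner is guaranteed to see `state`
    result
  }
}
```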
When a mailbox gets its turn, it doesn't process all queued messages. It
processes at most throughput messages (default 5) AND respects a time deadline:
// Mailbox.scala:261-278
@tailrec private final def processMailbox(
    left: Int = java.lang.Math.max(dispatcher.throughput, 1),
    deadlineNs: Long = ...): Unit =
  if (shouldProcessMessage) {
    val next = dequeue()
    if (next ne null) {
      actor.invoke(next)
      ...
      processAllSystemMessages() // system messages checked between EACH regular message
      if ((left > 1) && (!dispatcher.isThroughputDeadlineTimeDefined || (System.nanoTime - deadlineNs) < 0))
        processMailbox(left - 1, deadlineNs)
    }
  }

Processing timeline for one mailbox.run():
┌──────────────────────────────────────────────────────────────────┐
│ processAllSystemMessages() ← always first │
├──────────────────────────────────────────────────────────────────┤
│ invoke(msg1) │ sysMsgs │ invoke(msg2) │ sysMsgs │ invoke(msg3) │
│ │ check │ │ check │ │
├──────────────────────────────────────────────────────────────────┤
│ left=5 left=4 left=3 left=2 left=1 │
│ │
│ Stop if: left reaches 1 OR deadline exceeded OR no more msgs │
└──────────────────────────────────────────────────────────────────┘
Notice: processAllSystemMessages() is called between every regular message.
A Terminate or Suspend never waits for the batch to finish. This prevents
both starvation (one actor hogging the thread) and latency (system messages
waiting behind user messages).
The mailbox status is a single Int that encodes four independent concerns
using bit manipulation:
// Mailbox.scala:40-55
final val Open = 0      // Bit 0
final val Closed = 1    // Bit 0
final val Scheduled = 2 // Bit 1
final val shouldScheduleMask = 3
final val shouldNotProcessMask = ~2
final val suspendMask = ~3
final val suspendUnit = 4 // Bits 2+

Bit layout of the status integer:

 31                            2     1         0
┌────────────────────────────┬─────────┬───────────┐
│ Suspend Count              │Scheduled│Open/Closed│
│ (each suspend adds 4)      │ (bit 1) │  (bit 0)  │
└────────────────────────────┴─────────┴───────────┘
Examples:
0b...00000 = 0 → Open, not scheduled, not suspended
0b...00001 = 1 → Closed
0b...00010 = 2 → Open + Scheduled
0b...00100 = 4 → Open + suspended once
0b...00110 = 6 → Open + Scheduled + suspended once
0b...01000 = 8 → Open + suspended twice
0b...01100 = 12 → Open + suspended three times
A single compareAndSet can atomically check all four states at once. The
suspend count lives in the upper bits, so suspend() is just status + 4
and resume() is status - 4:
// Mailbox.scala (suspend)
@tailrec
final def suspend(): Boolean = currentStatus match {
  case Closed => setStatus(Closed); false
  case s =>
    if (updateStatus(s, s + suspendUnit)) s < suspendUnit // CAS: add 4
    else suspend()                                        // retry on contention
}

This eliminates locks entirely. Multiple threads can call suspend() and
resume() concurrently -- each one retries its CAS until it succeeds.
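The whole bit-field protocol is small enough to re-implement as a toy, using the same layout described above (class and method names here are hypothetical):

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.annotation.tailrec

// Same bit layout as described: bit 0 = open/closed, bit 1 = scheduled,
// bits 2+ = suspend count (each suspend adds 4).
object Status {
  final val Open = 0
  final val Closed = 1
  final val Scheduled = 2
  final val SuspendUnit = 4
}

final class StatusField {
  import Status._
  private val status = new AtomicInteger(Open)

  def currentStatus: Int = status.get
  def isSuspended: Boolean = currentStatus >= SuspendUnit
  def suspendCount: Int = currentStatus / SuspendUnit

  // True iff this call moved the mailbox from not-suspended to suspended.
  @tailrec
  final def suspend(): Boolean = currentStatus match {
    case Closed => false
    case s =>
      if (status.compareAndSet(s, s + SuspendUnit)) s < SuspendUnit
      else suspend() // lost a race: retry on the fresh value
  }

  // True iff this call moved the mailbox from suspended to not-suspended.
  @tailrec
  final def resume(): Boolean = currentStatus match {
    case Closed => false
    case s if s < SuspendUnit => false // not suspended at all
    case s =>
      if (status.compareAndSet(s, s - SuspendUnit)) (s - SuspendUnit) < SuspendUnit
      else resume()
  }
}
```

Note that suspend/resume calls balance like a counter: two suspends require two resumes before the mailbox may process again.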
There are two categories of messages in Pekko, and they use completely different data structures:
Mailbox
┌─────────────────────┐
│ │
│ System Messages │ Lock-free intrusive linked list
│ (Create, Terminate,│ next pointer ON the message object
│ Watch, Suspend, │ Always processed first
│ Resume, Failed) │
│ │
├─────────────────────┤
│ │
│ Regular Messages │ Standard concurrent queue
│ (your messages) │ (ConcurrentLinkedQueue, etc.)
│ │
└─────────────────────┘
The system message list is "intrusive" -- the next pointer lives on the
message object itself, not in a separate wrapper node:
// dispatch/sysmsg/SystemMessage.scala
// WARNING: NEVER SEND THE SAME SYSTEM MESSAGE OBJECT TO TWO ACTORS
private[pekko] sealed trait SystemMessage extends PossiblyHarmful with Serializable {
  @transient
  private[sysmsg] var next: SystemMessage = _
}

Regular queue (node-based):          System message list (intrusive):
┌──────┐ ┌──────┐ ┌──────┐ ┌──────────────┐
│ Node │──>│ Node │──>│ Node │ │ Create │
│┌────┐│ │┌────┐│ │┌────┐│ │ next: ──────┼──> Supervise
││msg ││ ││msg ││ ││msg ││ │ │ next: ──> Watch
│└────┘│ │└────┘│ │└────┘│ └──────────────┘ next: null
└──────┘ └──────┘ └──────┘
3 node allocations 0 extra allocations
The trade-off: each system message can only be in one list at a time (hence the warning). But on the hot path, zero extra allocations.
The next field is not volatile. Safety comes from the mailbox's volatile status
providing the memory barrier -- the same trick as ActorCell's fields.
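A toy intrusive stack shows the zero-allocation push (simplified; names hypothetical). Pushing writes one pointer on the message itself; no node object is created:

```scala
// The `next` pointer lives ON the message, so a message can sit in
// only one list at a time -- the exact trade-off noted above.
final class SysMsg(val name: String) {
  var next: SysMsg = null // intrusive link
}

final class IntrusiveStack {
  private var head: SysMsg = null

  // LIFO push: O(1), zero extra allocations.
  def push(m: SysMsg): Unit = { m.next = head; head = m }

  // Walk latest-first, prepending, which yields earliest-first order --
  // the same reversal EarliestFirstSystemMessageList represents.
  def drainEarliestFirst(): List[String] = {
    var acc: List[String] = Nil
    var cur = head
    while (cur != null) { acc = cur.name :: acc; cur = cur.next }
    head = null
    acc
  }
}
```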
Two value classes encode ordering in the type system at zero cost:
- LatestFirstSystemMessageList (LIFO, as prepended)
- EarliestFirstSystemMessageList (FIFO, after reverse)
You can't accidentally pass a latest-first list where earliest-first is expected. The compiler catches it.
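The trick can be sketched with two distinct wrapper types. Pekko's are zero-cost value classes; the sketch below uses plain classes with hypothetical names for brevity, but the compile-time guarantee is the same: code that needs delivery order can only accept the earliest-first type.

```scala
final class Node(val v: Int) { var next: Node = null }

// Wrapper whose TYPE records that the head is the most recent element.
final class LatestFirstList(val head: Node) {
  def ::(n: Node): LatestFirstList = { n.next = head; new LatestFirstList(n) }

  // In-place pointer reversal produces the other ordering -- and the
  // other type, so the two can never be confused.
  def reverse: EarliestFirstList = {
    var rev: Node = null
    var cur = head
    while (cur != null) { val nxt = cur.next; cur.next = rev; rev = cur; cur = nxt }
    new EarliestFirstList(rev)
  }
}

final class EarliestFirstList(val head: Node)

// Delivery code demands earliest-first order via its parameter type.
def deliver(list: EarliestFirstList): List[Int] = {
  var acc = List.empty[Int]
  var cur = list.head
  while (cur != null) { acc = cur.v :: acc; cur = cur.next }
  acc.reverse
}
```

Passing a LatestFirstList to deliver is a type error, not a runtime bug.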
The code that runs on every single message delivered to every actor needs to be as lean as possible. Here's the hot path:
// ActorCell.scala:589
final def receiveMessage(msg: Any): Unit = actor.aroundReceive(behaviorStack.head, msg)

// Actor.scala:422-427, 546-551
private final val NotHandled = new Object                // singleton sentinel
private final val notHandledFun = (_: Any) => NotHandled // singleton function

protected[pekko] def aroundReceive(receive: Actor.Receive, msg: Any): Unit = {
  // optimization: avoid allocation of lambda
  if (receive.applyOrElse(msg, Actor.notHandledFun).asInstanceOf[AnyRef] eq Actor.NotHandled) {
    unhandled(msg)
  }
}

Why not the obvious alternatives?
Approach 1: isDefinedAt + apply
if (receive.isDefinedAt(msg)) receive(msg) // Two virtual dispatches
else unhandled(msg) // PartialFunction may evaluate
// the pattern match TWICE
Approach 2: lift to Option
receive.lift(msg) match { // Allocates Some(result) on
case Some(_) => () // every successful match
case None => unhandled(msg)
}
Approach 3: applyOrElse with sentinel (what Pekko uses)
receive.applyOrElse(msg, notHandledFun) // One dispatch, zero allocations
// Sentinel check via reference eq
Both NotHandled and notHandledFun are pre-allocated once and reused forever.
The eq check is reference equality -- no boxing, no equals(). One virtual
dispatch, zero allocations, per message.
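The sentinel pattern works with any PartialFunction. Here is a standalone sketch (hypothetical names) mirroring the shape of aroundReceive:

```scala
object Sentinel {
  // Both allocated exactly once, reused for every message.
  val NotHandled = new Object
  val notHandledFun: Any => Any = _ => NotHandled

  // One applyOrElse call; reference equality (`eq`) detects "no match"
  // with no Option allocation and no double pattern-match.
  def receiveOrUnhandled(receive: PartialFunction[Any, Unit],
                         msg: Any,
                         unhandled: Any => Unit): Unit =
    if (receive.applyOrElse(msg, notHandledFun).asInstanceOf[AnyRef] eq NotHandled)
      unhandled(msg)
}
```

The sentinel must be an object no user code could ever return; a fresh `new Object` compared by reference guarantees that.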
When you create an ActorSystem, a hierarchy of guardian actors is established
before any of your code runs:
"theOneWhoWalksTheBubblesOfSpaceTime"
(synthetic MinimalActorRef)
path: /bubble-walker
│
│ supervises
▼
Root Guardian
path: /
┌─────┴─────┐
│ │
▼ ▼
User Guardian System Guardian
path: /user path: /system
│ │
▼ ▼
Your actors Internal actors
/user/greeter /system/deadLetterListener
/user/worker /system/log1-Logging
Who supervises the root guardian? A synthetic MinimalActorRef with the
whimsical name theOneWhoWalksTheBubblesOfSpaceTime (inherited from Akka).
It's defined in ActorRefProvider.scala:448-487 and handles exactly three
system messages:
- Failed -- the root guardian crashed; shut down the whole system
- Supervise -- acknowledged (with a TODO comment still in the source)
- DeathWatchNotification -- the root guardian died; complete the termination promise
The shutdown chain uses DeathWatch:
System Guardian watches User Guardian
Root Guardian watches System Guardian
This ensures orderly shutdown: your actors stop first, then system actors, then the system itself.
// Create a simple hierarchy
val system = ActorSystem("demo")

class Parent extends Actor {
  val child = context.actorOf(Props[Child](), "child")
  def receive = { case msg => child.forward(msg) }
}

class Child extends Actor {
  def receive = { case msg => println(s"${self.path}: $msg") }
}

val parent = system.actorOf(Props[Parent](), "parent")
parent ! "hello"
// Prints: pekko://demo/user/parent/child: hello

The full path reveals the hierarchy:
pekko://demo / user / parent / child
▲ ▲ ▲ ▲
│ │ │ │
system name user your your
guardian actor actor's child
When an actor throws an exception, a precise sequence of atomic operations ensures no message is lost or processed at the wrong time.
Suppose your actor's receive throws a NullPointerException:
class Fragile extends Actor {
  def receive = {
    case "boom" => throw new NullPointerException("oops")
    case other  => println(other)
  }
}

Timeline of failure handling:
1. ActorCell.invoke()
│ Your receive throws NullPointerException
│
▼
2. handleInvokeFailure() FaultHandling.scala:214-225
│
├── suspendNonRecursive() ◄── FIRST: atomically suspend mailbox
│ (mailbox status += suspendUnit) No more messages will be processed
│
├── suspendChildren() ◄── Recursively suspend all children
│
└── parent ! Failed(self, cause, uid) ◄── Notify supervisor
│
▼
3. Parent receives Failed system message
│
└── supervisorStrategy.handleFailure()
│
├── Resume → faultResume() mailbox status -= suspendUnit
├── Restart → faultRecreate() see below
├── Stop → context.stop(child)
└── Escalate → throw to grandparent
4. If Restart (the default for most exceptions):
│
├── assert(mailbox.isSuspended) ◄── FaultHandling.scala:113
│ INVARIANT: must be suspended
├── actor.aroundPreRestart(cause, msg)
├── clearActorFields(recreate = true)
├── Wait for children to terminate
├── Create new actor instance
├── actor.aroundPostRestart(cause)
└── faultResume() ◄── mailbox status -= suspendUnit
│ Messages flow again
▼
5. Actor resumes with fresh state
The critical assertion at FaultHandling.scala:113:
assert(mailbox.isSuspended, "mailbox must be suspended during restart, status=" + mailbox.currentStatus)

This is the mechanical heart of supervision. Between the old actor's death and
the new actor's postRestart, the mailbox is suspended -- no message can sneak
through.
What if system messages arrive while the actor is restarting (waiting for children to terminate)? They get stashed and replayed:
// ActorCell.scala:505-510
def shouldStash(m: SystemMessage, state: Int): Boolean =
  (state: @switch) match {
    case DefaultState                  => false // normal: process everything
    case SuspendedState                => m.isInstanceOf[StashWhenFailed]
    case SuspendedWaitForChildrenState => m.isInstanceOf[StashWhenWaitingForChildren]
  }

The states form a hierarchy: Default(0) < Suspended(1) < SuspendedWaitForChildren(2).
When the state moves "up" (less suspended), stashed messages are replayed:
val newState = calculateState
val todo = if (newState < currentState) rest.reversePrepend(unstashAll()) else rest

Marker traits on system message classes (StashWhenFailed, StashWhenWaitingForChildren)
encode these rules in the type system. The compiler ensures you can't accidentally
stash a message type that shouldn't be stashed.
Children are not stored in a mutable collection. The entire container is an immutable object that gets atomically swapped via CAS:
// dungeon/AbstractActorCell.java:36-40
childrenHandle = lookup.findVarHandle(
    ActorCell.class,
    "org$apache$pekko$actor$dungeon$Children$$_childrenRefsDoNotCallMeDirectly",
    ChildrenContainer.class);

The container type itself encodes the actor's lifecycle state:
NormalChildrenContainer
(TreeMap of children, normal operation)
│
│ actor stopping or restarting
▼
TerminatingChildrenContainer(reason)
├── reason = Termination → shutting down permanently
├── reason = Recreation(cause) → restarting after failure
└── reason = Creation() → initial creation
│
│ last child terminated
▼
TerminatedChildrenContainer
(permanently dead, rejects all new children)
// dungeon/ChildrenContainer.scala
object TerminatedChildrenContainer extends EmptyChildrenContainer {
  override def reserve(name: String): ChildrenContainer =
    throw new IllegalStateException("cannot reserve actor name '" + name + "': already terminated")
}

Because transitions are atomic CAS swaps, there is no window where an actor is "half-terminated" with an inconsistent children map.
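The swap-an-immutable-container pattern in miniature (hypothetical names; a plain Map stands in for the real ChildrenContainer states):

```scala
import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

// The entire children map is one immutable value; readers always see a
// consistent snapshot, and state transitions are CAS swaps of the whole.
sealed trait Container { def children: Map[String, Int] }
final case class Normal(children: Map[String, Int]) extends Container
case object Terminated extends Container {
  val children: Map[String, Int] = Map.empty
}

final class Children {
  private val ref = new AtomicReference[Container](Normal(Map.empty))

  @tailrec
  final def add(name: String, uid: Int): Boolean = ref.get match {
    case Terminated => false // dead container rejects all new children
    case n @ Normal(m) =>
      if (ref.compareAndSet(n, Normal(m + (name -> uid)))) true
      else add(name, uid) // lost the race: retry against a fresh snapshot
  }

  def terminate(): Unit = ref.set(Terminated)
  def snapshot: Map[String, Int] = ref.get.children
}
```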
When an actor dies, it notifies its watchers. But the order of notification is carefully controlled:
// dungeon/DeathWatch.scala:144-158
/*
* It is important to notify the remote watchers first, otherwise RemoteDaemon might shut down,
* causing the remoting to shut down as well. At this point Terminated messages to remote
* watchers are no longer deliverable.
*/
watchedBy.foreach(sendTerminated(ifLocal = false)) // remote watchers FIRST
watchedBy.foreach(sendTerminated(ifLocal = true))  // local watchers second

The race condition this prevents:
What could go wrong if local watchers were notified first:
Actor dies
│
├─── notify local watcher (RemoteDaemon)
│ │
│ └─── RemoteDaemon triggers remoting shutdown ← too fast!
│ │
│ └─── remoting mailbox closed
│
└─── notify remote watcher ← TOO LATE, remoting is dead
Message lost silently!
What actually happens (remote first):
Actor dies
│
├─── notify remote watcher ← message in remoting mailbox
│ (safe, remoting still alive)
│
└─── notify local watcher (RemoteDaemon)
│
└─── RemoteDaemon triggers remoting shutdown
(but remote Terminated was already enqueued)
Actor creation is split into two phases that cannot be collapsed:
// dungeon/Children.scala:310-343 (inside makeChild)
reserveChild(name)                // 1. Atomically reserve the name
val actor = provider.actorOf(...) // 2. Create ActorRef (mailbox not attached)
if (mailbox ne null) {            // 3. Propagate suspension state
  val suspendCount = mailbox.suspendCount
  var i = 1
  while (i <= suspendCount) {
    actor.suspend()               // Match parent's suspend count
    i += 1
  }
}
initChild(actor)                  // 4. Register in children container
actor.start()                     // 5. Attach mailbox → messages flow

Why two phases matter:
Phase 1: initChild(actor) Phase 2: actor.start()
───────────────────────── ────────────────────────
Actor exists as reference Mailbox attached to dispatcher
Registered in parent Messages can be processed
BUT cannot process messages Actor is "live"
│
│ ◄── Window for parent to suspend child
│ if parent is itself suspended
│
Without this window, a child created during a parent's restart would start processing messages while it should be suspended.
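The two-phase discipline can be modeled with a toy child that buffers messages until it is both started and unsuspended (all names hypothetical):

```scala
import scala.collection.mutable

// Phase 1: the child exists and can be suspended. Phase 2: start()
// lets messages flow -- unless the inherited suspension still holds.
final class ToyChild {
  private var started = false
  private var suspends = 0
  private val pending = mutable.Queue[String]()
  val delivered = mutable.Buffer[String]()

  def suspend(): Unit = suspends += 1
  def resume(): Unit = { suspends = math.max(0, suspends - 1); drain() }
  def start(): Unit = { started = true; drain() }
  def send(msg: String): Unit = { pending.enqueue(msg); drain() }

  private def drain(): Unit =
    if (started && suspends == 0)
      while (pending.nonEmpty) delivered += pending.dequeue()
}

// Mirrors makeChild: create, inherit the parent's suspend count, then start.
def makeChild(parentSuspendCount: Int): ToyChild = {
  val child = new ToyChild                                // phase 1: exists
  (1 to parentSuspendCount).foreach(_ => child.suspend()) // inherit suspension
  child.start()                                           // phase 2: live
  child
}
```

A child made under a suspended parent buffers everything until the parent's resume propagates down.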
The same pattern appears at the system level -- ActorRefProvider uses lazy vals
for guardians, with system initially null:
// ActorRefProvider.scala:613-620
private[pekko] def init(_system: ActorSystemImpl): Unit = {
  system = _system     // NOW provider can reference the system
  rootGuardian.start() // force lazy val, attach to dispatcher
  systemGuardian.sendSystemMessage(Watch(guardian, systemGuardian))
  rootGuardian.sendSystemMessage(Watch(systemGuardian, rootGuardian))
}

This solves the chicken-and-egg problem: actor references need the ActorSystem,
but the ActorSystem constructor needs actor references.
You might wonder: what stops someone from writing new MyActor() directly
instead of using actorOf? A ThreadLocal stack with null sentinels:
// Actor.scala:498-507
implicit val context: ActorContext = {
  val contextStack = ActorCell.contextStack.get
  if (contextStack.isEmpty || (contextStack.head eq null))
    throw ActorInitializationException(
      s"You cannot create an instance of [${getClass.getName}] explicitly using the constructor (new). " +
      "You have to use one of the 'actorOf' factory methods to create a new actor.")
  val c = contextStack.head
  ActorCell.contextStack.set(null :: contextStack) // push null marker
  c
}

The handshake:
Framework (newActor): Actor constructor:
───────────────────── ──────────────────
Push real context onto stack
│
└──── calls new MyActor()
│
└──── implicit val context = {
pop context from stack
push NULL marker ◄── sentinel
}
If someone writes `new MyActor()` directly:
│
└──── implicit val context = {
stack is empty or head is null
→ throw ActorInitializationException
}
If Actor constructor creates nested `new ChildActor()`:
│
└──── implicit val context = {
head is null (the sentinel)
→ throw ActorInitializationException
}
Zero-allocation enforcement of a critical invariant. The finally block in
newActor() handles cleanup for both cases (null marker was pushed or not).
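The handshake reduces to a standalone sketch (hypothetical Factory/Guarded names; Strings stand in for the real ActorContext):

```scala
object Factory {
  // Per-thread stack of contexts; null entries are the sentinel markers.
  val contextStack = new ThreadLocal[List[String]] {
    override def initialValue: List[String] = Nil
  }

  class Guarded {
    // Runs in the constructor, exactly like Actor's implicit context val.
    val context: String = {
      val stack = Factory.contextStack.get
      if (stack.isEmpty || (stack.head eq null))
        throw new IllegalStateException("use the factory, not `new`")
      val c = stack.head
      Factory.contextStack.set(null :: stack) // push null sentinel
      c
    }
  }

  def create(ctx: String): Guarded = {
    contextStack.set(ctx :: contextStack.get) // push real context
    try new Guarded
    finally
      // pop the sentinel (pushed by the constructor) and the context
      contextStack.set(contextStack.get.drop(2))
  }
}
```

Calling `new Factory.Guarded` directly finds an empty stack (or the null sentinel, if called from inside another constructor) and throws.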
// ActorCell.scala:601-602
def become(behavior: Actor.Receive, discardOld: Boolean = true): Unit =
  behaviorStack = behavior :: (if (discardOld && behaviorStack.nonEmpty) behaviorStack.tail else behaviorStack)

With discardOld = true (default):      With discardOld = false:
Before: [A, B, C] Before: [A, B, C]
become(X) become(X, discardOld = false)
After: [X, B, C] After: [X, A, B, C]
↑ stack grows!
Safe: bounded stack Danger: unbounded if
cycling between behaviors
The default protects against memory leaks. And unbecome() past the bottom
resets to actor.receive rather than crashing:
def unbecome(): Unit = {
  val original = behaviorStack
  behaviorStack =
    if (original.isEmpty || original.tail.isEmpty) actor.receive :: emptyBehaviorStack
    else original.tail
}

If you watch an actor but forget to handle Terminated:
class Watcher extends Actor {
  val worker = context.watch(context.actorOf(Props[Worker]()))
  def receive = {
    case "work" => worker ! "do it"
    // Oops: no case for Terminated!
  }
}

You don't silently lose the notification. The default unhandled throws:
// Actor.scala:657-662
def unhandled(message: Any): Unit = {
  message match {
    case Terminated(dead) => throw DeathPactException(dead) // ← kills this actor too!
    case _ => context.system.eventStream.publish(UnhandledMessage(message, sender(), self))
  }
}

Worker dies
│
▼
Terminated(worker) delivered to Watcher
│
▼
Watcher.receive doesn't match Terminated
│
▼
unhandled(Terminated(worker))
│
▼
throw DeathPactException(worker)
│
▼
Watcher's supervisor handles the failure
(Watcher dies or restarts)
This is a death pact: if you watch someone, you must handle their death, or you die too. A common programmer error becomes a visible failure rather than silent data loss.
Two ActorRef objects with the same path but different UIDs are not equal:
// ActorRef.scala:173-184
final override def hashCode: Int = {
  if (path.uid == ActorCell.undefinedUid) path.hashCode
  else path.uid // UID IS the hashCode -- no computation needed
}

final override def equals(that: Any): Boolean = that match {
  case other: ActorRef => path.uid == other.path.uid && path == other.path
  case _               => false
}

The UID (a random Int from ThreadLocalRandom) distinguishes incarnations:
Actor restarts at path /user/worker:
Before restart: ActorRef(path=/user/worker, uid=48271923)
After restart: ActorRef(path=/user/worker, uid=91038475)
old.equals(new) → false (different incarnation)
old.hashCode → 48271923 (the UID itself, no computation)
The UID serves triple duty: it identifies the incarnation, it is the hashCode
(zero computation), and because it is drawn from ThreadLocalRandom, generating
it costs no contention.
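The incarnation semantics are easy to model in miniature (a hypothetical Ref class mirroring the equals/hashCode shown above):

```scala
// Same path, different uid => different incarnation => not equal.
// The uid doubles as the hashCode, so hashing costs nothing.
final class Ref(val path: String, val uid: Int) {
  override def hashCode: Int = uid
  override def equals(that: Any): Boolean = that match {
    case other: Ref => uid == other.uid && path == other.path
    case _          => false
  }
}
```

A stale Ref held across a restart therefore fails equality checks against the new incarnation, even though both print the same path.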
A common question: does the typed actor API (Behavior[T], ActorRef[T]) have
its own runtime? The source code answers explicitly:
// actor-typed ActorSystem.scala:295-325
/**
 * Create an ActorSystem based on the classic [[pekko.actor.ActorSystem]]
 * which runs Pekko Typed [[Behavior]] on an emulation layer. In this
 * system typed and classic actors can coexist.
 */

Typed actors are built entirely on top of classic actors.
Every typed concept wraps its classic counterpart:
┌──────────────────────────────────────────────────────────────────┐
│ Typed API (what you use) │
│ │
│ ActorSystem[T] ActorRef[T] ActorContext[T] Behavior[T] │
│ │ │ │ │ │
│ │ adapts │ adapts │ adapts │ runs │
│ │ via │ via │ via │ inside │
│ ▼ ▼ ▼ ▼ │
│ ActorSystem- ActorRef- ActorContext- ActorAdapter │
│ Adapter Adapter Adapter extends │
│ │ │ │ classic.Actor │
├───────┼────────────────┼──────────────┼────────────────┼─────────┤
│ ▼ ▼ ▼ ▼ │
│ Classic Runtime (what actually runs) │
│ │
│ ActorSystemImpl ActorRef ActorContext ActorCell │
│ Dispatcher Mailbox FaultHandling DeathWatch │
└──────────────────────────────────────────────────────────────────┘
actor-typed/.../internal/adapter/
├── ActorAdapter.scala Wraps Behavior[T] in classic.Actor
├── ActorSystemAdapter.scala Wraps classic.ActorSystemImpl
├── ActorContextAdapter.scala Wraps classic.ActorContext
├── ActorRefAdapter.scala Wraps classic.ActorRef
├── PropsAdapter.scala Converts Behavior to classic.Props
└── GuardianStartupBehavior.scala Deferred startup wrapper
The key class is ActorAdapter -- it is a classic actor:
// actor-typed/.../internal/adapter/ActorAdapter.scala:65-123
private[typed] final class ActorAdapter[T](_initialBehavior: Behavior[T], ...)
  extends classic.Actor {

  private var behavior: Behavior[T] = _initialBehavior

  override protected[pekko] def aroundReceive(receive: Receive, msg: Any): Unit = {
    msg match {
      case classic.Terminated(ref) =>
        handleSignal(Terminated(ActorRefAdapter(ref)))
      case classic.ReceiveTimeout =>
        handleMessage(ctx.receiveTimeoutMsg)
      case signal: Signal =>
        handleSignal(signal)
      case msg =>
        handleMessage(msg.asInstanceOf[T])
    }
  }

  private def handleMessage(msg: T): Unit =
    next(Behavior.interpretMessage(behavior, ctx, msg), msg)
}

Messages arrive through the classic aroundReceive hook -- the same one we saw
in Section 7. Classic system messages
(Terminated, ReceiveTimeout) are translated to their typed equivalents.
1. typedRef.tell(msg)
│
▼
2. ActorRefAdapter classicRef ! msg
│
════╪════════ same pipeline as classic (Sections 2-7) ════════
│
3. Classic Dispatcher → Mailbox → ActorCell.invoke()
│
▼
4. ActorAdapter.aroundReceive() ◄── bridge point
│
▼
5. Behavior.interpretMessage(behavior, ctx, msg)
│
▼
6. Returns next Behavior
(or same → keep current, or Stopped → terminate)
Steps 2-3 are the exact same pipeline from Part II. The Mailbox, Dispatcher, bit-field status, system message priority, supervision -- all shared.
TYPED ADDS: TYPED DOES NOT ADD:
───────────────────────────── ──────────────────────────────
+ Compile-time message type safety - No new runtime or scheduler
(ActorRef[T] only accepts T) - No new mailbox implementation
+ Functional behavior API - No new supervision engine
(Behaviors.receive, .setup) - No new dispatcher
+ Explicit reply-to in protocol - No new memory barrier mechanism
(no implicit sender())
+ Behavior change = return value Everything in Sections 1-14
(no become/unbecome) applies equally to typed actors
import org.apache.pekko.actor.typed._
import org.apache.pekko.actor.typed.scaladsl._

// Typed version of the Greeter from Section 1
object Greeter {
  sealed trait Command
  case class Greet(name: String, replyTo: ActorRef[Greeted]) extends Command
  case class Greeted(name: String)

  def apply(): Behavior[Command] = Behaviors.receive { (context, message) =>
    message match {
      case Greet(name, replyTo) =>
        context.log.info("Hello, {}!", name)
        replyTo ! Greeted(name)
        Behaviors.same // ← return value replaces become()
    }
  }
}

// At runtime, Greeter.apply() returns a Behavior[Command]
// which gets wrapped in ActorAdapter (a classic.Actor)
// which runs inside a classic ActorCell
// with a classic Mailbox, Dispatcher, and all the machinery above.

Six recurring patterns emerge across the codebase:
Rather than per-field volatiles or locks, the mailbox status CAS provides all
necessary memory barriers. One volatile field protects all of ActorCell.
Suspend counts, open/closed state, and scheduling state share a single integer. A single CAS handles what would otherwise require multiple atomic operations.
Sentinel objects instead of Option. applyOrElse instead of isDefinedAt + apply. Intrusive linked lists instead of node-allocating queues. ForkJoinTask
as the mailbox itself.
LatestFirstSystemMessageList vs EarliestFirstSystemMessageList,
WaitingForChildren marker trait, StashWhenFailed marker trait -- the Scala
type system prevents ordering and state errors at compile time.
discardOld = true, death pact exceptions for unhandled Terminated, null
sentinels to prevent new Actor(). The framework assumes users will make
mistakes and turns those mistakes into visible failures.
From ActorSystem.init() to initChild + start, the pattern of "create the
reference, then activate it" appears at every level. It always solves the same
problem: you cannot fully initialize something that needs a reference to a
system that is itself still initializing.
| File | Lines | Key Contents |
|---|---|---|
| actor/Actor.scala | ~663 | Actor trait, Receive type, aroundReceive, unhandled, lifecycle hooks |
| actor/ActorRef.scala | ~1112 | ActorRef, LocalActorRef, DeadLetterActorRef, FunctionRef |
| actor/ActorCell.scala | ~700+ | ActorCell, invoke, systemInvoke, become/unbecome, context stack |
| actor/Props.scala | ~200 | Props case class, IndirectActorProducer integration |
| actor/IndirectActorProducer.scala | ~120 | Pluggable factory: reflection vs closure-based instantiation |
| File | Lines | Key Contents |
|---|---|---|
| dispatch/Mailbox.scala | ~500+ | Mailbox, status bit field, run(), processMailbox, system message queue |
| dispatch/Dispatcher.scala | ~183 | dispatch(), registerForExecution(), throughput config |
| dispatch/AbstractDispatcher.scala | ~587 | MessageDispatcher base, Envelope, attach/detach, shutdown scheduling |
| dispatch/sysmsg/SystemMessage.scala | ~210 | Intrusive linked list, LatestFirst/EarliestFirst value classes |
| File | Lines | Key Contents |
|---|---|---|
| actor/dungeon/Dispatch.scala | ~220 | sendMessage, serialization check, swapMailbox |
| actor/dungeon/Children.scala | ~349 | makeChild, reserveChild, initChild, name reservation |
| actor/dungeon/ChildrenContainer.scala | ~190 | Immutable state machine: Normal -> Terminating -> Terminated |
| actor/dungeon/FaultHandling.scala | ~240 | handleInvokeFailure, faultRecreate, faultSuspend, faultResume |
| actor/dungeon/DeathWatch.scala | ~170 | watch, unwatch, tellWatchersWeDied, remote-first ordering |
| File | Lines | Key Contents |
|---|---|---|
| actor/ActorSystem.scala | ~1260 | ActorSystemImpl, _start, extension loading, registerExtension |
| actor/ActorRefProvider.scala | ~760 | Guardian hierarchy, theOneWhoWalksTheBubblesOfSpaceTime, actorOf |
| File | Key Contents |
|---|---|
| typed/internal/adapter/ActorAdapter.scala | Bridge: wraps Behavior[T] in classic.Actor, calls Behavior.interpretMessage |
| typed/internal/adapter/ActorSystemAdapter.scala | Wraps classic.ActorSystemImpl as ActorSystem[T] |
| typed/internal/adapter/ActorRefAdapter.scala | Wraps classic.ActorRef as ActorRef[T], delegates tell to ! |
| typed/internal/adapter/ActorContextAdapter.scala | Wraps classic.ActorContext, delegates spawn to actorOf |
| typed/internal/adapter/PropsAdapter.scala | Converts typed Behavior + Props to classic.Props |
Classic actor paths are relative to actor/src/main/scala/org/apache/pekko/.
Typed adapter paths are relative to actor-typed/src/main/scala/org/apache/pekko/actor/.