Angler: Dark Pool Resource Allocation
In finance, a dark pool is a private exchange where buyers and sellers trade without revealing their orders to the broader market. The idea is simple: participants get matched, trades happen, but no one sees the full order book.
Now imagine applying that concept to cloud infrastructure. What if multiple edge providers — AWS, a regional telecom, a CDN operator, a local ISP — could pool their resources into a shared marketplace, match customer requests to available capacity, and complete the whole transaction without any provider ever revealing how much capacity they have, how busy they are, or what they charge?
That is exactly what Angler does. Published at the ACM/IEEE Symposium on Edge Computing (SEC ’23), Angler is the first system to perform resource allocation from dark pools — and it does so with only 2x overhead compared to having no privacy at all.
The Problem: Edge Computing Needs Multi-Provider Allocation, But Trust Gets in the Way
Cloud computing has always assumed a single trust domain. When Kubernetes schedules a pod, every worker node reports its CPU capacity, memory, and current utilization to the control plane. This works great within one organization’s infrastructure. It falls apart the moment you try to span multiple providers.
And spanning multiple providers is exactly what edge computing demands. Edge infrastructure is, by nature, fragmented. No single provider has points of presence everywhere. Capacity at edge locations is limited compared to centralized data centers. Different providers offer different connectivity properties, different pricing, and different coverage areas. A customer looking for compute within 10 milliseconds of their end users may need to consider half a dozen providers to find a good match.
The obvious solution, federating the schedulers and sharing inventory across providers, runs into a wall of confidentiality:
- Capacity is a trade secret. The number of servers in a data center, or at an edge point of presence, reveals business strategy. As edge locations shrink in scale, even per-user capacity limits can leak total capacity.
- Utilization is a trade secret. How busy a provider’s infrastructure is reveals demand patterns and directly impacts spot pricing models.
- Pricing models are a trade secret. Dynamic pricing based on load is competitive information no provider wants competitors to see.
Third-party brokers are one answer, but they just relocate the trust problem. Someone still sees everything.
Angler’s Approach: Cryptographic Matching Without a Middleman
Angler eliminates the need for trust by using Secure Multi-Party Computation (MPC) — a cryptographic technique that lets multiple parties jointly compute a function over their private inputs without revealing those inputs to each other. Each provider inputs their secret capacity and pricing. The customer inputs their resource request. Together, they compute the matching function, and only the result is revealed: the customer learns which provider won, and the winning provider learns what to provision. Losing providers learn nothing except that they lost.
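To make this concrete, here is a plaintext sketch of the kind of matching function the parties jointly evaluate (field names and the exact request shape are illustrative, not Angler's actual interface). Under MPC, each provider's capacity and price stay hidden; here they are visible only so the logic is easy to follow.

```python
def match(request, bids):
    """Plaintext analogue of an MPC matching function.

    Each bid is (provider_id, capacity, price); in the real protocol these
    values are secret inputs. Returns the cheapest provider with enough
    capacity for the request, or None if no provider qualifies.
    """
    eligible = [(price, pid) for pid, capacity, price in bids
                if capacity >= request["cores"]]
    if not eligible:
        return None
    _, winner = min(eligible)  # only this identity would be revealed
    return winner

# Example: provider C is the cheapest with enough capacity.
match({"cores": 4}, [("A", 8, 12), ("B", 2, 5), ("C", 16, 9)])  # → "C"
```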
There is no central broker, no trusted third party, and no aggregation service. Angler is fully decentralized. And the security model is strong — malicious security tolerating all-but-one corruptions, meaning that even if every other participant is actively trying to cheat, an honest party’s secrets remain safe.
The specific MPC protocol Angler uses is WRK, a garbled-circuit protocol implemented in the AGMPC library. Garbled circuits are a good fit here because they have constant round complexity regardless of the function being evaluated, which matters when participants are spread across a wide-area network. Secret-sharing-based MPC protocols, by contrast, have round complexity proportional to the circuit depth, which would be slower over high-latency links.
The Engineering Challenge: Making MPC Fast Enough
MPC is mathematically elegant, but it is not free. Garbled-circuit protocols involve substantial network communication (circuits are transmitted as ciphertext) and computation scales with the number of participants due to all-to-all operations. Naively wrapping an existing scheduler with MPC would be prohibitively slow.
Angler’s contribution is an end-to-end system design that keeps the total overhead manageable. The system decomposes resource allocation into three steps — discovery, matching, and provisioning — and optimizes each one.
Discovery: A Location-Aware DHT That Limits the Blast Radius
The most effective way to speed up MPC is to reduce the number of participants. If eight providers are nearby but only five need to be in the matching function to find a good result, eliminating three participants can cut matching time dramatically.
Angler achieves this through a modified Distributed Hash Table. In a standard DHT (like the one BitTorrent uses), looking up a resource hash returns every provider who offers that resource, globally. That is far too many participants for MPC.
Angler’s insight is that edge resource discovery has a strong locality property: resources at a particular location are almost always queried from that same location. So Angler prefixes DHT keys with network coordinates (specifically, Vivaldi coordinates measured from actual network latency, not geographic approximations). DHT node IDs are prefixed the same way.
The effect is that resources associated with a particular network neighborhood end up stored on nearby DHT nodes, and queries for those resources traverse only short, low-latency paths. In experiments with 5,000 nodes in the DHT, a standard lookup scales logarithmically with the number of nodes, but key-prefixed lookups remain flat — nearby queries are fast regardless of how large the overall system gets.
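A minimal sketch of the key-prefixing idea, assuming 2-D Vivaldi-style coordinates in milliseconds and a simple grid quantization (the actual prefix encoding in Angler may differ):

```python
import hashlib

def dht_key(coord, resource, prefix_bits=32):
    """Location-prefixed DHT key (illustrative, not Angler's exact scheme).

    Quantize the network coordinate into a coarse grid cell, so that
    resources in the same network neighborhood share a key prefix and
    land on nearby DHT nodes.
    """
    cell = (int(coord[0]) >> 4, int(coord[1]) >> 4)  # coarse grid cell
    nbytes = prefix_bits // 8
    prefix = hashlib.sha1(repr(cell).encode()).digest()[:nbytes]
    suffix = hashlib.sha1(resource.encode()).digest()[nbytes:]
    return prefix + suffix
```

Two providers a few milliseconds apart hash into the same cell and share a prefix; a provider on another continent gets a different prefix, so its records live in a different region of the keyspace.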
Matching: Squeezing Performance Out of the MPC Protocol
With the participant set narrowed by discovery, Angler then optimizes the MPC execution itself. There are three system-level optimizations to the AGMPC library — none of which alter the underlying cryptographic protocol (so formal security proofs still hold):
- Parallelized socket initialization. The original AGMPC implementation opens sockets to all other participants sequentially. With nine parties, this alone takes about a second. Parallelizing it cuts initialization by 85%, down to 150ms.
- Smarter socket buffer management. The original code flushes the socket buffer on every read, a defensive measure against deadlocks that turns out to be unnecessary in most cases. Making the flush conditional reduces total instructions executed by 4% and improves iTLB cache behavior by 3x.
- BBR congestion control. MPC protocols open many TCP connections between participants simultaneously. Linux’s default Cubic congestion control algorithm doesn’t share bandwidth well across these connections, creating stragglers that all parties must wait for. Switching to BBR, which is designed for exactly this scenario, eliminates the periodic downtime visible in packet captures.
Together, these changes reduce matching time by 3x — from 2.7 seconds to 800ms for nine parties with 10ms peer-to-peer latency.
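The socket-initialization fix is the simplest to picture. A toy sketch (simulating each blocking connect with a sleep, rather than real sockets) shows why opening connections concurrently instead of one after another collapses setup time from the sum of the delays to roughly the maximum:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def connect(peer, delay=0.05):
    """Stand-in for a blocking connect + handshake to one MPC peer.

    In the real system this would also be where a per-socket option like
    TCP_CONGESTION could select BBR on Linux.
    """
    time.sleep(delay)
    return peer

def init_sequential(peers):
    # Original AGMPC behavior: one connection at a time.
    return [connect(p) for p in peers]

def init_parallel(peers):
    # Angler-style behavior: all connections opened concurrently.
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        return list(pool.map(connect, peers))
```

With eight peers and 50ms per handshake, the sequential version takes about 400ms while the parallel version takes about 50ms, mirroring the 85% reduction reported in the paper.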
On top of the protocol tuning, Angler introduces best-effort scoping: if the predicted matching time for all discovered providers would exceed a configured deadline, the system drops the highest-latency providers one at a time until the prediction fits within budget. This trades match quality (you might miss the optimal provider) for predictable response time — a practical tradeoff that the system operator controls with a single parameter.
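Best-effort scoping reduces to a short loop. A sketch, where `predict` stands in for whatever cost model the operator plugs in (the function name and provider fields are hypothetical):

```python
def scope_participants(providers, deadline_ms, predict):
    """Drop highest-latency providers until predicted matching time fits.

    `predict(providers)` is a pluggable model returning the estimated
    matching time in milliseconds for that participant set.
    """
    scoped = sorted(providers, key=lambda p: p["latency_ms"])
    while scoped and predict(scoped) > deadline_ms:
        scoped.pop()  # remove the current highest-latency provider
    return scoped
```

The operator's single knob is `deadline_ms`: tighten it and distant providers get pruned first, loosen it and the full discovered set participates.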
Provisioning: Parallelizing What Used to Be Serial
After the matching function identifies a winning provider, that provider must provision a Kubernetes namespace and issue credentials to the client. Normally these steps — creating the namespace (with resource quotas and network policies) and generating/signing an x509 certificate — happen one after the other.
Angler modifies the AGMPC output delivery so that the winning provider learns the result as part of the protocol execution itself, before the full output reconstruction completes. This lets the provider start creating the namespace immediately, while the client generates and sends the certificate signing request in parallel.
On a remote cluster (comparable to Google Kubernetes Engine), this parallelization saves up to 200ms. As a bonus, the modified output delivery means losing providers never learn who won or what the terms were — they only learn they weren’t selected.
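The shape of the provisioning pipeline can be sketched as two concurrent tasks, with stand-in functions in place of the real Kubernetes API calls and x509 machinery (all names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def create_namespace(provider, tenant):
    """Stand-in for the winning provider creating a Kubernetes namespace
    with resource quotas and network policies."""
    return f"{tenant}-ns@{provider}"

def issue_csr(tenant):
    """Stand-in for the client generating an x509 certificate signing
    request for its new credentials."""
    return f"csr:{tenant}"

def provision(provider, tenant):
    # Because the winner learns the result early, namespace creation and
    # CSR generation no longer have to run one after the other.
    with ThreadPoolExecutor(max_workers=2) as pool:
        ns = pool.submit(create_namespace, provider, tenant)
        csr = pool.submit(issue_csr, tenant)
        return ns.result(), csr.result()
```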
How Well Does It Work?
The evaluation runs on bare-metal machines on the Chameleon testbed, emulating a realistic edge scenario: 5,000 providers in the DHT overall, with 2 to 8 nearby providers within the query region. Peer-to-peer latencies for nearby providers range from 2 to 10ms.
The headline numbers:
- End-to-end allocation completes in under 1 second for up to 8 providers, including discovery, matching, and provisioning.
- Only 2x slower than plaintext — that is, compared to running the exact same system with no encryption, no MPC, and no privacy guarantees whatsoever.
- Faster than a Google Cloud Function cold start (measured at 3.27 seconds).
- Comparable to existing provisioning time on Google Kubernetes Engine, where allocating a namespace on a remote cluster takes about 808ms on average.
Resource consumption during MPC is modest: CPU utilization stays below 35%, memory under 1GB, and per-party network traffic ranges from 0.5 to 5MB depending on the number of participants.
What This Means
The conventional wisdom has been that MPC is too slow for anything latency-sensitive. Angler demonstrates that this is no longer true — if you design the system end-to-end with the cryptographic cost in mind. The key is not to bolt MPC onto an existing system, but to architect around it: minimize participants through smart discovery, tune the implementation at the systems level, and exploit parallelism wherever the protocol allows.
The broader implication is for the future of multi-provider infrastructure. As edge computing matures, the number of infrastructure providers at any given location will grow. Today, customers pick one provider and hope for the best. Angler shows a path toward a real marketplace — one where providers compete on price and capacity without exposing their business to competitors, and customers get the best available deal without trusting anyone.
The matching function Angler evaluates in its benchmarks is deliberately simple: find the cheapest provider with enough capacity. But the framework supports arbitrary functions — second-price auctions, multi-attribute optimization, even small linear programs. More complex functions will produce larger garbled circuits and take longer to evaluate, but the architectural principles (limit participants, tune the protocol, parallelize provisioning) remain the same.
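As one example of a richer matching function, a second-price variant is a small change to the plaintext logic (again a sketch, not Angler's benchmarked function): the cheapest eligible provider still wins, but the price it is paid comes from the second-lowest eligible bid.

```python
def second_price_match(request, bids):
    """Hypothetical second-price matching function.

    Each bid is (provider_id, capacity, price). The cheapest eligible
    provider wins but clears at the second-lowest eligible price (or its
    own price when it is the only eligible bidder).
    """
    eligible = sorted((price, pid) for pid, capacity, price in bids
                      if capacity >= request["cores"])
    if not eligible:
        return None
    winner = eligible[0][1]
    clearing = eligible[1][0] if len(eligible) > 1 else eligible[0][0]
    return winner, clearing
```

Evaluated under MPC, even the losing bids that determine the clearing price stay secret, which is exactly the property sealed-bid auctions want.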
Angler is open source and available on GitHub.