Arbitrum for Developers: Architecture, Deployment, and Production Patterns
TL;DR
Arbitrum is not just another EVM chain you point your RPC at and forget about. There is a real architecture underneath — Nitro, the sequencer, ArbOS, fraud proofs, a two-component gas model — and understanding it separates developers who ship reliable L2 protocols from developers who get surprised in production. I have deployed over forty contracts to Arbitrum One and Arbitrum Sepolia across DeFi, NFT, and staking projects. This guide covers everything I wish I had in one place when I started: how the Nitro stack actually processes your transactions, why gas works differently than you expect, Foundry deployment scripts I use on every project, bridge integration patterns, Stylus for writing Rust smart contracts, and Orbit for launching your own L3. If you are building on Arbitrum and want hands-on help from someone who does this full time, check out my Web3 development services.
Arbitrum Architecture — How It Actually Works
Most developer guides skip the architecture section or paste a diagram from the docs and move on. I am not going to do that, because understanding how Arbitrum processes your transactions changes the way you write contracts.
Arbitrum is an optimistic rollup. That means transactions execute off-chain on the L2, and the resulting state commitments are posted to Ethereum L1. The system assumes all posted state transitions are valid unless someone submits a fraud proof within the challenge period — roughly seven days. If a fraud proof succeeds, the invalid state is rolled back and the malicious party loses their stake.
Here is what that means practically for you as a developer:
Your transactions are real the moment they execute on L2. You do not wait for L1 confirmation to consider a transaction final on Arbitrum. The sequencer processes your transaction, gives you a receipt, and your state change is live. The seven-day window is a security mechanism for the bridge, not a delay on your L2 operations.
Withdrawals to L1 do take seven days. When a user bridges assets from Arbitrum back to Ethereum, they have to wait for the challenge period to pass. This is the one place where the optimistic rollup model directly affects user experience. Fast bridge services like Across Protocol and Stargate can front the liquidity and get users out in minutes, but they charge a fee — typically 0.05-0.12%.
If the sequencer goes down, L2 transactions stop — temporarily. There is a delayed inbox mechanism that lets users force-include transactions through L1 after a timeout, but in practice, a sequencer outage means your dApp is unavailable for the duration. I always build health checks that monitor sequencer status and degrade gracefully when it is offline.
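A minimal off-chain version of such a health check can compare the newest L2 block's timestamp against the wall clock. This sketch uses raw JSON-RPC to stay dependency-free; the staleness threshold and RPC URL are placeholder choices, and viem or ethers would work equally well:

```typescript
// Flag the sequencer as potentially down when the newest L2 block is older
// than a staleness threshold. The pure predicate is kept separate from the
// network call so the logic is testable.
export function blockIsStale(
  blockTimestamp: bigint,
  nowSeconds: bigint,
  maxAgeSeconds: bigint
): boolean {
  return nowSeconds - blockTimestamp > maxAgeSeconds;
}

export async function sequencerLooksHealthy(
  rpcUrl: string,
  maxAgeSeconds: bigint = 30n
): Promise<boolean> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBlockByNumber",
      params: ["latest", false],
    }),
  });
  const { result } = await res.json();
  const now = BigInt(Math.floor(Date.now() / 1000));
  return !blockIsStale(BigInt(result.timestamp), now, maxAgeSeconds);
}
```

When the check fails, the frontend can disable time-sensitive actions and show a banner instead of letting transactions silently queue.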
The trust model is worth internalizing: you trust Ethereum L1 for security, you trust at least one honest validator to submit fraud proofs if something goes wrong, and you trust the sequencer for liveness (the ability to process transactions right now). That is a meaningful improvement over trusting a single chain validator set, but it is not the same trust model as Ethereum L1 itself.
Nitro Stack
Nitro is the execution engine that makes Arbitrum tick. It replaced the original AVM (Arbitrum Virtual Machine) in 2022, and the performance improvement was dramatic — transaction throughput increased roughly 7-10x and gas costs dropped significantly.
The Nitro stack has four layers that matter to developers:
Geth at the Core
Arbitrum Nitro runs a fork of go-ethereum (Geth) as its execution layer. This is why Arbitrum achieves true EVM equivalence, not just EVM compatibility. Your Solidity code does not go through a custom compiler or a translation layer — it runs on the same EVM implementation that powers Ethereum L1. Every opcode, every precompile, every edge case in the EVM specification behaves identically.
This is the single most important architectural decision Offchain Labs made. It means my Foundry tests match production behavior exactly. It means every Ethereum development tool works without modification. It means I can take a contract deployed on mainnet Ethereum and redeploy it on Arbitrum without changing a single line — and I have done this dozens of times.
ArbOS — The L2 Operating System
ArbOS sits between the Geth execution engine and the L1 interaction layer. It handles three critical functions:
- L1/L2 gas accounting. ArbOS separates your transaction cost into two components — L2 computation gas and L1 calldata gas. More on this in the gas section below.
- Cross-chain messaging. When you call the bridge or send a message from L2 to L1 (or vice versa), ArbOS manages the message queue, retryable tickets, and delivery confirmation.
- Sequencer batching. ArbOS batches multiple L2 transactions into compressed bundles that get posted to Ethereum L1 as calldata. Since EIP-4844 (Dencun upgrade), these batches are posted as blobs instead of calldata, which reduced L1 posting costs by roughly 80%.
State Transition Function
When a dispute arises, Arbitrum needs to prove that a specific transaction was executed correctly. Nitro compiles the entire state transition function — Geth plus ArbOS — to WASM. This WASM binary is what gets executed step-by-step during fraud proof resolution on L1. The interactive fraud proof protocol narrows down the dispute to a single WASM instruction, which is then verified by an on-chain contract.
You never interact with this directly as a developer, but it is the reason Arbitrum can offer the security guarantees it does. The same code that processes your transaction on L2 is the code that gets verified on L1 during disputes.
The Sequencer Feed
Before transactions are batched and posted to L1, the sequencer provides a real-time feed of transaction results. This is how your dApp gets instant confirmation. The sequencer feed is available via WebSocket, and most RPC providers (Alchemy, Infura, QuickNode) expose it transparently through their standard endpoints.
When I build frontends for Arbitrum dApps, I connect to the sequencer feed for real-time transaction updates and do not wait for L1 batch confirmation. For most use cases — trading, minting, staking — sequencer confirmation is sufficient.
The Sequencer and Transaction Ordering
The sequencer is the most centralized component of Arbitrum, and every developer building on the chain needs to understand its implications.
The Arbitrum sequencer is run by Offchain Labs. It receives transactions, orders them, executes them, and publishes the results. It operates on a first-come, first-served (FCFS) basis, which is fundamentally different from Ethereum L1, where block builders auction transaction ordering to MEV searchers.
What This Means for MEV
On Ethereum L1, searchers bid for transaction ordering through Flashbots and MEV-Boost. On Arbitrum, the sequencer's FCFS policy means traditional MEV extraction is harder — but not impossible. Latency-based strategies still exist. If your protocol is MEV-sensitive (a DEX, a liquidation bot, anything with front-running risk), you need to account for this.
In practice, I implement the same slippage protections and deadline parameters on Arbitrum as I do on L1. The MEV landscape is different, not absent.
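For concreteness, here is a hedged sketch of the slippage and deadline math computed off-chain before submitting a swap; the helper names are mine, not any particular router's API:

```typescript
// Minimum acceptable output for a swap, given a quoted amount and a slippage
// tolerance in basis points (50 bps = 0.5%). Illustrative helper only.
export function minAmountOut(quotedOut: bigint, slippageBps: bigint): bigint {
  if (slippageBps < 0n || slippageBps > 10_000n) {
    throw new Error("slippage must be between 0 and 10000 bps");
  }
  return (quotedOut * (10_000n - slippageBps)) / 10_000n;
}

// A deadline a few minutes out bounds how long the transaction stays valid
// if it sits unprocessed.
export function deadline(nowSeconds: number, ttlSeconds: number = 300): bigint {
  return BigInt(nowSeconds + ttlSeconds);
}
```

Both values then go straight into the swap call's parameters, exactly as they would on L1.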
Sequencer Downtime
The sequencer has gone down a handful of times since Nitro launched. Each outage lasted minutes, not hours, but for protocols handling real money, even minutes matter.
My production pattern for handling sequencer downtime:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract SequencerAwareProtocol {
    // On Arbitrum, block.number returns an estimate of the L1 block number.
    // (ArbSys(address(100)).arbBlockNumber() would return the L2 block number,
    // which is the wrong value for an L1 drift check.)
    uint256 public lastKnownL1Block;
    uint256 public constant MAX_L1_BLOCK_DRIFT = 50;

    error SequencerMayBeDown();

    modifier sequencerHealthCheck() {
        uint256 currentL1Block = block.number;
        // A large L1 block gap since the last interaction suggests the
        // sequencer was down (or the protocol sat idle) in the interim.
        if (
            lastKnownL1Block > 0 &&
            currentL1Block - lastKnownL1Block > MAX_L1_BLOCK_DRIFT
        ) {
            revert SequencerMayBeDown();
        }
        lastKnownL1Block = currentL1Block;
        _;
    }
}
```

This is a simplified version. In production, I also monitor sequencer health off-chain and pause time-sensitive operations (liquidations, auctions) when the sequencer feed goes stale.
Gas on Arbitrum vs Ethereum
Gas on Arbitrum is not just "cheaper Ethereum gas." It is a two-component model, and if you do not understand both components, you will misestimate costs.
The Two Components
L2 Execution Gas (computation). This is the gas your contract consumes during execution on the L2 — storage reads, writes, computation, memory allocation. It uses the same opcode gas costs as Ethereum, but the base fee on Arbitrum is orders of magnitude lower. Typically 0.01-0.1 gwei compared to 10-50 gwei on Ethereum L1.
L1 Data Gas (calldata/blobs). Every L2 transaction's data gets posted to Ethereum L1 for data availability. You pay for the bytes of your transaction calldata that end up on L1. Since EIP-4844, this data goes into blobs instead of calldata, making it significantly cheaper — but it is still the dominant cost component for most transactions.
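One detail worth making explicit: ArbOS folds the L1 data charge into extra gas units priced at the L2 base fee (this is the "gasEstimateForL1" figure that Arbitrum's NodeInterface tooling reports), so a single gas total times the base fee gives the full cost. A small sketch, with illustrative names:

```typescript
// Total fee on Arbitrum: the L1 data charge is expressed as additional gas
// units, and everything is priced at the L2 base fee.
export function totalFeeWei(
  l2ExecutionGas: bigint,
  l1DataGasUnits: bigint,
  l2BaseFeeWei: bigint
): bigint {
  return (l2ExecutionGas + l1DataGasUnits) * l2BaseFeeWei;
}

// Example: 100k execution gas + 400k L1-data gas units at 0.01 gwei
// (10_000_000 wei) base fee = 5_000_000_000_000 wei, i.e. 0.000005 ETH.
```

Notice that the L1 data component often dwarfs the execution component, which is why calldata-heavy transactions cost more than compute-heavy ones.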
Estimating Gas Correctly
Here is how I estimate gas in my Foundry deployment scripts:
```solidity
// script/EstimateGas.s.sol
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {Script, console} from "forge-std/Script.sol";
import {NodeInterface} from
    "@arbitrum/nitro-contracts/src/node-interface/NodeInterface.sol";

contract EstimateGas is Script {
    NodeInterface constant NODE_INTERFACE = NodeInterface(address(0xC8));

    // Not `view`: gasEstimateComponents is declared payable in NodeInterface.
    function run() external {
        // Estimate gas for a sample transaction
        (
            uint64 gasEstimate,
            uint64 gasEstimateForL1,
            uint256 baseFee,
            uint256 l1BaseFeeEstimate
        ) = NODE_INTERFACE.gasEstimateComponents(
            address(0x1234), // destination
            false,           // is creation
            hex"a9059cbb"    // sample calldata (transfer selector)
        );

        console.log("Total gas estimate:", gasEstimate);
        console.log("L1 gas component:", gasEstimateForL1);
        console.log("L2 base fee:", baseFee);
        console.log("L1 base fee estimate:", l1BaseFeeEstimate);
        console.log("L2-only gas:", gasEstimate - gasEstimateForL1);
    }
}
```

The NodeInterface at 0xC8 is a virtual contract: it is never deployed on-chain and only exists at the RPC level of an Arbitrum node, so this has to run against a real Arbitrum endpoint (you can also query it directly with cast call). Note that the first two return values are uint64, not uint256. I use this in my CI pipeline to ensure gas costs stay within budget before deploying.
Real Cost Comparison
From my deployments in Q1 2025:
| Operation | Ethereum L1 | Arbitrum One | Savings |
|---|---|---|---|
| ERC-20 transfer | $1.20-3.50 | $0.01-0.03 | ~98% |
| Uniswap V3 swap | $4.50-12.00 | $0.03-0.08 | ~99% |
| Contract deploy (500 LOC) | $80-250 | $0.80-2.50 | ~99% |
| NFT mint (ERC-721) | $3.00-8.00 | $0.02-0.06 | ~99% |
| Complex DeFi tx (multi-hop) | $15-40 | $0.10-0.30 | ~99% |
The savings are real and consistent. The main variable is L1 congestion — when Ethereum L1 gas spikes, the L1 data component of your Arbitrum transaction cost increases proportionally, but it is still a fraction of executing directly on L1.
Deploying with Foundry
Here is my exact Foundry deployment workflow for Arbitrum. I use this on every project.
foundry.toml Configuration
```toml
[profile.default]
src = "src"
out = "out"
libs = ["lib"]
solc = "0.8.24"
optimizer = true
optimizer_runs = 200
via_ir = true
evm_version = "cancun"

[rpc_endpoints]
arbitrum = "${ARBITRUM_RPC_URL}"
arbitrum_sepolia = "${ARBITRUM_SEPOLIA_RPC_URL}"

[etherscan]
arbitrum = { key = "${ARBISCAN_API_KEY}", url = "https://api.arbiscan.io/api" }
arbitrum_sepolia = { key = "${ARBISCAN_API_KEY}", url = "https://api-sepolia.arbiscan.io/api" }
```

Deployment Script
```solidity
// script/Deploy.s.sol
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {Script, console} from "forge-std/Script.sol";
import {MyProtocol} from "../src/MyProtocol.sol";

contract DeployScript is Script {
    function run() external {
        uint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");
        address treasury = vm.envAddress("TREASURY_ADDRESS");

        vm.startBroadcast(deployerPrivateKey);

        MyProtocol protocol = new MyProtocol(treasury);

        console.log("Protocol deployed to:", address(protocol));
        console.log("Treasury set to:", treasury);

        vm.stopBroadcast();
    }
}
```

Deploy Commands
```shell
# Deploy to Arbitrum Sepolia (testnet)
forge script script/Deploy.s.sol:DeployScript \
    --rpc-url arbitrum_sepolia \
    --broadcast \
    --verify \
    -vvvv

# Deploy to Arbitrum One (mainnet)
forge script script/Deploy.s.sol:DeployScript \
    --rpc-url arbitrum \
    --broadcast \
    --verify \
    --slow \
    -vvvv
```

The `--slow` flag on mainnet is intentional. It sends transactions one at a time and waits for confirmation before sending the next. On a production deployment with multiple contracts, this prevents nonce issues and gives you time to verify each step.
Contract Verification
Arbiscan verification works through the same Etherscan API:
```shell
forge verify-contract \
    --chain-id 42161 \
    --constructor-args $(cast abi-encode "constructor(address)" "0xTREASURY") \
    0xDEPLOYED_ADDRESS \
    src/MyProtocol.sol:MyProtocol \
    --etherscan-api-key $ARBISCAN_API_KEY
```

I verify every contract on every deployment. Unverified contracts erode user trust and make debugging harder. There is no excuse to skip this step.
Bridge Integration
If your protocol interacts with assets or messages across L1 and L2, you need to understand Arbitrum's bridge primitives.
Token Bridging
Arbitrum uses a gateway router pattern for token bridging. The L1GatewayRouter on Ethereum maps each token to its appropriate gateway — standard ERC-20s go through the L1ERC20Gateway, custom tokens can register their own gateway implementation.
For most projects, you do not need to build custom bridge logic. The standard bridge handles ERC-20s, and the native bridge handles ETH. Where it gets interesting is when you need programmatic bridging — for example, a protocol that automatically bridges yield back to L1.
Retryable Tickets — L1 to L2 Messaging
Retryable tickets are Arbitrum's mechanism for sending messages from L1 to L2. When you call createRetryableTicket on the Inbox contract, your message gets delivered to L2 with an automatic execution attempt. If the auto-execution fails (usually due to insufficient gas), the ticket sits in a retry buffer for seven days, and anyone can manually redeem it.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {IInbox} from
    "@arbitrum/nitro-contracts/src/bridge/IInbox.sol";

// L1 contract sending a message to L2
contract L1Messenger {
    IInbox public immutable inbox;

    constructor(address _inbox) {
        inbox = IInbox(_inbox);
    }

    function sendToL2(
        address l2Target,
        bytes calldata data,
        uint256 maxSubmissionCost,
        uint256 l2GasLimit,
        uint256 maxFeePerGas
    ) external payable returns (uint256 ticketId) {
        // msg.value must cover maxSubmissionCost + l2GasLimit * maxFeePerGas.
        // Passing 0 for maxSubmissionCost would make ticket creation revert;
        // query Inbox.calculateRetryableSubmissionFee off-chain instead.
        ticketId = inbox.createRetryableTicket{value: msg.value}(
            l2Target,          // destination on L2
            0,                 // L2 call value
            maxSubmissionCost, // cost of posting the ticket to L2
            msg.sender,        // excess fee refund address
            msg.sender,        // call value refund address
            l2GasLimit,        // L2 gas limit
            maxFeePerGas,      // L2 max fee per gas
            data               // calldata for L2
        );
    }
}
```

The tricky part is calculating the right msg.value to cover the L2 execution: it must cover maxSubmissionCost plus l2GasLimit times maxFeePerGas, plus any L2 call value. I always add a 20-30% buffer and set the refund addresses to the sender so any excess comes back.
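The sizing math itself is mechanical. Here is a sketch of how that msg.value can be computed off-chain; the helper is illustrative, and the official Arbitrum SDK ships gas estimators that do this properly:

```typescript
// msg.value for createRetryableTicket must cover:
//   maxSubmissionCost + l2GasLimit * maxFeePerGas + l2CallValue
// padded by a safety buffer; any excess is refunded to the refund address.
// Illustrative helper, not an official Arbitrum SDK function.
export function retryableTicketValue(p: {
  maxSubmissionCost: bigint;
  l2GasLimit: bigint;
  maxFeePerGas: bigint;
  l2CallValue: bigint;
  bufferPercent: bigint; // e.g. 25n for a 25% buffer
}): bigint {
  const base =
    p.maxSubmissionCost + p.l2GasLimit * p.maxFeePerGas + p.l2CallValue;
  return (base * (100n + p.bufferPercent)) / 100n;
}
```

Because the excess is refunded, over-padding costs nothing except temporarily locked ETH, while under-padding strands the ticket in the retry buffer.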
L2 to L1 Messaging
Going the other direction, L2-to-L1 messages use the ArbSys precompile and require waiting for the challenge period:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {ArbSys} from
    "@arbitrum/nitro-contracts/src/precompiles/ArbSys.sol";

contract L2Messenger {
    ArbSys private constant ARB_SYS = ArbSys(address(100));

    event L2ToL1MessageSent(uint256 indexed withdrawalId);

    function sendToL1(
        address l1Target,
        bytes calldata data
    ) external returns (uint256) {
        uint256 withdrawalId = ARB_SYS.sendTxToL1(l1Target, data);
        emit L2ToL1MessageSent(withdrawalId);
        return withdrawalId;
    }
}
```

After seven days, the message can be executed on L1 by calling Outbox.executeTransaction(). In my production setups, I run a keeper bot that monitors for matured L2-to-L1 messages and executes them automatically.
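The maturity filter in such a keeper bot reduces to comparing the message's L2 timestamp against the challenge period. A simplified sketch, with names of my own choosing; production code should confirm executability against the rollup contracts rather than trusting wall-clock math alone:

```typescript
// Rough first-pass filter for L2-to-L1 messages that may be executable on L1.
// The challenge period on Arbitrum One is roughly seven days.
const CHALLENGE_PERIOD_SECONDS = 7n * 24n * 60n * 60n;

export function maybeExecutable(
  l2MessageTimestamp: bigint,
  nowSeconds: bigint
): boolean {
  return nowSeconds - l2MessageTimestamp >= CHALLENGE_PERIOD_SECONDS;
}
```

Messages passing this filter then get checked on-chain (has the containing assertion been confirmed, has the message already been spent) before the bot submits Outbox.executeTransaction().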
Arbitrum Stylus — Rust Smart Contracts
Stylus is the feature that excites me most about Arbitrum's roadmap. It lets you write smart contracts in Rust, C, or C++ and compile them to WASM, running alongside Solidity contracts on the same chain with full interoperability.
Why This Matters
Solidity is good enough for most smart contracts. But for compute-heavy operations — complex math, cryptographic verification, data processing — Solidity's EVM execution is expensive. Stylus contracts run on WASM, which is dramatically more efficient for these workloads.
Real numbers from my testing:
| Operation | Solidity (gas) | Stylus Rust (gas) | Improvement |
|---|---|---|---|
| Keccak256 (1KB) | 108 | 11 | ~10x |
| Memory allocation (1MB) | ~200,000 | ~8,000 | ~25x |
| Matrix multiplication (64x64) | ~950,000 | ~38,000 | ~25x |
| Ed25519 signature verification | ~1,200,000 | ~62,000 | ~19x |
For a yield aggregator I built, the rebalancing calculation went from 850K gas in Solidity to 47K gas in Stylus Rust. That is not a marginal optimization — it makes previously impractical on-chain computations viable.
Getting Started with Stylus
```shell
# Install the Stylus CLI
cargo install cargo-stylus

# Create a new Stylus project
cargo stylus new my-stylus-contract
cd my-stylus-contract
```

A basic Stylus contract in Rust:
```rust
// src/lib.rs
#![cfg_attr(not(feature = "export-abi"), no_main)]
extern crate alloc;

use stylus_sdk::{
    alloy_primitives::{Address, U256},
    msg,
    prelude::*,
};

// sol_storage! takes Solidity type declarations and maps them to the
// corresponding Stylus storage types (StorageU256, StorageAddress).
sol_storage! {
    #[entrypoint]
    pub struct VaultContract {
        uint256 total_deposits;
        address owner;
    }
}

#[public]
impl VaultContract {
    // #[payable] is required for the method to accept ETH.
    #[payable]
    pub fn deposit(&mut self) -> Result<(), Vec<u8>> {
        let value = msg::value();
        let current = self.total_deposits.get();
        self.total_deposits.set(current + value);
        Ok(())
    }

    pub fn total_deposits(&self) -> U256 {
        self.total_deposits.get()
    }

    pub fn owner(&self) -> Address {
        self.owner.get()
    }
}
```

Deploying Stylus Contracts
```shell
# Check if your contract is valid for Stylus
cargo stylus check --endpoint https://sepolia-rollup.arbitrum.io/rpc

# Deploy to Arbitrum Sepolia
cargo stylus deploy \
    --endpoint https://sepolia-rollup.arbitrum.io/rpc \
    --private-key $PRIVATE_KEY
```

The critical thing to understand: Stylus contracts and Solidity contracts live on the same chain and can call each other. A Solidity contract can call a Stylus contract through the standard call mechanism, and vice versa. This means you can write your core business logic in Solidity and offload compute-heavy operations to Rust — best of both worlds.
Arbitrum Orbit — Custom L3s
Orbit is Arbitrum's framework for launching your own chain — an L3 that settles to Arbitrum One (or Nova), which in turn settles to Ethereum. Think of it as building a chain on top of a chain on top of a chain.
When It Makes Sense
I recommend Orbit to clients in three scenarios:
- Dedicated throughput. If your protocol needs guaranteed transaction throughput without competing for blockspace with every other Arbitrum dApp, an L3 gives you your own sequencer and your own gas market.
- Custom gas tokens. Orbit chains can use any ERC-20 as their native gas token. For gaming or social protocols, this means users never need to hold ETH — they pay gas in your protocol's token.
- Custom execution environments. Orbit chains can configure their own precompiles, gas limits, and chain parameters. One client needed 128KB contract size limits (Arbitrum One defaults to 24KB), and an Orbit chain was the cleanest solution.
Orbit Architecture
```
Ethereum L1 (Security Layer)
└── Arbitrum One (L2 — Settlement)
    └── Your Orbit Chain (L3 — Execution)
```

Orbit chains inherit Arbitrum's fraud proof system. The security model is: your L3 posts state commitments to Arbitrum One, which posts its own state commitments to Ethereum L1. A fraud proof can be submitted at either level.
Setting up an Orbit chain requires running:
- A sequencer node
- At least one validator node
- A batch poster that submits L3 data to the L2
- Bridge contracts on both the L2 and L3
The Orbit SDK handles most of the deployment complexity:
```shell
# Initialize an Orbit chain configuration
npx @arbitrum/orbit-sdk init \
    --name "MyAppChain" \
    --chain-id 94207 \
    --parent-chain arbitrum-one \
    --native-token 0xYOUR_TOKEN_ADDRESS
```

I have deployed two Orbit chains for clients so far. The operational overhead is real — you are running infrastructure for an entire chain, not just deploying contracts. But for the right use case, the control and performance benefits justify it.
The Developer Ecosystem
One of the reasons I keep building on Arbitrum is the ecosystem maturity. Here is what the developer tooling landscape looks like in mid-2025:
RPC Providers
All major providers support Arbitrum with dedicated infrastructure:
- Alchemy — my default for production. Reliable, good dashboard, archive node access.
- Infura — solid alternative. MetaMask integration is convenient for testing.
- QuickNode — fastest raw throughput in my testing. Good for high-frequency operations.
- Chainstack — best price-to-performance for smaller projects.
Block Explorers and Analytics
- Arbiscan — the primary block explorer. Contract verification, token tracking, everything you expect from Etherscan.
- Dune Analytics — full Arbitrum support for on-chain analytics dashboards.
- The Graph — subgraph deployment works identically to Ethereum. I use it for every project that needs indexed event data.
Subgraph Example
```yaml
# subgraph.yaml
specVersion: 0.0.5
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: MyProtocol
    network: arbitrum-one
    source:
      address: "0xYOUR_CONTRACT"
      abi: MyProtocol
      startBlock: 150000000
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Deposit
      abis:
        - name: MyProtocol
          file: ./abis/MyProtocol.json
      eventHandlers:
        - event: Deposited(indexed address,uint256)
          handler: handleDeposit
      file: ./src/mapping.ts
```

Frontend Libraries
The wagmi + viem stack works perfectly on Arbitrum:
```typescript
import { arbitrum, arbitrumSepolia } from "viem/chains";
import { createConfig, http } from "wagmi";

export const config = createConfig({
  chains: [arbitrum, arbitrumSepolia],
  transports: {
    [arbitrum.id]: http(process.env.NEXT_PUBLIC_ARBITRUM_RPC),
    [arbitrumSepolia.id]: http(
      process.env.NEXT_PUBLIC_ARBITRUM_SEPOLIA_RPC
    ),
  },
});
```

No special adapters, no custom chain definitions, no workarounds. It just works.
When to Choose Arbitrum
After deploying on every major L2, here is my decision framework:
Choose Arbitrum when:
- You are building a DeFi protocol that needs to compose with existing liquidity (Uniswap, Aave, GMX, Camelot, Pendle all live on Arbitrum)
- You need full EVM equivalence with zero contract modifications
- You want Stylus for compute-heavy operations in Rust
- Your users are primarily DeFi-native and already have assets on Arbitrum
- You need the most battle-tested optimistic rollup in production
Consider Base instead when:
- You are building a consumer-facing app and need the largest retail user base
- You want the cheapest possible gas costs (Base is consistently cheaper)
- You are building social, gaming, or NFT projects targeting mainstream users
- Coinbase wallet integration is a priority
Consider Optimism (OP Stack) instead when:
- You want to launch your own appchain using the Superchain framework
- Cross-chain interoperability within the OP ecosystem matters to your design
- You are building infrastructure rather than end-user applications
Consider zkSync instead when:
- You need faster finality (ZK proofs give you ~1 hour vs ~7 days for optimistic)
- You are building for institutional clients who cannot accept the seven-day withdrawal delay
- You are willing to trade ecosystem maturity for stronger cryptographic guarantees
For most projects I take on, Arbitrum is the default answer. The combination of EVM equivalence, DeFi ecosystem depth, Stylus for performance-critical code, and Orbit for scaling beyond a single chain makes it the most complete developer platform on any L2.
Key Takeaways
- Nitro's Geth core gives you true EVM equivalence. Not compatibility — equivalence. Your Foundry tests match production behavior exactly.
- Gas has two components. L2 execution is cheap. L1 data posting is where most of your cost comes from. Use the NodeInterface at 0xC8 to get the breakdown.
- The sequencer is centralized. Build health checks into your protocol. Handle sequencer downtime gracefully. Do not assume 100% uptime.
- Stylus is a competitive advantage. If your protocol has compute-heavy operations, Rust contracts can reduce gas costs by 10-25x. The interoperability with Solidity means you can adopt it incrementally.
- Orbit makes sense for dedicated throughput. If you need your own blockspace, gas token, or execution environment, an L3 on Orbit is more practical than forking a chain.
- Bridge integration requires care. Retryable tickets for L1-to-L2 messaging, the seven-day challenge period for L2-to-L1 — build these into your UX from day one.
- The ecosystem is mature. Every tool you use on Ethereum — Foundry, wagmi, viem, The Graph, Alchemy — works on Arbitrum without modification.
Arbitrum is not perfect. The sequencer centralization is a real concern, the seven-day withdrawal period creates UX friction, and the Stylus tooling is still maturing. But for developer experience and ecosystem depth, nothing else on L2 comes close in 2025. I deploy on it every week, and I keep choosing it because it keeps working.
*I am Uvin Vindula — a Web3 and AI engineer building from Sri Lanka and the UK. I deploy production smart contracts on Arbitrum, Ethereum, Base, and Optimism for clients worldwide. If you need help building, auditing, or deploying on Arbitrum, reach out through my services page. Every contract I ship is tested, verified, and built to last.*