Cadence Computation Profiling
This guide provides comprehensive instructions for using the computation profiling and reporting features in the Flow Emulator. These tools help Cadence developers analyze and optimize their smart contracts by understanding computational costs and identifying performance bottlenecks.
Table of Contents
- Introduction
- Prerequisites
- Computation Reporting
- Computation Profiling (pprof)
- Using Source File Pragmas
- Practical Examples
- Best Practices
- API Reference
- Troubleshooting
- Related Features
Introduction
When developing smart contracts on Flow, understanding computational costs is essential for:
- Performance Optimization: Identify slow operations and optimize your code
- Cost Awareness: Understand how much computation your transactions and scripts consume
- Bottleneck Identification: Pinpoint exactly where your code spends the most resources
The Flow Emulator provides two complementary tools for this purpose:
| Feature | Output | Best For |
|---|---|---|
| Computation Reporting | JSON report with detailed intensities | Quick numerical analysis, CI/CD integration, automated testing |
| Computation Profiling | pprof-compatible flame graphs | Visual analysis, deep-dive debugging, call stack exploration |
Prerequisites
- Flow CLI installed (installation guide)
- pprof tool (for computation profiling):

  ```bash
  go install github.com/google/pprof@latest
  ```
Computation Reporting
Computation reporting provides a JSON-based view of computational costs for all executed transactions and scripts.
Enabling Computation Reporting
Start the emulator with the --computation-reporting flag:
```bash
flow emulator --computation-reporting
```
For computation numbers that better reflect real network conditions, consider using emulator fork testing. Forking lets you profile against actual Mainnet or Testnet state without having to recreate contracts and data in a local emulator setup.
Viewing Computation Reports
Once enabled, access the computation report at:
```
http://localhost:8080/emulator/computationReport
```
The report returns a JSON object with the following structure:
```json
{
  "scripts": {
    "<script-id>": {
      "path": "scripts/myScript.cdc",
      "computation": 1250,
      "intensities": {
        "Statement": 45,
        "FunctionInvocation": 12,
        "GetValue": 8
      },
      "memory": 2048,
      "source": "access(all) fun main(): Int { ... }",
      "arguments": ["0x1"]
    }
  },
  "transactions": {
    "<transaction-id>": {
      "path": "transactions/myTransaction.cdc",
      "computation": 3500,
      "intensities": {
        "Statement": 120,
        "EmitEvent": 5,
        "SetValue": 15
      },
      "memory": 8192,
      "source": "transaction { ... }",
      "arguments": ["100.0"]
    }
  }
}
```
Report Fields
| Field | Description |
|---|---|
| `path` | Source file path (set via the `#sourceFile` pragma) |
| `computation` | Total computation units used |
| `intensities` | Count of each operation type performed |
| `memory` | Estimated memory usage |
| `source` | Original Cadence source code |
| `arguments` | Arguments passed to the transaction or script |
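Because the report is plain JSON served over HTTP, it is straightforward to wire into automated checks. The sketch below is not an official client: it assumes the emulator is running locally on port 8080, and the `computationBudget` threshold is an arbitrary value chosen for the example.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// computationBudget is an arbitrary per-transaction threshold for this example.
const computationBudget = 5000

type entry struct {
	Path        string `json:"path"`
	Computation uint64 `json:"computation"`
}

type report struct {
	Transactions map[string]entry `json:"transactions"`
}

func main() {
	// Fetch the computation report from a locally running emulator.
	resp, err := http.Get("http://localhost:8080/emulator/computationReport")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var r report
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		panic(err)
	}

	// Fail if any transaction exceeded the chosen budget.
	failed := false
	for id, tx := range r.Transactions {
		if tx.Computation > computationBudget {
			fmt.Printf("transaction %s (%s) used %d computation, budget is %d\n",
				id, tx.Path, tx.Computation, computationBudget)
			failed = true
		}
	}
	if failed {
		os.Exit(1)
	}
}
```

Running a check like this after your test suite (for example as a CI step) fails the build whenever a transaction exceeds the budget.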
Understanding Computation Intensities
The `intensities` map shows how many times each operation type was performed. The keys are human-readable names such as `Statement`, `Loop`, `FunctionInvocation`, `GetValue`, `SetValue`, and `EmitEvent`.
The total `computation` value is calculated by multiplying each intensity by its corresponding weight (defined by the network) and summing the results. When optimizing, look for operations with high counts; reducing these will lower your total computation cost.
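As a rough illustration of that weighted sum, the sketch below hard-codes a few made-up weights; the real weights are defined by the network and are not the values shown here.

```go
package main

import "fmt"

func main() {
	// Intensities as they might appear in a computation report.
	intensities := map[string]uint64{
		"Statement":          45,
		"FunctionInvocation": 12,
		"GetValue":           8,
	}

	// Illustrative weights only; actual weights are network-defined.
	weights := map[string]uint64{
		"Statement":          1,
		"FunctionInvocation": 1,
		"GetValue":           2,
	}

	var total uint64
	for kind, count := range intensities {
		total += count * weights[kind]
	}

	// With these placeholder weights: 45*1 + 12*1 + 8*2 = 73
	fmt.Println("illustrative total computation:", total)
}
```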
Computation Profiling (pprof)
Computation profiling generates pprof-compatible profiles that can be visualized as flame graphs, providing a powerful way to understand your code's execution patterns.
Enabling Computation Profiling
Start the emulator with the --computation-profiling flag:
```bash
flow emulator --computation-profiling
```
Note: You can enable both `--computation-reporting` and `--computation-profiling` simultaneously if you need both types of analysis.
Downloading the Profile
After executing transactions and scripts, download the profile from:
```
http://localhost:8080/emulator/computationProfile
```
This downloads a profile.pprof file containing the aggregated computation profile.
Using curl:
```bash
curl -o profile.pprof http://localhost:8080/emulator/computationProfile
```
Viewing Profiles with pprof
Open the profile in an interactive web interface:
```bash
pprof -http=:8081 profile.pprof
```
Then navigate to http://localhost:8081 in your browser.
Available Views
The pprof web interface provides several visualization options:
| View | Description |
|---|---|
| Flame Graph | Visual representation of call stacks with computation costs |
| Graph | Directed graph showing call relationships |
| Top | List of functions sorted by computation usage |
| Source | Source code annotated with computation costs |
| Peek | Callers and callees of selected functions |
Viewing Source Code in pprof
To see Cadence source code annotated with computation costs:
1. Download all deployed contracts:

   ```bash
   curl -o contracts.zip http://localhost:8080/emulator/allContracts
   ```

2. Extract the ZIP file into a `contracts` folder:

   ```bash
   mkdir -p contracts
   unzip contracts.zip -d contracts
   ```

3. Run pprof with the source path:

   ```bash
   pprof -source_path=contracts -http=:8081 profile.pprof
   ```
Now when you view the "Source" tab in pprof, you'll see your Cadence code with line-by-line computation annotations.
Resetting Computation Profiles
To clear the accumulated profile data (useful between test runs):
```bash
curl -X PUT http://localhost:8080/emulator/computationProfile/reset
```
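If you drive the emulator from Go-based integration tests, you can call this endpoint from a small helper before each case. This is only a sketch: it assumes the emulator is already running locally on port 8080, and the `resetComputationProfile` helper name is illustrative.

```go
package emulator_test

import (
	"net/http"
	"testing"
)

// resetComputationProfile clears the emulator's accumulated profile so the
// next test starts from a clean slate.
func resetComputationProfile(t *testing.T) {
	t.Helper()
	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:8080/emulator/computationProfile/reset", nil)
	if err != nil {
		t.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		t.Fatal(err)
	}
	resp.Body.Close()
}

func TestApproachA(t *testing.T) {
	resetComputationProfile(t)
	// ... send transactions against the emulator, then fetch the report ...
}
```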
Using Source File Pragmas
The #sourceFile pragma improves report readability by associating your code with meaningful file paths. Without it, reports show generic identifiers.
Usage
Add the pragma at the beginning of your transaction or script:
_10#sourceFile("transactions/transfer_tokens.cdc")_10_10transaction(amount: UFix64, recipient: Address) {_10 prepare(signer: auth(Storage) &Account) {_10 // Transfer logic_10 }_10}
For scripts:
_10#sourceFile("scripts/get_balance.cdc")_10_10access(all) fun main(address: Address): UFix64 {_10 return getAccount(address).balance_10}
Benefits
- Reports show file paths instead of generic IDs
- Easier to correlate computation costs with source files
- Better integration with pprof source views
- Useful for tracking costs across multiple files in a project
Practical Examples
Profiling a Simple Transaction
Let's profile a simple NFT minting transaction.
1. Start the emulator with profiling enabled:
```bash
flow emulator --computation-profiling --computation-reporting
```
2. Create a transaction file (transactions/mint_nft.cdc):
```cadence
#sourceFile("transactions/mint_nft.cdc")

import NonFungibleToken from 0xf8d6e0586b0a20c7
import ExampleNFT from 0xf8d6e0586b0a20c7

transaction {
    prepare(signer: auth(Storage) &Account) {
        let collection = signer.storage.borrow<&ExampleNFT.Collection>(
            from: ExampleNFT.CollectionStoragePath
        ) ?? panic("Could not borrow collection")

        collection.deposit(token: <- ExampleNFT.mintNFT())
    }
}
```
3. Execute the transaction:
```bash
flow transactions send transactions/mint_nft.cdc
```
4. View the computation report:
```bash
curl http://localhost:8080/emulator/computationReport | jq
```
5. Analyze with pprof:
```bash
curl -o profile.pprof http://localhost:8080/emulator/computationProfile
pprof -http=:8081 profile.pprof
```
Identifying Performance Bottlenecks
Consider a script that iterates over a large collection:
```cadence
#sourceFile("scripts/find_expensive.cdc")

import NonFungibleToken from 0xf8d6e0586b0a20c7

access(all) fun main(address: Address): [UInt64] {
    let account = getAccount(address)
    let collection = account.capabilities.borrow<&{NonFungibleToken.Collection}>(
        /public/NFTCollection
    ) ?? panic("Could not borrow collection")

    let ids = collection.getIDs()
    var result: [UInt64] = []

    // Potentially expensive loop
    for id in ids {
        let nft = collection.borrowNFT(id)
        if nft != nil {
            result.append(id)
        }
    }

    return result
}
```
After profiling, you might see high values for:
- `Loop`: Many iterations
- `FunctionInvocation`: Repeated `borrowNFT` calls
- `GetValue`: Multiple storage reads
Optimization strategies:
- Use pagination to limit iterations per call
- Cache results when possible
- Consider restructuring data for more efficient access
Comparing Computation Costs
To compare two implementation approaches:
1. Reset the report between tests:
```bash
curl -X PUT http://localhost:8080/emulator/computationProfile/reset
```
2. Run implementation A and record the computation:
```bash
flow transactions send approach_a.cdc
curl http://localhost:8080/emulator/computationReport > report_a.json
```
3. Reset and test implementation B:
```bash
curl -X PUT http://localhost:8080/emulator/computationProfile/reset
flow transactions send approach_b.cdc
curl http://localhost:8080/emulator/computationReport > report_b.json
```
4. Compare the computation values in both reports.
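To compare the two runs programmatically instead of by eye, you can total the `computation` fields in each saved report. The sketch below assumes the report layout shown earlier in this guide and the `report_a.json` / `report_b.json` files produced by the steps above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// report matches the subset of the computation report we need.
type report struct {
	Transactions map[string]struct {
		Computation uint64 `json:"computation"`
	} `json:"transactions"`
}

// totalComputation sums computation across all transactions in a report file.
func totalComputation(path string) (uint64, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	var r report
	if err := json.Unmarshal(data, &r); err != nil {
		return 0, err
	}
	var total uint64
	for _, tx := range r.Transactions {
		total += tx.Computation
	}
	return total, nil
}

func main() {
	a, err := totalComputation("report_a.json")
	if err != nil {
		panic(err)
	}
	b, err := totalComputation("report_b.json")
	if err != nil {
		panic(err)
	}
	fmt.Printf("approach A: %d, approach B: %d, difference: %d\n",
		a, b, int64(a)-int64(b))
}
```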
Best Practices
- Profile early and often: Don't wait until production to understand your computation costs.

- Use the right tool for the job:
  - Computation Reporting: Quick checks, automated tests, CI/CD pipelines
  - Computation Profiling: Deep analysis, visual exploration, optimization work

- Reset between isolated tests: Always reset profiles when comparing different implementations or testing in isolation.

- Use `#sourceFile` consistently: Add pragmas to all your transactions and scripts for better debugging and reporting.

- Consider compute limits: Be aware of the emulator's compute limits:
  - `--transaction-max-compute-limit` (default: 9999)
  - `--script-compute-limit` (default: 100000)

- Profile realistic scenarios: Test with realistic data volumes and usage patterns.

- Monitor expensive operations: Pay attention to high-cost operations like:
  - Large loops
  - Frequent storage reads/writes (`GetValue`, `SetValue`)
  - Cryptographic operations (`Hash`, `VerifySignature`)
  - Event emissions (`EmitEvent`)
API Reference
| Endpoint | Method | Description |
|---|---|---|
| `/emulator/computationReport` | GET | View computation report (JSON) |
| `/emulator/computationProfile` | GET | Download pprof profile |
| `/emulator/computationProfile/reset` | PUT | Reset computation profile |
| `/emulator/allContracts` | GET | Download all deployed contracts (ZIP) |
Example API Calls
```bash
# Get computation report
curl http://localhost:8080/emulator/computationReport

# Download pprof profile
curl -o profile.pprof http://localhost:8080/emulator/computationProfile

# Reset computation profile
curl -X PUT http://localhost:8080/emulator/computationProfile/reset

# Download all contracts
curl -o contracts.zip http://localhost:8080/emulator/allContracts
```
Troubleshooting
Profile endpoint returns 404
Problem: Accessing /emulator/computationProfile returns a 404 error.
Solution: Make sure you started the emulator with --computation-profiling:
```bash
flow emulator --computation-profiling
```
Empty profile
Problem: The downloaded profile is empty or has no useful data.
Solution: Make sure you've executed at least one transaction or script after starting the emulator. The profile only contains data for executed code.
Source code not showing in pprof
Problem: The pprof source view doesn't display your Cadence code.
Solution:
- Download the contracts ZIP: `curl -o contracts.zip http://localhost:8080/emulator/allContracts`
- Extract it to a `contracts` folder in your working directory
- Run pprof with the source path: `pprof -source_path=contracts -http=:8081 profile.pprof`
High memory usage
Problem: The emulator uses increasing memory over time.
Solution: Periodically reset computation profiles to free accumulated data:
```bash
curl -X PUT http://localhost:8080/emulator/computationProfile/reset
```
Reports not showing file paths
Problem: The path field in reports is empty.
Solution: Add the #sourceFile pragma to your transactions and scripts:
_10#sourceFile("path/to/your/file.cdc")
Related Features
Code Coverage Reporting
The emulator also supports Cadence code coverage reporting, which complements computation profiling:
```bash
flow emulator --coverage-reporting
```
View coverage at: http://localhost:8080/emulator/codeCoverage
Learn more in the Flow Emulator documentation.
Debugger
For step-through debugging of Cadence code, use the #debug() pragma:
```cadence
#debug()

transaction {
    prepare(signer: &Account) {
        // Execution pauses here for debugging
    }
}
```
This works with VSCode and Flow CLI debugging tools.