Quantum-Safe Signatures For Web3: ML-DSA (CRYSTALS-Dilithium)
Today’s blockchains rely on elliptic-curve signatures – ECDSA (e.g., Bitcoin, Ethereum) and EdDSA (e.g., Solana) – which are vulnerable to sufficiently powerful quantum computers running algorithms like Shor’s.
To stay ahead, CISA, NSA, and NIST advise planning and migrating to quantum-resistant algorithms – starting with inventories, crypto-agility, and pilot deployments – well before large-scale quantum machines exist.
What is Post-Quantum Cryptography?
Post-Quantum Cryptography (PQC) is a family of cryptographic algorithms designed to resist both classical and quantum attacks.
In blockchain terms:
- It protects data in transit (KEM) from harvest-now-decrypt-later risks.
- It protects authorization (signatures) from future transaction/validator forgeries once public keys are known.
PQC aims for long-term security by relying on mathematical problems believed to remain hard even for quantum computers.
NIST finalized the first PQ signature standards in 2024. In this post, we focus on ML-DSA because it provides strong security foundations and efficient verification suitable for general Web3 use.
What is ML-DSA (CRYSTALS-Dilithium)?
ML-DSA, or Module-Lattice-Based Digital Signature Algorithm, is a post-quantum digital signature scheme that fills the same role as ECDSA/EdDSA – proving “I control this key” – but is built on module-lattice assumptions, specifically the Module Learning With Errors (MLWE) problem.
Naming note: The scheme long known as CRYSTALS-Dilithium was standardized as FIPS 204: ML-DSA. You will see both names (Dilithium and ML-DSA) in the ecosystem; they refer to the same signature scheme.
Main ML-DSA design properties:
- Fiat-Shamir with Aborts, no Gaussian sampling. ML-DSA uses a commit → challenge → response structure (Fiat-Shamir with Aborts) and avoids Gaussian sampling, aiding constant-time, side-channel-aware implementations. Secret vectors are sampled uniformly from a small range; ephemeral masking vectors are sampled uniformly from bounded ranges, with out-of-range responses rejected and re-tried.
- Well-studied assumptions. Security rests on module-lattice problems (MLWE) at NIST target levels. Parameter sets are chosen to meet these levels without relying on complex samplers.
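The commit → challenge → response loop with rejection can be sketched in a few lines. This is a deliberately simplified toy over plain integers, not real ML-DSA: the modulus `Q` is the genuine ML-DSA value, but the commitment, the bounds `GAMMA`/`BETA`, and the tiny secret are illustrative stand-ins for the module-lattice algebra the standard actually uses.

```python
import hashlib
import secrets

# Toy illustration of Fiat-Shamir with Aborts (NOT real ML-DSA: the lattice
# algebra, encodings, and real bounds are omitted).
Q = 8380417          # the actual ML-DSA modulus, used here only as an integer modulus
GAMMA = 1 << 17      # toy bound for the ephemeral mask y
BETA = 2048          # must exceed max |c * secret| so accepted z hides the secret

def toy_sign(secret: int, message: bytes) -> tuple[int, int]:
    """Commit -> challenge -> response, retrying ("aborting") whenever the
    response z would leak information about the secret."""
    while True:
        y = secrets.randbelow(2 * GAMMA) - GAMMA         # uniform mask, no Gaussians
        commit = pow(3, y % (Q - 1), Q)                  # stand-in commitment
        h = hashlib.shake_256(commit.to_bytes(4, "big") + message).digest(2)
        c = int.from_bytes(h, "big") % 256               # small challenge
        z = y + c * secret                               # response
        if abs(z) <= GAMMA - BETA:                       # rejection ("abort") check
            return c, z                                  # safe to release

c, z = toy_sign(secret=7, message=b"tx")
assert abs(z) <= GAMMA - BETA
```

The key idea survives the simplification: instead of shaping `z` with a Gaussian sampler, the signer draws `y` uniformly and simply retries when `z` falls too close to the edge of its range, which is much easier to implement in constant time.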
Sizes at a Glance (FIPS 204)
- ML-DSA-44: pubkey 1,312 B, signature 2,420 B
- ML-DSA-65: pubkey 1,952 B, signature 3,309 B
- ML-DSA-87: pubkey 2,592 B, signature 4,627 B
By comparison, a typical DER-encoded ECDSA signature on secp256k1 is ~71–73 B (64 B in raw form), and a compressed public key is 33 B.
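To make the footprint gap concrete, here is a quick comparison of the FIPS 204 sizes above against typical secp256k1 ECDSA material (33 B compressed key, ~72 B DER signature):

```python
# FIPS 204 ML-DSA sizes vs. typical secp256k1 ECDSA material.
ML_DSA = {  # parameter set: (public key bytes, signature bytes)
    "ML-DSA-44": (1312, 2420),
    "ML-DSA-65": (1952, 3309),
    "ML-DSA-87": (2592, 4627),
}
ECDSA_PUBKEY, ECDSA_SIG = 33, 72  # compressed key, typical DER-encoded signature

for name, (pk, sig) in ML_DSA.items():
    print(f"{name}: pubkey ~{pk / ECDSA_PUBKEY:.0f}x, "
          f"signature ~{sig / ECDSA_SIG:.0f}x larger than ECDSA")
```

Even the smallest set, ML-DSA-44, carries a signature more than 30× larger than a DER-encoded ECDSA signature, which is why the fee and storage planning discussed below matters.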
Security Levels
ML-DSA parameter sets correspond to NIST security levels 2, 3, and 5 (ML-DSA-44/65/87 respectively). Choose the level that matches your assurance goals and footprint constraints.
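Since sizes grow with security level, a reasonable default is the smallest parameter set that still meets your required level. A minimal helper for that selection logic (the mapping itself is from FIPS 204; the function name is illustrative):

```python
# NIST security level claimed by each ML-DSA parameter set (FIPS 204),
# ordered smallest footprint first.
LEVELS = {"ML-DSA-44": 2, "ML-DSA-65": 3, "ML-DSA-87": 5}

def choose_parameter_set(required_level: int) -> str:
    """Return the smallest ML-DSA parameter set meeting the required level."""
    for name, level in LEVELS.items():
        if level >= required_level:
            return name
    raise ValueError(f"no ML-DSA parameter set reaches level {required_level}")

print(choose_parameter_set(3))  # -> ML-DSA-65
```

Note the gap: there is no Level 4 set, so a requirement of Level 4 rounds up to ML-DSA-87.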
Curious about technical specifics, implementation insights, and auditing details?
Explore our full research paper, featuring:
- Performance benchmarks
- Detailed parameter guidelines (ML-DSA-44, ML-DSA-65, ML-DSA-87)
- Auditing best practices and security analysis
📄 Download Dilithium research paper
Practical Trade-Offs for Builders
Footprint (size, bandwidth, fees). Keys and signatures are larger than classical ECC. Plan for higher on-chain storage and network overhead if signatures or keys appear on-chain.
Parameter choice. Map your assurance requirements to ML-DSA-44/65/87 (NIST Levels 2/3/5). Higher levels increase sizes.
Implementation discipline. ML-DSA’s security relies on precise parameter sets and careful constant-time coding, not on ad-hoc tweaks.
Dilithium Implementations: What to Look For
Many open-source Dilithium implementations (C/C++, Rust, Go, Python) exist, yet few have undergone thorough third-party security audits. Before integrating Dilithium into your blockchain stack:
- Evaluate maturity of available implementations.
- Prioritize comprehensively audited codebases.
- Favor actively maintained libraries with documented security reviews.
Note that a library’s overall audit does not automatically imply a dedicated audit of Dilithium/ML-DSA itself; verify the scope.
Do / Don’t for Teams Shipping ML-DSA
Do
- Start with a code/infra inventory and plan for algorithm agility internally.
- Pilot off-chain verification or L2 runtime support where you control costs.
- Follow FIPS 204 parameter sets exactly; pin library versions and run vectors.
Don’t
- Don’t assume “PQC is too slow” – measure with realistic message sizes and key/signature handling.
- Don’t treat PQC as a silver bullet; you still need sound key management, audits, and secure updates.
Rigorous auditing before deployment isn’t optional – it’s essential. Common pitfalls include:
- Incorrect parameters (always verify against NIST's FIPS 204)
- Poor sampling practices (uniform sampling must avoid biases and leakage)
- Vulnerability to side-channel attacks (properly implemented constant-time arithmetic is a must)
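The second pitfall is worth a concrete sketch. Reducing random bytes with a plain `% Q` over-weights small values; the bias-free pattern, which FIPS 204 itself uses when expanding SHAKE output into coefficients, is rejection sampling: mask a candidate to the right bit width and discard it if it lands outside the range. The function name and buffer sizing below are illustrative; `Q` and the 3-byte/23-bit read are the real ML-DSA values.

```python
import hashlib

Q = 8380417  # ML-DSA modulus; coefficients must be uniform in [0, Q)

def sample_uniform_mod_q(seed: bytes, count: int) -> list[int]:
    """Rejection-sample `count` coefficients uniformly mod Q from SHAKE-256
    output. Out-of-range candidates are discarded rather than reduced with
    `% Q`, since a plain modulo reduction would bias small values."""
    out, offset = [], 0
    stream = hashlib.shake_256(seed).digest(8 * count)  # oversampled buffer
    while len(out) < count and offset + 3 <= len(stream):
        # Read 3 bytes -> candidate in [0, 2^23), top bit masked off.
        candidate = int.from_bytes(stream[offset:offset + 3], "little") & 0x7FFFFF
        offset += 3
        if candidate < Q:        # keep only in-range values: no modulo bias
            out.append(candidate)
    return out

coeffs = sample_uniform_mod_q(b"example-seed", 8)
assert all(0 <= c < Q for c in coeffs)
```

Because 2²³ is only slightly larger than Q, fewer than 0.1% of candidates are rejected, so the loop is cheap and, crucially, data-independent in what it leaks.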
Conclusion
If your assets have long lifetimes or public keys are exposed early, begin PQ planning now. Use government roadmaps (e.g., complete transitions by early-to-mid 2030s) as conservative planning anchors for high-value systems.
Want a concrete plan? Start with a 2-week PQC readiness review: inventory your signature surfaces, choose an ML-DSA level, prototype a hybrid or off-chain verify path, and run an implementation audit against FIPS 204. Then iterate.
Or dive deeper right now: read the full research PDF and share it with your core team.