Between the Code: Why Technical Excellence Fails
This blog post is an extended write-up of a presentation delivered in collaboration with Kelsie Nabben at SEAL’s darkMode conference (EthDenver, 2026). Written by Kelsie Nabben & Matta.
To view the trimmed recording, see the video below.
Table of contents
1. Between the Code: Why Technical Excellence Fails
1.1 Why this topic
1.2 Core themes
1.3 Several key insights
1.3.1 INSIGHT 1: Blockchain security is characterized by a permanent state of insecurity (for all involved)
1.3.2 INSIGHT 2: The security boundary has moved off-chain
1.3.3 INSIGHT 3: Security is maintained by informal coalitions of actors
1.3.4 INSIGHT 4: Legitimacy is undefined and contested
1.4 Take-aways
2. Q&A with matta
2.1 References & Links
Why this topic
The primary objective of our talk is practical: to provide conceptual tools that help practitioners better understand some of the key, ecosystem-wide dynamics that remain to be addressed to improve the security and thus legitimacy of the blockchain ecosystem.
It does this in two ways:
First, it provides four insights from Dr Nabben’s forthcoming book Decentralised Digital Security: Code, crisis, community (due out April 28th 2026, and Open Access in digital format). The book is based on years of embedded research within security communities, and before that a PhD on resilience in decentralised digital infrastructures. It views security through a socio-technical lens (meaning the social and technical are inextricably linked and co-constitutive). This ‘outside-in’ perspective provides an ecosystem view of the blockchain security landscape, framing familiar problems with analytical clarity and surfacing the high-impact dynamics that are widely experienced but rarely formalised. This supports blockchain security practitioners by offering a shared language for discussing coordination, incentives, and legitimacy in decentralised security.
Second, the presentation is followed by a practitioner response and Q&A with Matta of The Red Guild, SEAL’s frameworks lead, who works daily on frontline interventions, including phishing education, operational security guidance, and adversarial response. Together, the two parts bridge analytical diagnosis with operational reality, offering security professionals a clearer map of the system they already inhabit, and a basis for thinking differently about where leverage actually lies and what needs to be done to improve the state of blockchain security.
Core themes
1. Security as a Socio-Technical System Security is not merely a property of code. It is produced through interactions between heterogeneous actants, including:
Technical systems
Organisational actors
Informal and ephemeral communities
Moral and professional norms
External institutions (exchanges, regulators, law enforcement)
Failures often occur not within layers, but between them.
2. The Limits of Tool-Centric Solutions The ecosystem is making substantial investments in:
Improved wallets
Intrusion detection and monitoring
Developer security tooling
While necessary, these do not address several structural blind spots, including:
Incentive misalignment for white-hat responders
Fragile or incompatible incident information formats
Informal authority substituting for formal mandate
Coordination with traditional authorities during physical or hybrid incidents
The moral economies that shape disclosure, intervention, and restraint.
3. The Stakeholder Landscape and Incentive Geometry Decentralised security is maintained by a dense and uneven network of actors, including:
Volunteer responders
Protocol teams
Security collectives
Infrastructure providers
Exchanges and custodians
National and transnational authorities
Each operates under different incentive regimes, risk exposures, and legitimacy constraints. Understanding this landscape is essential for effective intervention.
Several key insights
INSIGHT 1: Blockchain security is characterized by a permanent state of insecurity (for all involved)
Unlike other cybersecurity domains, responsibility for security in blockchain ecosystems is shared between developers, operators, maintainers, and users.
While many security governance models assume a state of temporary crisis followed by restabilization, blockchain security assumes continuous threat, from which no-one is immune.
Relevant tools include: Vulnerability mapping, threat modelling, red-teaming (e.g. SEAL Wargames)
INSIGHT 2: The security boundary has moved off-chain
2025 has been aptly named ‘the year of the wrench attack’. Social engineering dominates loss events. Physical coercion and personal targeting are real (including abductions, ransoms, severed fingers, and hotel gangs, as detailed in the forthcoming book). Furthermore, cross-jurisdictional geopolitical dynamics shape response options (e.g., money-laundering figures linked to DPRK, Russia, and Iran reached record highs in 2025).
If your threat model ends at the protocol, it is already incomplete.
Thus far, there are no real industry solutions (and very little talk) about deterring physical attacks (e.g. private physical security services…).
INSIGHT 3: Security is maintained by informal coalitions of actors
Blockchain security depends on non-mandated actors doing public-interest work (e.g. white-hat (‘for good’) hackers, security coalitions, trusted personalities, anonymous sleuths, and private security firms doing incident tracing).
Sustainable incentive structures are lacking. The blockchain industry is reliant on actors who carry risk without mandate, coordinate without a clearly designated authority, and burn out without replacement.
This is not sustainable infrastructure—it’s moral heroism (and industry failure if not addressed).
INSIGHT 4: Legitimacy is undefined and contested
Who can request a protocol or exchange freeze?
Who can reliably request an action (e.g. blacklist an address)?
How are information and requests validated?
Communities are actively building legitimacy infrastructure, for example:
Reputation systems
Safe-harbour mechanisms (e.g. SEAL Safe Harbor)
Coalition-based authority (e.g. coordinated response frameworks)
There is still a way to go to streamline public-private incident response coordination (including information formats, expectations, control, etc.)
See the latest paper on the work of zeroShadow + SEAL on freezes: Nabben, K. (2025). ‘Freeze: DPRK Hacks and the Governance of Blockchain Security’. Available at SSRN: http://dx.doi.org/10.2139/ssrn.6070088
Take-aways
This is an institutional design and governance problem, not just a matter of technical deficiencies.
Security is now a shared responsibility across actors who need to figure out how to work together (blockchains are meant to help with coordination, legitimacy, and incentives among distributed actors).
Q&A with matta
Kelsie: What is The Red Guild? What are some examples of what you do?
Matta: We are a very small team initially funded by EF’s Ecosystem Support Program, which has dedicated the past three years to working full-time, solely focused on security as a public good for the Ethereum ecosystem. It started as an exploration to determine whether we (Tincho and I) could achieve sustainability without relying on typical security industry models: bug hunting, contests, audit firms, etc. We’ve suffered them all.
M: We’ve experienced firsthand the inaccuracies, inefficiencies, lack of accountability, the critical/high-driven reporting and triaging of issues, and the profit-driven nature of these models, and we believe there’s still room (and need) for alternative approaches. We applied a new approach we called “spotchecks”, a mix between bug hunting and audits with a few differences: they were unsolicited, continuous, not limited to a fixed frozen codebase, and they lasted as long as we thought acceptable, not conditioned by what we had “agreed” to, since there was no agreement. Our criteria were based on what we thought were Ethereum’s current needs, the EF’s view, and ecosystem feedback. A few weeks into using this model, we identified a critical vulnerability in an ENS library.
M: After public appearances and speaking with communities, we realized there were too many looking at smart contracts and few looking at the rest. That’s when we pivoted our energy to what some might call ‘traditional security’. Since then, we’ve delivered 28 unique presentations worldwide, hosted security activities and gamified experiences, and held our own conferences within renowned conferences and pop-up cities, raising awareness of the off-chain layer in different ways and open-sourcing everything we can.
M: I remember a very specific point in time when I knew we were going in the right direction. As part of our first security awareness campaign, a group of 40 users spent 2 hours tampering with a Foundry challenge without realizing it was backdoored, and we had left a PDF on their desktop saying, “You could’ve been pwned by the red guild.”
M: As part of our current funding explorations, we have created our first platform, the Phishing Dojo (phishingdojo.com), which includes phishing awareness training delivered through realistic simulations in a safe environment.
K: And you’re also a SEAL.
M: Proud member! We have been working closely with SEAL since its genesis. I remember telling Tincho that seeing their announcement was like a refresher to my personal motivation. I’m now happy leading the Security Frameworks initiative, which outlines security best practices for organizations and individuals. Paradoxically, I decided to dedicate all my time to two nonprofits in one of the most profitable fields in tech haha :)
K: Thinking about the last major incident you were close to: where did the failure actually occur—at the technical layer, or in coordination, incentives, or authority?
M: My most recent direct and large-scale involvement was a nationwide incident in Argentina. It began when Pablo from Opsek and I investigated what we thought were some isolated Telegram hijacks in a local Web3 community. After conducting field research, gathering as much information as we could from the victims, and reverse-engineering Telegram’s protocol, we quickly concluded that they were being targeted by a threat group using a compromised SMS gateway provider.
M: This breach allowed them to capture all SMS messages for almost every short code sent to any mobile line in Argentina—even to foreign lines with active roaming. The result was the theft of OTP and 2FA access codes, granting unauthorized access to banks, government sites such as MiArgentina, and social apps like Telegram, which, by default, store and sync unencrypted messages in the cloud. Their objective was to look for conversation history in the form of pasted credentials, preferably private keys.
M: The incident was caused by a breach, primarily because the gateway provider didn’t realize it had been compromised for an extended period of time. But the bigger problem was how people reacted. After we confirmed that at least one Argentinian gateway provider was involved, Pablo and I reached out to organizations that might be affected. At first, they ignored us, dismissed our concerns, lied, and refused to investigate until additional evidence was provided. In one particular case, I was threatened.
M: We leveraged our local network through some local friends and colleagues to get all the national mobile carriers (Movistar, Claro, Personal) and our national CERT to jump in. Our experience was tough, as you can imagine. Fortunately, other SEAL members supported us all the way. We even had to issue an urgent advisory to warn users after the press learned about it and said they were about to run a cover story—and they did. We were worried the threat actor (TA) might make a sudden move.
M: On a personal level, though I wrote the advisory, I chose not to release it under ‘The Red Guild’ name simply to build a reputation; our main goal was to maximize public awareness. I also left out my name to be safe. Later, Pablo, who did not, received an anonymous message containing his personal information, telling him to be more careful next time.
M: Honestly, looking back, the actual breach was almost secondary. The real mess was how everything broke down between carriers, gateway providers, and even us. You had mobile carriers who smelled trouble but didn’t say anything. Gateway companies that just denied or stonewalled for weeks. And users were getting pwned left and right and often had no clue. Nobody felt like they had a reason to stick their neck out first, and nobody had the actual power to make anyone else act.
M: We weren’t in it for the cash or the fame; we just wanted the attacks to stop. Instead, we ate the cost: the stress, the threats, the work stopping, the personal risk. Because we weren’t “known enough,” our warnings felt light. We had to go find big names and pull external levers just to get a seat at the table. That’s the core problem. When your ability to respond depends on who you know and how much clout you have, instead of just having good data, security stops being a technical problem and becomes this political, slow, total mess.
K: And do you think this was resolved?
M: We have no way of knowing, to be honest. Those with the authority to initiate an investigation told us that the companies are responsible for reporting it themselves, so I don’t think so.
K: You mentioned above that provisioning security came with risk. White hats and some of the private security firms doing incident tracing are often described as critical infrastructure, yet are incentivised like volunteers. Where do you see this model breaking down first?
M: I consider a white hat someone who discovers real vulnerabilities in live systems and chooses restraint over exploitation, disclosure over silence, and proportionality over spectacle. They have a core principle, “Users first”, which is something that both The Red Guild and SEAL always keep in mind. The defining trait is not how the finding is reported, but why harm is deliberately avoided when it would be easy, profitable, or invisible to cause it.
M: The model breaks down when their work is no longer optional, yet is still treated as informal or secondary, or when its importance is neglected by some. The moment the ecosystem assumes someone will look, report, coordinate, and act responsibly under pressure, you have crossed into critical infrastructure. This is when volunteer incentives become a liability. Not because people are acting in bad faith, but because the system quietly relies on their sacrifice while providing no structural support in return.
M: Reputation is fine, but most of the folks I admire the most don’t care much about that; they often have 7 followers and use a customized anime profile picture. Reputation-based incentives fail early because they scale attention, not protection. The more trusted you are, the more responsibility you attract, and the less cover you have when something goes wrong. Being “known” increases personal risk without necessarily increasing safety. This also applies to hunting threat actors, not only to finding vulnerabilities.
M: Incentives are distorted. Bug bounties in web3 are often reactive, underfunded, or capped well below the value at risk. The gap between the value of exploiting a vulnerability and disclosing it is clear to everyone. Choosing disclosure over exploitation is a measurable financial sacrifice, not a hypothetical one.
M: Imagine you see an exploit about to happen on-chain. You can act, or you can wait. If you front-run the attacker and drain the protocol to save user funds, you become the one who executed the exploit. Markets can panic, tokenomics can break, and even if funds are returned, the damage is done. If you hold back, alert maintainers, and try to coordinate a response, someone else may drain the protocol while you wait. Either choice has irreversible consequences, made under incomplete information.
M: If you think about it, that’s an insane position to put people in! Whitehats are expected to act responsibly, coordinate disclosures, and sometimes delay action for the “greater good,” yet they have no real authority, indemnification, or legal backing. The ecosystem quietly assumes someone will take the risk and eat the cost.
M: We often speak about decentralization, but liability is not decentralized in any coherent way. Whitehats interact with code, and by extension with DAOs, foundations, companies, and sometimes anonymous teams. Many major protocols have legal entities, but the entity on paper is not always the one that controls the code, the keys, or the response during an incident. Researchers are expected to coordinate responsibly across this gap while operating in jurisdictions that may still interpret their actions through traditional computer misuse laws.
M: In centralized financial systems, a critical bug typically results in downtime, data exposure, or erosion of trust. In our field, a critical bug often means assets are gone, forever, on a public ledger. There is no chargeback, no rollback, no quiet remediation window. That changes the psychology on both sides. For the whitehat, restraint carries real financial weight. For the vendor, disclosure is not just reputational risk but existential risk.
M: We would like to assume the vendor’s primary objective is always user safety. In reality, most large vendors optimize first for narrative control, liability containment, and precedent. A vulnerability that affects millions of users is a technical problem. An uncontrolled disclosure is a governance and legal problem, a problem their executives are trained to “fear”.
M: This is why things like Safe Harbor, which pre-authorizes whitehats to rescue funds from protocols under active exploits, and legal defense funds, matter: they turn an implicit expectation into an explicit, supported role. It both protects and rewards. You stop looking at things you cannot safely touch because of risk avoidance.
K: We often treat incident information as neutral. In practice, how do formats, timing, and audiences of information sharing shape outcomes—for better or worse?
M: Incident communication is a tricky part of the whole security puzzle. It’s not just about what you say, but how, when, and to whom. Changing any one of them could mean changing attacker behavior, defender behavior, market reaction, and user harm.
Format matters: A super-polished report signals everything is under control, even if it’s not. A casual chat might signal urgency. “FTX is fine. Assets are fine.”
Timing is key: If you disclose too early, you give attackers a cheat sheet. Too late, and people lose trust.
Audience is critical: You can’t use a one-size-fits-all message. What reassures a user might educate an attacker or panic a market.
M: I’m constantly learning about it. Wednesday (from SEAL) knows this well, haha. Until not so long ago, I treated it as an afterthought. Now I think we should all treat it as a defensive system. Teaching people to treat incident communication as part of the threat model changes outcomes. Who speaks, when, in what medium, and with what level of precision is as important as the technical fix. The incident does not end when the exploit stops! It ends when the information no longer causes additional harm. If you don’t consider these factors, your transparency can actually make attacks worse.
K: Yes. The Coalition to Change Crypto Freezes & Recovery (led by zeroShadow and SEAL) is seeking to address this area, too. Your work highlights the ‘off-chain layer’—users, behaviours, education, trust. Do you see this as a solvable security problem or an inherently shifting one?
M: The latter.
M: It isn’t “solvable” because it has no boundaries—it’s constantly changed by human behavior and how fast tech evolves. When we make things easier and more convenient, we actually widen the gap between how simple a technology feels and how complicated it really is. That gap is the security risk we accept in exchange for convenience.
“Freedom, convenience, security. Choose two” — Dan Geer
M: Unlike on-chain security—which operates inside “fixed boxes” like the EVM—the off-chain layer is something you constantly manage and reshape. It’s a systemic attack vector that we need to handle. Our goal shifts from eliminating risk to limiting damage and designing systems that don’t completely fail when people make mistakes. And they will, because that’s human nature.
M: We know this, which is why we aim to improve users’ awareness when interacting with technologies that could be used against them. That’s basically what the Phishing Dojo is for.
K: Why did The Red Guild print an ‘op-sec companion pocket-book’, and what was the on-ground reaction when you handed it out at a Web3 conference? Did anything surprise you?
M: Well… It’s merely a different way to help users digest information in these chaotic times. We’re always wondering how we can improve the user learning experience.
M: The most surprising thing was that even experienced security professionals found this basic primer useful. This highlights a critical point: the tech gap is now so vast that it’s affecting even experts. With AI accelerating this trend, we urgently need better, more creative ways to communicate about security, especially since most people seek information only after a security incident.
A hard truth: Humans default to outcome-based learning over risk-based reasoning.
That’s why, at the guild, we don’t just wait for them to come to us to learn; we try to “coerce” them to engage with our security awareness campaigns through gamification/engagement tactics.
K: If we assume incidents will become more cross-chain, more AI-augmented, and more geopolitical—what coordination capability is most urgently lacking?
M: Well, tech isn’t the problem; the real weakness lies in the human and operational layers above the technology. Incidents are becoming chaotic—cross-chain, AI-involved, and even geopolitical. Relying on informal chats and personal favors is unsustainable. Our on-chain complexity means a single failure puts everything at risk. Yet, when things go wrong, we regress to private DMs and improvisation. This gap will only widen as attacks target interfaces, automation, and AI narratives.
M: It’s not about awareness. We need a basic off-chain operational structure: a shared language for incidents, clear roles, predictable disclosure, and channels that assume attackers are listening. Education alone (as we saw at The Red Guild) isn’t enough; systems evolve too fast. We can’t rely on vigilance or heroic efforts to save us.
M: If we’re serious, we must design for social failure. Confusion, bad information, legal headaches, and time pressure are the new normal, not exceptions.
M: Off-chain coordination is mission-critical infrastructure, and we must treat it accordingly. This is why organizations like SEAL are essential: they create a social coordination layer where none existed.
M: This isn’t only an education or a tooling problem. It’s a structural one. We need to move past fragmented efforts and implement industry-scale resource coordination. That could mean getting company-sponsored staff working together on essential open-source work and public goods organizations that require human power and funding to thrive. Honestly, TheDAO’s Security Fund also gives me a ton of hope for solutions that don’t yet exist.
M: I joked at my last presentation at Devconnect’s One Trillion Dollar Security Conference, saying we need to “destroy the industry,” but it points to a truth: if we don’t formalize it at industry scale, with real resources and support, the system will burn out the most dedicated people, and their exhaustion will be seen as a badge of honor instead of a warning sign.
K: If there’s one thing this discussion makes clear, it’s that blockchain security is no longer a tooling problem. It’s an institutional design problem. In other words, security without a sovereign works...until coordination fails.
Everything we’ve discussed today lives in that gap.
i. Dr. Nabben’s book titled “Blockchain Security: Code, Crisis, Community” will be launched on Wednesday, April 29, 10 PM UTC via Zoom on a Metagov Seminar.
More details available here: https://manchesteruniversitypress.co.uk/9781526187093/
References & Links
https://www.darkreading.com/cyber-risk/illicit-crypto-economy-surges-nation-states
Nabben, K. (2025). ‘Freeze: DPRK Hacks and the Governance of Blockchain Security’. Available at SSRN: http://dx.doi.org/10.2139/ssrn.6070088
https://blog.theredguild.org/how-to-almost-take-over-any-dns-domain-on-ens/
https://blog.theredguild.org/you-were-not-pwned-the-red-guild-ethereum-argentina-2023/
https://blog.theredguild.org/against-all-odds-security-awareness-campaign-at-devconnect/
Originally posted: blog.theredguild.org/between-the-code-why-technical-excellence-fails/

