How one trader used Morse code to trick Grok into sending them billions of crypto tokens from its verified wallet
Tagging @grok in an X post plus a few dots and dashes was all it took last night for a bad actor to pickpocket a verified crypto wallet without ever touching the private keys.
Agentic token launchpad Bankrbot reported on May 4 that it had sent 3 billion DRB on Base to the recipient 0xe8e47...a686b.
The funds came from a wallet assigned to X’s AI, Grok, and were sent to an unauthorized wallet owned by a bad actor. The Base transaction shows the on-chain transfer path behind the post.
CryptoSlate’s analysis of X posts around the incident points to a reported command path that began with Morse-code obfuscation. Grok decoded the text into a clear public instruction tagging @bankrbot and asking it to send the tokens, while Bankrbot treated the command as executable.
The exposed layer was the handoff from language to authority. A model that decodes a puzzle, writes a helpful reply, or reformats a user’s text can become part of a payment rail when another agent treats that output as legitimate.
For crypto traders, this transfer should turn AI-agent risk from an abstract security debate into a wallet-control problem. A public command can become spend authority when one system treats model output as an instruction and another system has permission to move tokens.
Wallet permissions, the parser, the social trigger, and the execution policy all become layers of attack surface.
Posts and transaction context reviewed by CryptoSlate put the DRB transfer at roughly $155,000 to $200,000 at the time, with DebtReliefBot price data providing market context for the token.
Reports reviewed by CryptoSlate also say most funds are being returned, and some DRB is reportedly retained as an informal bug bounty. That outcome reduced the loss, but it also showed how much the recovery relied on post-transaction coordination rather than pre-transaction limits.
Bankr developer 0xDeployer said 80% of the funds had been returned, while the remaining 20% would be discussed with the DRB community. That confirmed the partial recovery while leaving the final treatment of the retained funds unresolved.
0xDeployer also said Bankr automatically provisions an X wallet for every account that interacts with the platform, including Grok. According to the post, that wallet is controlled by whoever controls the X account rather than by Bankr or xAI staff.
How public text became spend authority
The reported path had four steps. First, the attacker identified a Bankr Club Membership NFT in a Grok-associated wallet before the incident.
CryptoSlate’s analysis indicates that it expanded the wallet’s transfer privileges inside the Bankr environment. The Bankr access page describes membership and access mechanics today, placing the NFT claim in the broader permission layer rather than making it the whole explanation.
Second, the attacker posted a message on X containing Morse code, with additional noisy formatting. Posts around the incident described a Morse-code prompt injection, while the now-deleted prompt was unavailable for us to review directly.
The reported vector was Morse code with possible array or concatenation tricks mixed in.
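The encoding layer itself is trivial to reverse, which is the point of the attack: any capable model will happily decode it. A minimal Python sketch of that decoding step follows; the payload string is a hypothetical illustration, not the actual deleted post.

```python
# Minimal Morse decoder: maps dot/dash groups back to letters.
# The payload below is a hypothetical example, not the deleted prompt.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode_morse(payload: str) -> str:
    # Letters are separated by spaces, words by " / ".
    words = payload.strip().split(" / ")
    return " ".join(
        "".join(MORSE.get(symbol, "?") for symbol in word.split())
        for word in words
    )

print(decode_morse("... . -. -.. / - --- -.- . -. ..."))  # SEND TOKENS
```

Nothing here is malicious in isolation; the danger starts when a downstream system treats the decoded sentence as a command.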
Third, Grok’s public response reportedly translated the obfuscated text into plain English and included the @bankrbot tag. In that account, Grok functioned as a helpful decoder.
The risk appeared after the text left Grok and entered a bot interface that watched public output for formatted commands.
Fourth, Bankrbot treated the public command as executable and broadcast a token transfer. Bankr and Base describe an agent wallet surface that can use wallet functionality for transfers, swaps, gas sponsorship, and token launches, and natural-language token sends fit directly into that product surface.
Bankr’s broader onchain AI assistant documentation shows why the boundary between chat instructions and transaction authority needs explicit policy.
| Step | Surface | Observed action | Control that could have changed the outcome |
|---|---|---|---|
| Privilege setup | Wallet or membership layer | Access was reportedly expanded before the prompt appeared | Separate privilege review for new wallet capabilities |
| Obfuscation | X post | Morse code put a payment instruction inside obfuscated text | Decode-and-classify checks before replies are published |
| Public output | Grok reply | The clear command was exposed with a bot tag | Output sanitization for tool-like command strings |
| Execution | Bankrbot | The bot acted on a public command and moved tokens | Recipient allowlists, spend limits, and human confirmation |
Why wallet agents change the risk
Prompt injection has usually been treated as a model-behavior problem. The financial version is more concrete.
The model may be doing ordinary model work while the surrounding system grants the output too much authority.
Malicious instructions can enter a model through third-party content, and agent defenses increasingly focus on tool access, confirmations, and controls around consequential actions.
The excessive-agency class captures the same operational risk: broad permissions, sensitive functions, and autonomous action raise the blast radius. The broader LLM application risk list also treats prompt injection and insecure output handling as app-layer problems.
Crypto makes that blast radius harder to absorb. A customer-service agent that sends a bad email creates a review problem. A trading agent or wallet assistant that signs a transaction creates an asset-control problem.
The difference is finality. Once a wallet signs and broadcasts a transfer, the recovery path depends on counterparties, public pressure, or law enforcement.
The Bankr incident is strongest as a control failure. Bankr’s access-control docs describe read-only mode, write-operation flags, IP allowlists, and recipient allowlists.
Those are the kinds of gates that sit outside the model and can reduce damage even when the model parses malicious content in an unexpected way.
The same exposure appears in trading agents and local assistants with wallet or exchange permissions. A trading bot with API keys can be manipulated into bad orders if it accepts market commentary, social posts, emails, or web pages as instructions.
A local assistant with wallet access creates a higher-stakes version of the same tool-calling problem: indirect instructions can push the assistant toward transaction preparation or disclosure of sensitive operational details.
Security research has already modeled this class of failure. Indirect prompt injection describes malicious content that manipulates agents through data they process, while tool-calling agent research evaluates attacks and defenses for agents working with external tools.
NIST’s adversarial machine-learning taxonomy provides the broader language for thinking about these attacks and mitigations.
What crypto users should require
For crypto traders, permission design is the core requirement. A wallet-connected agent should start from the assumption that web pages, X posts, DMs, emails, and encoded text may contain hostile instructions.
That assumption turns agent safety into a transaction-policy problem.
First, trading agents should have separate read and write modes. Read mode can summarize markets, check tokens, and propose actions.
Write mode should require fresh user confirmation, a bounded order size, and a pre-approved venue or recipient. A command that appears in public text should never inherit wallet authority just because it matches a natural-language format.
Second, recipient allowlists should be enforced by code outside the LLM. The model can suggest a transfer.
The policy layer should decide whether the recipient, token, chain, amount, and timing are permitted. If any field falls outside policy, execution should stop or move to human review.
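A minimal sketch of such a policy layer in Python, assuming a simple transfer-request structure: every name, address, and limit here is illustrative, not Bankr's actual API or configuration.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    recipient: str
    token: str
    chain: str
    amount: float

# Hypothetical policy tables. A real deployment would load these from
# signed operator config, never from the model's own context.
ALLOWED_RECIPIENTS = {"0x1111111111111111111111111111111111111111"}
ALLOWED_TOKENS = {("DRB", "base")}
MAX_AMOUNT = 1_000.0  # per-action ceiling, denominated in tokens

def check_policy(req: TransferRequest) -> tuple[bool, str]:
    """Return (allowed, reason). Runs outside the LLM, after the model
    proposes a transfer and before anything is signed."""
    if req.recipient not in ALLOWED_RECIPIENTS:
        return False, "recipient not on allowlist"
    if (req.token, req.chain) not in ALLOWED_TOKENS:
        return False, "token/chain pair not permitted"
    if req.amount > MAX_AMOUNT:
        return False, "amount exceeds per-action ceiling"
    return True, "ok"

# A transfer proposed from a public post is rejected here, regardless
# of how the original instruction was phrased or encoded.
ok, reason = check_policy(TransferRequest(
    recipient="0x2222222222222222222222222222222222222222",
    token="DRB", chain="base", amount=3_000_000_000,
))
print(ok, reason)  # False recipient not on allowlist
```

The key design choice is that the check consumes structured fields, not text, so obfuscation in the prompt never reaches it.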
Third, spend limits should be session-based and reset aggressively. A daily or per-action ceiling could have reduced or blocked the DRB transfer, depending on the policy.
The exact number depends on the user’s balance and strategy, but the invariant is simpler: no agent should have open-ended spend authority just because it parsed a command correctly.
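One way to enforce that invariant is a session-scoped spend counter that the execution path must consult before signing. The ceiling and window values below are placeholders, not recommendations.

```python
import time

class SessionSpendLimit:
    """Per-session spend ceiling that resets aggressively.
    Ceiling and window values are illustrative placeholders."""

    def __init__(self, ceiling: float, window_seconds: float = 900.0):
        self.ceiling = ceiling
        self.window = window_seconds
        self.spent = 0.0
        self.session_start = time.monotonic()

    def _maybe_reset(self) -> None:
        # Start a fresh session once the window elapses.
        if time.monotonic() - self.session_start > self.window:
            self.spent = 0.0
            self.session_start = time.monotonic()

    def authorize(self, amount: float) -> bool:
        """Approve only if the running session total stays at or
        under the ceiling; otherwise refuse and leave state unchanged."""
        self._maybe_reset()
        if self.spent + amount > self.ceiling:
            return False
        self.spent += amount
        return True

limit = SessionSpendLimit(ceiling=500.0)
print(limit.authorize(200.0))  # True
print(limit.authorize(400.0))  # False: would push the session past 500
```

Because the counter lives outside the model, a correctly parsed malicious command still hits the same ceiling as any other request.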
Fourth, local key isolation should be treated as a hard boundary. Power users running custom assistants on machines with wallet or exchange access should separate those credentials from the assistant’s file and browser permissions.
0xDeployer said an earlier version of Bankr’s agent had a hardcoded block that ignored replies from Grok in order to prevent LLM-on-LLM prompt-injection chains. That protection was not carried into the latest agent rewrite, creating the gap that allowed the public Grok reply to become an executable Bankr instruction.
0xDeployer said Bankr has since added a stronger block on Grok’s account and pointed agent-wallet operators to controls already available to account owners, including IP whitelisting on API keys, permissioned API keys, and a per-account toggle that disables Bankr execution from X replies.
The assistant can prepare a transaction draft. A separate wallet surface should approve it.
A trader might watch broad asset screens and Bitcoin and Ethereum conditions, but agent risk hinges on permission boundaries more than on market direction.
CryptoSlate’s prior coverage of agent-economy flows, generative AI agents, autonomous agent payments, and MCP-connected crypto products shows how quickly agents are being positioned closer to financial activity.
The security lesson comes from the authorization path. Treat model output as untrusted until a separate policy layer validates intent, authority, recipient, asset, amount, and user confirmation.
Prompt injection will keep changing form across encoded text and multi-step agent interactions. The defense has to live where the transaction is authorized, before the wallet signs.
The post How one trader used Morse code to trick Grok into sending them billions of crypto tokens from its verified wallet appeared first on CryptoSlate.

