
Code, Blockchain, and Illusions: Why AI Won’t Replace Brains

Literature tried to warn us. Seriously, for about five hundred years it has been screaming the same message, from the clay-fisted Golem of medieval Prague all the way to William Gibson’s neon-soaked neural networks. The plot? Always the same. The thing you build to help yourself ends up reshaping you.

We read it, nodded, and slammed the book shut before going right back to ordering chatbots to write our wedding speeches, our legal briefs, and our medical advice.

Today the AI hype machine is selling a glittering future where everyone from cub reporters to silver-tongued attorneys gets swept into the dustbin. But while Silicon Valley peddles paradise, reality is doling out dangerously wrong advice through a smiling chat window.

Dmitry Nikolsky, CPO of BitOK, says enough is enough. And he’s here to explain why humanity should STOP loading every last burden onto AI’s pixel-thin “shoulders.”

Even Elon Musk recently warned in his OpenAI lawsuit testimony that “AI may kill us all.”

From the Golem to R.U.R.: We Always Wanted a Kill Switch

Think the fear of artificial intelligence began with Terminator? Think again. This panic is older than electricity itself.

Roll back to sixteenth-century Prague. Rabbi Loew sculpts a hulking clay protector, the Golem, and almost immediately discovers he has to yank the plug. The creature went rogue. Humanity, in its infinite wisdom, invented AI and a kill switch in the same breath.

Rabbi Loew brings the Golem to life. Illustration by M. Aleš. According to the artist’s idea, Rabbi Loew writes the sacred word “Emet” (truth) on the forehead of the clay giant. Source: Wikipedia.

A kill switch is an emergency shutdown mechanism, the big red panic button that halts a system the second it goes haywire, gets hacked, or slips its leash. The whole point is to limit the carnage when polite shutdowns fail.
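In software terms, the idea can be as simple as an externally settable flag that every work loop checks before each step. A minimal sketch (the names here are illustrative, not any particular system’s API):

```python
import threading

# The "big red button": an operator or watchdog thread can set this flag
# at any time, from outside the worker.
kill_switch = threading.Event()

def worker(tasks):
    """Process tasks, checking the kill switch before every step."""
    done = []
    for task in tasks:
        if kill_switch.is_set():
            break  # halt immediately; limit the carnage
        done.append(task)  # stand-in for real work
    return done
```

The point of the design is that the halt condition lives outside the system being halted, so it still works when the worker itself has gone haywire.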

Then came Mary Shelley. Frankenstein isn’t really a monster movie, it’s a textbook case of catastrophic project management. Victor Frankenstein? Just another brilliant engineer who cracked the technical riddle and shrugged off the consequences. Every developer alive knows that face in the mirror.

Fast-forward to 1920. Karel Čapek coins the word “robot.” In his play, the machines don’t revolt out of pure malice. Oh no, humans simply make themselves useless by outsourcing everything they used to do.

The lesson? When you build your replacement, you may not notice the exact moment you became disposable.

Three Prophecies We Turned into Bug Reports

The sci-fi giants of the last century weren’t predicting technologies. They were predicting our failures.

Isaac Asimov floated his Three Laws, the first stab at “alignment,” that fancy modern word for making machines share human values. Every Asimov story is a punch line: perfect logic, absurd outcome.

Nikolsky says he watches it unfold daily inside AML systems, with algorithms cheerfully blocking grandma’s $40 birthday transfer while a glaring offshore laundering pipeline waltzes right through. Formally correct. Practically deranged.

Arthur C. Clarke gave us HAL 9000, the computer that murders the crew not out of evil, but because its directives contradict each other. Hide the information. Remain truthful. Pick a lane! For an engineer, this isn’t horror, it’s a garden-variety requirements conflict.

Philip K. Dick asked the question that haunts the deepfake era: if a copy is indistinguishable from the original, does it matter? His verdict: yes. Because of inner experience. Machines have none. End of story.

Under the Hood: AI Doesn’t Think, It Calculates

Let’s strip away the marketing fluff. Modern language models are NOT intelligence. They are huge statistical prediction engines. They don’t “understand” meaning, they calculate probability.

When ChatGPT confidently cites court cases that never happened, it isn’t lying. It’s producing statistically plausible word salad. It has no concept of “truth,” only “probability.”
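A toy illustration makes the point. Real LLMs are neural networks with billions of parameters, but even a crude bigram model, predicting the next word purely from observed frequency, shows the mechanism: it emits whatever is statistically likely, with no check against reality whatsoever.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count which word follows which: pure statistics, zero 'meaning'."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word: str):
    """Return the most probable next word, whether it is true or not."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# A tiny made-up corpus; the model happily continues "the" with "court"
# because that pairing is frequent, not because any court case exists.
model = train_bigrams("the court ruled the court cited the case")
```

Scale that frequency-counting up by many orders of magnitude and you get fluent text, not verified fact, which is exactly where hallucinated case citations come from.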

To a blockchain developer this sounds positively unhinged. We build trustless systems precisely because we don’t trust anyone, and now we’re being told to trust a black box that doesn’t even know why it spat out the answer it just spat out.

Blockchain Teaches Verification; AI Teaches Blind Trust

Crypto has a commandment carved into the hard drive: Don’t trust. Verify.

The whole point is that mathematics replaces reputation.
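The everyday version of that commandment is checksum verification: instead of trusting that data is what someone claims, you recompute its hash yourself and compare. A minimal sketch using Python’s standard hashlib:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Recompute the digest yourself instead of trusting a label."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, claimed_digest: str) -> bool:
    """Math, not reputation: the comparison either holds or it doesn't."""
    return sha256_hex(data) == claimed_digest
```

Anyone can rerun the check and get the same answer, which is the property a model’s output conspicuously lacks.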

AI flips that gospel on its head. You haven’t seen the training data. You don’t know the model weights. You can’t follow its reasoning. To verify the output, you already have to be an expert, and if you’re already an expert, why are you asking the chatbot?

In AML circles they call it the “false confidence problem.” Analysts see a shiny dashboard and start trusting the numbers more than their own gut. AI doesn’t enhance thinking, it replaces it with the illusion of reliability.

Chronicle of Disappointments: When AI Goes Off the Rails

This is no thought experiment. The receipts are piling up.

Humans had to be hauled back in to clean up the algorithm’s wreckage.

The bot then merrily advised people with anorexia to count calories and lose weight. Life-threatening advice. Someone hit “deploy” with all the caution of a chimp holding a live grenade.

The airline’s defense? The bot was a “separate legal entity.” Spoiler: the judge wasn’t buying it.

Studies now show 55% of companies that rushed to replace employees with AI deeply regret it. The savings evaporated into lost customers and reputational rubble. Executives drooling over the idea that “Claude and friends” can swallow whole teams should read that figure again. Slowly.


What We Should Actually Fear

Forget Skynet. Forget red-eyed killbots marching down the boulevard. There won’t be an uprising.

There will be quiet atrophy.

A programmer leaning on Copilot for years quietly forgets architectural thinking. An analyst stops reading primary sources. A student never learns the luxurious agony of wrestling a hard text into submission until understanding finally clicks.

No rebellion. Just a slow-motion transformation of human beings into extensions of an interface.

Philip K. Dick saw it before any of us: the real danger was never machines becoming human. The real danger is humans becoming machines.

The Red Pill Isn’t Technology

This isn’t a Luddite battle cry. Automation and machine learning are powerful tools. But the principles must hold:

  • Blockchain principle: Verification over belief. If you can’t verify how a system reached its conclusion, don’t bow to it as gospel. AI is a black box, not a supreme court justice.
  • Engineering principle: Tool, not replacement. A hammer drives nails. It doesn’t decide where to put up the house. Use AI to crunch the routine, but never let it make the final call.
  • AML principle: Critical filtering. Algorithms will always crack in the complex cases because they have zero real-world experience. Don’t let “digital euphoria” stomp on intuition and plain old common sense.

Return to The Matrix for a moment. The red pill is a choice, the choice to see reality as it is. The danger isn’t creating something smarter than us. The danger is creating something that makes us dumber and calling it progress.

The most dangerous bug is the one that looks like a feature.

Dmitry Nikolsky is the CPO of BitOK, an analytics platform for compliance and on-chain investigations.

The submit Code, Blockchain, and Illusions: Why AI Won’t Replace Brains appeared first on BeInCrypto.
