In a stirring and unorthodox lecture, AI trading pioneer Joseph Plazo issued a warning to the next generation of investors: the future still belongs to humans who can think.
MANILA — What followed was not thunderous, but it resonated, and perhaps uneasily so. At the packed University of the Philippines auditorium, future leaders from NUS, Kyoto, HKUST and AIM had expected a triumphant ode to AI’s dominance in finance.
But they left with something deeper: a challenge.
Joseph Plazo, the architect behind high-accuracy trading machines, chose not to pitch another product. Instead, he opened with a paradox:
“AI can beat the market. But only if you teach it when not to try.”
Students leaned in.
What ensued was described by one professor as “a reality check.”
### Machines Without Meaning
His talk dismantled a common misconception: that data-driven machines can predict financial markets on their own.
He presented visual case studies of trading bots gone wrong—algorithms buying into crashes, bots shorting bull runs, systems misreading sarcasm as market optimism.
“Most of what we call AI is trained on yesterday. But investing happens tomorrow.”
His tone wasn’t cynical—it was reflective.
Then he paused, looked around, and asked:
“Can your code feel the 2008 crash? Not the price charts—the dread. The stunned silence. The smell of collapse?”
Silence.
### When Students Pushed Back
Bright minds pushed back.
A doctoral student from Kyoto argued that large language models already pick up on emotional cues.
Plazo nodded. “Yes. But sensing anger is not the same as understanding it.”
Another student from HKUST asked if real-time data and news could eventually simulate conviction.
Plazo replied:
“Lightning can be charted. But not predicted. Conviction is a choice, not a calculation.”
### The Tools—and the Trap
He shifted the conversation: from tech to temptation.
He described traders who treated AI signals as gospel, waiting on the model before making any move.
“This is not evolution. It’s abdication.”
Yet he made it clear: AI is a tool, not a compass.
His systems parse liquidity, news, and institutional behavior—with rigorous human validation.
“The most dangerous phrase of the next decade,” he warned, “will be: ‘The model told me to do it.’”
### Asia’s Crossroads
In Asia—where AI is lionized—Plazo’s tone was a jolt.
“Automation here is almost sacred,” noted Dr. Anton Leung, AI ethicist. “Plazo reminded us that even intelligence needs wisdom.”
At a follow-up faculty roundtable, Plazo called for AI literacy, not just in code but in consequence.
“Teach them to think with AI, not just build it.”
### Final Words
His closing didn’t feel like a tech talk. It felt like a warning.
“The market,” Plazo said, “is not a spreadsheet. It’s a novel. And if your AI doesn’t read character, it will miss the plot.”
There was no cheering.
They stood up—quietly.
A professor compared it to hearing Taleb for the first time.
Plazo didn’t sell a vision.
And for those who came to worship at the altar of AI, it was the sermon they didn’t expect—but needed to hear.