Jeremie Harris is the co-founder of Gladstone AI. The company facilitates the U.S. government’s understanding of advanced artificial intelligence (AI) and promotes the responsible development and adoption of AI by providing safeguards against AI-driven national security threats, such as weaponization and loss of control, according to its website.
Harris spoke with Vassy Kapelos this week to discuss Canadian company Shopify and its CEO Tobi Lutke, who told staff in a recent memo that they would now be expected to prove why certain jobs can’t be done using AI.
The Vassy Kapelos Show airs on Saturdays and replays on Sundays on 650CKOM and 980CJME.
Read more:
- Artificial intelligence, wearable tech can improve safety in stroke rehab: study
- Does artificial intelligence deserve a seat in Canada’s courtrooms?
- AI code signatories happy with decision but want more company
These questions and answers have been edited and condensed for clarity.
I heard this internal memo of mine is being leaked right now, so here it is: pic.twitter.com/Qn12DY7TFF
— tobi lutke (@tobi) April 7, 2025
Kapelos: What did you think when you read this memo?
Harris: The surprising thing to me was that it’s news at all. The memo is clearly right. It just tracks the trend we’ve seen of AI’s increasing ability to automate work that maybe 10 or 100 different employees would have to do. You can now have one employee do that.
It’s a force multiplier, depending on the nature of the work, and one of the big areas where you see that impact is in coding. That’s a big part of what Shopify does. They want to build apps and tools for merchants on their platform.
You have AI systems right now that can literally out-code human programmers, and are at the point where they’re essentially building entire apps autonomously.
There are all kinds of reasons to think this is going to continue. If you stand still, as Tobi (Lutke) puts it in the memo, you are moving backwards. A hot-shot startup that finds a way to hitch its wagon to that train can potentially outcompete companies that are far larger.
Kapelos: Is Shopify using AI to develop an app that merchants can purchase and use? Is that what you mean they’re supplanting?
Harris: To set the scene, there have been a couple of high-profile releases of new AI tools in the last couple of months.
OpenAI came out with this tool called Deep Research. It’ll spend 15 to 30 minutes doing research for you and come back with a fully detailed report, complete with references. You’ll look at it and be like, “Okay, this is a solid high-level intern or low-level junior employee job.”
If you don’t have enough expertise to be that critical and you just need a basic overview, it does a great job. I’ll use it to look into the nuanced state-level policy arguments happening right now around human-level AI.
On the flip side, there’s app building, where in two minutes it pumps out a prototype of an app. There’s stuff already happening in that direction, like: how do we optimize the flow of data from our merchants to the platform and back, and have that go really smoothly?
There are huge teams dedicated to that. These are people being paid $200,000, $300,000, $500,000 a year, so if you can increase their efficiency by even 30 per cent, you’re getting hundreds of thousands of dollars in uplift from that tool, and in fact it’s doing considerably more in some cases. That’s really part of what’s motivating Tobi’s memo.
Kapelos: Are other sectors in Canada awake to this? What is your overall assessment of the degree to which our economy has absorbed what AI can do?
Harris: People talk about Shopify as a Canadian company, but its DNA is actually American. Canadian investors generally suck at their craft; they are super risk-averse. The first question they’ll ask when you’re trying to figure out whether they’ll invest is “how much money are you making?”, which is a very low-information investor question.
In Silicon Valley, which is where we work and live, everything’s different. The questions they ask you are things like, “What do you know about your customer that no one else knows? How much interaction have you actually had, and do you deeply understand your customers?”
Right now in Silicon Valley there are WhatsApp groups and Signal groups where people are debating when we are going to see the first unicorn (a billion-dollar company) that has just one employee. That’s where the conversation’s at, and it’s totally on the horizon.
This is the insane leverage you get from this technology. The range of tasks that AI can automate successfully is growing. Today it’s about 50-50 whether AI can successfully automate a task that takes a human an hour to do.
And it’s growing exponentially; every four months that task length is doubling. So four months from now it’ll be two hours, four months after that it’ll be four hours, and by 2028 it gets to tasks that take weeks and months.
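To make the arithmetic behind that claim concrete, here is a rough back-of-the-envelope sketch in Python, using only the figures Harris cites (a one-hour task length today and a four-month doubling time); the month count to 2028 and the 40-hour work week are assumptions added for illustration.

```python
# Back-of-the-envelope check of the doubling trend Harris describes.
# Assumptions (not from the interview): ~33 months between April 2025 and
# early 2028, and a 40-hour work week to convert hours into weeks.
start_hours = 1.0        # task length AI handles about 50-50 today, per Harris
doubling_months = 4      # doubling period Harris cites
months_to_2028 = 33      # assumed span from the memo (April 2025) to early 2028

doublings = months_to_2028 / doubling_months
task_hours = start_hours * 2 ** doublings
task_weeks = task_hours / 40

print(f"~{task_hours:.0f} hours of human work, roughly {task_weeks:.0f} work weeks")
# Prints a figure on the order of 300 hours, i.e. weeks to months of human effort,
# consistent with the "weeks and months by 2028" estimate in the interview.
```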
Kapelos: If AI can do all this stuff, what else can it do? Where do things stand in mitigating risks?
Harris: It’s complicated. People are still disagreeing about the fundamental risks to address. It is pretty clear that we have weapons-of-mass-destruction-type risks, like the use of AI systems to automate the discovery of new bioweapons.
We’ve also seen a surprising amount of traction lately on systems carrying out autonomous cyberattacks. You can think of things that take down the electrical grid or components of it. But then there’s the whole idea of loss-of-control risk for these systems.
We actually don’t know how to point an AI system at a task we want it to do without that system getting dangerously creative and inventing its own solution to the problem.
We have more and more evidence of AI deception: when AI systems know they’re being tested, they behave one way to try to hide certain capabilities that they then display when they’re deployed.
It sounds like we’re talking about science fiction, but life comes at you fast when you’re riding exponentials.