It’s become a badge of honor in real estate innovation circles: “We’ve done 35 pilots,” one building executive told me. But here’s the truth nobody wants to admit—most smart building pilots are a waste of time.
They’re expensive. They rarely scale. And they’re often run without a clear plan for what happens next. Across industries, this so-called “pilot purgatory” is a well-documented trap: Cisco once revealed that the majority of IoT pilot projects end up stranded with no full rollout, and economist John List cites research that 50–90% of all pilot programs fail to work at scale.
Smart buildings are no exception. At NexusCon ‘24, when we put building owners and vendors in separate rooms, both groups ended up venting about pilots. And when we brought them together on a joint panel, fingers were pointed.
Leading up to NexusCon 2025, it’s clear everyone is asking: What’s the right way to test new building technology as an owner, and how can vendors better support those tests? In this article, we’ll break down why pilots are “broken” and how some owners and vendors are rewriting the playbook to actually get value from trials.
Across the industry, you’ll hear jokes about “death by a thousand pilots”—and they’re not far off. For every pilot that leads to a scaled deployment, there are dozens more that quietly fizzle out. While there’s plenty of tech that doesn’t live up to the hype, it’s usually not because the technology doesn’t work. The pilot process itself is broken.
Here’s why:
The result of all this? Pilot purgatory. Dozens of disconnected pilots that fail to inform strategy, rarely scale, and consume time and budget with little to show. But it doesn’t have to be this way. Smart building teams are starting to rethink what pilots are for—and how to run them in a way that actually gets results.
Joe Gaspardone of Montgomery Technologies shared a useful distinction with us: pilot versus trial. In his view, a pilot is an open-ended demo of a product that may or may not be ready for commercial use. There’s often no follow-up plan. A trial, by contrast, is a structured test of a commercialized product—one where both sides agree on what success looks like and what happens if the solution meets expectations.
It’s not just about proving whether the product works—it’s about proving that it fits the owner’s specific needs and workflows and produces the expected value. “This isn’t just us proving our product. It’s you finding value in our product. There’s a difference,” agrees Danielle Radden of facil.ai.
This shift from pilot to trial reframes the process from exploratory to decisive, and it’s a crucial mindset change if we want to stop wasting time and resources.
In the smart buildings marketplace, most technology categories are more mature and commercialized than building owners realize. One technology founder sounded this alarm at NexusCon ‘24, saying it’s time for solution providers to push back: “I know it’s new to you, but there are a ton of companies that have been doing this and they’re kicking out case studies.”
A crucial, but often overlooked, first step is to talk to peers in your industry who have deployed the technology, check vendor references, and see the system in action at a comparable site.
“Rather than just defaulting to pilots, maybe ask: where have you done this before? What’s your retention rate? Can we speak to your references?” suggests Saruf Alam of KODE Labs. Often, a few reference calls and demos can confirm that a product does what it says on the website. If a vendor has case studies from, say, 10 office campuses or 5 hospitals similar to yours, you might decide to skip a “does it work?” sort of pilot.
Vendors, for their part, are learning to filter out “experimental” tire-kicker clients. Alam noted that KODE Labs isn’t afraid to turn down pilot requests if the customer hasn’t internalized the need for the solution. Mapped’s Yash Prakash told us they stopped doing pilots where the customer isn’t sure what’s being offered and hasn’t committed to what happens afterward.
But what does commitment look like? It means having a vision for how the technology will change your organization. As we teach in our Smart Building Strategist course, that means:
Confirming strategic alignment is about doing the prep work to map a technology to business outcomes. “We’ve seen a lot more success when the customer starts with the view that they know the value of the category. They have a goal, and if the trial delivers, it moves to phase two,” said Mapped’s Yash Prakash.
Once the decision to trial is made, both the owner and vendor need to agree on specific, quantifiable success criteria before it starts. How else will you know if it “worked”?
“It starts with a charter... if you do not have success criteria that are quantifiable, measurable... you should not even be starting down that path,” argued Drew DePriest of McKesson at NexusCon ‘24. Also, decide what constitutes failure (and that it’s OK!). “It’s okay if it fails... if you haven’t run a proof of concept that fails, you haven’t been pushing hard enough,” DePriest says. Crucially, tie the success metrics to the business outcomes you’ve already defined in your prep work (energy cost, tenant comfort, staff productivity, etc.), not just technical performance.
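To make the idea of “quantifiable, measurable” criteria concrete, here is a minimal sketch of what a trial charter could look like expressed as data, with a simple pass/fail check at the end of the trial. Every metric name, threshold, and number below is hypothetical and for illustration only; a real charter would use the business outcomes you defined in your prep work.

```python
# Hypothetical trial charter: all metrics, thresholds, and numbers are illustrative.
CHARTER = {
    "duration_days": 90,
    "success_criteria": {
        # metric name -> (comparison, threshold); tied to business outcomes,
        # not just technical performance
        "energy_cost_reduction_pct": (">=", 8.0),
        "hot_cold_calls_per_month":  ("<=", 12),
        "work_order_close_days":     ("<=", 3.0),
    },
}

def evaluate(results: dict) -> dict:
    """Compare measured trial results against each charter criterion."""
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
    return {
        metric: ops[op](results[metric], threshold)
        for metric, (op, threshold) in CHARTER["success_criteria"].items()
    }

# Example end-of-trial results (also invented)
results = {
    "energy_cost_reduction_pct": 9.2,
    "hot_cold_calls_per_month": 15,
    "work_order_close_days": 2.5,
}
verdict = evaluate(results)
# A failed criterion is a finding, not a disaster: it tells both sides
# exactly what to fix, or why to walk away.
```

The point isn't the code—it's that the criteria are written down, numeric, and checkable before the trial begins, so the "did it work?" conversation can't drift afterward.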
In a successful trial, the vendor’s team and a limited portion of the owner’s team operate as one unit with a common goal. It’s not a vendor tossing tech over the fence and waiting to see if the owner likes it. It’s a collaborative effort to validate the solution in the owner’s environment.
Conor Gray, an experienced smart building consultant now at IntelliBuild, compares trials to a first date: both parties need to impress each other and also be honest. It’s not just proving the technology—it’s proving the working relationship. A vendor that’s communicative, responsive, and transparent during a pilot shows they can be a long-term partner. An owner that’s engaged, provides feedback, and is organized in handling legal/IT hurdles shows they’re a partner worth the vendor’s time.
On the owner side, the pilot shouldn’t live only in an innovation silo; it must engage the people who would own the solution long-term. Change management is often the hardest part of scaling a technology, so the trial must include a small-scale test of how you will change operations if you go live everywhere.
One retail portfolio recently worked with KODE Labs in exactly this way. “We had a retail client where we did a trial with a small set of stores. We had their champion really look at this platform every day, use it, and visualize how it would change their processes,” recalls KODE’s Saruf Alam. By embedding a passionate champion and having that person adopt the tool in their daily routine, the organization could see what the workflow changes would be at scale.
KODE and the client even “extrapolated from the trial what that ROI would look like at scale,” and defined a “blueprint” for the entire portfolio—what systems would be integrated, how operations would change, and what the process would look like going forward, as depicted in the graphic below.
This made it easy for the client’s champion to go to senior leadership and say, “Here’s the value we got in 5 stores; here’s the projected value in 500 stores; and here’s exactly how we’d roll it out.” The key was to understand what workflows needed to change and treat the pilot as phase one of a larger initiative.
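The 5-stores-to-500-stores projection is simple arithmetic, but writing the assumptions down explicitly is what makes it credible to leadership. Here is a sketch of that extrapolation; all dollar figures and the scaling discount are invented for illustration, and only the method (linear extrapolation with a conservative haircut) reflects the approach described above.

```python
# All numbers are hypothetical; the method is a linear extrapolation
# from measured trial savings, discounted for scaling risk.
trial_stores = 5
trial_annual_savings = 42_500.0   # savings measured across the 5 trial stores ($/yr)

portfolio_stores = 500
scaling_discount = 0.80           # assume only 80% of per-store value repeats at scale

per_store_savings = trial_annual_savings / trial_stores
projected_savings = per_store_savings * portfolio_stores * scaling_discount
print(f"Projected annual savings at scale: ${projected_savings:,.0f}")
# prints "Projected annual savings at scale: $3,400,000"
```

Stating the haircut as an explicit parameter invites the CFO to challenge it, which is exactly the conversation you want before committing to a rollout.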
The balance required is not to make the trial so burdensome that it stalls out. During scoping, Radden of facil.ai recommends making the trial easy to start by lowering the “activation energy”: a chemical reaction won’t proceed if the activation energy is too high. “In a trial, what’s activation energy? It’s taking three months to set it up, buying a bunch of new hardware, training all your employees, going through procurement, getting IT support... the activation energy is so high that you can’t get over it,” she explains.
Facil.ai’s solution is to remove as many barriers as possible. Whether through cloud-based deployments (no new hardware), limited-scope integrations, or temporary sandbox environments, making pilots lightweight increases the odds they actually happen and yield quick results.
So what happens after the trial? Ideally, if the trial hits its marks, you’ve already laid the groundwork to press “Go” on a full implementation (and everyone from the CFO to the technicians will be on board). And if the pilot doesn’t hit the mark, you’ll know why—and you can either adjust course or confidently scrap that initiative before sinking more time and money.
In other words, the trial is phase one of the deployment.
This model forces both sides to do the homework and legwork upfront—ensuring executive buy-in, budget allocation, and alignment on what “yes” or “no” looks like. This might look like writing the next-step options into the agreement: e.g., “If the trial meets the defined success criteria, the customer will procure X units for a Y-building rollout at $Z price.” It’s not a binding contract for the full rollout (customers rightfully want an “out” if things change), but it is a gentle pre-commitment that focuses everyone’s minds.
It’s important to note that this approach doesn’t mean skipping validation. It means you’re not just testing the tech—you’re also starting to integrate it, train people on it, work out contractual and technical kinks, and so on, with the expectation that it will continue.
Assuming it goes well, you seamlessly transition to the next phase (instead of shelving the project and starting a new procurement cycle). This approach aligns incentives: the owner is motivated to make the pilot succeed because, having already received approval for the full-scale rollout, they have a stake in the outcome. And the vendor is willing to invest in the trial because there’s a defined payoff if successful.
“Pilot purgatory” has plagued the smart building industry, but it’s a solvable problem. The theme that emerges from both successful pilots and candid post-mortems is intentionality. Pilot with purpose or don’t pilot at all. Building owners need to approach pilots with the same rigor they would a full project—clear goals, stakeholder buy-in, and a plan for what comes next. Vendors need to be choosy and invest in trials that have a real chance to blossom into partnerships, rather than scattering freebies everywhere hoping something sticks.
The encouraging news from our interviews is that many are already embracing this change. They’re running shorter, smarter trials with committed champions, measurable outcomes, and predefined next steps. They’re focusing on integration and change management from day one, not as an afterthought. And they’re not afraid to walk away from a pilot that isn’t set up for success.
In the end, the goal of any pilot or trial should be to drive a real decision: either scale up or shut it down.