
🎧 #003: Nick Gayeski of KGS Buildings on FDD at scale and 5 waves of smart building tech

“I think the first wave of adopters saw energy as the primary benefit. Our experience is that the current wave of adopters sees condition-based or predictive maintenance as the primary benefit, that they have staffing challenges, resource challenges, knowledge gaps that fault detection and analytics on buildings and building systems fills so they can have a smarter maintenance strategy for the long term.

And I think maintenance is fundamentally going to shift into a more data-driven, proactive approach rather than, you know, mostly PMs or preventative maintenance or reactive. And I think that's where analytics and FDD has the biggest role to play.”

—Nick Gayeski, CEO, KGS Buildings

Welcome to Nexus, a newsletter and podcast for smart people applying smart building technology—hosted by James Dice.

Since starting the Nexus newsletter, many of you have reached out wanting to talk shop. After a few weeks of those wonderful conversations, I realized I needed to record and share them with our growing community. So here we are… this is our chance to explore and learn with the brightest in our industry—together.

If you have thoughts, questions, ideas, or tips: hit reply, I’d love your feedback!

Housekeeping: the podcast is up on Apple Podcasts and Spotify. To easily add it to the app of your choice, click the “Listen in podcast app” above.

Music credit: The Garden State by Audiobinger

Disclaimer: James is a researcher at the National Renewable Energy Laboratory (NREL). All opinions expressed via Nexus emails, podcasts, or on the website belong solely to James. No resources from NREL are used to support Nexus. NREL does not endorse or support any aspect of Nexus.


Episode summary

I’m excited to bring you Episode 3, a conversation with Nick Gayeski of KGS Buildings. I’m a big fan of Nick’s—he’s one of the deepest thinkers in this space and he’s taught me a ton in our brief friendship. And his company is pretty cool, too.

We flow through a range of topics, including:

  • The origin story of KGS Buildings and its early days at MIT
  • What sets KGS apart from other vendors of fault detection and diagnostics
  • Five waves of use cases for smart building technology beyond energy efficiency
  • COVID-19’s impact on the facilities and smart buildings industry
  • And much more

Scroll down for the companies and links mentioned, my top highlights of the episode, and a full transcript.

You can find Nick online on LinkedIn and at kgsbuildings.com.

Thoughts, comments, reactions? Let us know in the comments.


Mentions and Links

  1. KGS Buildings (1:07)
  2. MIT media lab (3:22)
  3. Clockworks (4:26)
  4. Schneider Electric (5:38)
  5. Pacific Northwest National Lab (5:51)
  6. CSI (6:59)
  7. Satchwell Sigma (7:00)
  8. Mass customization (10:23)

Nexus Newsletters featuring KGS Buildings (12:32):

Nexus #5 on KGS’ unique approach to FDD

Nexus #7 on Mass Customization

  9. ASHRAE Standard 223p (12:46)
  10. Project Haystack (13:03)
  11. Brick Schema (13:05)
  12. Cornell BACnet group - Joel Bender, Mike Newman (14:35)
  13. ASHRAE Guideline 36 (25:08)
  14. Nexus Deep Dive on COVID-19 (42:59)
  15. NREL (48:21)

NOTE: THE ABOVE AUDIO, VIDEO, SUMMARY, AND LINKS WILL ALWAYS BE FREE. HOWEVER, STARTING WITH NEXUS PODCAST EPISODE #4 (THE NEXT EPISODE), THE FOLLOWING DEEP DIVE CONTENT WILL BE AVAILABLE EXCLUSIVELY TO MEMBERS OF NEXUS PRO.

SIGN UP BEFORE MAY 1ST TO GET THE EARLY ADOPTER DISCOUNT:

Get 20% off forever


Top Highlights

1. What are the keys to scalable Fault Detection and Diagnostics?

Nick Gayeski: [00:08:57] From the very beginning we were careful not to take an ad hoc approach to algorithms. And I think there is, there is a danger in the industry of being very ad hoc in like, "okay, this algorithm applies to this air handler at this site with this sequence if I tag it this way," and that doesn't scale very well. So if you want to build an analytic library that is adaptable to sites all over the world with different engineering, different sequences, and not have a large number of false positives, you have to program your code in a scalable way. And so that's been a big part of our focus.

And the reason why that sets us apart is we put a whole lot less burden on the end customer or the partner, where you know they're not as responsible for defining an information model, applying the information model in programming or configuring algorithms to work properly relative to that information model.

James Dice: [00:10:17] Got it. And when you and I were talking about this last time we spoke, the word you used was mass customization. Can you kind of go a little deeper into that for everyone?

Nick Gayeski: [00:10:26] Yeah, certainly. Some people like that terminology and some people don't. I picked it up at the media lab. So while I was at MIT, there were folks in the media lab using the term mass customization a fair amount. I liked it, and I liked the way it applied to code. The context where I learned it was more like product configurators: how do you have a product that can be mass configured to meet user preferences? Like you go onto a website where you're going to buy shoes and, you know, I want the skin of the shoe to look like this and I want these colors and I want this and that, and you've kind of parameterized your selection and design of shoes to be mass-customized to the end user, the buyer, right, the consumer.

James Dice: [00:11:15] Like Nike ID, right?

Nick Gayeski: [00:11:17] That might've been one of the examples that were used. Yeah. So in my mind, as we started to scale Clockworks, our fault detection and diagnostic analytics product, it was: how does that principle apply to the way algorithms and information models get applied to buildings and to systems? So when we have an air handler, all the different economizer sequences you might see on an economizer air handler are parameterized, so that the same code base can apply to any air handler with any economizer sequence. Or it gets much more complicated with chiller plants, where depending on the combination of primary and secondary pumping, or the different types of heat rejection (ground source, cooling tower, closed circuit cooler, stormwater, heat recovery), how do you build code bases that can be parameterized to handle those configurations without reinventing the wheel at every site?

2. Given your parameterization approach to data modeling, how are you accommodating and supporting the open modeling standards Project Haystack and Brick, given that your methods came first?

Nick Gayeski: [00:13:27] It's exciting and great that these communities are now coming together. You know, it's Haystack, it's Brick, it's ASHRAE 223, or Standard 223 that's under development, and the communities around them. We love talking with the people in those communities. We have a lot of respect for the people who are influencing those communities.

The challenge for us, because we have a fairly extensive information model for how to represent these systems, is how do we make sure that customers get what they're really after out of these new standards, you know, Haystack (and Haystack's not that new), but Haystack and Brick and where ASHRAE is going, so there aren't mismatched expectations?

There's maybe a little bit of a fallacy, or just too much over-reliance on: well, if you just do one of those things-

James Dice: [00:14:21] Add some tags and you're good to go.

Nick Gayeski: [00:14:24] Yeah. Sprinkle tags on it and everybody's good. And I think that kind of misses some of the core challenges of interoperability. And the folks involved in 223 are well aware of this. Folks like Joel Bender talk about it a lot; he was influential in the BACnet group at Cornell, where Mike Newman was, who sadly passed away recently.

But how do you define a concept and make sure that that concept is uniquely represented in a way that can be understood machine-to-machine, so that one company or group doesn't use one collection of tags to represent that concept and one metadata exchange format to communicate it, while some other group uses a different one? And I think one of the challenges we still collectively have is making sure that all the vendors who are applying tags out of any of these systems are applying them in consistent ways so they truly are interoperable. And I think there are a lot of clients who don't fully understand the importance and the depth of that yet, even if they've heard that Haystack or Brick or one of these ontologies is important. But you have to make sure you're working with people who appreciate the discipline required in the metadata and in the ontology.

James Dice: [00:15:41] Got it. And how are you guys approaching this from a platform perspective with these new standards, new-ish?

Nick Gayeski: [00:15:47] So as you mentioned, we have information modeling concepts already baked into Clockworks, so a lot of the point protos or tags from Haystack or Brick, you know, exist in some form in our platform, and we've done mapping between them, or have a general understanding of the mapping. And a lot of it for us is just deciding the metadata exchange format where we're going to start exposing this information and also consuming it. We've done projects where we've consumed Haystack tags, but the discipline with which the Haystack tags were applied meant that there was a lot lacking. So we ended up uncovering some challenges with the Haystack tagging that was being used on that site, and having to basically redo it. Now, a nice thing is that a byproduct of our onboarding process is that we can feed back to a customer a new set of Haystack tags that they can then use.

So for us, it's really an ontological mapping between different schemas, and having consumer- and server-type behavior so we can be interoperable with whatever schema it is. As this moves into the ASHRAE community and into more formal standards, it's about making sure that the way interoperability is structured and defined in the standards really accomplishes the vision of interoperability.

3. What happens when FDD vendors stop after the first D (detection)? What’s the key to the second D (diagnostics)?

Nick Gayeski: Stopping at detection may tell you that, you know, a valve is leaking by or that a supply temp is off. You want to take it to the level of providing contextual information about what the underlying cause might be. I mean, that's what the diagnostic part is about. So do you have enough information to say that an actuator failed on a valve, and that's what's causing those other issues? I think it's looking at systems and equipment instead of looking just at tags on points and what that combination of tags on those points detects; it's looking at the overall system and making sure that you're accounting for the expected sequence, accounting for the engineering parameters, and trying to identify the root cause of a series of issues on the system or the equipment that might otherwise be multiple detected faults or even alarms. It's taking it to the level of: why are we seeing a pattern of issues? Getting to a specific fault diagnosis that someone can repair is what diagnostics takes it to.

James Dice: [00:21:35] And what happens when you don't have the approach to diagnostics that, that you guys have? One of them is false positives, but I know there's a lot of downsides to it.

Nick Gayeski: [00:21:46] Yeah. A big piece of it is just lack of prioritization and the creation of noise. So there's already a problem of alarms in the building automation system being ignored because there are too many of them, and if you move that into fault detection, you still have a lot of noise with potentially a lot of false positives. If you want to turn it into something that's value-add and more actionable, I think you need the prioritization, which comes from engineering calculations, cost calculation, comfort impact assessment, and maintenance severity assessment, along with the diagnostic piece to get to the root of the problem. That takes it from excess noise that they don't have time for into prioritized intelligence that allows them to make a better decision about a problem they might otherwise never have known about or have ignored.
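The prioritization Nick describes boils down to scoring each diagnosed fault on the dimensions he lists and sorting. A toy sketch, with weights, scales, and fault IDs invented for illustration (a real product would calibrate these per client):

```python
def fault_priority(energy_cost_per_day: float, comfort_impact: int,
                   maintenance_severity: int, days_active: int) -> float:
    """Blend the prioritization factors into one sortable score."""
    return (energy_cost_per_day * days_active   # avoided-cost term
            + 50.0 * comfort_impact             # 0-3 comfort scale
            + 75.0 * maintenance_severity)      # 0-3 severity scale

# A fault list becomes a ranked work queue instead of a wall of alarms.
faults = [
    {"id": "ahu1-leaking-valve", "score": fault_priority(12.0, 1, 2, 30)},
    {"id": "vav7-stuck-damper",  "score": fault_priority(0.5, 3, 1, 10)},
]
ranked = sorted(faults, key=lambda f: f["score"], reverse=True)
```

The point is less the arithmetic than the workflow change: a technician sees the top of `ranked`, not every raw detection.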

James Dice: [00:22:54] Great. Yeah, thanks for that. I think that's a huge point for anyone who's just getting started with fault detection or anyone who may have done a pilot and not gotten great results.

4. What happens when people try to do FDD through the building automation system? And what are your thoughts on ASHRAE Guideline 36 and the movement to start specifying faults into the BAS?

Nick Gayeski: [00:23:09] We’ve worked with people who tried to do it in the building automation system. They tried to program fault detection, maybe almost diagnostics, into the BMS and have, like, energy alarms in the BMS. That's great when you have a building with an experienced BAS programmer who builds all that and maintains it, but when you then want to do that across your portfolio, or you want to continue to use it and maintain it when that person's role changes or they switch jobs, it's just unmanageable at that stage. So we've seen that with folks who tried it in the BMS. We've seen that with folks who bought kind of a low-cost tool to do it themselves.

And, thankfully I think the market is shifting where organizations are thinking about scaling. They're not thinking about just trying it out. And when you start thinking about scaling, you have to consider maintainability and scalability and all the other aspects that you might otherwise ignore when it's like, let me try this.

James Dice: [00:24:59] Got it. Yeah. I'm what you could call an extreme skeptic when it comes to adding fault detection to the BAS. What are your thoughts on ASHRAE Guideline 36 and this sort of movement to start specifying faults into the BAS?

Nick Gayeski: [00:25:15] Yeah. Well, first I'll say I have an appreciation for Guideline 36 and the research projects (RPs) behind it. The folks who contribute to those things are very well-respected engineers in the field who do a lot of cool work. So I'll start there.

I think my hope is that the ASHRAE community and the industry as a whole isn't rigid in their thinking about the technology solutions on how to bring FDD into the mix. So are they saying that what's in Guideline 36 has to be implemented through programming in the building automation system? Or are they saying that when you have a terminal unit of that type with those points, you should have diagnostics that do that FDD, wherever it is? And I think as long as people are keeping an open mind about the evolution of the technology through which those algorithms and those ideas get applied, then it's great, but if it's sort of narrowly, narrowly focused on like it's through the building automation system, I think that sort of misses a bigger future of where this is going.
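Nick's point is that a fault condition is just logic over a few points, and it shouldn't matter whether that logic lives in the BAS or an analytics platform. The sketch below is loosely modeled on the APAR-style rules that informed Guideline 36's AHU fault conditions; the function name, error band, and valve-saturation cutoff are illustrative, not taken from the guideline.

```python
def fc_supply_temp_low(sat_f: float, sat_sp_f: float, htg_valve_pct: float,
                       err_band_f: float = 2.0) -> bool:
    """True when supply air temperature runs below setpoint even though the
    heating valve is saturated open: the unit cannot meet setpoint, so the
    coil, valve, or hot-water supply deserves a look."""
    return htg_valve_pct >= 99.0 and sat_f < sat_sp_f - err_band_f
```

Nothing in this rule cares where it executes; that's the technology-neutral framing Nick is arguing for.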

5. Beyond energy efficiency, what are the other drivers for smart building technology? And how has that been changing?

1st wave: Energy Efficiency

Nick Gayeski: [00:26:54] Right. Well, the first thing I'll say is that I think the first wave of adopters saw energy as the primary benefit.

2nd wave: Condition-based Maintenance

My own experience is that the current wave of adopters sees condition-based or predictive maintenance as the primary benefit, that they have staffing challenges, resource challenges, knowledge gaps that fault detection and analytics on buildings and building systems fills, so that they can have a smarter maintenance strategy for the long term. And I think maintenance is fundamentally going to shift into a more data-driven, proactive approach rather than, you know, mostly PMs or preventative maintenance or reactive. And I think that's where analytics and FDD has the biggest role to play. And the benefits of that, the energy cost reduction benefits, energy and sustainability, you know, carbon, say, reduction benefits are still there. But fundamentally, the reason why a lot of organizations are shifting towards this now is it's more just a better way to manage and maintain a building long term.

James Dice: Can you give us some specifics on how you're seeing that play out?

Nick Gayeski: Yeah, I guess where I'll comment is more with our service provider partners, so control services, mechanical services. There are many service providers that have service agreements with customers to do certain things on a schedule, right?

They show up every month or every quarter and, you know, they check off a bunch of items on a list related to pumps, related to fans, related to boilers, related to area alerts. I think this changes that whole approach. You don't need to go look at a gauge or look at a graphic to record a reading anymore. That should be continuously monitored, detected, and diagnosed as something worth their attention and time before they ever show up on site. So I think it changes the PM schedules, the preventative maintenance pathway that most in-house facilities organizations undertake. And when it's outsourced to a service provider, it changes that task list fundamentally, and I think they can spend more time fixing the issue that the diagnostics found instead of checking everything to determine if there is an issue, which is what a lot of the task lists are focused on now. It's like you go check all these things and then you've done your task; the task is finding an issue, not fixing it.

3rd wave: Feedback for the Design Process and Prioritization of Capital Projects

So I see that as core, but I want to go back to your question, because it's really beyond that that we're starting to see interesting things, like: how do you feed performance statistics and performance information back into the design and specification process, or into the retrofit process? The history of faults on a system or in a building, and the history of performance, you know, key performance indicators and how they trended over time, like kW per ton or kW per CFM, those things together inform what needs to get replaced or retrofit on what schedule, the capital renewal schedules. And when they do that, how do they engineer it differently? What are the actual loads, instead of what are the loads the HVAC designer modeled? Once you have that degree of information about systems, it changes the way we reinvest, the way we retrofit, the way we design.

And then beyond that, I would say it's, it's more on the planning side. So if there's a history of those types of issues, is it time for a retrofit or a replacement? And looking at the patterns of those problems or patterns by type of system may cause you to make a choice to engineer systems differently.
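The KPIs Nick cites (kW per ton, kW per CFM) are straightforward to compute once the data is trended. A minimal sketch, using the standard chilled-water heuristic that tons = GPM x deltaT / 24 (i.e., 500 x GPM x deltaT Btu/h over 12,000 Btu/h per ton, assuming standard water properties); the function names are mine:

```python
def cooling_tons(gpm: float, delta_t_f: float) -> float:
    """Water-side cooling load in tons from chilled-water flow (GPM) and the
    supply/return temperature difference (degrees F): GPM * dT / 24."""
    return gpm * delta_t_f / 24.0

def kw_per_ton(input_kw: float, gpm: float, delta_t_f: float) -> float:
    """Chiller efficiency KPI; trending this over months is what surfaces
    the degradation that feeds capital-planning decisions."""
    return input_kw / cooling_tons(gpm, delta_t_f)
```

For example, a chiller drawing 65 kW while moving 240 GPM at a 10 degree F delta is producing 100 tons at 0.65 kW/ton; a drift upward in that number over time is the retrofit signal Nick describes.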

James Dice: [00:31:35] Yeah. It gets back to using data for prioritization. So I've always thought of it in terms of: you have this bucket of low- or no-cost things that come up that you should fix sometime soon. And then there's another list, capital projects, which is more long-term planning.

And so you're saying, let's take the analytics and use them to prioritize that other list. So that I think is pretty unique. Cool.

Nick Gayeski: [00:32:01] Yeah. We are seeing that. We're starting to see it. We have collaborations with, oftentimes it's through our customer with an engineering firm, where the engineering firm may get access, at the customer's permission to the faults and to the raw data. And we've had folks calibrate energy models based on the data. We've had folks look at the history of faults in order to define the scope of a retro commissioning, you know, an outsourced retrocommissioning project. So yeah, increasingly it's being used on the, on the retrofit and design and capital planning side.

4th wave: Risk Management

[00:29:20] And there's a whole other area that's increasingly becoming interesting for us, which is more on the risk side. So with some of our pharma and life sciences clients, risk to production, risk in operations, is another important factor. The types of faults, the prevalence of those faults, the frequency of those faults, and the risk that creates for their mission-critical operations start to change the way they do risk assessment and reinvest in those operations. And that's exciting for us right now.

James Dice: [00:29:52] Cool. Yeah. So I'm noticing a lot of tie-in with this whole movement towards greater resilience, and it sounds like you have some clients that are feeling that more than others. What are some more detailed examples of how a fault detection package would help, say, a pharma manufacturing plant?

Nick Gayeski: [00:30:14] Sure. I'll keep it fairly simple: environmental conditions for storage after production is done, or environmental conditions while production is taking place. They may have very strict requirements on relative humidity, on temperature, on pressurization, and if there are faults that put those things at risk, or there was a history of faults before they do a production run and there's a chance that a fault could occur while that production run is happening, they may get that repaired before their next production run, or at least get it looked at, in order to reduce the risk to their production run. And when you're talking about millions of dollars of product, the risk is very high. It's very worth it to get somebody investigating six faults before they take that action, because it's a small cost to address compared to the overall risk. So that's one piece.

And then I’d say the other area that's sort of more big-picture and long-term is risk from the point of view of insurance: how do you insure yourself against risks of system and equipment failure, and how does this information inform that over time? But those are frankly a little bit further out. I think the ones we already talked about are more today.

5th wave: Data Aggregation for Better Equipment Manufacturing

Nick Gayeski: [00:37:12] I think we covered, you know, condition-based maintenance, energy cost reduction, energy and sustainability, reliability risks, and life cycle costs. Those are really core. There's always the: what does this look like when there are millions of pieces of equipment connected and we can feed data back to manufacturers, either anonymously or with customer permission, so that they get better at what they do? You know, we have this testing process where you go to the testing labs, the manufacturer's product gets tested, it gets a certified stamp, and then it's out there in the field, and we don't have a whole lot of data about how those products actually operate in the field. So I think having really rich and robust performance data about the in situ performance of manufacturers' products changes that industry over time.

6. How smart building vendors can design software for all these different use cases

Nick Gayeski: [00:33:34] Yeah. No, I agree. And to give you a kind of a sampling, I would say the types of users that we have today include commissioning agents, HVAC technicians and service providers, controls technicians and service providers, maintenance managers, facility managers, energy managers, directors of facilities, VPs of facilities, controls vendors, facility management service vendors, mechanical service providers. You know, it's definitely broadening in terms of the base of users. Even some utilities who are doing measurement and verification work will get access, and they can go in there, look at the history of the diagnostics, and see if something was fixed.

Having said all that, I think we try to maintain a focus on the primary use case for the client. So it's really important to understand what the client's trying to get out of it. If it's about energy reduction or cost reduction, making sure they have well-defined processes and accountability within their organization of how that's going to happen using the platform.

If it’s about participating in the utility's incentive program, making sure that's a clearly defined process. If it's about incorporating it into their service agreement with their vendor, making sure there's a defined process for the vendor to use the information and fix the issues. So it's just really important that you understand what the client's trying to get out of it, know their use case, and ensure there's a focus on that. And then build on that for all the other use cases they can derive value from.

James Dice: [00:35:14] Got it. Yeah, and I think you just hit the nail on the head, exactly my next question, which was how does a software company build for 20 different use cases? But yeah, you just answered it before I could ask it.


Full Transcript

Note: transcript was created using an imperfect machine learning tool and lightly edited by a human (so you can get the gist). Please forgive errors!

James Dice: [00:00:00] Hello friends. Welcome to Nexus, a smart buildings technology podcast for smart humans. I'm your host James Dice. If we haven't met before, I write a weekly newsletter on the same topic. It's also called Nexus. Each week I share what I've learned, my opinions, and what I'm excited about in the quickly evolving world of intelligent buildings.

Readers have called Nexus the best way to stay up to date on the future of this industry without all the marketing fluff. You can check it out and subscribe at nexus.substack.com or click the link in the show notes. Since starting the Nexus newsletter, many of you have reached out to me wanting to talk shop, and we have. After a few weeks of those wonderful conversations, I realized I needed to record and share them with our growing community.

So here we are. The Nexus podcast is born. This is our chance to explore and learn with the brightest in our industry. Together.

All right, I'm pumped to bring you episode three, a conversation with Nick Gayeski, CEO of KGS Buildings. I'm a big fan of Nick's. He's one of the deepest thinkers in the space, and he's taught me a ton in our brief friendship. And his company is pretty cool too. We flow through a range of topics, including: the origin story of KGS Buildings, including its early days at MIT, what sets KGS apart from other vendors of fault detection and diagnostics, five categories of use cases for smart buildings technology—especially FDD—beyond energy efficiency, COVID-19's impact on the facilities and smart buildings industry, and much, much more. You can find Nick online on LinkedIn, and at kgsbuildings.com. Both of these links can be found in the show notes on nexus.substack.com. Without further ado, please enjoy Nexus podcast, episode three with Nick Gayeski.

Nick Gayeski, welcome to the show. Can you introduce yourself?

Absolutely, James. Glad to be here. I'm Nick Gayeski. I'm the co-founder and CEO of KGS Buildings.

Alright. And can you give us a little intro on who KGS Buildings is?

Nick Gayeski: [00:02:09] Yeah, absolutely. KGS has been our baby and our project for many years now. Started the company back in 2008 really, while we were doing PhDs in Building Science and have grown it since 2010 as a Software as a Service and managed service business, serving clients all over the world.

James Dice: [00:02:31] Okay, cool. And primarily it's a fault detection platform. Can you go into a little bit more detail on what it is?

Nick Gayeski: [00:02:38] Yeah, that's right. So at the beginning it was primarily fault detection and diagnostics. That's still the core of what it is today. So we were doing PhD work on model predictive control, on expert systems for daylighting design simulation (that was one of my partners), and then on fault detection and diagnostics for air handlers, which was Steven, one of our other partners. And while we were doing that, we wanted to pick a path that would give us the most impact in the buildings industry. And we sort of felt fault detection and diagnostics was the most needed thing of all the things we were working on.

So we decided to commercialize that in 2010. MIT, our alma mater, was our first customer, and we've been growing it ever since. So there are a lot of things to talk about along the way.

James Dice: [00:03:32] Yeah. Yeah. That's kind of fascinating, because model predictive control is now becoming important. So 2010, man, I had just graduated from college, didn't even know what fault detection was at that point.

So how about your first few buildings? You said MIT was your first client, basically?

Nick Gayeski: [00:03:51] Yeah, that's right. The first building we did was a research building, a fairly large building, 450,000 square feet, with a lot of big air handlers, 100,000 CFM air handlers. I won't go into detail on issues, no one likes their dirty laundry aired, but like any building, there are so many systems and so many components that can have, you know, a variety of issues that just require constant maintenance and attention. And there's never enough resources to do that. So there were plenty of things to help them with, and that was a great first building.

James Dice: [00:04:24] Okay. And how has Clockworks changed since that first building?

Nick Gayeski: [00:04:29] Oh man, that's a 10 year journey. So a lot to talk about there. I'll start. So in the early days, we had diagnostics primarily focused on air handlers, on heat recovery systems, on hydronic loops, quickly moved into chillers and boilers and, you know, a full system.

One of the beauties of working with a client like MIT, or higher education in general (we now do a lot of higher education, a lot of corporate real estate, healthcare), is just the diversity of systems you see, both in terms of engineering on the HVAC side, but also building automation system diversity, metering systems and SCADA, control sequences, just a huge diversity.

So part of our story was just continuing to grow this diagnostic structure and platform that could handle that diversity in a scalable way across a really broad set of clients. So that's one of the biggest themes of the change. There's plenty of other directions to go though, but I'll, I'll let you ask a question or two.

James Dice: [00:05:33] Yeah. Well, I know one of the big steps along the roadmap there was your partnership with Schneider. Can you talk about how that has impacted the company?

Nick Gayeski: [00:05:42] Yeah, absolutely. So in the early days we had a few commercial customers, and we actually had research projects with national labs. We did work with Pacific Northwest National Lab on re-tuning, commercializing re-tuning.

That was all sort of bootstrapped growth, but we were fortunate to get the attention of Schneider in 2011 or so. They saw that we were making waves, and we started talking with their team out of Andover. We're out of Boston, and Andover, Massachusetts is right up 93. Really good group of folks; Barry Coflan, their CTO, and Jay Nardone at the time were very instrumental in forming that relationship. And we formed an OEM relationship in 2012, where we started working together on bringing fault detection and diagnostics through their buildings branches to service customers, and started to look at: how does this change the way services are delivered?

James Dice: [00:06:37] Got it. Okay. And obviously they've been helpful in the growth because Schneider is a massive worldwide company, right?

Nick Gayeski: [00:06:44] They are indeed, yeah. Very, very much so, important relationship for us. I have a lot of respect for the folks at Schneider. You know, a lot of them have been in the industry for years.

You know, at all those major players you find people who were the product manager for CSI or Satchwell Sigma, which had some of the earliest-

James Dice: [00:07:03] Yeah.

Nick Gayeski: [00:07:04] You just find people like that in a company of that scale, who had such an influence, often behind the scenes.

Well, it was really important for us in scaling overseas. In 2013, 2014, they started bringing us over into markets in Australia, the Nordics, and other places. So yeah, it's been a fruitful relationship. We've had to strike a balance between how we serve our direct clients, how we work with other partners, and how we work with them as a very important OEM relationship.

James Dice: [00:07:34] Right, right. And you obviously have direct relationships beyond Schneider as well. So how big is KGS today?

Nick Gayeski: [00:07:42] Yeah, so we have sites connected in something like 25 countries. I'm not actually sure how many. We have some of our first sites in places like Chile and Russia, places we hadn't expected.

Most of our business is in North America, the U.S. and Canada. We have about 260,000 pieces of equipment connected. We like to think in terms of equipment—like an air handler, a chiller, a boiler, a pump—because points are ambiguous. You know, if you're monitoring a billion points, what do you mean by that? And how does that relate to a real, concrete asset? So we think in terms of assets.

James Dice: [00:08:28] Got it. Yeah, points are pretty arbitrary. But how many pieces of equipment did you say?

Nick Gayeski: [00:08:34] About 260,000 pieces of equipment.

James Dice: [00:08:37] Damn. Okay, cool. Yeah, I definitely want to circle back on some of that, but let's kind of move on to what's-. So I'm an analytics nerd. I love fault detection. I'm a huge believer. So what sets KGS apart, philosophy-wise, approach-wise to other analytics, FDD platforms?

Nick Gayeski: [00:08:57] Right. I think first and foremost, we've been focused on scale: how do you scale? From the very beginning we were careful not to take an ad hoc approach to algorithms. There is a danger in the industry of being very ad hoc, like, "okay, this algorithm applies to this air handler at this site with this sequence if I tag it this way," and that doesn't scale very well. If you want to build an analytics library that is adaptable to sites all over the world, with different engineering and different sequences, and not have a large number of false positives, you have to write your code in a scalable way. So that's been a big part of our focus.

And the reason that sets us apart is we put a whole lot less burden on the end customer or the partner: they're not as responsible for defining an information model, applying the information model, or programming or configuring algorithms to work properly relative to that information model.

James Dice: [00:10:17] Got it. And when you and I were talking about this last time we spoke, the word you used was mass customization. Can you kind of go a little deeper into that for everyone?

Nick Gayeski: [00:10:26] Yeah, certainly. Some people like that terminology and some people don't. I picked it up at the Media Lab. While I was at MIT, there were folks in the Media Lab using the term mass customization a fair amount. I liked it, and I liked the way it applied to code. The context where I learned it was more like product configurators: how do you have a product that can be mass configured to meet user preferences? Like you go onto a website where you're going to buy shoes, and I want the skin of the shoe to look like this, and I want these colors, and this and that. You've parameterized your selection and design of shoes to be mass customized to the buyer, the end user, the consumer.

James Dice: [00:11:15] Like Nike ID, right?

Nick Gayeski: [00:11:17] That might've been one of the examples that were used, yeah. So in my mind, as we started to scale Clockworks, our fault detection and diagnostic analytics product, it was: how does that principle apply to the way algorithms and information models get applied to buildings and systems? So that when we have an air handler, all the different economizer sequences you might see on an economizer air handler are parameterized, so the same code base can apply to any air handler with any economizer sequence. Or it gets much more complicated with chiller plants, where, depending on the combination of primary and secondary or the different types of heat rejection—ground source or cooling tower or closed-circuit cooler or storm water, heat recovery heat rejection—how do you build code bases that can be parameterized to handle those configurations without reinventing the wheel at every site?
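
To make the parameterization idea concrete, here's a minimal sketch. This is my illustration, not actual Clockworks code; every name, sequence, and threshold is hypothetical. The point is that one economizer rule is driven by per-unit configuration rather than per-site code:

```python
# Hypothetical sketch: one economizer rule, parameterized per air handler.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EconomizerConfig:
    changeover: str           # "drybulb" or "enthalpy" changeover sequence
    oat_limit_f: float        # economizer high limit (outdoor air temp, deg F)
    min_oa_damper_pct: float  # minimum outdoor-air damper position

def economizer_fault(cfg: EconomizerConfig, oat_f: float, rat_f: float,
                     oa_damper_pct: float, cooling_call: bool) -> Optional[str]:
    """Flag behavior that contradicts this unit's configured sequence."""
    if cfg.changeover == "drybulb":
        should_economize = cooling_call and oat_f < min(cfg.oat_limit_f, rat_f)
    else:  # an enthalpy sequence would compare enthalpies; simplified here
        should_economize = cooling_call and oat_f < cfg.oat_limit_f
    if should_economize and oa_damper_pct <= cfg.min_oa_damper_pct:
        return "economizing expected but OA damper at minimum"
    if not should_economize and oa_damper_pct > cfg.min_oa_damper_pct + 5:
        return "OA damper open beyond minimum with no economizer call"
    return None

# Two differently engineered units would reuse the exact same rule code:
ahu1 = EconomizerConfig("drybulb", oat_limit_f=65, min_oa_damper_pct=20)
print(economizer_fault(ahu1, oat_f=55, rat_f=72, oa_damper_pct=20, cooling_call=True))
```

The sequence lives in data (the config), so supporting a new air handler means adding a configuration, not forking the algorithm.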

James Dice: [00:12:22] Yeah, totally. I like that concept a lot. I mean, you definitely helped me understand it better, and I wrote up a little bit of a newsletter on this that I'll put in the show notes for everyone that wants to dive a little deeper.

I want to circle back to other differentiators of KGS, but I want to kind of fast forward, because this ties into another question I was going to ask, which was around ASHRAE 223 and interoperability. So it sounds like what I'm hearing from you and what I've heard from you is that you guys developed this method of mass customization, which really also means a method to model data and buildings, before Haystack came around and before Brick came around, and now there's this really great movement of open sourcing how we're going to model these things. And so given that reverse order of things for you guys and your philosophy around mass customization, how are you thinking about Haystack and ASHRAE 223 and these movements to standardize data modeling?

Nick Gayeski: [00:13:27] Yeah. Well, first I'll say I agree. It's exciting and great that these communities are now coming together. You know, it's Haystack, it's Brick, it's ASHRAE 223, or standard 223 that's under development, and the communities around them. We love talking with the people in those communities. Have a lot of respect for the people who are influencing those communities.

The challenge for us, because we have a fairly extensive information model for how to represent these systems, is: how do we make sure that customers get what they're really after out of these new things (and Haystack's not that new), out of Haystack and Brick and where ASHRAE is going, so there aren't missed expectations?

There's maybe a little bit of a fallacy, or just too much reliance on: well, if you just do one of those things-

James Dice: [00:14:21] Add some tags and you're good to go.

Nick Gayeski: [00:14:24] Yeah. Sprinkle tags on it and everybody's good. And I think that kind of misses some of the core challenges of interoperability. The guys involved in 223 are well aware of this. Folks like Joel Bender talk about it a lot; he was influential in the BACnet group at Cornell, where Mike Newman was, who sadly passed away recently. But how do you define a concept and make sure that concept is uniquely represented in a way that can be understood machine-to-machine, so that one company or group doesn't use one collection of tags and one metadata exchange format to represent that concept while some other group uses a different one? I think one of the challenges we still collectively have is making sure that all the vendors who are applying tags out of any of these systems are applying them in consistent ways, so they truly are interoperable. And there are a lot of clients who don't fully understand the importance and the depth of that yet, even if they've heard that Haystack or Brick or one of these ontologies is important. You have to make sure you're working with people who appreciate the discipline required in the metadata and in the ontology.

James Dice: [00:15:41] Got it. And how are you guys approaching this from a platform perspective with these new standards, new-ish?

Nick Gayeski: [00:15:47] So as you mentioned, we have information modeling concepts already baked into Clockworks, so a lot of the point protos or tags from Haystack or Brick exist in some form in our platform, and we've done mapping between them, or have a general understanding of the mapping. A lot of it for us is just deciding the metadata exchange format where we're going to start exposing this information, and also consuming it. We've done projects where we've consumed Haystack tags, but the discipline with which the Haystack tags had been applied meant that a lot was lacking. We ended up uncovering some challenges with the Haystack tagging that was being used on that site, and basically having to redo it. Now, a nice thing is that a byproduct of our onboarding process is that we can feed back to a customer a new set of Haystack tags that they can then use.

So for us, it's really an ontological mapping between different schemas, and having consumer- and server-type behavior so we can be interoperable with whatever schema it is. As this moves into the ASHRAE community and into more formal standards, it's making sure that the way interoperability is structured and defined in those standards really accomplishes the vision of interoperability.
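
As a toy illustration of the tagging-discipline problem Nick describes (these tag sets and concept names are invented for illustration, not Haystack's or KGS's real mappings), two sources can tag the same concept differently, and the consumer has to normalize both onto one internal concept:

```python
# Hypothetical sketch: normalizing tag-based point descriptions to one
# internal concept. Two different tag conventions for the same concept
# must resolve identically, or the data is not truly interoperable.
HAYSTACK_TO_CONCEPT = {
    frozenset({"discharge", "air", "temp", "sensor"}): "supplyAirTemp",
    frozenset({"supply", "air", "temp", "sensor"}): "supplyAirTemp",  # alternate tagging
    frozenset({"return", "air", "temp", "sensor"}): "returnAirTemp",
}

def map_point(tags):
    """Resolve a point's tag set to an internal concept, or flag it for review."""
    key = frozenset(t for t in tags if t != "point")  # ignore the marker tag
    return HAYSTACK_TO_CONCEPT.get(key, "UNMAPPED")

print(map_point({"point", "discharge", "air", "temp", "sensor"}))
print(map_point({"point", "sat", "sensor"}))  # undisciplined tagging falls out for review
```

Anything that resolves to UNMAPPED is exactly the kind of point that, in practice, forces the onboarding rework he mentions.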

James Dice: [00:17:15] Got it. Yeah, and correct me if I'm wrong as I'm listening and remembering our past conversations, but I feel like the Clockworks data model goes into far more detail than your typical Haystack or Brick implementation. And the reason for that (this is just my interpretation) is your diagnostics. So FDD stands for fault detection and diagnostics, and you guys have made a point to define diagnostics in a way that I feel is unique, and your data model supports that. Can you tell us what diagnostics means to you?

Nick Gayeski: [00:17:53] Yeah. So the type of information that it's not entirely clear to me yet whether and how Brick or Haystack represent well is information about sequences, about modes, about equipment parameters like horsepower and rated flows. And maybe it doesn't belong in Brick or Haystack. Maybe it's through interoperability with a BIM metadata exchange format, or some other metadata exchange format, that that information gets shared. Or, for that matter, the building automation system itself: can the building automation system expose its sequences as something that can be consumed by the information model?

We configure that type of information based on the best available information for any given site, which, sadly, sometimes is a person. I mean, we love talking to people, but sometimes it's talking to the engineers who are just familiar with that site. Sometimes it's the BMS code. Sometimes the sequence narratives. But the reality is it's not really memorialized in a lot of information models, and it's not clear to me where it does get memorialized. So as this matures, I think we need standard ways of defining what it means to be in mode one, mode five, or mode seven of operation, and what the expected behavior of that system is in that mode. Those details are what let analytics, A, be more than just alarms; B, do real engineering calculations to help with prioritization; and C, get further into business intelligence, like, should I re-engineer this plant, should I engineer my next plant differently? Kind of beyond just O&M FDD.

James Dice: [00:19:50] Yeah. I want to save that. So on the diagnostics versus detection, can you just kind of-, I feel like there are a lot of fault detection platforms out there, and I just said it, I just said fault detection. There are a lot of fault detection and diagnostics platforms that stop with the first D. And like you said, they end up turning into just alarm platforms, right? So can you tell us what gets added when you go the extra mile to diagnostics?

Nick Gayeski: [00:20:19] Yeah. You know, I think it's things like this: stopping at detection may tell you that a valve is leaking by, or that a supply air temp is off. You want to take it to the level of providing contextual information about what the underlying cause might be. That's what the diagnostic part is about. Do you have enough information to say that an actuator failed on a valve, and that's what's causing those other issues? It's looking at systems and equipment instead of just looking at tags on points and what that combination of tags detects. It's looking at the overall system, making sure you're accounting for the expected sequence and the engineering parameters, and trying to identify the root cause of a series of issues on the system or the equipment that might otherwise be multiple detected faults, or even alarms. Taking it to the level of: why are we seeing a pattern of issues? And getting to a specific fault diagnosis that someone can repair is what diagnostics takes it to.
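
A crude sketch of the difference (the rules below are illustrative inventions, not KGS's diagnostic logic): detection emits individual symptoms, while diagnosis rolls co-occurring symptoms on one unit up to a likely root cause.

```python
# Hypothetical sketch: rolling detected symptoms up into one diagnosis
# instead of surfacing each symptom as a separate alarm.
def diagnose(symptoms: set) -> str:
    """Map a set of detected symptoms on one unit to a likely root cause."""
    if {"valve_commanded_closed", "coil_leaving_temp_low"} <= symptoms:
        # Valve told to close but coil still cooling: likely leaking valve/failed actuator
        return "chilled water valve leaking by (possible failed actuator)"
    if {"supply_temp_above_setpoint", "valve_at_100pct"} <= symptoms:
        return "insufficient cooling capacity (check chilled water supply)"
    return "no single root cause identified; review individual faults"

detected = {"valve_commanded_closed", "coil_leaving_temp_low", "supply_temp_low"}
print(diagnose(detected))
```

Three separate detections become one repairable finding, which is the step that keeps an FDD tool from degenerating into an alarm feed.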

James Dice: [00:21:35] And what happens when you don't have the approach to diagnostics that you guys have? One of them is false positives, but I know there are a lot of other downsides.

Nick Gayeski: [00:21:46] Yeah. A big piece of it is just lack of prioritization and the creation of noise. There's already a problem of alarms in the building automation system being ignored because there are too many of them, and if you move that into fault detection, you still have a lot of noise, with potentially a lot of false positives. If you want to turn it into something that's value-add and more actionable, you need the prioritization, which comes from engineering calculations, cost calculation, comfort impact assessment, and maintenance severity assessment, along with the diagnostic piece to get to the root of the problem. That takes it from excess noise they don't have time for into prioritized intelligence that lets them make a better decision about a problem they might otherwise never have known about, or have ignored.
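
The prioritization he describes could be sketched like this; the weights and fault numbers are made up for illustration, and a real platform would derive the cost term from engineering calculations rather than a fixed field:

```python
# Hypothetical sketch: scoring diagnosed faults on cost, comfort, and
# maintenance severity, then ranking so staff see high-value work first.
def priority(fault):
    return (fault["daily_cost_usd"] * 1.0        # estimated avoidable energy cost
            + fault["comfort_impact"] * 20.0     # 0-5 occupant comfort score
            + fault["maint_severity"] * 15.0)    # 0-5 equipment-risk score

faults = [
    {"name": "AHU-3 economizer stuck", "daily_cost_usd": 40, "comfort_impact": 1, "maint_severity": 2},
    {"name": "CHW valve leaking by",   "daily_cost_usd": 85, "comfort_impact": 3, "maint_severity": 4},
    {"name": "VAV-12 sensor drift",    "daily_cost_usd": 2,  "comfort_impact": 4, "maint_severity": 1},
]
for f in sorted(faults, key=priority, reverse=True):
    print(f"{priority(f):6.1f}  {f['name']}")
```

Even a simple weighted score turns a flat list of detections into a ranked work queue, which is what keeps it from being ignored like BAS alarms.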

James Dice: [00:22:54] Great. Yeah, thanks for that. I think that's a huge point for anyone who's just getting started with fault detection or anyone who may have done a pilot and not gotten great results. You're smiling. Go ahead.

Nick Gayeski: [00:23:09] We've worked with early adopters who've been trying out various FDD strategies for years; some of them have tried a few different things over 10 years. We're fortunate to still be working with some of those folks. They tried two or three things that didn't really go that well, but they still saw the vision. Now we're working closely with them, we share a vision, and it's starting to be successful, or it is successful, for them. That's always very gratifying: the realization of a shared FDD vision. But there have been many instances where people tried a product that was really alarms-plus or just fault detection, or maybe they tried to do it themselves and bought a tool to do it themselves. Or we've worked with people who tried to do it in the building automation system. They tried to program fault detection, maybe almost diagnostics, into the BMS, with things like energy alarms in the BMS. That's great when you have a building with an experienced BAS programmer who builds all that and maintains it, but when you then want to do that across your portfolio, or you want to continue to use it and maintain it when that person's role changes or they switch jobs, it's just unmanageable at that stage. So we've seen that with folks who tried it in the BMS, and we've seen that with folks who bought kind of a low-cost tool to do it themselves.

And, thankfully I think the market is shifting where organizations are thinking about scaling. They're not thinking about just trying it out. And when you start thinking about scaling, you have to consider maintainability and scalability and all the other aspects that you might otherwise ignore when it's like, let me try this.

James Dice: [00:24:59] Got it. Yeah. I'm what you could call an extreme skeptic when it comes to adding fault detection to the BAS. What are your thoughts on ASHRAE Guideline 36 and this sort of movement to start specifying faults into the BAS?

Nick Gayeski: [00:25:15] Yeah. Well, first I'll say I have an appreciation for Guideline 36 and the RPs that fed into it. The folks who contribute to those things are very well-respected engineers in the field who do a lot of cool work. So I'll start there.

I think my hope is that the ASHRAE community and the industry as a whole aren't rigid in their thinking about the technology solutions for how to bring FDD into the mix. Are they saying that what's in Guideline 36 has to be implemented through programming in the building automation system? Or are they saying that when you have a terminal unit of that type with those points, you should have diagnostics that do that FDD, wherever they live? As long as people keep an open mind about the evolution of the technology through which those algorithms and ideas get applied, it's great. But if it's narrowly focused on doing it through the building automation system, I think that misses a bigger future of where this is going.

James Dice: [00:26:23] Yeah, me too. I'll just leave it at that. Cool. Yeah. You mentioned re-engineering a plant and using analytics to decide whether that's a good idea. So this kind of falls in this broader category of benefits of analytics, benefits of fault detection and diagnostics outside of the traditional main benefit, which is energy efficiency, energy conservation.

What is KGS seeing for other use cases, other benefits for building owners for this technology?

Nick Gayeski: [00:26:54] Right. Well, the first thing I'll say is that I think the first wave of adopters saw energy as the primary benefit. My own experience is that the current wave of adopters sees condition-based or predictive maintenance as the primary benefit, that they have staffing challenges, resource challenges, knowledge gaps that fault detection and analytics on buildings and building systems fills, so that they can have a smarter maintenance strategy for the long term. And I think maintenance is fundamentally going to shift into a more data-driven, proactive approach rather than, you know, mostly PMs or preventative maintenance or reactive. And I think that's where analytics and FDD has the biggest role to play. And the benefits of that, the energy cost reduction benefits, energy and sustainability, you know, carbon, say, reduction benefits are still there. But fundamentally, the reason why a lot of organizations are shifting towards this now is it's more just a better way to manage and maintain a building long term.

So I see that as core, but to go back to your question, it's really beyond that where we're starting to see interesting things. Like: how do you feed performance statistics and performance information back into the design and specification process, or into the retrofit process? So that the history of faults on a system or in a building, and the history of performance, key performance indicators and how they trended over time, like kW per ton or kW per CFM, together inform what needs to get replaced or retrofit and on what schedule: capital renewal schedules. And when they do that, how to engineer it differently: what are the actual loads, instead of the loads the HVAC designer modeled? Once you have that degree of information about systems, it changes the way we reinvest, the way we retrofit, the way we design.
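
For a concrete example of the KPI trending Nick mentions, here's a sketch with made-up numbers (not real chiller data), tracking measured kW per ton over time and flagging drift for capital review:

```python
# Hypothetical sketch: trending a chiller's measured kW per ton to inform
# retrofit/replacement decisions. All numbers are illustrative.
def kw_per_ton(kw: float, tons: float) -> float:
    return kw / tons if tons > 0 else float("nan")

# Monthly averages: (chiller input kW, cooling load in tons)
history = [(210, 300), (225, 300), (240, 295), (262, 290)]
kpi = [round(kw_per_ton(kw, t), 3) for kw, t in history]
print(kpi)  # a rising kW/ton trend suggests degrading efficiency

drifting = kpi[-1] > kpi[0] * 1.15  # flag if efficiency worsened >15%
print("flag for capital review:", drifting)
```

A real implementation would normalize for load and weather before trending, but even this shape of feedback is what turns operating data into capital planning input.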

And there's a whole other area that's increasingly becoming interesting for us, which is more on the risk side.

James Dice: [00:29:19] Okay.

Nick Gayeski: [00:29:20] So with some of our pharma and life sciences clients, risk to production, risk in operations, is another important factor. The types of faults, the prevalence of those faults, the frequency of those faults, and the risk that creates for their mission-critical operations start to change the way they do risk assessment and reinvest in those operations. And that's exciting for us right now.

James Dice: [00:29:52] Cool. Yeah. So I'm noticing a lot of tie-in with this whole movement towards greater resilience, and it sounds like you have some clients that are feeling that more than others. What are some more detailed examples of how a fault detection package would help, say, a pharma manufacturing plant?

Nick Gayeski: [00:30:14] Sure. I'll keep it fairly simple: environmental conditions for storage after production is done, or environmental conditions while production is taking place. They may have very strict requirements on relative humidity, on temperature, on pressurization. If there are faults that put those things at risk, or there's a history of faults before a production run and a chance that a fault could occur while that run is happening, they may get it repaired before their next production run, or at least get it looked at, in order to reduce the risk to that run. And when you're talking about millions of dollars of product, the risk is very high. It's very worth it to get somebody investigating six faults before they take that action, because it's a small cost to address compared to the overall risk. So that's one piece.

And then beyond that, I would say it's more on the planning side. If there's a history of those types of issues, is it time for a retrofit or a replacement? Looking at the patterns of those problems, or patterns by type of system, may cause you to make a choice to engineer systems differently.

James Dice: [00:31:35] Yeah. It gets back to using data for prioritization. I've always thought of it in terms of: you have this bucket of low- or no-cost things that come up that you should fix sometime soon, and then there's the other list, capital projects, that's more long-term planning.

And so you're saying, let's take the analytics and use them to prioritize that other list. So that I think is pretty unique. Cool.

Nick Gayeski: [00:32:01] Yeah, we are saying that, and we're starting to see it. We have collaborations, oftentimes through our customer, with an engineering firm, where the engineering firm may get access, with the customer's permission, to the faults and to the raw data. We've had folks calibrate energy models based on the data. We've had folks look at the history of faults in order to define the scope of an outsourced retrocommissioning project. So yeah, increasingly it's being used on the retrofit and design and capital planning side.

James Dice: [00:32:40] That's fascinating, because one of the things that I feel is different with these types of software platforms is that, ideally, everyone that's interacting with the building is also interacting with the platform, because it can help everyone do their jobs: service contractors, mechanical and controls contractors, engineering designers, building operators, the building owner, the CFO. Everyone can get something out of this platform.

And I think the progression of this so far has been a focus on one or two of those use cases. Probably like an energy manager, maybe like you're saying that the current wave is getting into building operators and O&M type of processes, but I think that as an industry, we're still in the early stages of really unlocking the use cases of all those other potential users. Right?

Nick Gayeski: [00:33:34] Yeah. No, I agree. And to give you a sampling, the types of users we have today include commissioning agents, HVAC technicians and service providers, controls technicians and service providers, maintenance managers, facility managers, energy managers, directors of facilities, VPs of facilities, controls vendors, facility management service vendors, mechanical service providers. It's definitely broadening in terms of the base of users. Even some utilities who are doing measurement and verification work will get access, and they can go in there, look at the history of the diagnostics, and see if something was fixed.

Having said all that, I think we try to maintain a focus on the primary use case for the client. So it's really important to understand what the client's trying to get out of it. If it's about energy reduction or cost reduction, making sure they have well-defined processes and accountability within their organization of how that's going to happen using the platform.

If it's about participating in a utility incentive program, making sure that's a clearly defined process. If it's about incorporating it into their service agreement with their vendor, making sure there's a process for the vendor to use the information and fix the issues. It's just really important that you understand what the client's trying to get out of it, know their use case, and ensure there's a focus on that. Then build on that for all these other use cases they can derive value from.

James Dice: [00:35:14] Got it. Yeah, and I think you just hit the nail on the head, exactly my next question, which was how does a software company build for 20 different use cases? But yeah, you just answered it before I could ask it.

Let's go back to the condition-based maintenance or predictive maintenance. Can you give us some specifics on how you're seeing that play out?

Nick Gayeski: [00:35:33] Yeah, I guess where I'll comment is more with our service provider partners, so control services, mechanical services. There are many service providers that have service agreements with customers to do certain things on a schedule, right?

They show up every month or every quarter and check off a bunch of items on a list related to pumps, related to fans, related to boilers, related to air handlers. I think this changes that whole approach. You don't need to go look at a gauge or look at a graphic to record a reading anymore. That should be continuously monitored, detected, and diagnosed as something worth their attention and time before they ever show up on site. So I think it changes the PM schedules, the preventative maintenance pathway that most in-house facilities organizations undertake. And when it's outsourced to a service provider, it changes that task list fundamentally. I think they can spend more time on fixing the issue that the diagnostics found instead of checking everything to determine if there is an issue, which is what a lot of the task lists are focused on now. You go check all these things, and then you've done your task: you've found an issue instead of fixing it.

James Dice: [00:36:53] Exactly. Okay, cool. Alright. I really enjoyed that. So like the different waves of use cases. That's fascinating. I haven't seen it laid out like that before. Any other things you want to say around use cases that are top of mind?

Nick Gayeski: [00:37:12] I think we covered the core ones: condition-based maintenance, energy cost reduction and sustainability, reliability risk, and life cycle costs. There's always the question of what this looks like when there are millions of pieces of equipment connected and we can feed data back to manufacturers, either anonymously or with customer permission, so they get better at what they do. You know, we have this testing process where manufacturers go to the testing labs, their products get tested, and they get a certified stamp. Then the product is out there in the field, and we don't have a whole lot of data about how those products actually operate in the field. So I think having really rich and robust data about the in-situ performance of manufacturers' products changes that industry over time.

And then I'd say the other area that's more big-picture and long-term is risk from the point of view of insurance: how do you insure yourself against the risks of system and equipment failure, and how does this information inform that over time? But those are frankly a little bit further out. I think the ones we already talked about are more today.

James Dice: [00:38:36] Got it, okay. Cool. Yeah, my mind's going; we'll have to talk about this more offline. I have a couple more questions, and I want to make sure we hit on COVID-19. So today is April 10th, 2020, right in the middle of this. If you're listening to this many years from now, it's a really stressful time for many people. Just wanted to lay that out there, and-

Nick Gayeski: [00:39:05] We're both at home. Everybody in the world seems to be working from home except for our heroic first responders and healthcare workers. So, an interesting time, and grateful for the people who are doing that.

James Dice: [00:39:19] Definitely. Yeah. Grateful for a lot of stuff right now, including them. So zooming in kind of on our industry. What are your thoughts on what this means at this early stage? I won't hold you to any predictions. What are your thoughts?

Nick Gayeski: [00:39:36] Yeah, so I'm sure you've noticed all the increased usage of Zoom, and the burden on Teams, and just this rapid movement in this time period towards more remote work, and work through digital tools and virtual tools and video, and so on.

For our industry, I think it's slower to change, but this will probably accelerate the change towards more digital services, where you can know about system performance, system faults, and failures remotely. You can prioritize whether it's worth your time and attention. You can know whether someone is needed on site or whether it can be handled remotely, and there's a clear process for handling it remotely. I think facilities organizations internally will be pushed faster in that direction, and service providers externally will be pushed faster in that direction, partly as a result of the COVID crisis. There's a different perception of risk and of what needs to be handled on site versus what could be handled remotely.

We'll see an acceleration there, and that's good for us, because we feel aligned with that future. On the flip side, there are industries that are likely to be changed more significantly by this. Higher education is one we're keeping an eye on. Does this push more people to online education? Are there fewer people living in dorms? I think folks in higher ed are wrestling with that right now, and that has implications for us long term, and we want to support them through whatever transition this creates. No one really knows, right? There's just so much uncertainty now about the long-term effects of this. But those are just a few things. What do you think? What are you seeing from your point of view?

James Dice: [00:41:32] Well, I think it ties back to our use cases, right? You named like 15 different kinds of people that use buildings from a technical standpoint, and for a lot of them, their processes are sort of dependent on site visits. A lot of my colleagues do a lot of energy audits and site visits and in-person training, and I think tools like Clockworks, just like Zoom, allow people to do a lot of that work from a remote location. So if you think about the commissioning process, how much of it do we really need to be doing onsite? I think there's a lot of adjustment that's going to happen with those types of processes, whether it's new construction, retrofits, or retrocommissioning. Obviously monitoring-based commissioning is built around the monitoring, but even then, there are still ways to make those processes less dependent on site visits. So that's my first thought.

And then I think there's going to be a wave, kind of like the phases of use cases you just laid out, that says: okay, from a fault detection or analytics standpoint, how are we helping with the resilience of this facility, so it's ready for anything like this craziness? I just wrote a post about this a couple of weeks ago, getting into how our facilities can be not just ready for, but able to benefit from, things like this. That's a totally different reframe of the problem: how can we improve ourselves? And using smart building technology is, I think, a definite opportunity.

I'm not going to make any predictions, but definite opportunity for the next pandemic. Let's say that.

Nick Gayeski: [00:43:26] Yeah, no, I agree. And I think resilience may just, well, not just, but having the infrastructure in place to do more remotely, to have a better understanding of priorities, costs, and the work to be done, is part of it.

You know, if you're making that transition right now in this moment, that's a tough place to be. You're having to create resiliency at the moment that you need it. And I think part of what comes out of this is that people will make decisions to create that resiliency before the next time something like this happens.

James Dice: [00:44:01] Yeah. Okay. I ran through my questions. Is there anything else that you wanted to make sure we covered? What are you excited about right now, or anything else like that that's on your mind?

Nick Gayeski: [00:44:10] Yeah, I mean, a few things. So one thing we're excited about is, you know, we've been at this for a long time, but we're shortly rolling out a new version of our product that we're pretty excited about. That's been motivating us for some time, and it's getting more to the intelligence layer of, you know, the types of things we've been talking about. We've been really focused on the energy cost benefits, the equipment reliability benefits, the operations and maintenance benefits, but as we do more and more work with design decision-makers or risk and reliability engineers, having that intelligence layer to do analytics and statistics and feed information into those processes is going to be a whole new avenue for us. We're excited about that.

We're working with more and more service partners, folks who've been in business for 10, 20, 30 years, but who see the transformation in their industry happening. And this is where, you know, COVID is accelerating some of those changes, where they just need to change the way they deliver service. So that's an exciting time for us right now.

It's a hard time to be excited because there are people suffering through this and we're all struggling through it. So, you know, we want to be mindful of that. But in trials and tribulations, there is opportunity, and there is a motivation to change your way of thinking and look towards the future. So I'm definitely excited about that.

James Dice: [00:45:38] Cool. Yeah, and on this new version, I know you were talking about there being a new user interface, but it sounds like it's deeper than that. It's providing different types of dashboards, different types of analytics, to hit at these other use cases. Is that right?

Nick Gayeski: [00:45:54] Yeah, that's right. So it's creating much richer information about the trends and the patterns of issues and opportunities across systems, across types of systems, and the impact from that work over time. You would enjoy it. You mentioned very early on that you're sort of an analytics nerd, right? The range of capabilities that we now have to spin and pivot and look at information starts to get really exciting for us. You know, we now have 10 years of data on some systems: how well they've performed over that time, what's the history of faults, what's the history of performance metrics, like some of the ones I mentioned earlier. And how do you piece all that together when you look, you know, five years out and say, what do we want this picture to look like five years from now? Five years from now, we may be replacing systems X, Y, and Z, so what are we going to plan for in that replacement? It's at a much more strategic level than the day-to-day of how you use fault detection, which is exciting. Having said that, I love working the day-to-day. The folks who do that work are the ones keeping these buildings in good shape.

James Dice: [00:47:16] Right. Alright, well, we look forward to it. Maybe you can come back and do a demo for me, or for everyone, whenever you guys do launch the new version. When does it come out?

Nick Gayeski: [00:47:28] Yeah, I mean, we're very careful about the rollout. So we're going to do alpha testing and beta testing before we really go broad with it.

We have power users that we're going to introduce it to first, and we'll continue to collect their feedback on how it's working and what else they'd like to see in it. You know, it's really important to us that our customers get what they need out of it. We've built a lot into it already. It's already out in production, but it's not broadly released. So we're going to slowly roll it out and make sure that our core users get the opportunity to give us feedback and shape it as it matures.

James Dice: [00:48:06] Great. Alright, well, we look forward to it. Nick, this has been a pleasure. I want to say thanks for everything you guys are doing at KGS. Your mission's very well-aligned with mine. Thanks for coming on the show.

Nick Gayeski: [00:48:18] Yeah, absolutely. Thanks for having me, and thanks for your work at NREL. We have a deep appreciation for the work of the labs, so thanks for contributing to it. And also for doing this, you know, fostering community around this topic. Doing these video podcasts is awesome.

James Dice: [00:48:36] Yeah, absolutely. Well, I'll talk to you soon. All right, friends, thanks for listening to this episode of the Nexus podcast. For more episodes like this and to get the weekly Nexus newsletter, please subscribe at nexus.substack.com. You can find show notes for this conversation there as well.

As always, please reach out on LinkedIn with any thoughts on this episode. I'd love to hear from you. Have a great day.