Podcast | 84 min read
James Dice

🎧 #050: Shaun Cooley on one API for your building

May 13, 2021
“The fact that other industries were being solved with an API, it really made sense to create this single layer that takes on all the historic complexity and abstracts it away into that API, so that everyone doesn't have to keep repeating the exact same task over and over again.”

—Shaun Cooley

Welcome to Nexus, a newsletter and podcast for smart people applying smart building technology—hosted by James Dice. If you’re new to Nexus, you might want to start here.

The Nexus podcast (Apple | Spotify | YouTube | Other apps) is our chance to explore and learn with the brightest in our industry—together. The project is directly funded by listeners like you who have joined the Nexus Pro membership community.

You can join Nexus Pro to get a weekly-ish deep dive, access to the Nexus Vendor Landscape, and invites to exclusive events with a community of smart buildings nerds.

Episode 50 is a conversation with Shaun Cooley, CEO and Founder of Mapped, a startup focused on the data infrastructure layer.

Summary

We talked about the value proposition of having an independent data layer and the downsides of not having one. Then we took a bit of a deep dive into Mapped's approach to that layer and the keys to doing it really well. Finally, we talked about the real definition of a platform and Mapped's take on the platform concept.

  1. Mapped (0:54)
  2. Symantec (1:51)
  3. Twilio (11:09)
  4. Brick Schema (35:22)
  5. Jason Koh (35:26)
  6. Matterport (39:17)
  7. TrueView (39:19)
  8. Meraki (40:59)
  9. Aruba (41:01)
  10. Ruckus (41:02)
  11. GraphQL (49:24)
  12. Haystack (55:11)
  13. Platform Revolution (1:03:19)

You can find Shaun Cooley on LinkedIn.

Enjoy!

Highlights

  • Introducing the IDL (6:42)
  • Downsides of not having an IDL (14:55)
  • How Mapped approaches data modeling (35:07)
  • How Mapped thinks about APIs and APIs vs. standards (47:77)
  • All the interactions that IDL platforms can enable (1:00:03)

Music credit: Dream Big by Audiobinger—licensed under an Attribution-NonCommercial-ShareAlike License.

Full transcript

Note: transcript was created using an imperfect machine learning tool and lightly edited by a human (so you can get the gist). Please forgive errors!

James Dice: [00:00:03] Hello friends, welcome to the Nexus podcast. I'm your host, James Dice. Each week, I fire questions at the leaders of the smart buildings industry to try to figure out where we're headed and how we can get there faster without all the marketing fluff. I'm pushing my learning to the limit, and I'm so glad to have you here following along.

This episode of the podcast is brought to you by Nexus Pro. Nexus Pro is an annual or monthly subscription where members get exclusive writing, podcasts, and invites to members-only Zoom gatherings. You can find info on how to join and support the podcast. Without further ado, please enjoy this episode of the Nexus podcast.

Episode 50 is a conversation with Shaun Cooley, CEO and Founder of Mapped, a startup focused on the data infrastructure layer. We talked about the value proposition of having that independent data layer and the downsides of not having one. Then we took a bit of a deep dive into Mapped's approach to that layer and the keys to doing it really well.

James Dice: [00:01:11] Finally, we talked about the real definition of a platform and Mapped's take on the platform concept. Without further ado, please enjoy Nexus Podcast Episode 50. Hello, Shaun. Welcome to the Nexus podcast. Can you introduce yourself?

Shaun Cooley: [00:01:25] Sure. I'm Shaun Cooley, founder and CEO of Mapped, a data infrastructure company for industrial and commercial IoT.

James Dice: [00:01:33] All right. Thanks for joining us. Can we unpack your background a little bit? We'll obviously get to Mapped in a minute.

Tell me about your career before Mapped.

Shaun Cooley: [00:01:42] Yeah. So, only career, right? You don't want to know like where I was born, how I was raised,

James Dice: [00:01:47] whatever you feel like

Shaun Cooley: [00:01:49] too. Yeah. So yeah, so I spent 18 years at Symantec as a distinguished engineer in the Norton group.

James Dice: [00:01:56] Distinguished engineer. Sorry to interrupt.

Shaun Cooley: [00:01:57] Yeah, no worries.

So, it's sort of second from the top for engineers, distinguished engineer. We used to joke that distinguished is one step before extinguished. As you move up, at least in the industry, most companies have different names for it.

Like level one, two, three at some companies. At Symantec it was associate software engineer, then software engineer, then senior software engineer, then principal, then senior principal, then distinguished engineer, and then eventually fellow. But I left there after 18 years and went to Cisco, where I was the vice president and CTO of the internet of things group inside of Cisco.

Obviously Cisco's IoT practice is focused on commercial and industrial, so nothing residential or consumer. And then I left there in July of 2019 to start Mapped, just before we all got to stay home for COVID.

James Dice: [00:02:53] Yeah, totally. I was looking at your LinkedIn. I want to ask you a little bit about Cisco, but first I saw there was something like a hundred patents or something.

Can you expand on what that means?

Shaun Cooley: [00:03:04] So, all those patents are largely software related. There's a couple of hardware patents in there, but a lot of the big tech firms incentivize the filing of patents, finding novel things that you've created inside of the software and really pushing to file those, from an intellectual property protection standpoint.

You know, luckily I've never worked for a firm that sort of weaponizes any of those; both Symantec and Cisco used them for defensive purposes. It's sort of that mutually assured destruction, like Microsoft is never going to sue Cisco for patents, because Cisco has enough to sue back.

And so they just both stay at arm's length. I think the software patents are an interesting way to publicly document the things you're working on at companies where you can't otherwise do it. Right? A lot of the work at a Symantec or a Cisco is really internal and sort of protected.

And so when you get to publish a patent with the US Patent Office, the work becomes open and you can start to talk about the sort of architectures and things that you've put together that led to novel solutions.

James Dice: [00:04:06] I see. Cool.

Shaun Cooley: [00:04:07] It's fun. And I think it's 121, just to be precise.

James Dice: [00:04:11] Oh, sorry.

Cool. And so what were the types of things that you were up to when you were in the IoT group at Cisco?

Shaun Cooley: [00:04:20] Yeah, so I was responsible for standards inside of Cisco for IoT-related things. So whether it was the Open Connectivity Foundation, work that we were doing at IETF, 3GPP, all those things on the IoT side fell under my team.

Then there was a lot of ideation and planning of where the product roadmap goes, the sorts of things that we're interested in building, and as a very large company there was obviously a lot of customer interaction. And so, you know, it's interesting where I eventually landed on

the things that we're building now with Mapped. The way that Cisco sells products is through something called executive briefings. We bring in the CEOs and chief digital officers of these major Fortune 100 companies and we tell them all the great things we can put into the 19-inch racks in their very clean, air-conditioned data centers.

And I think, it was like my third meeting after joining the IoT team, we were doing one of these executive briefings for a major oil company. And the guy that we were selling to says, you've never been out on an oil rig, have you? And all of us in the room were like, no, why?

He said, feel free to come down to Texas and we'll fly you out to the Gulf of Mexico and show you some of the hell that we put up with on these oil rigs from an automation standpoint. And that really led to me spending a lot of time going out and visiting customer sites. Throughout my time at Cisco, I would take any customer up on an invite to go visit their sites.

So it didn't matter if it was the roof of a commercial building or a mile under the earth in a mine, up in the head of a wind turbine, oil rigs or refineries, energy plants, manufacturing floors, anywhere I could go, I'd go take a look and see what all the pain was that they were going through with the automation environments.

And I think I learned very quickly that it's not a data center. Right. It's very different from everything else that we look at on the tech side.

James Dice: [00:06:21] Totally, cool. And obviously a lot of our audience will not be familiar with the things outside of buildings, but those same insights happen obviously in a boiler room. Yeah, exactly.

So, okay. So you were at Cisco, you said I'm going to quit and start a startup. How did the founding of Mapped work, and why'd you do it?

Shaun Cooley: [00:06:41] Yeah, so, I think, well, I guess I should probably describe what Mapped is before I talk about why I decided to go chase after it. So, this concept of data infrastructure, we've called it many different names, and I should give Joe over at Montgomery some credit for the data infrastructure term. Although he'll probably never let us live it down, we were calling it data aggregation layers. I think you call it the independent data layer. There's a lot of different names for this.

You have these environments where many vendors, over long periods of time, have come in with whatever bag of tools was available to them at the time and built out automations to serve a purpose, right? The purpose is control of an environment.

Whether that's manufacturing, or oil and gas, or commercial buildings, the work that an MSI does is largely the same, right? I've got an end outcome that I'd like to achieve. I've got this set of tools. How do I piece these tools together and how do I do the programming to achieve that end outcome? Now fast forward through 50 years of that happening, with thousands of system integrators and many thousands of vendors and hundreds of protocols, depending on which vertical you're looking at. Now you have a situation where you've got building owners, operators, and tenants that are looking at hundreds and hundreds of locations that were all built in different eras by different system integrators, from different components, on different protocols.

And they're trying to make sense of everything from energy spend to how they do maintenance and operations. Pick any big name tech company. They probably have somewhere between 200 and a thousand offices around the world.

If you do the math on that, they're spending about a billion dollars plus a year on real estate, and in most of those cases, all the costs of equipment, maintenance, and energy spend are passing through to them as well. And yet they have no visibility into how any of those systems are running across a portfolio.

And so, when you look at this, the same thing applies to manufacturing plants. The same thing applies to oil and gas. Each of those refineries was built in a different era, with different systems and different capabilities and different technologies. And so now as you try to get intelligence out of it, the biggest hurdle that we see again and again is integration.

Right. Going in, mapping out all of those systems and trying to discover make, model, firmware, what protocol does it speak? What are its relationships to other devices? What's its function inside of the environment? What explicit logic does the PLC have? When you start putting all these things together, you can spend a couple months inside of a single commercial building.

You can spend a year or more inside of a refinery or an energy plant or a manufacturing floor doing this. And so the company name Mapped is really a play on the mapping process that would normally be done inside of these environments. And the fact is, all the vendors that we see that deliver software or some sort of value into those environments today tended to start out with a handful of data scientists or software engineers.

And then if you fast forward a couple of years into the life cycle of these companies, they're like a handful of engineers and 50 integration engineers who are going in and doing the installations in each one of these. And so I think the insight that I had was that if I can find a problem where there's 20, 30, 50 integration engineers in every one of the vendors serving this space, it's time to abstract away that problem and really make it a single layer that can drive that value, so that you don't have to go spend $50,000 of non-recurring engineering to do that first integration.

You don't have to spend three months doing the integration. You can do it in sort of a plug and play type model. And I think at the same time, we were also seeing similarly complex problems being, I would say, unlocked at scale from an innovation standpoint through APIs. And so if you look at credit card processing, you've got a company like Stripe; nobody in their right mind anymore builds their own credit card processing platform. And you can look at something like text messages. I, over the years, have built far too many direct integrations into carrier SMS gateways. Every carrier had a homegrown SMS gateway. They were entirely built by the carrier, operated by the carrier. You had to sign an agreement with the carrier. You had to know how to speak to their particular SMS gateway. You had to maintain a database of phone numbers of like, okay, James' phone number, is that owned by AT&T or T-Mobile or Verizon or Sprint? Because I have to send it to the right gateway. I can't just send it to one of the gateways.

And so then you get a company that comes in like Twilio that says, look, all the sort of stored-up complexity that has existed inside this environment can be abstracted away through a very simple API. Just send a text message to this phone number.

Like, I don't care about all the plumbing underneath. And I think when you put together the challenges and the amount of people that were being spent on integration, or still are being spent on integration, with the fact that other industries have been solved through a simplified API, and I'm happy to come back to the difference between an API and a standard later.

The fact that other industries were being solved with an API, it really made sense to create this sort of single layer that takes on all the historic complexity and abstracts it away into that API so that everyone doesn't have to keep repeating the exact same task over and over again.
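
To make the Twilio analogy concrete, here is a minimal sketch of what "just send a text message" looks like once a single API has absorbed the carrier-level plumbing Shaun describes, using Twilio's Python helper library. The credentials and phone numbers are placeholders.

```python
# A minimal sketch of the "one simple API" idea using Twilio's Python helper.
# Credentials and phone numbers below are placeholders, not real values.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

# One call replaces per-carrier gateways, carrier lookups, and custom agreements.
message = client.messages.create(
    body="Hello from the API layer",
    from_="+15005550006",   # a Twilio-provided number
    to="+15558675309",      # recipient; Twilio figures out the carrier
)
print(message.sid)
```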

James Dice: [00:12:03] Absolutely. Cool.

Shaun Cooley: [00:12:05] I don't know if that answered your question. I don't even remember what the original question was.

James Dice: [00:12:09] No worries. No worries. I love the way you just described that layer. Yeah, it's very similar to the stuff I've seen on your website and other places where you've kind of laid out the problem in a way that I really like. So the original question was: you were at Cisco and you saw this need, and I want to hear a story around deciding to quit your job, recognizing this from a technology standpoint and a business opportunity standpoint. What made you go start the company?

Shaun Cooley: [00:12:39] Yeah. I think you can obviously do a lot inside of a company like Cisco. There are significant resources inside of a company that size and, you know, significant go-to-market capabilities and other things around it. At the end of the day, I sort of looked at it: I'd been in large tech companies for the better part of 25 years at that point.

Yeah, and I really had always kind of wanted to go do a startup, because I hate myself or something. I don't know what the actual reason was for it. But this seemed like a big enough opportunity that it was worth going out and pursuing on my own rather than trying to build it inside of a company.

James Dice: [00:13:19] Got it. Cool. So that was what, a year and a half ago? Almost two years ago now?

Shaun Cooley: [00:13:25] Yeah, almost two years ago. Yeah. Coming up on that.

James Dice: [00:13:28] Cool. So where are you guys at today, before we kind of dive into the nerdiness? Where are we at in spring 2021?

Shaun Cooley: Yeah. So we spent the first 18 months or so building the platform in stealth. It's one of those weird startup things.

For anyone that doesn't know what that means, it just basically means we had no website, we weren't publicly talking about it. We had a couple of early design partners, kind of early customers, that were helping us answer some of the questions around what works for them, what doesn't work for them, and what they would want it to look like.

And we came out of stealth on March 2nd, so it's still fairly recent, about a month that we've been out of stealth, and we've really started telling the story of what the company is and how we think that data integration layer really solves a major problem, not just for commercial real estate, but also for manufacturing and industrial environments as well.

James Dice: Totally. Yeah. I have this opinion that I haven't talked about very much, that I wish more companies in our space would just stay in stealth a little bit longer.

Shaun Cooley: [00:14:30] I don't know if I should take that personally or not.

James Dice: [00:14:33] No, I haven't seen it in your product, but I have seen a lot of other products that are what I would call marketing led versus product led.

And anyway, that's an aside, so let's dive into it. So from where you're at today, obviously you said you've co-developed with your clients, and so I feel like you probably have a lot of opinions about how this layer should be done, which is fun for me to dive into.

So, let's just start with, and you kind of hinted at a lot of this when you described what Mapped is, but this is kind of where we're at as an industry today: we don't really have this layer deployed at scale. From a smart building standpoint, we have all these what I would call point solutions deployed, and they're gaining scale, but we don't really have this layer.

So what are the downsides of not having this independent data layer and what will this enable when we do have it deployed?

Shaun Cooley: [00:15:28] Yeah. Look, I think the biggest downside of not having this layer is that it leaves the sort of final integration of literally everything you want to put into the environment to that integrator that has to show up and figure out what's going on inside of your environment and how to really make sense of it for the application or for the hardware that they're putting in.

I almost equate this to, if buildings didn't have a standard size of door, how much harder would it be to go out and get a door for everything? And, yeah, there are a few doors that are wacky sizes, but in general, that sort of eight foot by three foot size, you can get from anybody.

It's been standardized, and similarly the position of a lock on a door, the size of the hole, the two and three eighths inch hole that you need to drill, all this stuff has been standardized. And yet our building systems are just all over the place today.

And I think that we continually pay a penalty for it. We've almost sort of accepted that it's just okay to go and spend three months integrating everything that we want to integrate into the building. And I don't agree that it's okay. Right. I think that we need to have that layer that abstracts it away.

And so the penalty of us not having it is that we continue to waste time on integration. We have a very strong belief, which is right here in my background, that we want everyone else to focus on innovation and not integration.

You shouldn't have to spend 30% of your time doing that final step of integration when you should be spending it on building cool new stuff, or new analytics, or new carbon credit trading, or whatever it is that you have in mind. That should be where you get to focus your time.

James Dice: [00:17:09] Yeah.

And I would add to that and say, just like Stripe and Twilio kind of handle all of the ongoing mess that happens as well. As someone who's done this integration before, there's often upkeep. It's not just a one-time set it up, now we have analytics, or now we have this perfect new smart building application.

There's always like, someone unplugged the thing from the switch, someone decided to close that hole in the firewall just to see if anyone would say anything. These things happen all the time. And so it's also about having someone handling it, right?

Shaun Cooley: [00:17:45] Yep. Yeah. We refer to that as the day two problem.

Okay. Day one is, you got everything working, all the data's flowing, whatever your transform layer is, it's working, you're feeding the applications. And then day two happens and it broke. I don't know why it broke. Let me go back into the building and figure out what happened, where the data stopped flowing.

And that day two problem is a real challenge. So we refer to it, and we can get into this more later, as a living graph. The graph representation that we have is living; it continues to evolve over time. And so the API, the way that we describe it to our customers, is really meant to be sort of self-discoverable.

You don't need to have prior knowledge of the environment that you're going into in order to traverse the graph and make sense of it. And that allows us to continuously incorporate change and what we call enrichments into this graph, so that the developers who are building on top of it can continue to benefit from that.

Right. And again, as a software developer, if Twilio can't send a message to AT&T, that's not my problem; queue it up and when AT&T is back online, send it to them. Right. Like, I don't care about what your internal issues are, that's not my problem. And similarly, we want our customers to not have to care about a firewall port closed, or somebody upgraded the firmware on a device somewhere, or swapped out a controller with something else.

And like everything falls over underneath. Right. That's our problem to go figure out and to continue to update the living graph.

James Dice: [00:19:16] Absolutely. So, thinking back on the history of the building owners I've worked with, and correct me if I'm wrong here, but it seems like there's two types of building owners when it comes to you guys approaching them: the people who get it and the people who don't get it. What I mean by that is, I spent a long time with healthcare clients and it was a big stretch to get them to where they see the value of it, and this was just this certain type of healthcare client.

I'm sure there are healthcare systems that do get it. What I'm saying is, to talk to them about an independent data layer and sort of make the business case would have taken years. And so we often would just go in with a complete solution, right? Or maybe not even talk to them about it, it's just kind of in the background.

And then there's the people who are like, I've been out there deploying use cases for a long time, and then they see, oh wow, I really get this now, I need this. Is that sort of how it is when you're out talking to building owners?

Shaun Cooley: [00:20:17] Yeah, look, I think when you put it in that simple of buckets, they absolutely fit into one of those two buckets, right.

Either they get it or they don't. I think that we look at the go to market in a couple of different ways here. The sort of easy ones are the tech tenants. Think of all those big Fortune 1000 tech companies who manage large portfolios.

They internally have data science teams and BI teams that are capable of making use of an API or building applications that make sense for them on top of the API. And they're also managing a large portfolio, which means that they've got a desire to go after it. I think that when you get into the owner and operator side, there's a very clear delineation between the very tech forward owner operators and the sort of not so tech forward

owner-operators, right. We talk to a lot that literally take over a building, they upgrade the lobby, the marble in the lobby and the elevators, and they try to re-lease out the space for a little bit more and flip it to somebody else. Yeah. They have no interest in the control systems,

other than that they're not catching on fire while they're trying to sell the building to somebody else. And I think that for us that obviously creates a little bit of a challenge as we go after that longer tail of the sort of non tech forward ones. But what we'll find over time, at least what I believe we'll find over time, is that we'll reach a critical mass of buildings where the third-party developers start making use of it as well. Third-party developers are not just selling an API to the customer; they're selling some value, some business outcome, some energy optimization or predictive maintenance solution into those environments.

And if they're depending on us under the hood to make use of the APIs, now it starts to make more and more sense to those buildings. I think on the tech forward building side, we hear a couple of different things. Either they've been struggling for a couple of years to get their whole portfolio together into a single data lake or whatever you want to call it,

and they've been struggling to normalize data, they've been struggling to capture data, they've been struggling to make use of that data. Or they just sort of have this vision that it'll be solved soon, and so they're looking for a vendor that can do it for them.

I think in both cases our conversation's pretty easy with these customers. You've got a lot of buildings that all have a lot of the data you're trying to get access to. We can help you. We can give you a very clean API and a way to do that. The next question then is, what do you do with the data?

Again, if they've got a BI team or data science team, or even a finance or ops team that's capable of using the data, there's a very clear path to immediate use for that. The ones where we find no BI or ops team, we see one of two things. Either they have a vision for it, like we've had one tell us that

every building should have an API and that no vendor should ever be in the building again. Right? Like, if you want to sell me a software package that runs on my building, go use the API and talk to me when you're using the API. You're not crawling around my space and putting boxes in my space and adding additional plant pressure that might eventually cause something to fall over and stop working.

And that model, I think, works really well for us. Right. Having a cloud API that brings together all your data, where you get the visibility and control of where the data is going and who has access to it, is a good story for that single API. The other one is, they have tenants that are demanding access to data, right?

And so when the tenants start asking for building data, it tends to be those tenants that are in many locations around the world, trying to quantify where their dollars are going on that space. And as a building engineer, when a tenant comes to you and says, we want access to building data, your first answer is, what? I don't have a clean way to give that to you.

And yet we're seeing more and more of these tenants signing leases where the lease actually mandates access to building data. And the owners just don't have a way to give it to them. And so in some cases, even though there's no BI team or data science team in the owner operator that can make use of it, they know that there's demand from their tenants and they're viewing this as a tenant service that they're providing, which is, we can give you access to the building data for your floor with a simple API that they don't need to go and spend a bunch of engineering effort to try to provide.

So I like the sort of "they get it or they don't," but I find that there's a lot of shades of gray in that.

James Dice: [00:24:37] Yes, definitely. So how about other vendors then? One of the things that we talked about last week at our pro member gathering is that it probably actually makes sense to sometimes not go to a building owner with this, but to go to a vendor and partner. And I think there's also probably nuances and shades of gray there.

Right. It seems like some of them would probably not want to give up parts of their stack. Can you talk a little bit about why a vendor would want to bring you in and say, I want you to take care of the integration piece?

Shaun Cooley: [00:25:11] Yeah. So I think that there are a couple, again, many shades in here.

So it depends on the type of vendor we're talking about. If you're talking about an MSI or an MSP that's going in and performing services inside of these buildings, many of those MSIs are looking for ways to do sort of recurring revenue type services in the space. And a lot of those, you can think of anything from remote management of buildings to remote optimization of buildings, where they're not just sort of watching it remotely.

They're also helping you to maintain things remotely without having to show up on site and do things, for a very large portfolio, mid to large size MSI. Every one of those buildings is a slightly different environment. So just like those companies that are in a bunch of buildings, you're dealing with the same thing as a vendor to that space.

You have to send somebody out to go figure out what's going on with the system, or to tell them that some set point is off and it's costing you a bunch of money. Those sorts of investigations right now require that a human goes out and visits it, which means you've got to have this fleet of humans that you can send out to all these places.

And so as they look for ways to move from services to recurring revenue, they're very interested in finding platforms that make that easier for them. I think the other one is that you'll probably see some of the MSIs start to build applications as the access to data gets easier and easier, right.

They will start moving up the stack into some of the application spaces, some of those analytics and other types of services that they used to provide with humans, no longer focused on the integration piece, but rather the sort of data use piece. Then there are the application vendors.

I think like last week we had Clockworks Analytics on there. Right. Those sorts of folks, I get a mix. I've had, apologies for language, one of their CEOs tell me, I don't ever want to install any more shit in buildings. Right. They just want to do the data science problem.

They want to add value and provide a benefit to the customer, and the integration is viewed as a sort of necessary evil. They had to go in and do the integration in order to provide the value that they sold to that customer. They didn't want to do the integration; they had to. Right. And so I think in those cases, it's similar to the same sort of cloud discussions that we used to have of, well, yeah, you can go rack and stack your own servers in a data center somewhere.

But you can just use Amazon or Azure now. Why would you ever go rack and stack your own servers again? I think that eventually we'll get to a point where, whether it's us or somebody else, someone manages to get to a large enough deployment where there will be a vendor in there where it's like, why would you ever go bother?

It's the Twilio one again: why would you ever go bother to send your own text messages, just because you can, when it's not the part of the business where you're adding value? And so I think that we'll see the vendors shift over time towards API platforms that normalize for them, so that they can move away from that

NRE, the non-recurring engineering that they spend today on integration. Oftentimes that's either directly billed to the building as an upfront installation or integration cost,

which can be pretty significant in many cases, or it's rolled into a 36 month contract. The building signs this three-year agreement for whatever software they're installing and the vendor just hides the cost of their fees into that. Right. I think if you can switch to an API where you're paying a couple hundred dollars a month, rather than $50,000 upfront, and you can just start making use of the API as necessary,

it makes it a lot easier to scale a business out. It makes it a lot easier to not have to ramp up an integration team that goes out and does the integrations, when you can just depend on an API being there to handle those cases for you.

James Dice: [00:29:00] Totally. So what's that look like for you guys then as you look to go to market and to scale up?

You guys are then taking those, like you said, those 50 engineers that everybody has, and pulling them internally. So what does this look like for you guys and for you as a CEO? Are you looking for 50 of your own integration engineers right now? Or what's your approach?

Shaun Cooley: [00:29:22] Yeah, look, I'm always looking for smart people.

I think if anyone knows the space really well and wants to come work for us, I'd love to talk to them. But we view this from a very different approach, and that is that this is not a job for people, even though I realize it is today. If you look at something like enterprise asset management on the IT or the IT security side, these sorts of things are largely solved and have been for the last 20 or so years,

where you install a piece of software, it goes out and discovers everything on your network, and it figures out the make and model of those things. And in many cases they then go into, what's its security posture, does it need to be updated, that sort of stuff. In our case we use automation and machine learning to largely do those same things.

We go out, we discover all the things on the network, whether it's a serial bus or an IP network. And we're doing that to find its make and model so that we know how to map it and model it inside of our graph. And once we know how to map it inside of our graph, then it turns into, how do we operationally extract data from it, right?

We want the data that continues to flow out of it. We use a bunch of different techniques to get data out of those. Some of them are active: we are actually polling devices for data. Some of them are passive: we monitor traffic moving across networks and look for protocols that we know how to speak and pull apart those protocols.

And so part of this is, we don't want to add more load to the systems that you already have inside of the building. Right? If you're already hitting some device 10 times a second for some reason, us hitting it another 10 times a second doesn't really help anybody. But if we can pick up on the communications between your existing controller and that device, and we can just read the data that's coming off of it, we benefit a lot from the fact that most of these protocols have no encryption or security of any sort.
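
As an illustration of the passive side of that discovery, here is a minimal sketch that listens for BACnet/IP traffic (UDP port 47808) with scapy and logs who is talking to whom. This is a generic example under those assumptions, not Mapped's implementation; real protocol decoding would happen where the comment indicates.

```python
# Minimal sketch of passive discovery: watch for BACnet/IP traffic (UDP 47808)
# without polling any device ourselves. Generic example, not Mapped's code.
from scapy.all import sniff, IP, UDP

BACNET_IP_PORT = 47808  # 0xBAC0, the standard BACnet/IP port


def handle_packet(pkt):
    # We only look at who is talking to whom; a real system would parse the
    # BACnet NPDU/APDU here to recover object IDs, properties, and values.
    if pkt.haslayer(IP) and pkt.haslayer(UDP):
        print(f"BACnet/IP frame: {pkt[IP].src}:{pkt[UDP].sport} -> "
              f"{pkt[IP].dst}:{pkt[UDP].dport}, {len(pkt)} bytes")


# Requires a network interface that can see the controller traffic
# (for example, a mirrored switch port) and privileges to sniff.
sniff(filter=f"udp port {BACNET_IP_PORT}", prn=handle_packet, store=False)
```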

And it allows us to sort of sit in the network layer and watch what's going on inside of the environment. But we're doing that, again, to produce that operational data that's coming out of there. And then when the data gets to our cloud, all the tasks of merging that data into our graph and mapping it fall onto machine learning.

And so we use a lot of ML, some natural language processing on things like point names, but also a lot of deeper ML on how we look at relationships between devices and discover both explicit and implicit relationships between those devices.
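
For a flavor of the point-name problem, here is a deliberately simple rule-based sketch that maps a few raw BMS point names to normalized labels. The patterns, example names, and labels are illustrative guesses; Mapped's actual approach, as Shaun describes it, relies on NLP and machine learning rather than hand-written rules.

```python
# A deliberately simple, rule-based stand-in for the point-name normalization
# Shaun describes doing with NLP/ML. The patterns and labels are illustrative
# guesses, not Mapped's or Brick's actual mapping logic.
import re

# Regex fragments commonly seen in legacy point names -> a normalized label.
POINT_PATTERNS = [
    (re.compile(r"(ZN|ZONE|RM).*(T|TEMP)$", re.I), "Zone_Air_Temperature_Sensor"),
    (re.compile(r"(SP|STPT|SETPT)", re.I),          "Temperature_Setpoint"),
    (re.compile(r"(RH|HUM)", re.I),                 "Humidity_Sensor"),
    (re.compile(r"(SF|SUPFAN).*(S|STAT)$", re.I),   "Supply_Fan_Status"),
]


def classify_point(raw_name: str) -> str:
    """Return a normalized label for a raw point name, or 'Unknown'."""
    for pattern, label in POINT_PATTERNS:
        if pattern.search(raw_name):
            return label
    return "Unknown"


for name in ["RM624_ZNT", "AHU1_SF_STAT", "FL6_RH", "VAV_6_24_STPT"]:
    print(f"{name:15s} -> {classify_point(name)}")
```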

James Dice: [00:31:37] So what you're saying is, whereas a lot of people are just kind of hiring a bunch of people like me, who've done this before.

And I shouldn't have been doing this. And you didn't mention earlier the mechanical engineers out there that are trying to figure this out, basically just plugging stuff in and seeing if it works and then calling somebody. Yeah.

Shaun Cooley: [00:31:57] I think I reversed the two wires on my serial bus, what's happening?

Yeah,

James Dice: [00:32:00] Exactly. Yes. Yeah, I always used to call it hacking, and it was just like a mechanical engineer trying to hack. But anyway, where I was going with that is, it seems like another value of splitting up the stack into these different layers is that you guys can then build out the tools that the other people doing this in a one-off way don't even have the time or desire to build.

So you guys can get better and better. Whereas everybody else, if they're distributed, they're just kind of doing this as a means to

Shaun Cooley: [00:32:31] an end. That's right. Yeah. So the system learns globally. Some of the interesting things that we start to see, if I just look at the point names and our processing of point names, is that we see both regional and time-based differences in point names.

What people were commissioning systems with in the eighties in Los Angeles is very different than the eighties in New York, and the nineties in LA used totally different point naming schemes than they were using in the eighties. But as a system that sees all of these things globally, once we train it how to pull apart a single

mechanism for point naming, that applies to any other building we come across that was done by the same vendor in that same era. Right? And so you start to get a lot of value out of doing the same work over and over again across everything. And I think with humans, every human enters every environment new. You've got to go in and get your bearings, understand where all the equipment is, what's connected to what, what types of systems are in there,

start reading manufacturer documentation, and the manufacturer docs are not particularly easy to go through on a lot of these, especially on the protocol side. I mean, pick your favorite chiller from Carrier. It'll have a 450 page technical manual, and two of those pages talk about how it uses BACnet.

Those are the two pages that we care about, but if you're in there doing mechanical design and you need to integrate one of these systems into something else, you've got to go find that 450 page manual. You've got to scroll through it until you find the two pages that talk about the piece of info that you care about: okay, what is the analog input and analog output for this thing?

And which one is which, and how do I read the value out of it, is it in Fahrenheit or is it in Celsius? Those sorts of things are very easily solved through sort of big data and machine learning. Because again, every time I come across a Carrier 30-series chiller, it's the same. I don't need to go look at the manual again

if the system already knows how to interpret the data coming out of that device. Absolutely.

James Dice: [00:34:33] I did that on a project like a month ago. Oh man, manuals to the death. Yes.

Hey guys, just another quick note from our sponsor, Nexus Labs, and then we'll get back to the show. This episode is brought to you by Nexus Foundations, our introductory course on the smart buildings industry. If you're new to the industry, this course is for you. If you're an industry vet but want to understand how technology is changing things,

this course is also for you. The alumni are raving about the content, which they say pulls it all together, and they also love getting to meet the other students on the weekly Zoom calls and in the private chat room. You can find out more about the course at courses.nexuslabs.online. All right, back to the interview.

Let's talk about, and we're kind of getting into it a little bit already, the data modeling piece of it. So what are some of the keys to modeling the data to enable it for whatever use case a building owner wants to enable?

Shaun Cooley: [00:35:28] Yeah, so we use Brick. I think we're fairly open about that. Our chief data scientist is Jason Koh, who's one of the co-creators of Brick. And so it would be crazy if we went with something else. So, I think that there's a couple of things in play here.

Right? You're taking data from very different systems, again configured by different system integrators, named differently, wired up differently, and trying to normalize it into a schema that represents not just the individual points and devices inside of there, but also the locations.

Also the relationships between all of these things, like one device feeds air into another device or has a point of this other device. And then I think the thing that we add that's not quite in Brick yet is people as well. So we track people, places, and things inside of our ontology.

And so the extensions we've done should be rolling back into Brick soon for all of those changes. But getting from raw data into Brick takes a couple of steps. I think that the first one is obviously that discovery that I talked about in the building, and then it turns into extraction.

You've got to get data out, you have to efficiently get that data to the cloud, and you've got to start making sense of that data in the cloud. And so we start with something that we call device profiles. Those device profiles: earlier when I said Carrier 30-series chiller, if I know how to talk to one of them, I know how to talk to all of them.

That is what we would call a device profile.
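
As a rough illustration of the idea, a device profile can be thought of as a reusable description of how to talk to one make and model. The field names and values below are hypothetical, not Mapped's actual format.

```python
# A rough, hypothetical sketch of what a "device profile" might capture:
# how to talk to one make/model once, then reuse it everywhere it appears.
# Field names and values are illustrative, not Mapped's actual format.
from dataclasses import dataclass, field


@dataclass
class PointMapping:
    protocol_address: str   # e.g. a BACnet object like "analogInput:1"
    normalized_class: str   # e.g. a Brick-style class name
    unit: str               # so consumers never guess Fahrenheit vs. Celsius


@dataclass
class DeviceProfile:
    manufacturer: str
    model: str
    protocol: str                       # e.g. "bacnet-ip", "modbus-rtu"
    points: list[PointMapping] = field(default_factory=list)


# Defined once, applied to every instance of that model discovered in any building.
chiller_profile = DeviceProfile(
    manufacturer="ExampleCo",
    model="Chiller-3000",
    protocol="bacnet-ip",
    points=[
        PointMapping("analogInput:1", "Supply_Water_Temperature_Sensor", "degF"),
        PointMapping("analogValue:7", "Chilled_Water_Temperature_Setpoint", "degF"),
    ],
)
print(f"{chiller_profile.model}: {len(chiller_profile.points)} known points")
```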

James Dice: [00:36:52] Okay.

Shaun Cooley: [00:36:52] So the device profiles take us from the raw data that we've discovered in the building into what we call structural data inside of the graph, inside of the Brick graph. And that structural data, for example, if you took a thermostat, very simplified: the thermostat might have a vertex in the graph, a node in the graph, for the physical thermostat itself, and it might have a setpoint for the humidity and a setpoint for the temperature and a sensed temperature and a sensed humidity, right? So in the graph, a simple thermostat may turn into five vertices with four relationships between them. That sort of information, the structural part of "I found one device, I need to represent it in the graph," we can do through our device profiles.
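
To make that concrete, here is a minimal sketch of that five-vertex thermostat expressed against the public Brick ontology with rdflib. The building namespace and entity names are made up, and the specific Brick classes chosen are illustrative; the transcript doesn't specify exactly which classes Mapped uses.

```python
# Minimal sketch: one thermostat as five vertices and four hasPoint edges,
# expressed against the public Brick ontology with rdflib. Entity names and
# the example namespace are made up; this is not Mapped's internal model.
from rdflib import Graph, Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("urn:example-building#")  # hypothetical building namespace

g = Graph()
g.bind("brick", BRICK)
g.bind("bldg", BLDG)

# The physical device plus its four points.
g.add((BLDG["tstat-624"], RDF.type, BRICK.Thermostat))
g.add((BLDG["tstat-624-zat"], RDF.type, BRICK.Zone_Air_Temperature_Sensor))
g.add((BLDG["tstat-624-zah"], RDF.type, BRICK.Zone_Air_Humidity_Sensor))
g.add((BLDG["tstat-624-tsp"], RDF.type, BRICK.Zone_Air_Temperature_Setpoint))
g.add((BLDG["tstat-624-hsp"], RDF.type, BRICK.Humidity_Setpoint))

# Four relationships tying the points to the device.
for point in ["tstat-624-zat", "tstat-624-zah", "tstat-624-tsp", "tstat-624-hsp"]:
    g.add((BLDG["tstat-624"], BRICK.hasPoint, BLDG[point]))

print(g.serialize(format="turtle"))
```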

And those device profiles allow us to do that pretty quickly. How we build device profiles is a whole other talk for some other time. But once we have the structural components inside of the graph, now we have to connect them all up, right? What is this thermostat actually controlling? In most cases that thermostat does not have a direct relationship with any other piece of equipment.

It is either being polled by or signaling a controller of some sort. And the controller has some human-created logic inside of it; somebody actually went in and programmed it in order to drive some other device based on the events coming out of that thermostat. And so that relationship data to us is an explicit relationship, right?

So there is now an explicit relationship between the thermostat and the rooftop cooling unit or whatever your system happens to be. Those relationships we learn through a couple of different mechanisms, but we do a lot of time-based correlation. And so as we see action and reaction happen between the things in our graph, we start to draw lines between them, and over time those lines get more and more confirmed as we move on.
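
A toy sketch of that kind of time-based correlation might look like the following. The signals, lag window, and threshold are all invented for illustration and are not Mapped's actual method.

```python
# Toy sketch of inferring an implicit relationship from time-based correlation:
# does a setpoint change in device A tend to be followed by a response in device B?
# Signals, lag window, and threshold are illustrative, not Mapped's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Fake minute-by-minute data: a thermostat setpoint and an RTU supply signal
# that (in this toy) reacts about 3 minutes later, plus noise.
setpoint = np.repeat(rng.choice([70.0, 72.0, 74.0], size=48), 30)  # 24h of minutes
response = np.roll(setpoint, 3) + rng.normal(0, 0.3, size=setpoint.size)


def best_lagged_correlation(a, b, max_lag_minutes=15):
    """Return (best_lag, correlation) of a leading b by up to max_lag minutes."""
    best = (0, 0.0)
    for lag in range(1, max_lag_minutes + 1):
        r = np.corrcoef(a[:-lag], b[lag:])[0, 1]
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best


lag, r = best_lagged_correlation(setpoint, response)
if abs(r) > 0.8:  # arbitrary confidence threshold for this toy
    print(f"Draw a tentative edge: thermostat -> RTU (lag {lag} min, r={r:.2f})")
```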

And then the last one, I guess the last two, is a mix of geospatial. How do you represent geospatial constructs inside of these environments? Somebody went on a rant during the last member call about mapping data, it might've been Steve, about how hard it is to get floor plans in these spaces.

And so, for the geospatial part, there's a couple of different ways we do geospatial. We use some public data sources to get the outside footprint of a building; it's pretty easy to get the outside walls of a building. That at least gets us longitude and latitude in a large sense.

Then we allow the customer to either upload PDFs or AutoCAD files, or if they've done a 3D scan of their space that has gone into Matterport or into something like TrueView, we can actually connect to those and pull a slice out of it as well to get the indoor maps. But now as you're trying to place devices throughout those spaces, you run into a couple of different things.

One, point names that were commissioned in the 80s likely have no relation to what the current name of that space is. So when the current tenant uploads their map and it's got a conference room called, like, Frontier Land, and in the point name it's RM624, how do you link those things up?

Right. And so we, again, start looking at correlations over time between systems where we know where they are physically and systems where we don't necessarily know where they are physically, and it allows us to start moving things closer to each other. We also, at any time, let an administrator grab a device and just drag it to where it needs to go, so that we can stop trying to guess on it.

But we do try to make pretty good guesses about where things go.

James Dice: [00:40:19] So there's like a user interface piece that lets someone that doesn't know anything about graphs, sort of update things?

Shaun Cooley: [00:40:26] Yeah. Our user interface, and you can see it on the website, is very much focused on the visualization and control of your data. And so we do a couple of different things in there. And then, sorry, I'll come back to that in a second; on the last piece, the last thing that we do is enrichments. We look for signals coming out of the data that we can further enrich with more meaning.

I don't want to call it analytics, because it doesn't replace the analytics that an application vendor would build on top of us. But take something like a Cisco wireless access point that's tracking the movements of mobile devices moving around a space, right.

You can get the same thing from, like, Euclid Analytics used to do this before they went inside of WeWork, but Meraki has it, Ruckus has it, Aruba has it for the wireless access points. If two mobile devices are always within a meter of each other, there's a pretty good chance we can introduce a person into the graph, right? Those two mobile devices aren't just moving in unison on their own; it's likely that there's a person that's moving around with them. And so now that concept of a person gets you to much more useful information as a developer. And again, without that extra step of having to try to figure out that correlation between those two over time, because we're doing it already inside of our graph.

And so we do a lot of enrichments like that, that look at things like correlation between multiple devices over time in order to introduce new concepts.
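
A toy version of that proximity enrichment could look like this; the one-meter rule, the window, the data shapes, and the edge name are illustrative assumptions, not Mapped's actual enrichment logic.

```python
# Toy sketch of a proximity enrichment: if two tracked devices stay within
# a meter of each other over a window, propose a "Person" node linking them.
# The threshold, window, and data shapes are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

# Fake (x, y) position traces in meters, one sample per minute for an hour.
walk = np.cumsum(rng.normal(0, 0.5, size=(60, 2)), axis=0)
phone = walk + rng.normal(0, 0.2, size=(60, 2))    # carried by the same person
laptop = walk + rng.normal(0, 0.3, size=(60, 2))   # also carried by that person
badge_reader = np.tile([[12.0, 7.0]], (60, 1))     # fixed piece of equipment


def likely_same_person(a, b, threshold_m=1.0, fraction=0.9):
    """True if the two traces are within threshold_m for most of the window."""
    distances = np.linalg.norm(a - b, axis=1)
    return np.mean(distances < threshold_m) >= fraction


if likely_same_person(phone, laptop):
    print("Enrichment: add Person node with isCarrying edges to phone and laptop")
if not likely_same_person(phone, badge_reader):
    print("No person inferred between phone and fixed badge reader")
```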

James Dice: [00:41:47] Okay, cool. So you mentioned extending Brick and sort of updating the standard after that. It's interesting that you said that, because not a lot of people do that.

So what's that process look like, I guess having Jason as an employee?

Shaun Cooley: [00:42:04] It's helpful. It's definitely helpful. So the Brick consortium is still being formed right now. I don't know how public it is, so I won't say company names, but there's several very large building system companies that are forming this Brick consortium around it.

Jason and Mapped will obviously continue to be involved in that, really Jason on behalf of Mapped. And so, internally, as we come across more and more systems, what we find is that Brick, I think in like 0.9 or 1.0, went really deep on HVAC.

They covered like every possible construct in the HVAC system. And then when they went to 1.1, they introduced a lot of energy management and lighting type things. 1.2, which just came out, again further extended it in a bunch of other directions. And so as Jason gets to spend time, as other members of the future consortium get to spend time, they're really thinking through some of these other systems and how to appropriately model them inside of these graphs.

We will contribute back. Our view from a standards standpoint is that Brick is pretty well ahead, and not to bring back up standards wars, right, but we think Brick is pretty far ahead of the other standards out there as far as modeling relationships and all of the actual things inside of a building. The other part of it is that Brick is very prescriptive; it is very clear about how you represent certain things inside of that.

Some of the other sort of semantic tagging type standards are not so clear about it. Right. And I think when you look through them, what you find is that there are a significant number of standard tags, but then you go into an environment that makes use of it,

and they had something that they wanted to represent that wasn't available in the standard and they just made up a tag. And as soon as you do that, now you're back to custom meanings, custom names that don't really make sense to anybody else. And it's really hard to enter some of those environments with no prior knowledge and figure out what's going on inside of there.

And so for us, Brick is the appropriate way to really have a self-describing environment, where as a software developer, I don't need to know in advance what I'm targeting. I can just show up, look at the model, and understand how these things work together and what the data is that's coming out of them.

James Dice: [00:44:16] Totally. So let's talk about the data model piece; we'll get to APIs in a minute. But on extending that data model into the use cases that it enables: I think there are a certain number of, say, software application companies out there that have the opinion that you can't separate the data model from the use case, or the data model from the application that sits on top of it, because inevitably your data model is not going to have enough information, or it's going to be wrong, and it's going to have to be redone.

So what do you say to that opinion?

Shaun Cooley: [00:44:57] Well, I think first, every database and operating system vendor on the IT side would disagree with that statement, right? I think we have seen again and again that a platform is capable of supporting many disparate use cases.

And things that the platform creator didn't think of at the beginning, right? This is sort of the benefit of a platform. If you took an Android phone, and every vendor of an Android phone with a different camera, different GPS unit, different CPU required me as a software developer to code directly to that CPU or GPU or GPS or camera,

I think we would have like two apps running on all of our phones. And the fact that somebody has abstracted away all that complexity into a well thought out API means that it's very easy for a software developer to build an app that makes use of the camera, or makes use of the GPS, or makes use of some other component inside of there.

And I think that Brick is enough of a standardized model, without really changing what the original meaning is, in that I can represent a thermostat or a setpoint temperature or a setpoint humidity and have the values that go along with it. And as a developer, I can understand that at scale across a large portfolio without needing to know, is this the camera from Sony or is this the camera from somebody else, right? Similarly, I don't want to know if the thermostat was built by Honeywell or Johnson Controls. Just give me the value that was inside of it; that's all I really care about at the moment. And so these sorts of arguments, I think this one is a side effect of what's typically referred to as the stack fallacy, right?

Which is that it's always easier to move down the stack than it is to move up the stack. But what's important to keep in mind is that a platform vendor, if you take an Intel, they don't need to know what the application being built on top of the CPU is in order to provide a very usable and broadly applicable CPU.

And I think similarly, as a vendor that's trying to take all of these disparate systems and represent them in a uniform way, I don't need to know what the application upstack is. Now, I think the stack fallacy is totally valid if we tried to move upstack. If we as Mapped tried to go after the applications, after the single pane of glass, after the energy optimization or carbon credit trading, I don't have the first clue how they sell those to customers or what the sort of value that's promised is.

But from a "here's chaos, I can provide normalization" standpoint, I have a very good idea of how to provide that normalization layer, in the same way that a CPU vendor can figure out all the possible instructions you might need and make those available through the code that you would put onto and execute inside of that CPU.

James Dice: [00:47:54] Totally. So let's talk about the API. Everyone talks about an API for your building. Maybe it's just in the Nexus community, but I say everyone; what I really mean is the nerds of smart buildings talk about an API for the building. So what are sort of the keys for an API for a building?

Shaun Cooley: [00:48:12] Yeah, I think maybe I'll start a little higher level than that. I think, as modern software engineers, you look for a lot of things from an API. You're looking for the ability to use a modern programming language to access it, and that you're not really having to deal with the transport layer that happens underneath it, or the physical layer that happens underneath that. And I think today when you look at something like BACnet or KNX, those things become very important, right? Is it over IP? Is it over serial? What's the speed of the bus that I'm on?

I see all the time, even on the IP side of things, that the NIC in the controller is still only 10 megabit, right, and it brings the whole network to a crawl. And so, these sorts of things, as a modern software developer, I don't even want to deal with: trying to get into the building, the firewall that the building has, or does the building even have a network, or is it individual tenant networks that are inside of there?

And so it starts with, it should be accessible everywhere. And we look at this from the cloud standpoint, right? An API should be available in the cloud. I don't need it to be in the building. I don't want it to be in the building, because that's not the place where you're really trying to get access to everything these days.

And so it starts with the cloud. Then the next step is, can we use a modern technology to access that API? Right. And so we use something called GraphQL. It was created by Facebook and has been proven to be able to handle very large graphs and pretty complex queries across that graph.

We've extended it in a couple of different ways. So, if you think of Brick, Brick is the structural representation of everything that was inside of the building. It is not the time series data. The individual time series points still have to be stored somewhere else; that is not Brick.

And so in our graph, every vertex has a time series store behind it. So if you look at, say, the setpoint temperature for that thermostat I talked about earlier, there is a time series store behind that vertex where I can also get its data over time.

And we make that available in a lot of different forms. You can get it in raw form, which is just: I want the values wherever they happen to occur. We can do it in aggregate: give me the values over the last year by month, or give me the min, max, and average by month, or give me just the max, minute by minute, over the last two hours. Those sorts of queries drive a lot of flexibility in the way that an application developer starts to make use of the data.
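To make that concrete, here is a minimal sketch of what a raw versus aggregated time series query could look like from an application developer's side. The endpoint, token, and every field name below are illustrative assumptions, not Mapped's actual GraphQL schema.

```python
# Hypothetical sketch: fetching raw and aggregated time series for a setpoint
# point through a GraphQL endpoint. The URL, token, and field names are
# invented for illustration only.
import requests

GRAPHQL_URL = "https://api.example.com/graphql"  # placeholder endpoint
TOKEN = "YOUR_API_TOKEN"                          # placeholder credential

QUERY = """
query SetpointHistory($pointId: ID!) {
  point(id: $pointId) {
    name
    unit
    # Raw values, exactly as they occurred
    raw: series(start: "2020-05-01T00:00:00Z", end: "2021-05-01T00:00:00Z") {
      timestamp
      value
    }
    # Monthly min/max/avg aggregation of the same point
    monthly: series(start: "2020-05-01T00:00:00Z",
                    end: "2021-05-01T00:00:00Z",
                    aggregate: {window: MONTH, functions: [MIN, MAX, AVG]}) {
      timestamp
      min
      max
      avg
    }
  }
}
"""

resp = requests.post(
    GRAPHQL_URL,
    json={"query": QUERY, "variables": {"pointId": "point-123"}},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```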

I think that, from a Brick standpoint, and Jason will disagree with me, RDF, the Resource Description Framework that is the driver behind Brick, is not developer friendly. It is not something that I would put in front of a typical developer and expect them to understand.

I think even when I started working with Jason, probably six months into it I was still asking questions about RDF and trying to wrap my head around it. And there's this whole community of data scientists that understand RDF inside and out and are building this web of things, and all the work that the W3C is doing around the RDF standards.

But as a developer who's been writing code for 25 years, there's a lot of stuff in there that I just couldn't wrap my head around. So we actually use a variation of it, but the ontology is still the same. If you understand Brick, our ontology is no different, but because we're exposing it through GraphQL, it's more developer friendly.

It's a little easier to understand the relationships and the hierarchies, and how you get from something that was in the structural RDF to the time series data. Those pieces are typically separate systems and not easy to connect. And I think that for us, moving away from RDF also allowed us to scale to a much larger size than we could with any of the RDF databases that existed out there.
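As a rough illustration of that difference, the two snippets below express the same thermostat first as RDF Turtle (Brick's native form) and then as a hypothetical GraphQL query over the same structure. The Brick class and relationship names come from the Brick ontology; the GraphQL field names are invented for this sketch.

```python
# Illustrative contrast between RDF triples (how Brick is natively expressed)
# and a GraphQL query over the same structure. The Brick terms are real
# ontology names (prefix URI varies by Brick release); the GraphQL schema
# below is invented, not Mapped's actual API.

BRICK_AS_TURTLE = """
@prefix brick: <https://brickschema.org/schema/Brick#> .
@prefix ex:    <http://example.com/building#> .

ex:tstat_4_12   a brick:Thermostat ;
                brick:hasLocation ex:room_412 ;
                brick:hasPoint    ex:zat_sp_4_12 .
ex:zat_sp_4_12  a brick:Zone_Air_Temperature_Setpoint .
"""

SAME_STRUCTURE_AS_GRAPHQL = """
query {
  thermostat(id: "tstat_4_12") {
    location { name }
    points(type: "Zone_Air_Temperature_Setpoint") {
      id
      series(last: "PT2H") { timestamp value }   # time series lives behind the vertex
    }
  }
}
"""

print(BRICK_AS_TURTLE)
print(SAME_STRUCTURE_AS_GRAPHQL)
```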

And so we continue to own that database problem, but again, as a developer, that's not your problem, right? That's my problem at Mapped: how do I scale my database, how do I do security, how do I do all the other things on it? So: GraphQL, Brick based. We allow for polling, so you can make an individual query into the graph: give me all the temperature setpoints as they changed over the last six months across my entire portfolio. Pretty easy, right?

Like, has somebody been messing with my thermostats throughout the building? I just want to see where the value changed. You can also subscribe to queries, so you can put in what we call a streaming query, which is: here's my query, and every time it matches a value, call me on a webhook. With a webhook, they've got an application running in their environment, and every time the query matches, we fire a message out to them.

So, if you think of the buildings right now, the controllers are going: hey, what's the temperature? What's the temperature? What's the temperature? We obviously don't want to build that same thing in the cloud. And so you can set a query that says, when this temperature changes, let me know.
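A hedged sketch of what registering that kind of streaming query might look like; the mutation name, arguments, and endpoints are assumptions for illustration only.

```python
# Hypothetical sketch of registering a "streaming query": ask the platform to
# call a webhook whenever a matching value changes, instead of polling.
import requests

SUBSCRIBE = """
mutation RegisterStream($query: String!, $webhook: String!) {
  createStreamingQuery(query: $query, webhookUrl: $webhook) {
    id
    status
  }
}
"""

variables = {
    "query": 'points(type: "Zone_Air_Temperature_Setpoint") { id value }',
    "webhook": "https://app.example.com/hooks/setpoint-changed",
}

resp = requests.post(
    "https://api.example.com/graphql",          # placeholder endpoint
    json={"query": SUBSCRIBE, "variables": variables},
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    timeout=30,
)
print(resp.json())
```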

We can use those streaming queries to push notifications back out to other applications. So if you think of something like a dashboard: when the dashboard first spins up, it's going to make a bunch of queries to build the initial view, the graphs and charts and whatever happens in the dashboard, and then it's going to subscribe to a bunch of queries. As values come in, it updates the values in the dashboard. And this model means the dashboard that you leave up on a screen, or leave open in the background on your laptop, is not just hammering away at the API and driving a lot of network traffic that you don't really need, because you only care when a value changes. So yeah, there are lots of ways that we think about how to make an API consistent and strong.
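The receiving side of that pattern can be as small as the sketch below: a webhook endpoint the dashboard backend exposes so pushed changes update its state instead of polling. The payload shape is an assumption.

```python
# Minimal sketch of a webhook receiver that an application (e.g., a dashboard
# backend) exposes so the platform can push changed values to it.
from flask import Flask, request, jsonify

app = Flask(__name__)
latest_values = {}  # point id -> most recent value, read by the dashboard UI

@app.route("/hooks/setpoint-changed", methods=["POST"])
def setpoint_changed():
    event = request.get_json(force=True)
    # Assumed payload shape: {"pointId": "...", "timestamp": "...", "value": 72.5}
    latest_values[event["pointId"]] = event["value"]
    return jsonify({"ok": True})

if __name__ == "__main__":
    app.run(port=8080)
```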

The other piece that I'll add is that, as a building owner or as a tenant of a building, as you install other applications, you start to think about how each application is using your data. And how they're using your data shouldn't just be a question; it should be set up in advance in the permissions that you grant to that application. I think that when you look at trying to automate buildings from a control standpoint, it was very easy to say, what do we need permissions for?

It's either on a private serial bus, or eventually it's on a segmented IP network, and the things plugged into the environment are all trusted. But now you're plugging in vendor after vendor and application after application, and you don't know who's accessing what, where they're taking the data, or whether anybody has remote access to it.

And so when we moved that into the cloud, it's much easier for us at that point to put a very clear definition around how you grant access to something. In our environment, as the building owner or the tenant, if you're installing an application, you choose: this application can access my electrical data and my elevators, but not my HVAC.

And it can access my entire portfolio, except for these three floors that have the federal government customer, where they're not allowed to see the data. So you can be very explicit about how these things come together, and it's enforced at that API layer, so you don't need to worry about how that happens over time.
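One way to picture that kind of grant is as data plus a check enforced at the API layer. This is purely an illustrative sketch under assumed names, not Mapped's permission model.

```python
# Illustrative access policy: an application may read electrical and elevator
# data, but not HVAC, and never the excluded floors. Structure is an assumption.
POLICY = {
    "application": "energy-insights-app",
    "allow": [
        {"system": "electrical"},
        {"system": "elevators"},
    ],
    "deny": [
        {"system": "hvac"},                 # no HVAC data at all
        {"floors": ["12", "13", "14"]},     # e.g., federal-tenant floors
    ],
}

def is_allowed(policy: dict, system: str, floor: str) -> bool:
    """Return True if the application may read data for this system/floor."""
    for rule in policy["deny"]:
        if rule.get("system") == system or floor in rule.get("floors", []):
            return False
    return any(rule.get("system") == system for rule in policy["allow"])

print(is_allowed(POLICY, "electrical", "3"))   # True
print(is_allowed(POLICY, "hvac", "3"))         # False
print(is_allowed(POLICY, "electrical", "13"))  # False (excluded floor)
```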

James Dice: [00:55:01] Cool, interesting. So how do you think about an API versus a standard? You brought that up earlier and I wanted to hear your thoughts.

Shaun Cooley: [00:55:10] Yeah. There can obviously be standards that define APIs as well. But when we look at the case of Brick, or even Haystack, they're defining a data standard: a way to structure and tag or represent data. What they don't prescribe is how you access that over a network.

Or how you control access to who can see parts of it and who can't, or how you throttle it. Say one application decides to start hammering your Brick server. Let's say you've got an RDF server, a Neo4j or something, that you've got all of your Brick schema running inside of.

And one of your applications starts making 100 requests a second to it. Do you need to throttle that? Who's in charge of throttling it? How do you even notice it's happening, other than the fact that your server is catching on fire at the time? And so an API allows you to introduce a lot of these, I would say, walls around the actual data format.

We use Brick as the data format, but there are all of those constructs around who has access, how much access they have, how frequently they can call it, and who's paying when they call it. That's a big question as well. If I host a server up in the cloud somewhere, somebody has got to pay for all the CPU that it's using. If I'm hosting my own Brick server and it's just getting hammered by one of those vendors, are they paying that bill?

Or am I paying the bill? Who's doing that? And so I think you get a lot of these cases where the standard on the data side does a great job modeling the data itself, but it doesn't go into answering those questions around how you access it, where and when it applies, and all those pieces.
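A concrete example of one of those walls: a per-application token bucket, the sort of throttling an API layer can enforce that a data standard alone does not address. The numbers and structure here are illustrative.

```python
# Sketch of per-application throttling at an API layer: a simple token bucket
# keyed by API client. Rates and burst sizes are made-up illustration values.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # application id -> its bucket

def check_request(app_id: str) -> bool:
    bucket = buckets.setdefault(app_id, TokenBucket(rate_per_sec=10, burst=20))
    return bucket.allow()  # False means respond with HTTP 429 (and know who to bill)

# An app making 100 requests in a burst quickly gets throttled:
print(sum(check_request("noisy-vendor") for _ in range(100)))  # roughly the burst size
```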

James Dice: [00:56:47] Got it. This is my last nerdy question on this layer, and I ask this a lot on the podcast: where are you seeing the market in terms of using this layer for control? So the application wants to send a command back down to the systems, and since you're in the middle of that, what's the state of the art for supervisory control?

Shaun Cooley: [00:57:10] So I should be totally clear: today we are read only, and absolutely by choice read only. When you've got a data layer like this that is allowing write commands back into the environment, and allowing you to install various applications, you very quickly run into contention between those applications.

The application that's trying to optimize energy wants it to be 77, and the application that's trying to optimize occupant comfort wants it to be 74. How do those get settled? Are those two applications literally just fighting over the value, back and forth and back and forth?

And your systems are constantly kicking on and off to try to keep up with the two of them. Our choice to be read only from the beginning is because we're working on other things to address that, and I think there are really good ways to address it that don't expose the exact value back to the developer.

It's not to say we won't expose the exact value, but in a lot of these cases where you might have contention, you want the developer to declare their intent, which is: I intend it to be a little bit warmer, or a little bit colder, or a little bit brighter, or a little bit darker. And then you can start to reconcile those intents in a platform like ours in order to figure out what the end value should be back in the control system.
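As a purely hypothetical sketch of that intent reconciliation (Mapped is read only today, so none of this is their implementation), declared intents could be combined into one bounded adjustment like this:

```python
# Hypothetical intent reconciliation: apps declare "warmer"/"cooler" rather
# than fighting over an exact value; the platform resolves one clamped change.
INTENT_STEPS = {"much_cooler": -2.0, "cooler": -1.0, "warmer": +1.0, "much_warmer": +2.0}

def reconcile_setpoint(current: float, intents: list[dict],
                       low: float = 70.0, high: float = 76.0) -> float:
    """Combine weighted intents into one clamped setpoint change."""
    total_weight = sum(i["weight"] for i in intents) or 1.0
    delta = sum(INTENT_STEPS[i["intent"]] * i["weight"] for i in intents) / total_weight
    return max(low, min(high, current + delta))

# Energy app wants it warmer (to save cooling); comfort app wants it cooler.
intents = [
    {"app": "energy-optimizer", "intent": "much_warmer", "weight": 1.0},
    {"app": "comfort-app",      "intent": "cooler",      "weight": 2.0},
]
print(reconcile_setpoint(74.0, intents))  # (2.0*1 - 1.0*2)/3 = 0, so stays at 74.0
```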

So we view control as a future thing for us. We're seeing enough use cases right now where people are just trying to get the data, and with the data they're asking how they can optimize. They're not necessarily looking for the system to go back in and reprogram everything to optimize energy; they just want to know where they're spending and what they can do with it. I think we're seeing a lot of use cases right now, and I expect them to be pretty short-lived, but a lot of post-COVID return-to-work use cases: where are people moving?

How long are they congregating in certain areas? What's the fresh air exchange rate in those areas, how quickly am I turning over the air? When was the last time it was cleaned? Hotspots and not-hotspots from an infectious disease standpoint. Again, I expect those to be very short-lived, but they're driving a lot of thinking around how you use data to better the environment.

We're also seeing, both in the EU and in New York, these mandates around energy data: as a building over a certain size, you now have to provide that energy data in near real time to the monitoring agencies.

So they can figure out which buildings are burning the most power and which ones aren't. And again, it just turns into a data problem: how do I get the data out and normalized? And so as we go through and address these read-only problems, we'll eventually get to the write piece as well.

Today we have a software block that just doesn't allow us to write back to the environment, and again, that's to avoid a lot of the contention and other things that come along with writing back in there.

James Dice: [01:00:03] Got it. Very cool. I guess that was a roadmap question.

This might be another one. I've been thinking a lot recently, and I'll provide some context around these thoughts. There's this use case around FDD, fault detection and diagnostics, where you then need to integrate with a work order system. So we have two applications that now need to talk to each other, and I feel like it's kind of the same issue that we're talking about, just higher up the stack. All of these FDD companies, and this is probably the same for a bunch of different use cases, are saying: which CMMS do my clients have?

Okay, now I'm going to write integrations with all of those. And then on the other side, the CMMS guys are saying the same thing. So is there an opportunity for you guys to move up to that layer as well? Or are you thinking about that at all?

Shaun Cooley: [01:01:00] So we refer to that as data exchange. If you look at a lot of the platforms like ours in other industries, you'll see these bow tie diagrams.

Data very cleanly moves from a bunch of things on one side into the platform, and then back out to a bunch of things on the other side. I think that data exchange is more on the right-hand side of the bow tie: data moving from one application to the others.

There's a couple of different ways that we look at that. One of them is the enrichments that I talked about earlier with the wireless access points tracking devices. We do intend to open up our enrichments to allow third parties to put enrichments into the platform and to monetize those enrichments.

And FDD is a perfect example of that: you don't actually want to run your FDD outside of the platform. You want to put it in the platform and then monetize it from anyone else who's making use of the platform. That's just because something like FDD uses so much data that the sheer cost of taking it out of one platform and into the next, or out of one cloud and into the next, starts to add a lot of overhead.

So there are better ways to do that particular example. Then there are the ones where you'll start to see an energy optimization app decide that a value needs to be changed, and rather than pushing the change directly back into the building, it opens up a ticket in the workforce management solution.

Those sorts of data exchanges are pretty straightforward through our platform. There are ways to write back into the graph and ways to pull that back out on the other side, but we continue to look for cleaner ways to do it. And I think as we get more and more vendors that want to do that data exchange, we'll find better ways to do it.
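A rough sketch of that right-hand side of the bow tie: a finding goes back into the shared graph and becomes a ticket in a CMMS, rather than a direct write into the building. Both endpoints and payload shapes below are invented for illustration.

```python
# Hypothetical data exchange: an energy-optimization app records a finding in
# the shared graph and opens a work order, instead of writing a new setpoint.
import requests

recommendation = {
    "equipmentId": "ahu-07",
    "finding": "Supply air temperature setpoint 8F below design",
    "suggestedAction": "Review AHU-07 reset schedule",
    "severity": "medium",
}

# 1) Record the finding back into the shared graph so other consumers can see it.
requests.post(
    "https://api.example.com/graphql",
    json={
        "query": "mutation($input: FindingInput!) { createFinding(input: $input) { id } }",
        "variables": {"input": recommendation},
    },
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    timeout=30,
)

# 2) Open a ticket in the CMMS / workforce-management system.
requests.post(
    "https://cmms.example.com/api/workorders",
    json={
        "title": recommendation["finding"],
        "asset": recommendation["equipmentId"],
        "description": recommendation["suggestedAction"],
        "priority": recommendation["severity"],
    },
    headers={"Authorization": "Bearer CMMS_TOKEN"},
    timeout=30,
)
```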

I think we also view data exchange as extending to unaffiliated parties. You'll start to find data inside of buildings that can be shared more generally. Take a parking system, for example: on a Saturday or Sunday afternoon, that building's parking lot is completely empty.

So you may decide that on Saturdays and Sundays, your parking data is publicly available. And you'll start to see apps in that data exchange make use of that publicly available data as well, where you didn't explicitly install an application to make use of it. But you know that there's value to you as a building owner in driving traffic, literal traffic, to your building, because you can monetize the empty parking spaces or whatever it happens to be.

And we view that as a data exchange problem as well.
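A tiny sketch of what a time-bounded sharing rule like that could look like; the rule format is an assumption for illustration.

```python
# Illustrative sharing rule: parking occupancy is public on weekends only.
from datetime import datetime

SHARING_RULE = {
    "dataset": "parking-occupancy",
    "public_on_weekdays": False,
    "public_on_weekends": True,
}

def is_public(rule: dict, when: datetime) -> bool:
    is_weekend = when.weekday() >= 5  # Saturday=5, Sunday=6
    return rule["public_on_weekends"] if is_weekend else rule["public_on_weekdays"]

print(is_public(SHARING_RULE, datetime(2021, 5, 15, 14)))  # Saturday -> True
print(is_public(SHARING_RULE, datetime(2021, 5, 17, 14)))  # Monday   -> False
```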

James Dice: [01:03:24] So I'm reading this book called Platform Revolution, and those who've been listening to the podcast have probably noticed that I've been saying that for a couple of weeks. I'm a slow reader right now while we have the course going on.

Shaun Cooley: [01:03:35] By the time they hear this in a month, it'll be even longer that you've been reading that book.

James Dice: [01:03:40] Hopefully I won't still be reading it by the time people hear this. But my question is around the way that they define platforms in that book. They're drawing from examples like Uber and AWS and all these technology companies outside of our industry.

They define it as: you have a producer and a consumer, and you have network effects, right? From the way you just described that, it seems like you're thinking of it that way as well. Whereas traditionally, when people say platform in our industry, they're really just talking about bringing data into a database and having an application.

Shaun Cooley: [01:04:14] That's right.

James Dice: [01:04:15] It seems like you guys are thinking more in terms of a marketplace and interactions between multiple third parties, that kind of thing.

Shaun Cooley: [01:04:24] That's right. Yeah, we do similarly break it up into data producers and data consumers. From a data producer standpoint, on the building itself you've got the owner or the operator, and any sort of maintenance company or outsourced operations company that's coming in and producing data inside of there. You also have tenants of the building.

So, again, think back to the building systems. I've got building-wide systems: HVAC, vertical lifts, fire safety. Those sorts of systems are building-wide, especially in a high rise. Oftentimes lighting is owned by the tenant, right? They came into an empty space and did all the build-out in their own space.

And so they oftentimes own lighting. They almost always own access control. They almost always own surveillance, and they own calendaring and room booking. When you really run through all the systems, I think Joe and I had an exchange on your Nexus forum about the number of systems that exist inside of these buildings, and I think we landed on something like 80 systems in there, which is crazy. But some of those are produced by the owner, operator, or manager of the building, and some of them are produced by the tenants.

And I think we landed on like, 80 systems in there. So it was crazy. But you know, some of those are produced by the owner operator, manager of the building. Some of them are produced by the tenants. Then you can start looking at at data that's produced by individual occupants by the humans that are walking around the space, especially, so their location information if you're tracking mobile devices moving around space, or maybe badge swipes. You can think of a tenant experience app that runs on a mobile device, that's also producing information coming out of there.

Eventually you get outside of the building. The typical one is weather; everyone looks at weather now. But there's also traffic, and there's geopolitical and geospatial information. The difference in the occupancy of a building on a day when there's a huge protest out front versus a day when there's no protest out front is drastic.

So you need to start paying attention to all of these things, really across the board, and we view all of that as data producers. On the other side, when you start talking about data consumers, we start with what we call first party data consumers. If you take the building owner operator, or one of the tenants, and they're putting data into the platform as a producer, and they're also consuming it for whatever BI or data science or finance or ops teams they have, those data consumers are first party to us.

It's usually their own data that they're accessing on the other side. Then you move into what we call second party data consumers. A second party is really anybody who has a contractual relationship with the first party and provides some service to them. So you can think of MSIs, MSPs, even the company that does maintenance, or a company that's coming in and cleaning the office at the end of the day.

And we don't view those second parties as using the data directly to serve the first party; we view them as using the data to optimize their own business. So think of the company that's cleaning offices, or take an ACO, right? Tens of thousands of trucks rolling out to buildings every single day.

If they can understand, from an FDD or predictive maintenance standpoint, when they actually need to go and respond to a building, they can start to optimize their workforce. They can roll a truck only when it's absolutely necessary, and now they can do more with fewer people, or serve more customers with the same number of people.

And for those second party use cases, there are a lot of people that serve these buildings who have a real interest in understanding the data coming out of the building as well, because most of them serve more than one building and they want to know across the portfolio.

Where am I needed? What do I need to do? How can I optimize my business? Then you get into third parties. In our view, third party data consumers are really the folks producing software applications that they sell to these buildings. The reason we call them third parties is that we tend to be a component under the hood for them.

When they go in with their sales team to a Boston Properties and they sell whatever solution they're trying to sell, they don't even need to mention that Mapped is under there. Maybe if Boston Properties has Mapped in the building, they want to point it out: hey, it's a one-click integration, easy to use. But they can go in and sell their standalone software solution and never mention the fact that we're under the hood. So those third parties are really providing a product to the first party, and that product makes use of the platform as a data consumer along the way. And then we get into data exchange, which we also internally refer to as fourth party, but we never say fourth party because everyone goes, what the hell is a fourth party?

So internally we go back and forth on data exchange versus fourth party. Fourth party is where you can imagine the municipalities or the governments that want access to data out of these buildings, whether it's real-time energy use or something else. I think one of the most eye-opening things to me is the US EIA, right?

The Energy Information Administration does this 10-year survey of buildings. It's the same model as the U.S. Census, right? Every 10 years they send out a survey, the buildings that feel like answering it give them some data about energy use and upgrades and other things, and then it takes them two and a half years to compile the data coming out of it.

I think right now we're still looking at data from the 2010 survey that was released in 2012. The 2020 survey they did, we won't have the data until 2022 or 2023. How is this still a thing? Every 10 years we get new data on how buildings are using energy, how much they've done on upgrades, and things like that.

And so we expect that, just like the EU has started to do, the US EIA and other agencies, and even local municipalities and local governments, will start paying attention to data coming out of these buildings as well. And those fourth parties, and I'm just going to stick with it, fourth party is going to be the new phrase, don't have any direct relationship with the first party other than a geospatial one. The building exists in their domain, and so therefore they get data out of it.

But you can imagine other use cases like that as well. I talked about the parking one: Google Maps hitting an API like ours and asking for all the parking spots within a one-mile radius. That sort of query works because you're within the defined bounds of the building, right?

You're within that one-mile radius that was defined. You can also imagine a first responder showing up to the building, with an app that, when they first come into the building, can give them all the details of the building simply because they're in the building. The fact that you've now set foot inside of it, or you're right in the parking lot, means you can start to see what the current occupancy of the building is.

Where is the fire alarm coming from? How do I most quickly get there? Can I get floor plans to help me get through the space? For those sorts of fourth party uses, there is no direct contractual relationship with the first party that actually owns the data.

It's just because of the fact that you're within proximity, or within the bounds, of that fourth party. And we think there are going to be a lot of use cases on that side as well.
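A small sketch of a proximity-bounded request like the parking example: the caller's location gates what comes back. The garage data is fake; the distance math is the standard haversine formula.

```python
# Illustrative proximity filter: return only parking data within a one-mile
# radius of the caller. All names and coordinates are invented.
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in miles between two lat/lon points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))  # Earth radius in miles

garages = [
    {"name": "Tower A garage", "lat": 34.0452, "lon": -118.2550, "open_spots": 118},
    {"name": "Tower B garage", "lat": 34.0700, "lon": -118.4400, "open_spots": 42},
]

me = (34.0430, -118.2673)  # caller's location
nearby = [g for g in garages
          if miles_between(me[0], me[1], g["lat"], g["lon"]) <= 1.0]
print(nearby)  # only garages within the one-mile bound are returned
```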

James Dice: [01:11:10] Totally. I feel like that's how people should explain data models. As a first responder, I want to come in and figure out where the fire is, and how would you do that if you didn't have a data model anyway?

Cool. So as we kind of wrap up here, that was fascinating, by the way; I haven't heard a lot of people explain it at that level before. How are these problems similar in other industries? Because you guys are not just limited to commercial buildings in your scope and what you're trying to approach.

So how are we different, and how are we similar, compared to other industries out there?

Shaun Cooley: [01:11:47] Yeah. If you look at our website, we also target industrial, which includes energy production, oil and gas, manufacturing, even retail. I think a lot of times it's easy to forget that retail is still a commercial building.

There are still all these systems inside of it. And if you're a large retailer that's got a thousand locations, you're dealing with the exact same headache that a CBRE or a Boston Properties is dealing with, just at a much more distributed scale and with far fewer engineers to manage all those spaces.

And so when we look across the other spaces, the reason that we're so focused on CRE to start with is that the CRE shell that you get really exists in a lot of the other spaces as well. If you look at a manufacturing floor, it still has HVAC, it still has lighting.

It still has safety and security type functionality, fire safety. It still has access control and surveillance and all the other things that you would get in a proper CRE building. But then it adds extra things like robotic arms and conveyor belts and other stuff that you need to integrate with as well.

From a similarity standpoint, there's lots of overlap in the systems, and also a lot of overlap in the way that those environments came to be. They do use different protocols: I don't know of any buildings that use OPC UA or OPC, or some of these other protocols that are used in manufacturing.

But you get the same way that systems were built: an integrator at some point over the last 50 years showed up with their bag of tools and managed to put together a thing that kept the line moving, parts going in one end and product coming out of the other end.

So the problems that they now deal with: even if you take a very large automotive manufacturer that's got 10 or 12 factories within a block of each other, those factories were all built at different times, they were all retooled at different times, and there were different vendors.

I think similarly, they're dealing with a lot of the folks who really deeply understand those systems exiting the workforce as well. There's a lot of knowledge that's just going away day by day. The system integrators that originally built them are oftentimes out of business or got merged into some other system integrator.

Somebody else came in and made some tweaks and changed things. So they're dealing with a lot of the exact same problems. I think where it differs, though, is the use cases that they're looking at. You can equate OEE, overall equipment effectiveness, to some of the things that we look at around energy usage or energy optimization in commercial real estate.

But I think you'll find a lot more human-safety type applications coming into play, and a lot more quality type applications. So you're looking at: what is the quality of the widget that I'm producing, or in an oil refinery, what's the quality of the mix that I'm producing at the moment?

Take a major oil company: little things like when they start producing a different mix, when they switch from 89 octane to Jet A, right, and they go around the refinery and twist the knobs, or the valves, that allow them to control how they're producing that mix.

There are many times where the first batch that they produce was off because one knob wasn't turned right. And so what you're finding is that more and more of these refineries now have a central view to see the positioning of all of those, whereas before they would radio out to people who oftentimes would go out on a bicycle to wherever it needed to be.

So it really is a lot of the same problems: misconfiguration, FDD, energy optimization, safety, security, those sorts of things. I think, when we look at the other industries though, we get into a lot of regulatory concerns around how you deliver products into those spaces.

There's not a whole lot of regulatory concern in commercial real estate: if you have a UL certification or a TÜV certification on the product that you're putting in, you're usually pretty good to go. But if you go into a refinery with a tiny little box that could potentially spark, the amount of explosive gases in the air at any given time might be enough to level an entire city.

And so you've got to meet things like HazLoc certification. At Cisco, some of those HazLoc boxes that we had looked like nesting dolls: your box in the middle, in a box, in a box, and before long it's the size of a car, because if any of those gases get in there, you've got really big problems.

So I think there are a lot of other complications to entering these spaces. Also from an equipment standpoint: if we put equipment in a building and it has a fan in it, not a big deal, but if you put a fan down in a mine, you've got a lot of problems. And if you want to put something on a launchpad, all your electronics need to be potted, right?

They need to be covered in a silicone that holds them together, because the 90 or 100 decibels of vibration while things are taking off is going to shake every component off of your circuit board. But if you get past those sorts of things, the discovery and ingestion and normalization all start to be very similar to what we're doing.

We just need different device profiles. We need different protocols that we speak inside of those environments. And for us, Brick needs to continually be extended to all of the other types of equipment and relationships that can exist in these other areas.
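As an illustration of what extending the ontology might look like, here is a hypothetical subclass for an industrial asset that core Brick does not model; ext:Robotic_Arm and its instance are invented, while brick:Equipment, brick:hasPoint, and rdfs:subClassOf are real terms.

```python
# Hypothetical Brick extension for an industrial equipment type, expressed as
# Turtle inside a Python string. The "ext:" terms are made up for this sketch.
EXTENSION_TTL = """
@prefix brick: <https://brickschema.org/schema/Brick#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ext:   <http://example.com/industrial#> .

ext:Robotic_Arm  rdfs:subClassOf  brick:Equipment ;
                 rdfs:label       "Robotic Arm" .

ext:arm_17       a                ext:Robotic_Arm ;
                 brick:hasPoint   ext:arm_17_joint_temp .
"""

print(EXTENSION_TTL)
```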

James Dice: Cool. That's so fascinating.

Shaun Cooley: Yeah, it's good fun. Hopefully we'll get some of those certifications soon and can really start driving into some of those environments as well. These are things you have to take very seriously, right? If we go into a commercial building and we break the air conditioning for a day, there are some mildly annoyed people.

The building engineer might be yelling at me, but nobody died in the process. If we screw up something in a refinery or in an energy plant, there are actual lives on the line. Again, when we decided to be read only at the beginning, a big part of it is that in some of those other industries, write capabilities can be pretty dangerous. And so we have to take it very seriously.

James Dice: [01:17:41] All right. Well, that's a good place for us to end today. Thanks so much for coming on the show.

Shaun Cooley: [01:17:46] Just a serious note to end on.

James Dice: [01:17:48] Well, I appreciate it. This has been super educational, so thanks for coming on.

Shaun Cooley: [01:17:52] Yeah. Thanks for having me.

James Dice: [01:17:57] All right, friends. Thanks for listening to this episode of the Nexus podcast. For more episodes like this, and to get the weekly Nexus newsletter, which, by the way, readers have said is the best way to stay up to date on the future of the smart building industry, please subscribe at nexuslabs.online. You can find the show notes for this conversation there as well. Have a great day.

Shaun Cooley: [00:01:25] Sure. I'm Shaun Cooley, founder, CEO of mapped data a infrastructure company for industrial and commercial IOT.

James Dice: [00:01:33] All right. Thanks for joining us. Can we unpack your background a little bit? So we'll obviously get to map in a minute.

Tell me about your career before Matt.

Shaun Cooley: [00:01:42] Yeah. So, only career, right? You don't want to know like where I was born, how it was raised,

James Dice: [00:01:47] whatever you feel like

Shaun Cooley: [00:01:49] too. Yeah. So yeah, so I spent 18 years at Symantec as a distinguished engineer in the Norton group.

James Dice: [00:01:56] Distinguished engineer.

Shaun Cooley: [00:01:57] Sorry to interrupt.

Yeah, no worries. so. It's sort of the second from the top of, engineers it's distinguished engineer. We used to joke distinguished as one step before extinguished, but it's sort of, as you move through, at least in India, most, they have different names for it.

Like, one, two, three at some companies or Symantec was like associate software engineer, then software engineer, then a senior software engineer, then principal, then senior principal, then distinguished engineer and then eventually fellow. But but I left there after 18 years and went to Cisco, was the vice president and CTO of the internet of things group instead of Cisco.

Obviously Cisco's IOT practice is focused on commercial and industrial and so nothing residential or consumer And then left there in in July of 2019 to start mapped just before, we all got to stay home for COVID.

James Dice: [00:02:53] Yeah, totally. I was looking at your LinkedIn. I want to ask you a little bit about Cisco, but first I saw there was like something like a hundred patents or something.

Can you expand on what that means?

Shaun Cooley: [00:03:04] So, all those patents are largely software related. There's a couple of hardware patents that's in there, but a lot of the big tech firms, in incentivize the filing of patents, finding novel things that you've created instead of the software and really, pushing to, to file those, from intellectual protection standpoint.

you know, Luckily I've never worked for a firm that , that sort of weaponizes any of those, both Symantec and Cisco used them for defensive purposes.  Does that, I mean, it's sort of that mutually assured destruction, like, if Microsoft is never going to Sue Cisco for patents, because Cisco has enough to Sue back.

And so they just both stay at arms lengths. I think that, the software patents are an interesting way to publicly document the things you're working on at companies where you can't otherwise do it. Right? A lot of the work at a Symantec or a Cisco is really, internal and sort of protected.

And so when you get to publish a patent and that with the us patent office the work becomes open start to talk about, the sort of architectures and things that you've put together that led to novel solutions.

James Dice: [00:04:06] I see. Cool.

Shaun Cooley: [00:04:07] It's fun. And I think it's 121 just to.

James Dice: [00:04:11] Oh, sorry.

Cool. And so what were the types of things that you were up to when you were in the IOT group at Cisco?

Shaun Cooley: [00:04:20] Yeah, so, I was responsible for, standards, inside of, Cisco for IOT related things. So whether it was an open connectivity foundation, work that we were doing at IATF, 3g, PP,  all those things, uh, on the IOT side,  fell under my team.

Uh, Then there was a lot of ideation and planning of where the product roadmap goes. Uh, The sorts of things that we're interested in building and as a very large company there was obviously a lot of customer interaction.  And so, you know, it's a, it's an interesting where I eventually landed on.

The things that we're building now with mapped, but this concept of the way that Cisco sells products is through something called executive briefings. We bring in the CEOs and chief digital officers of these major fortune 100 companies and we tell them all the great things we can put into their 19 Interac in they're very like clean air conditioned data center.

And I think when we were working on, it was like my third meeting after joining the IOT team we were doing one of these executive briefings for a major oil company. And the guy that we were selling to says, you've never been out on an oil rig, have you? And all of us in the room were like, no, why?

I said like, feel free to come down to Texas and we'll fly you out to the Gulf of Mexico and show you some of the hell that we put up with on these oil rigs from an automation standpoint. And that really led to me spending a lot of time going out and visiting customer sites. I would throw out my time at Cisco, take any customer up on an invite to go visit their sites.

So it didn't matter if it was the roof of a commercial building or a mile under the earth and a mine up in the head of a uh, wind turbine oil rigs or refineries energy plants, like anywhere I could go manufacturing floors, I go take a look and see what what all the pain was that they were going through with the automation environments.

And I think I learned very quickly that it's not a data center. Right. It's very different from everything else that we look at on the Texas.

James Dice: [00:06:21] Totally cool. so. And obviously a lot of our audience will not be familiar with the things outside of buildings, but those same insights happen obviously in a boiler room yeah, exactly.

So, okay. So you were at Cisco, you said I'm going to quit and start a startup. How did the founding of math work and why'd you do it?

Shaun Cooley: [00:06:41] Yeah, so, I think that well, I guess I should probably describe what map is before I talk about why I decided to go chase after it. So this concept of data infrastructure, I think that, we've called it many different names and I should give Joe, over at Montgomery, some credit for the data infrastructure term. Although, he'll probably never let us live it down, but um, we were calling it data aggregation, layers. I think you call it the independent data layer. There's a lot of different names for this.

When you have these environments where many vendors over long periods of time, have come in with whatever bag of tools was available to them at the time, and built out automations to serve a purpose, right? That the purpose is control of an environment.

Whether that's manufacturing, or oil and gas, or commercial buildings, the work that an MSI does is largely the same, right? I've got an end, the outcome that I'd like to achieve. I've got this set of tools. How do I piece these tools together and how do I do the programming to achieve that end outcome? Now fast forward through 50 years of that happening, with thousands of system integrators and many thousands of vendors and, hundreds of protocols, depending on which vertical you're looking at. Now have a situation where you've got building owners, operators, and tenants that are looking at hundreds and hundreds of locations. They were all built in different eras by different system integrators, from different components on different protocols.

And they're trying to make sense of everything from like energy spend, how they do maintenance and operations. Pick any big name tech company. They probably have somewhere between 200 and a thousand offices around the world.

If you do the math on that, they're spending about a billion dollars plus a year on real estate and in most of those cases, all the costs of equipment, maintenance, energy spend is passing through to them as well. And yet they have no visibility into how any of those systems are running across a portfolio.

And so, when you look at this, the same thing applies to manufacturing plants. The same thing applies to oil and gas. Each of those refineries was built at a different era, with different systems and different capabilities in different technologies. And so now as you try to get intelligence out of it, the biggest hurdle that we see again, and again, is integration.

Right. Going in, mapping out all of those systems and trying to discover, like, make model firmware, what protocol does it speak? What are its relationships to other devices? What's its function inside of the environment? What explicit logic does the PLC have of? When you start putting all these things together, you can spend it a couple months instead of a single commercial building.

You can spend, a year or more inside of a refinery or an energy plant or a manufacturing floor doing this. And so the company named Mapped is really a play on the mapping process that would normally be done inside of these environments. And the fact that all the vendors that we see that deliver software or some sort of value into those environments today, they tended to start out with a handful of data scientists or software engineers.

And then if you fast forward a couple of years into the life cycle of these companies, they're like a handful of engineers and 50 integration engineers who are going in and doing the installations in each one of these. Um, and So I think that the insight that I had was that if I can find, a problem where there's 20, 30, 50 integration engineers in every one of the vendors that serving these space, it's time to abstract away that problem and really make it a single sort of layer that can drive that value so that you don't have to go spend 50,000 non-recurring engineering to do that first integration.

You don't have to spend three months doing the integration. You can do it in sort of a plug and play type model. And I think at the same time, we were also seeing similarly complex  being sort of, I would say unlocked at scale from an innovation standpoint through APIs. And so if you look at, credit card processing, you've got a company like Stripe, like nobody in their right mind anymore builds their own credit card processing platform. And you can look at something like text messages.  I uh, over the years have built far too many direct integrations into carrier SMS gateways. Every carrier had a homegrown SMS gateway. They were entirely like built by the carrier, operated by the carrier. You had to sign an agreement with the carrier. You had to know how to speak to their particular SMS gateway. You had to maintain a database of phone numbers of like, okay, James' phone number, is that owned by AT&T or T-Mobile or Verizon or Sprint? Because I have to send it to the right gateway. I can't just send it to one of the gateways.

And so, then you get a company that comes in like Twilio that says like, look, all the complexities, all the sort of store complexities that have existed inside this environment can be abstracted away through a very simple API. Just send a text message to this phone number, like.

I don't care about all the plumbing underneath. And I think when you put together sort of the challenges and the amount of people that were being spent on integration or still are being spent on integration with the fact that other industries have been being solved through a simplified API and happy to come back to the difference between like an API and a standard later.

The fact that other industries were being solved with an API, it really made sense to create this sort of single layer that takes on all the historic complexity and abstracts it away into that API so that everyone doesn't have to keep repeating the exact same task over and over again.

James Dice: [00:12:03] Absolutely. Cool.

Shaun Cooley: [00:12:05] I don't know if that answered your question. I don't even remember what the original question was.

James Dice: [00:12:09] No worries. No worries. I love the way you just described that layer. Yeah, it's very similar to the stuff I've seen on your website and other places where you've kind of laid out the problem in a way that I really like, so the question, the original question was you were at Cisco and you saw this need, and I want to hear like a story around like deciding to quit your job recognizing this from a technology standpoint and a business opportunity standpoint. What made you go start the company.

Shaun Cooley: [00:12:39] Yeah. But, I think that you can obviously do a lot instead of a company like Cisco there are   significant resources inside of a company that size then, you know, significant go to market capabilities and other things around it. At the end of the day I sort of looked at I've been in large tech companies for the better part of 25 years at that point.

Yeah and I really had always kind of wanted to go do a startup because I hate myself for something. I don't know what the actual reason was for it. yeah, th this seemed like a big enough opportunity that that it was worth going out and pursuing on my own rather than trying to build it inside of inside of a company.

James Dice: [00:13:19] Got it. Cool. So that was what year and a half ago? Almost two years ago now?

Shaun Cooley: [00:13:25] Yeah, almost two years ago. Yeah. Coming up on that.

James Dice: [00:13:28] Cool. So where are you guys at today before we kind of dive into the nerdiness? Where were we at? And 2021 spring 2021.

Yeah. So we, we spent the first, 18 months or so building the platform and stealth, it's It's one of those weird startup things.

If, for anyone that doesn't know what it means, it just basically means like we had no website. we weren't publicly talking about it. We had a couple of early design partner, kind of early customers that were helping us, answer some of the questions around, what works for them, what doesn't work for them and what they would want it to look like.

And we came out of stealth on March 2nd, so it's still fairly recent about a month that we've been out of stealth and really started telling the story of, what the company is and how will we think that data integration layer, really solves a major problem, not just for commercial real estate, but also for manufacturing and industrial environments as well.

Totally. Yeah. I have this opinion that I haven't talked about very much that I wish more companies in our space would just stay in stealth a little bit longer.

Shaun Cooley: [00:14:30] I don't know if I should take that personally or not. No,

James Dice: [00:14:33] I haven't seen it in your product, but I have seen a lot of other products that are what I would call marketing led versus product led.

And anyway, that's all set, so let's dive into it too. So from where you're at today, obviously you said you've co-developed with your clients. And so I feel like you would probably have a lot of opinions about how the Slayer should be done. So, which is fun for me to dive into.

So, let's just start with like, what are you kind of hinted at a lot of this when you describe what mapped is, but like, what are the downsides of, and this is kind of like where we're at as an industry today, we don't really have this layer deployed at scale. So from a smart building standpoint, we have all these what I would call point solutions, deployed and they're gaining scale and that we don't really have this layer.

So what are the downsides of not having this independent data layer and what will this enable when we do have it deployed?

Shaun Cooley: [00:15:28] Yeah. Look, I think the biggest downside of not having this layer is that it, it leaves the sort of final integration of literally everything you want to put into the environment to that integrator that has to show up and figure out what's going on inside of your environment and how to really make sense of it, for the application or for the hardware that they're putting in.

I almost equate this to like, if buildings didn't have a  size of door, how much harder would it be to go out and get a door for everything? And, yeah, there are a few doors that are like wacky sizes, but, in general, that's sort of eight foot by three foot size, you can get from anybody.

And it's been standardized and similarly, the sort of position of a lock on a door, the size of the hole, the two and three eighths inch hole that you need to drill. All this stuff has been standardized, but yet our building systems are just all over the place today.

And I think that we continually pay a penalty for it. And we've almost sort of accepted that. It's just okay to go and spend three months integrating everything that we want to integrate into the building. And I, I don't agree that it's okay. Right. I think that we need to have that layer that, that abstracts it away.

And so, the penalty of us not having it is that we continue to waste time on integration. We have a very strong belief. Which is right here in my background is that we want everyone else to focus on innovation and not integration.

You shouldn't have to spend, 30% of your time, like doing that final step of integration when you should be spending it on building cool new stuff or new analytics or new carbon credit trading or whatever it is that you have in mind should be where you get to focus your time.

James Dice: [00:17:09] Yeah.

And I would add to that and say like, just like Stripe and Twilio kind of handle all of the ongoing mess that happens as well. So there's as someone who's done this integration before, there's often, upkeep. And it's not just a, one-time set it up, now we have analytics or now we have this perfect new, smart growing application.

There's always like this, someone unplugged the thing from the switch, someone decided to close that hole in the firewall, just to see if anyone would say anything. Like these things happen all the time. And it's also the, just like someone handling it, right?

Shaun Cooley: [00:17:45] Yep. Yeah. We refer to that as the day two problem.

Okay. Day one is like, you got everything working, all the data's flowing. It's like, whatever your transform layer is working. You're feeding the applications and then day two happens and it broke, I don't know why it broke. Let me go back into the building and like figure out what happened, where the data stopped flowing.

And that day two problem is a real challenge. I think So we refer to it and we'll w we can get into this more later, but we refer to it as a living graph. And that, the graph representation that we have is living, it continues to evolve over time. And so the API is the way that we describe it to our customers is really meant to be sort of self discoverable.

You need to have prior knowledge of the environment that you're going into in order to sort of traverse the graph and make sense of it. And that allows us to like, continuously incorporate change and what we call enrichments into this graph so that the developers who are building on top of it can continue to benefit from that.

Right. And again, Like as a software developer, you don't want to, like if Twilio can't send a message to at and T like not my problem, queue it up when at and T is back online, send it to them. Right. Like, I don't care about what your internal issues are, that's not my problem. And similarly, we want our customers to not have to care about, a firewall port closed or somebody upgraded the firmware on a device somewhere, or swapped out a controller with something else.

And like everything falls over underneath. Right. That's our problem to go figure out and to continue to update the living graph.

James Dice: [00:19:16] Absolutely. So, thinking back on the history of like the building owners I've worked with and correct me if I'm wrong here, but it seems like there's. Two types of building owners when it comes to you guys approaching them, there's people who get it and the people who don't get it, like, and what I mean by that is like, I can remember, like I spent a long time with healthcare clients and it was a big stretch to get them, to get to where they see the value of, and this was just this certain type of healthcare client.

I'm sure there are healthcare systems that do get it. What I'm saying is like to talk to them about an independent data layer and sort of make the business case would have taken years. And so w we often would just go in with a complete solution, right. Or maybe not even talk to them about it, it's just kind of in the background.

And then there's the people who like, are, I've been out there deploying use cases for a long time, and then they see, Oh, wow, I really get this. Now I need this. Is that sort of how it is when you're out talking to building owners?

Shaun Cooley: [00:20:17] Yeah, look, I think when you put it in that simple of buckets it, it absolutely, they fit into one of those two buckets, right.

Either they get it or they don't. We look at the go-to-market in a couple of different ways here. The easy ones are the tech tenants. Think of the big Fortune 1000 tech companies who manage large portfolios.

They internally have data science teams and BI teams that are capable of making use of an API or building applications that make sense for them on top of the API. They're also managing a large portfolio, which means they have a desire to go after it. When you get into the owner and operator side, there's a very clear delineation between the very tech-forward owner-operators and the not-so-tech-forward

owner-operators, right? We talk to a lot that literally take over a building, upgrade the marble in the lobby and the elevators, try to re-lease out the space for a little bit more, and flip it to somebody else. They have no interest in the control systems.

Other than that they're not catching on fire while they're trying to sell the building to somebody else. For us that obviously creates a bit of a challenge as we go after that longer tail of non-tech-forward owners. But what we'll find over time, at least what I believe we'll find, is that as we reach a critical mass of buildings, third-party developers start making use of it as well. Third-party developers are not just selling an API to the customer; they're selling some value, some business outcome, some energy optimization or predictive maintenance solution into those environments.

And if they're depending on us under the hood to make use of the APIs, now it starts to make more and more sense to the buildings. On the tech-forward building side, we hear a couple of different things. Either they've been struggling for a couple of years to get their whole portfolio together into a single data lake or whatever you want to call it,

and they've been struggling to normalize data, struggling to capture data, struggling to make use of that data. Or they just have this vision that it'll be solved soon, and so they're looking for a vendor that can do it for them.

In both cases our conversation's pretty easy with these customers. You've got a lot of buildings that all have a lot of data you're trying to get access to; we can help you. We can give you a very clean API and a way to do that. The next question then is, what do you do with the data?

Again, if they've got a BI team or data science team, or even a finance or ops team that's capable of using the data, there's a very clear path to immediate use. The ones where we find no BI or ops team, we see one of two things. Either they have a vision for it, like we've had one tell us that

every building should have an API and no vendor should ever be in the building again. If you want to sell me a software package that runs on my building, go use the API and talk to me when you're using the API. You're not crawling around my space, putting boxes in my space, and adding additional pressure on the plant that might eventually cause something to fall over and stop working.

And that model, I think, works really well for us. Having a cloud API that brings together all your data, where you get visibility and control of where the data is going and who has access to it, is a good story for that single API. The other one is that they have tenants demanding access to data, right?

When the tenants start asking for building data, it tends to be those tenants that are in many locations around the world, trying to quantify where their dollars are going on that space. And as a building engineer, when a tenant comes to you and says, we want access to building data, your first answer is, what? I don't have a clean way to give that to you.

And yet we're seeing more and more of these tenants signing leases where the lease actually mandates access to building data, and the building just doesn't have a way to give it to them. So in some cases, even though there's no BI or data science team on the owner-operator side that can make use of it, they know there's demand from their tenants and they're viewing this as a tenant service they're providing: we can give you access to the building data for your floor with a simple API, without you having to spend a bunch of engineering effort to get it.

So I like the "they get it or they don't," but I find that there are a lot of shades of gray in there.

James Dice: [00:24:37] Yes, definitely. So how about other vendors then? One of the things we talked about last week at our Pro member gathering is that it probably makes sense sometimes to not go to a building owner with this, but to go to a vendor and partner. And I think there's also probably nuances and shades of gray there.

Right. It seems like some of them would probably not want to give up parts of their stack. Can you talk a little bit about why a vendor would want to bring you in and say, I want you to take care of the integration piece?

Shaun Cooley: [00:25:11] Yeah. So I think there are a couple, again many shades in here.

So it depends on the type of vendor we're talking about. If you're talking about an MSI or an MSP that's going in and performing services inside of these buildings, many of those MSIs are looking for ways to do recurring-revenue type services in the space. A lot of those, you can think of anything from remote management of buildings to remote optimization of buildings, where they're not just watching it remotely.

They're also helping you to maintain things remotely without having to show up on site, and for a mid-to-large-size MSI with a very large portfolio, every one of those buildings is a slightly different environment. So just like those companies that are in a bunch of buildings, you're dealing with the same thing as a vendor to that space.

You have to send somebody out to go figure out what's going on with the system, or to tell them that some set point is off and it's costing a bunch of money. Those sorts of investigations right now require that a human goes out and visits, which means you've got to have this fleet of humans that you can send out to all these places.

So as they look for ways to move from services to recurring revenue, they're very interested in finding platforms that make that easier for them. The other one is that you'll probably see some of the MSIs start to build applications as access to data gets easier and easier, right?

They will start moving up the stack into some of the application spaces, some of those analytics and other types of services that they used to provide with humans, no longer focused on the integration piece but rather the data use piece. Then we get into the application vendors.

I think last week we had Clockworks Analytics on there, right? Those sorts of folks, I get a mix. Apologies for language, but I had one of those CEOs tell me, I don't ever want to install any more shit in buildings. They just want to do the data science problem.

They want to add value and provide a benefit to the customer, and the integration is viewed as a sort of necessary evil. They had to go in and do the integration in order to provide the value they sold to that customer. They didn't want to do the integration; they had to. And so I think in those cases it's similar to the cloud discussions we used to have: well, yeah, you can go rack and stack your own servers in a data center somewhere.

But you can just use Amazon or Azure now, so why would you ever go rack and stack your own servers again? I think eventually we'll get to a point where, whether it's us or somebody else, someone manages to get to a large enough deployment that there's a vendor where it's, why would you ever bother?

It's the Twilio one again: why would you ever bother to send your own text messages just because you can, when it's not the part of the business where you're adding value? So I think we'll see the vendors shift over time toward API platforms that normalize for them, so that they can move away from that

NRE, the non-recurring engineering that they spend today on integration. Oftentimes that's either directly billed to the building as an upfront installation or integration cost,

which can be pretty significant in many cases, or it's rolled into a 36-month contract: the building signs this three-year agreement for whatever software they're installing, and the vendor just hides the cost of their fees in that. I think if you can switch to an API where you're paying a couple hundred dollars a month rather than $50,000 upfront, you can just start making use of the API as necessary.

It makes it a lot easier to scale a business out. It makes it a lot easier to not have to ramp up an integration team that goes out and does the integrations, when you can just depend on an API being there to handle those cases for you.

James Dice: [00:29:00] Totally. So what's that look like for you guys then, as you look to go to market and scale up?

You guys are then taking those, like you said, those 50 integration engineers that everybody has and pulling that work internally. So what's this look like for you as a CEO? Are you looking for 50 of your own integration engineers right now?

Shaun Cooley: [00:29:22] Yeah, look, I'm always looking for smart people.

If anyone knows the space really well and wants to come work for us, I'd love to talk to them. But we view this from a very different approach, and that is that this is not a job for people, even though I realize it is today. If you look at something like enterprise asset management on the IT or IT security side, these sorts of things are largely solved and have been for the last 20 or so years.

You install a piece of software, it goes out and discovers everything on your network, and it figures out the make and model of those things. In many cases then they go into, what's its security posture, does it need to be updated, that sort of stuff. In our case we use automation and machine learning to largely do those same things.

We go out, we discover all the things on the network, whether it's a serial bus or an IP network. And we're doing that to find its make and model, so that we know how to map it and model it inside of our graph. Once we know how to map it inside of our graph, then it turns into, how do we operationally extract data from it, right?

We want the data that continues to flow out of it. We use a bunch of different techniques to get data out. Some of them are active, where we're actually polling devices for data. Some of them are passive: we monitor traffic moving across networks, look for protocols that we know how to speak, and pull apart those protocols.

Part of this is that we don't want to add more load to the systems you already have inside of the building. If you're already hitting some device 10 times a second for some reason, us hitting it another 10 times a second doesn't really help anybody. But if we can pick up on the communications between your existing controller and that device, we can just read the data that's coming off of it. We benefit a lot from the fact that most of these protocols have no encryption or security of any sort.

It allows us to sit in the network layer and watch what's going on inside of the environment. But we're doing that, again, to produce that operational data that's coming out of there. And then when the data gets to our cloud, all the tasks of merging that data into our graph and mapping it fall onto machine learning.

So we use a lot of ML: some natural language processing on things like point names, but also a lot of deeper ML on how we look at relationships between devices and discover both explicit and implicit relationships between those devices.
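As a toy illustration of the point-name side of that ML, here's a minimal sketch in Python. A dictionary lookup stands in for the trained models Shaun describes, and the abbreviation table is made up for the example; it is not Mapped's pipeline.

```python
# Toy stand-in for point-name processing: split a legacy BMS point name into
# tokens and guess a Brick-like class for each one via a lookup table.
import re

ABBREVIATIONS = {  # hypothetical mapping for illustration only
    "AHU": "Air_Handling_Unit",
    "RM":  "Room",
    "SAT": "Supply_Air_Temperature_Sensor",
    "RAT": "Return_Air_Temperature_Sensor",
    "SP":  "Setpoint",
}

def parse_point_name(name: str) -> dict:
    """Tokenize a point name and guess a class for each alphabetic prefix."""
    tokens = re.split(r"[^A-Za-z0-9]+", name)
    guesses = []
    for token in tokens:
        alpha = re.match(r"[A-Za-z]+", token)
        key = alpha.group(0).upper() if alpha else token
        guesses.append((token, ABBREVIATIONS.get(key, "Unknown")))
    return {"raw": name, "tokens": guesses}

# {'raw': 'AHU3_SAT', 'tokens': [('AHU3', 'Air_Handling_Unit'),
#                                ('SAT', 'Supply_Air_Temperature_Sensor')]}
print(parse_point_name("AHU3_SAT"))
print(parse_point_name("RM624_SP"))
```

In practice the same naming conventions recur across buildings from the same vendor and era, which is why learning them once pays off globally, as Shaun explains next.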

James Dice: [00:31:37] What you're saying is, whereas a lot of people are just kind of hiring a bunch of people like me, who have done this before.

And I shouldn't have been doing this. You didn't mention earlier the mechanical engineers out there trying to figure this out, basically just plugging stuff in to see if it works and then calling somebody. Yeah.

Shaun Cooley: [00:31:57] I think I reversed the two wires on my serial bus, what's happening?

Yeah,

James Dice: [00:32:00] Exactly. Yes. I always used to call it hacking, and it was just a mechanical engineer trying to hack, but yeah. Anyway, where I was going with that is, it seems like another value of splitting up the stack into these different layers is that you guys can then build out tools that the people doing this in a one-off way don't even have the time or desire to build.

So you guys can get better and better. Whereas everybody else, if they're distributed, they're just kind of doing this as a means to

Shaun Cooley: [00:32:31] an end. That's right. Yeah. So the system learns globally, meaning some of the interesting things we start to see, if I just look at the point names and our processing of point names, are both regional and time-based differences in point names.

What people were commissioning systems with in the eighties in Los Angeles is very different than the eighties in New York, and the nineties in LA used totally different point naming schemes than the eighties. But as a system that sees all of these things globally, once we train it how to pull apart a single

mechanism for point naming, that applies to any other building we come across that was done by the same vendor in that same era. So you start to get a lot of value out of doing the same work over and over again across everything. With humans, every human enters every environment new: you've got to go in and get your bearings, understand where all the equipment is, what's connected to what, what type of systems are in there.

Start reading manufacturer documentation, and the manufacturer docs are not particularly easy to go through on a lot of these, especially on the protocol side. I mean, pick your favorite chiller from Carrier. It'll have a 450-page technical manual, and two of those pages talk about how it uses BACnet.

Those are the two pages we care about. But if you're in there doing mechanical design and you need to integrate one of these systems into something else, you've got to go find that 450-page manual, scroll through it until you find the two pages that talk about the piece of info you care about, and figure out, okay, what is the analog input and analog output for this thing?

Which one is which, how do I read the value out of it, is it in Fahrenheit or is it in Celsius? Those sorts of things are very easily solved through big data and machine learning, because every time I come across that same Carrier chiller, it's the same. I don't need to go look at the manual again.

If the system already knows how to interpret the data coming out of that device. Absolutely.

James Dice: [00:34:33] I did that on a project like a month ago. Oh man, manuals to the death. Yes.

Hey guys, just another quick note from our sponsor nexus labs. And then we'll get back to the show. This episode is brought to you by nexus foundations, our introductory course on the smart buildings industry. If you're new to the industry, this course is for you. If you're an industry vet, but want to understand how technology is changing things.

This course is also for you. The alumni are raving about the content, which they say pulls it all together, and they also love getting to meet the other students on the weekly Zoom calls and in the private chat room. You can find out more at courses.nexuslabs.online. All right, back to the interview.

Let's talk about, and we're kind of getting into it a little bit already, the data modeling piece of it. What are some of the keys to modeling the data to enable whatever use case a building owner wants?

Shaun Cooley: [00:35:28] Yeah, so we use Brick; I think we're fairly open about that. Our chief data scientist is Jason Koh, who's one of the co-creators of Brick, so it would be crazy if we went with something else. I think there's a couple of things in play here.

You're taking data from very different systems, configured by different system integrators, named differently, wired up differently, and trying to normalize it into a schema that represents not just the individual points and devices inside of there, but also the locations.

Also the relationships between all of these things, like one device feeds air into another device, or has a point of this other device. And then the thing that we add that's not quite in Brick yet is people as well. So we track people, places, and things inside of our ontology.

The extensions we've done should be rolling back into Brick soon for all of those changes. But getting from raw data into Brick takes a couple of steps. The first one is obviously that discovery in the building that I talked about, and then it turns into extraction.

You've got to get data out, you have to efficiently get that data to the cloud, and you've got to start making sense of that data in the cloud. So we start with something that we call device profiles. Earlier when I mentioned that Carrier chiller: if I know how to talk to one of them, I know how to talk to all of them.

That is what we would call a device profile.

James Dice: [00:36:52] Okay.

Shaun Cooley: [00:36:52] So the device profiles take us from the raw data that we've discovered in the building into what we call structural data inside of the graph, inside of the Brick graph. For example, if you took a thermostat, very simplified, the thermostat might have a vertex in the graph, a node, for the physical thermostat itself. And it might have a set point for the humidity and a set point for the temperature, plus a sensed temperature and a sensed humidity, right? So in the graph, a simple thermostat may turn into five vertices with four relationships between them. That sort of structural information, "I found one device, I need to represent it in the graph," we can do through our device profiles.
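To picture that structural data, here is a minimal sketch of the five-vertex, four-edge thermostat example in plain Python. The Brick-style class names are approximate and the identifiers are invented; this is not Mapped's actual schema, just the shape of what a device profile would emit.

```python
# One equipment vertex plus four point vertices, joined by Brick-style
# "hasPoint" relationships -- the structural output for a simple thermostat.

vertices = {
    "tstat-1":       "Thermostat",
    "tstat-1-ztemp": "Zone_Air_Temperature_Sensor",
    "tstat-1-zhum":  "Zone_Air_Humidity_Sensor",
    "tstat-1-tsp":   "Zone_Air_Temperature_Setpoint",
    "tstat-1-hsp":   "Zone_Air_Humidity_Setpoint",
}

edges = [
    ("tstat-1", "hasPoint", "tstat-1-ztemp"),
    ("tstat-1", "hasPoint", "tstat-1-zhum"),
    ("tstat-1", "hasPoint", "tstat-1-tsp"),
    ("tstat-1", "hasPoint", "tstat-1-hsp"),
]

# Five vertices, four relationships, exactly as described in the transcript.
assert len(vertices) == 5 and len(edges) == 4
```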

And those device profiles allow us to do that pretty quickly. How we build device profiles is a whole other talk for some other time. But once we have the structural components inside of the graph, now we have to connect them all up, right? What is this thermostat actually controlling? In most cases that thermostat does not have a direct relationship with any other piece of equipment.

It is either being polled by or signaling a controller of some sort. And the controller has some human-created logic inside of it; somebody actually went in and programmed it in order to drive some other device based on the events coming out of that thermostat. That relationship data to us is an explicit relationship, right?

So there is now an explicit relationship between the thermostat and the rooftop cooling unit, or whatever your system happens to be. Those relationships we learn through a couple of different mechanisms, but we do a lot of time-based correlation. As we see action-reaction pairs happen inside of the things in our graph, we start to draw lines between them, and over time those lines get more and more confirmed.
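As a rough illustration of that time-based correlation idea (not Mapped's actual algorithm), a toy version might look like this, with a made-up threshold and two hand-written sample series:

```python
# If changes in one point are reliably mirrored by changes in another,
# propose a candidate relationship between their parent devices.
from statistics import correlation  # Python 3.10+

def changes(series):
    """Step-to-step deltas of a sampled series."""
    return [b - a for a, b in zip(series, series[1:])]

def propose_edge(src_series, dst_series, threshold=0.8):
    """True if the two series move together strongly enough."""
    return abs(correlation(changes(src_series), changes(dst_series))) >= threshold

# Thermostat cooling call vs. RTU supply fan status, sampled once a minute.
tstat_call = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
rtu_fan    = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]

if propose_edge(tstat_call, rtu_fan):
    print("candidate relationship: tstat-1 -> rtu-7")
```

A real system would also account for time lags and accumulate evidence over weeks before confirming an edge, which matches the "lines get more and more confirmed over time" framing above.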

And then the last one, I guess the last two: one is geospatial. How do you represent geospatial constructs inside of these environments? Somebody went on a rant during the last member call about mapping data, it might've been Steve, about how hard it is to get floor plans in these spaces.

There's a couple of different ways we do geospatial. We use some public data sources to get the outside footprint of a building; it's pretty easy to get the outside walls of a building. That at least gets us longitude and latitude in a large sense.

Then we allow the customer to either upload PDFs or AutoCAD files, or if they've done a 3D scan of their space that has gone into Matterport or into a TrueView, we can actually connect to those and pull a slice out to get the indoor maps. But now, as you're trying to place devices throughout those spaces, you run into a couple of different things.

One, point names that were commissioned in the 80s likely have no relation to what the current name of that space is. So when the current tenant uploads their map and it's got a conference room called, like, Frontier Land, and in the point name it's RM624, how do you link those things up?

So again, we start looking at correlations over time between systems where we know where they are physically and systems where we don't necessarily know, and that allows us to start moving things closer to each other. Also, at any time an administrator can grab a device and just drag it to where it needs to go, so that we can stop trying to guess.

But we do try to make pretty good guesses about where things go.

James Dice: [00:40:19] So there's like a user interface piece that lets someone who doesn't know anything about graphs sort of update things?

Shaun Cooley: [00:40:26] Yeah. Our user interface, and you can see it on the website, is very much focused on the visualization and control of your data. We do a couple of different things in there, and I'll come back to that in a second. On the last piece, the last thing we do is enrichments. We look for signals coming out of the data that we can further enrich with more meaning.

I don't want to call it analytics, because it doesn't replace the analytics that an application vendor would build on top of us. But take something like a Cisco wireless access point that's tracking the movements of mobile devices moving around a space, right?

You can get the same thing from, well, Euclid Analytics used to do this before they went inside of WeWork, but Meraki has it, Ruckus has it, Aruba has it for the wireless access points. If two mobile devices are always within a meter of each other, there's a pretty good chance we can introduce a person into the graph, right? Those two mobile devices aren't just moving in unison on their own; it's likely that there's a person moving around with them. And now that concept of a person gets you to much more useful information as a developer, without that extra step of having to figure out the correlation between those two over time, because we're doing it already inside of our graph.

And so we do a lot of enrichments like that, that look at things like, correlation between multiple devices over time in order to introduce new concepts.
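A minimal sketch of that co-location enrichment, assuming the platform already has position tracks for two devices; the radius, threshold, and sample data are invented for illustration:

```python
# If two tracked devices stay within roughly a meter of each other across a
# window of samples, introduce a "person" vertex that both devices relate to.
from math import dist

def usually_together(track_a, track_b, radius_m=1.0, min_fraction=0.9):
    """True if two (x, y) position tracks are within radius_m most of the time."""
    together = sum(1 for a, b in zip(track_a, track_b) if dist(a, b) <= radius_m)
    return together / min(len(track_a), len(track_b)) >= min_fraction

phone  = [(10.0, 4.0), (12.1, 4.2), (15.0, 6.1), (18.2, 7.0)]
laptop = [(10.3, 4.1), (12.0, 4.5), (15.2, 6.0), (18.0, 7.2)]

if usually_together(phone, laptop):
    # In the living graph this would become a new vertex plus two edges.
    print("enrichment: person-001 carries phone-abc and laptop-xyz")
```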

James Dice: [00:41:47] Okay, cool. So you mentioned extending Brick and sort of updating the standard after that. So like, it's interesting that you said that because not a lot of people do that.

So what's that process look like, I guess having Jason as an employee?

Shaun Cooley: [00:42:04] It's helpful, it's definitely helpful. So the Brick consortium is still being formed right now. I don't know how public it is, so I won't say company names, but there are several very large building system companies forming this Brick consortium.

Jason and Mapped will obviously continue to be involved in that, really Jason on behalf of Mapped. Internally, as we come across more and more systems, what we find is that Brick, I think in 0.9 or 1.0, went really deep on HVAC.

They covered every possible construct in the HVAC system. Then when they went to 1.1, they introduced a lot of energy management and lighting type things. 1.2, which just came out, further extended it in a bunch of other directions. So as Jason and other members of the future consortium get to spend time, they're really thinking through some of these other systems and how to appropriately model them inside of these graphs.

We will contribute back. Our view, from a standards standpoint, is that Brick is pretty well ahead, and not to bring back up the standards wars, but we think Brick is pretty far ahead of the other standards out there as far as modeling relationships and all of the actual things inside of a building. The other part of it is that Brick is very prescriptive; it is very clear about how you represent certain things.

Some of the other semantic tagging type standards are not so clear about it. When you look through them, what you find is that there are a significant number of standard tags, but then when you go into an environment that makes use of them,

they had something that they wanted to represent that wasn't available in the standard, and they just made up a tag. As soon as you do that, you're back to custom meanings, custom names that don't really make sense to anybody else, and it's really hard to enter some of those environments with no prior knowledge and figure out what's going on in there.

So for us, Brick is the appropriate way to really have a self-describing environment, where as a software developer I don't need to know in advance what I'm targeting. I can just show up, look at the model, and understand how these things work together and what data is coming out of them.

James Dice: [00:44:16] Totally. So let's stay on the data model piece, and we'll get to APIs in a minute, but extending that data model into the use cases it enables. There are a certain number of, say, software application companies out there that have the opinion that you can't separate the data model from the use case, or the data model from the application that sits on top of it, because inevitably your data model is not going to have enough information, or it's going to be wrong and it's going to have to be redone.

So what do you say to that opinion?

Shaun Cooley: [00:44:57] Well, I think first, every database and operating system vendor on the IT side would disagree with that statement, right? We have seen again and again that a platform is capable of supporting many disparate use cases,

and things that the platform creator didn't think of at the beginning, right? That is the benefit of a platform. If you took an Android phone, and every vendor of an Android phone with a different camera, different GPS unit, different CPU required me as a software developer to code directly to that CPU or the GPS or the camera,

I think we would have like two apps running on all of our phones. The fact that somebody has abstracted away all that complexity into a well-thought-out API means that it's very easy for a software developer to build an app that makes use of the camera, or the GPS, or some other component in there.

And I think that Brick is enough of a standardized model, without really changing what the original meaning is, in that I can represent a thermostat or a set point temperature or a set point humidity and have the values that go along with it. As a developer, I can understand that at scale across a large portfolio without needing to know, is this the camera from Sony or is this the camera from somebody else? Similarly, I don't want to know if the thermostat was built by Honeywell or Johnson Controls; just give me the value that was inside of it. That's all I really care about at the moment. These sorts of arguments in the tech space, I think this one is a side effect of what's typically referred to as the stack fallacy, right?

Which is that it's always easier to move down the stack than it is to move up the stack. But what's important to keep in mind is that a platform vendor, take an Intel who's making CPUs, doesn't need to know what application is being built on top of the CPU in order to provide a very usable and broadly applicable CPU.

Similarly, as a vendor that's trying to take all of these disparate systems and represent them in a uniform way, I don't need to know what the application upstack is. Now, I think the stack fallacy is totally valid if we tried to move upstack. If we, as Mapped, tried to go after the applications, after the single pane of glass, after the energy optimization or carbon credit trading, I don't have the first clue how they sell those to customers or what the value that's promised is.

But from a "here's chaos, I can provide normalization" standpoint, I have a very good idea of how to provide that normalization layer, in the same way that a CPU vendor can figure out all the possible instructions you might need and make those available through the code that you would execute inside of that CPU.

James Dice: [00:47:54] Totally. So let's talk about the API. Everyone talks about an API for your building. Maybe it's just in the Nexus community, but when I say everyone, what I really mean is the nerds of smart buildings talk about an API for the building. So what are the keys to an API for a building?

Shaun Cooley: [00:48:12] Yeah, maybe I'll start a little higher level than that. As modern software engineers, you look for a lot of things from an API. You're looking to use a modern programming language to access it, and to not have to deal with the transport layer that happens underneath it, or the physical layer underneath that. Today, when you look at something like BACnet or KNX, those things become very important, right? Is it over IP? Is it over serial? What's the speed of the bus that I'm on?

I see all the time, even on the IP side of things, that the NIC in the controller is still only 10 megabit, and it brings the whole network to a crawl. These sorts of things, as a modern software developer, I don't even want to deal with: trying to get into the building, the firewall that the building has, or does the building even have a network, or is it individual tenant networks in there?

So it starts with: it should be accessible everywhere. We look at this from the cloud standpoint, right? An API should be available in the cloud. I don't need it to be in the building; I don't want it to be in the building, because that's not the place where you're really trying to get access to everything these days.

So it starts in the cloud. Then the next step is, can we use a modern technology to access that API? We use something called GraphQL. It was created by Facebook and has been proven to handle very large graphs and pretty complex queries across those graphs.

We've extended it in a couple of different ways. If you think of Brick, Brick is the structural representation of everything that was inside of the building. It is not the time series data; the individual time series points still have to be stored somewhere else, and that is not Brick.

So in our graph, every vertex has a time series stored behind it. If you look at the set point temperature for that thermostat I talked about earlier, there is a time series store behind it where I can also get the data from that vertex over time.

We make that available in a lot of different forms. You can get it in raw form, which is just, I want the values wherever they happen to occur. We can do it in aggregate: give me the values over the last year by month, or give me the min, max, and average by month, or give me just the max, minute by minute, over the last two hours. Those sorts of queries drive a lot of flexibility in the way an application developer starts to make use of the data.
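To give a feel for what that kind of query could look like, here is a hedged sketch of a GraphQL request over HTTP in Python. The endpoint URL, field names, and arguments are hypothetical; they illustrate the shape Shaun describes (structure plus raw or aggregated time series), not Mapped's real schema.

```python
# Sketch: ask a GraphQL endpoint for monthly-max history of setpoint vertices.
import requests

QUERY = """
query SetpointHistory($start: String!, $end: String!) {
  points(type: "Zone_Air_Temperature_Setpoint") {
    id
    name
    series(start: $start, end: $end, aggregation: MAX, window: "1mo") {
      timestamp
      value
    }
  }
}
"""

resp = requests.post(
    "https://api.example.com/graphql",            # placeholder endpoint
    json={"query": QUERY,
          "variables": {"start": "2020-05-01", "end": "2021-05-01"}},
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=30,
)
resp.raise_for_status()
for point in resp.json()["data"]["points"]:
    print(point["name"], len(point["series"]), "monthly max values")
```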

From a Brick standpoint, and Jason will disagree with me, but RDF, the Resource Description Framework that is the driver behind Brick, is not developer friendly. It is not something that I would put in front of a typical developer and expect them to understand.

Even when I started working with Jason, probably six months into it, I was still asking questions about RDF and trying to wrap my head around it. There's this whole community of data scientists that understand RDF inside and out and are building this web of things, and all the work that the W3C is doing around the RDF standards.

But as a developer who's been writing code for 25 years, there's a lot of stuff in there that I just couldn't wrap my head around. So we actually use a variation of it, but the ontology is still the same. If you understand Brick, our ontology is no different, but because we're exposing it through GraphQL, it's more developer friendly.

It's a little easier to understand the relationships and the hierarchies and how you get from something that was in the structural RDF to the time series data, right? Those pieces are typically separate systems and not easy to connect. For us, moving away from RDF also allowed us to scale to a much larger scale than we could with any of the RDF databases out there.

We continue to own that, but again, as a developer, that's not your problem, right? That's my problem at Mapped: how do I scale my database, how do I do security, how do I do all the other things on it? So: GraphQL, Brick-based. We allow for polling, so you can make an individual query into the graph: give me all the temperature set points as they changed over the last six months across my entire portfolio. Pretty easy, right?

Has somebody been messing with my thermostats throughout the building? I just want to see where the value changed. You can also subscribe to queries, so you can put in what we call a streaming query, which is: here's my query, and every time it matches a value, just call me on a webhook. A webhook means they've got an application running in their environment, and every time we have a match, we fire a message out to them.

If you think of the buildings right now, the controller is going, hey, what's the temperature? Hey, what's the temperature? What's the temperature? We obviously don't want to build that same thing in the cloud. So you can set a query that says, when this temperature changes, let me know.

We can use that to push these notifications back out to other applications. If you think of something like a dashboard: when the dashboard first spins up, it's going to make a bunch of queries to build that initial view, the graphs and charts and whatever happens in the dashboard, and then it's going to subscribe to a bunch of queries. As values come in, it will update the values in the dashboard. That model means the dashboard you leave up on a screen, or leave open in the background on your laptop, is not just hammering away at the API and driving a lot of network traffic you don't really need, because you only care when the value changes. So yeah, there are lots of ways we think about how to make an API consistent and strong. The other piece I'll add is that, as a building owner or as a tenant of a building, as you install other applications, you start to think about how that application is using my data.
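On the consumer side of a streaming query like that, a webhook receiver can be very small. A minimal sketch, with the payload shape assumed for illustration rather than taken from any real API:

```python
# Webhook receiver that updates an in-memory view whenever the platform
# pushes a changed value, instead of the dashboard polling for it.
from flask import Flask, request

app = Flask(__name__)
latest_values = {}  # point id -> most recent value pushed to us

@app.route("/hooks/point-changed", methods=["POST"])
def point_changed():
    event = request.get_json(force=True)
    # Assumed payload: {"pointId": "...", "timestamp": "...", "value": 72.5}
    latest_values[event["pointId"]] = event["value"]
    return "", 204

if __name__ == "__main__":
    # A dashboard backend would seed latest_values with an initial query,
    # then rely on these callbacks instead of hammering the API.
    app.run(port=8080)
```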

"How are they using my data?" shouldn't just be a question; it should be set up in advance in the permissions you grant to that application. When you look at trying to automate buildings from a control standpoint, it was very easy to say, what do we need permissions for?

It's either on a private serial bus or, eventually, on a segmented IP network; the things plugged into the environment are all trusted. But now you're plugging in vendor after vendor and application after application, and you don't know who's accessing what, where they're taking the data, or whether anybody has remote access to it.

When we move that into the cloud, it's much easier for us at that point to put a very clear definition on how you grant access to something. In our environment, as the building owner or the tenant, if you're installing an application, you choose: this application can access my electrical data in my elevators, but not my HVAC.

And it can access my entire portfolio, except for these three floors that have the federal government customer, where they're not allowed to see the data. You can be very explicit about how these things come together, and it's enforced at that API layer, so you don't need to worry about how that happens over time.
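A toy version of that kind of grant, enforced before any data leaves the API, might look like the following; the policy structure is invented for illustration and is not a real product schema:

```python
# Mirror of the example above: electrical and elevator data portfolio-wide,
# HVAC denied, and a few restricted floors carved out.
POLICY = {
    "app": "energy-insights",
    "allowed_systems": {"electrical", "elevators"},
    "denied_systems": {"hvac"},
    "denied_floors": {"bldg-12:17", "bldg-12:18", "bldg-12:19"},
}

def can_read(policy, system: str, floor: str) -> bool:
    """Enforce the grant at the API layer before returning any data."""
    if system in policy["denied_systems"]:
        return False
    if floor in policy["denied_floors"]:
        return False
    return system in policy["allowed_systems"]

assert can_read(POLICY, "elevators", "bldg-12:04") is True
assert can_read(POLICY, "hvac", "bldg-12:04") is False
assert can_read(POLICY, "electrical", "bldg-12:18") is False
```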

James Dice: [00:55:01] Cool, interesting. So how do you think about an API versus a standard? You brought that up earlier and I wanted to hear your thoughts.

Shaun Cooley: [00:55:10] Yeah. There can obviously be standards that define APIs as well. But when we look at the case of Brick, or even Haystack, they're defining a data standard, a way to structure and tag or represent data. What they don't prescribe is how you access that over a network,

or how you control access to who can see parts of it and who can't, or how you throttle that, right? If one application decides to start hammering your Brick server, let's say you've got an RDF server, like a Neo4j or something, that you've got all of your Brick schema running inside of,

and one of your applications starts making 100 requests a second to it, do you need to throttle that? Who's in charge of throttling it? How do you even notice it's happening, other than the fact that your server is catching on fire at the time? So an API allows you to introduce a lot of these, I would say, walls around the actual data format.

We use Brick as the data format, but there are all of those constructs around who has access, how much access they have, how frequently they can call it, and who's paying when they call it. That's a big question as well. If I host a server up in the cloud somewhere, somebody has got to pay for all the CPU it's using. If I'm hosting my own Brick server and it's just getting hammered by one of those vendors, are they paying that bill,

or am I paying the bill? Who's doing that? So you get a lot of these cases where the standard on the data side does a great job modeling the data itself, but it doesn't answer those questions around how you access it, and where and when it applies, and all those pieces.
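The usual answer to the "who throttles the app hammering the server?" question at an API layer is a per-caller rate limit. A bare-bones token bucket sketch, with arbitrary numbers (production limiters would also track per-caller quotas for billing):

```python
# Minimal token-bucket rate limiter: refill tokens over time, spend one per
# request, reject when the bucket is empty.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate_per_sec=5, burst=10)  # ~5 requests/second per app
for i in range(12):
    print(i, "served" if limiter.allow() else "throttled (HTTP 429)")
```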

James Dice: [00:56:47] Got it. This is my last nerdy question on this layer, and I ask this a lot on the podcast, but where are you seeing the market in terms of using this layer for control? So the application wants to send a command back down to the systems, and since you're in the middle of that, what's the state of the art for supervisory control?

Shaun Cooley: [00:57:10] So I should be totally clear: today we are read only, and absolutely by choice read only. When you've got a data layer like this that is allowing write commands back into the environment and allowing you to install various applications, you very quickly run into contention between those applications.

The application that's trying to optimize energy wants it to be 77, and the application that's trying to optimize occupant comfort wants it to be 74. How do those get settled? Are those two applications literally just fighting over the value, back and forth and back and forth,

while your systems are constantly kicking on and off to try to keep up with the two of them? Our choice to be read only from the beginning is because we're working on other things to address that. I think there are really good ways to address it that don't expose the exact value back to the developer.

It's not to say we won't expose the exact value, but in a lot of these cases where you might have contention, you want the developer to declare their intent, which is: I intend it to be a little bit warmer, or a little bit colder, or a little bit brighter, or a little bit darker. Then you can start to reconcile those intents in a platform like ours in order to figure out what the end value should be back in the control system.
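As a sketch of what that intent reconciliation could look like (the weighting scheme and bounds here are entirely made up, not a description of Mapped's planned design):

```python
# Applications declare "warmer" or "cooler" rather than writing raw setpoints;
# the platform resolves them into one value for the control system.
INTENTS = [
    {"app": "energy-optimizer",  "direction": +1, "weight": 1.0},  # warmer
    {"app": "comfort-optimizer", "direction": -1, "weight": 1.0},  # cooler
]

def reconcile(current_setpoint: float, intents, step: float = 0.5,
              low: float = 70.0, high: float = 78.0) -> float:
    """Nudge the setpoint by the weighted sum of intents, within safe bounds."""
    pull = sum(i["direction"] * i["weight"] for i in intents)
    return max(low, min(high, current_setpoint + step * pull))

# With equally weighted, opposing intents the setpoint stays put instead of
# two applications fighting it back and forth.
print(reconcile(75.0, INTENTS))  # 75.0
```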

So we view control as a future thing for us. We're seeing enough use cases right now where people are just trying to get data, and they're looking at how they can optimize. They're not necessarily looking for the system to go back in and reprogram everything to optimize energy; they just want to know, where are we spending, and what can we do with it? We're also seeing a lot of use cases right now, and I expect them to be pretty short-lived, but a lot of post-COVID return-to-work use cases around things like, where are people moving?

How long are they congregating in certain areas? What's the fresh air exchange rate in those areas, how quickly am I turning over the air, when was the last time it was cleaned? Hotspots and not-hotspots from an infectious disease standpoint. Again, I expect those to be very short-lived, but they're driving a lot of thinking around

how you use data to better the environment. We're also seeing, both in the EU and in New York, these mandates around energy data: as a building over a certain size, you now have to provide that energy data in near real time to the monitoring agencies,

so they can figure out which buildings are burning the most power and which ones aren't. Again, it just turns into a data problem: how do I get the data out and normalized? So as we go through and address these read-only problems, we'll eventually get to the write piece as well.

Today we have a software block that just doesn't allow us to write back to the environment, and again, that's just to avoid a lot of the contention and other things that come along with writing back in there.

James Dice: [01:00:03] Got it. Very cool. I guess that was a roadmap question.

This might be another one, around something I've been thinking about a lot recently, and I'll provide some context around these thoughts. There's this use case around FDD, fault detection and diagnostics, where you then need to integrate with a work order system. So we have two applications that now need to talk to each other, and I feel like it's kind of the same issue we're talking about, just higher up the stack. All of these FDD companies, and this is probably the same for a bunch of different use cases, are saying, what CMMS do my clients have?

Okay, now I'm going to write integrations with all of those. And then on the other side, the CMMS guys are saying the same thing. So is there an opportunity for you guys to move up to that layer as well? Are you thinking about that at all?

Shaun Cooley: [01:01:00] So we refer to that as data exchange. If you look at a lot of the platforms like ours in other industries, you'll see these bow tie diagrams, right?

It's data very cleanly moving from a bunch of things on one side into the platform, and then back out to a bunch of things on the other side. The data exchange is more on the right-hand side of the bow tie: data moving from one application to the others.

There's a couple of different ways we look at that. One of them is in the enrichments that I talked about earlier with the wireless access points tracking devices. We do intend to open up our enrichments to allow third parties to put enrichments into the platform and to monetize those enrichments.

FDD is a perfect example: you don't actually want to run your FDD outside of the platform. You want to put it in the platform and then monetize it from anyone else who's making use of the platform. Something like FDD uses so much data that the sheer cost of taking it out of one platform and into the next, or out of one cloud and into the next, starts to add a lot of overhead.

So there are better ways to do that particular example. On these ones, you'll start to see an energy optimization app decide that a value needs to be changed, and rather than pushing the change directly back into the building, it opens up a ticket in the workforce management solution.

Those sorts of data exchanges are pretty straightforward through our platform. There are ways to write back into the graph and ways to pull that back out on the other side, but we continue to look for cleaner ways to do it, and as we get more and more vendors that want to do that data exchange, we'll find better ways to do it.

I think we also view data exchange as being between, I would say, unaffiliated parties. You'll start to find data inside of buildings that can be shared more generally. Take a parking system, for example: on a Saturday or Sunday afternoon, that building's parking lot is completely empty.

So you may decide that on Saturdays and Sundays, your parking data is publicly available, and you'll start to see apps in that sort of data exchange make use of that publicly available data as well, where you didn't explicitly install an application to make use of it. But there's value to you as a building owner in driving traffic, literal traffic, to your building, because you can monetize the empty parking spaces or whatever it happens to be.

And we view that as a data exchange problem as well.

James Dice: [01:03:24] So I'm reading this book called Platform Revolution, and those who've been listening to the podcast have probably realized that I've been saying that for a couple of weeks. I'm a slow reader right now while we have the course going on.

Shaun Cooley: [01:03:35] By the time they hear this in a month it'll be even slower for how long you've been reading that book.

James Dice: [01:03:40] Hopefully I won't still be reading it by the time people hear this. But my question is around the way they define platforms in that book, and they're drawing from examples like Uber and AWS and all these outside-of-our-industry technology companies.

They define it as: you have a producer and a consumer and you have network effects, right? The way you just described that, it seems like you're thinking of it that way as well. Whereas traditionally, when people say platform in our industry, they're really just talking about bringing data into a database and having an application.

Shaun Cooley: [01:04:14] That's right.

James Dice: [01:04:15] It seems like you guys are thinking more in terms of a marketplace and interactions between multiple third parties, that kind of thing.

Shaun Cooley: [01:04:24] That's right. Yeah, so we similarly break it up into data producers and data consumers. From a data producer standpoint, on the building itself you've got the owner or the operator, any sort of maintenance company or outsourced operations company that's coming in and producing data inside of there. You also have tenants of the building.

So again, if you think back to the building systems: I've got building-wide systems, HVAC, vertical lifts, fire safety, those sorts of systems are building-wide, especially in a high rise. Oftentimes lighting is owned by the tenant, right? They came into an empty space and did all the build-out of their own space.

So they oftentimes own lighting. They almost always own access control, they almost always own surveillance, and they own calendaring and room booking. When you really run through all the systems, I think Joe and I had an exchange on your Nexus forum about the number of systems that exist inside of these buildings.

I think we landed on like 80 systems in there, which is crazy. Some of those are produced by the owner, operator, or manager of the building; some of them are produced by the tenants. Then you can start looking at data that's produced by individual occupants, the humans walking around the space: their location information if you're tracking mobile devices moving around the space, or maybe badge swipes. You can think of a tenant experience app that runs on a mobile device that's also producing information.

Eventually you get outside of the building. The typical one is weather, right? Everyone looks at weather now. But there's traffic, there's geopolitical and geospatial information. The difference in the occupancy of a building on a day where there's a huge protest out front versus a day where there's no protest out front is drastic.

So you need to start paying attention to all of these things really across the board, and we view all of that as data producers. On the other side, when you start talking about data consumers, we start with what we call first party data consumers. If you take the building owner-operator or one of the tenants, and they're putting data into the platform as a producer and they're also consuming it for whatever BI or data science or finance or ops teams they have, those data consumers are first party to us, right?

It's usually their own data that they're accessing on the other side. Then you move into what we call second party data consumers. A second party is really anybody who has a contractual relationship with the first party and provides some service to them. You can think of MSIs, MSPs, even the company that does maintenance or a company that's coming in and cleaning the office at the end of the day.

We don't view those second parties as using the data directly to serve the first party; we view them as using the data to optimize their own business. If you think of the company that's cleaning offices, or take an ACO, right? It's tens of thousands of trucks rolling out to buildings every single day.

If they can understand, from an FDD or predictive maintenance standpoint, when they actually need to go and respond to a building, they can start to optimize their workforce. They can roll a truck only when it's absolutely necessary, and now they can do more with fewer people, or serve more customers with the same number of people.

In those second party use cases, there are a lot of people that serve these buildings that have a real interest in understanding the data coming out of the building as well, because most of them serve more than one building, and they want to know across the portfolio:

where am I needed, what do I need to do, how can I optimize my business? Then you get into third parties. In our view, third party data consumers are really the folks producing software applications that they sell to these buildings. The reason we call them third parties is because we tend to be a component under the hood for them.

When they go in with their sales team to a Boston Properties and they sell whatever solution they're trying to sell, they don't even need to mention that Mapped is under there. Maybe if Boston Properties has Mapped in the building, they want to point it out, like, hey, it's a one-click integration, easy to use. But they can go in and sell their standalone software solution and never mention the fact that we're under the hood. So those third parties are really providing a product to the first party, and that product makes use of the platform as a data consumer along the way. And then we get into data exchange, which we also internally refer to as fourth party, but we never say fourth party because everyone goes, what the hell is a fourth party?

But internally we go back and forth on data exchange versus fourth party. Fourth party is where you can imagine the municipalities or the governments that want access to data out of these buildings, whether it's real-time energy use or something else. One of the most eye-opening things to me is the US EIA, right?

The Energy Information Administration does this 10-year survey of buildings. It's the same model as the US Census, right? Every 10 years they send out a survey, the buildings that feel like answering it give them some data about energy use and upgrades and other things, and then it takes them two and a half years to compile the data coming out of it.

I think right now we're still looking at data from the 2010 survey that was released in 2012. The 2020 survey they did, we won't have the data until 2022 or 2023, which is, how is this a thing? Every 10 years we get new data on how buildings are using energy, how much they've done on upgrades, and things like that.

So we expect, just like the EU has started to do, that the EIA and other agencies, and even local municipalities and local governments, will start paying attention to data coming out of these buildings as well. Those are, again, fourth party, and I'm just going to stick with it, that's going to be the new phrase, fourth party. But those fourth parties don't have any direct relationship with the building other than a geospatial relationship, right? The building exists in their domain, and therefore they get data out of it.

But you can imagine other use cases like that as well. I talked about the parking one: Google Maps hitting an API like ours and asking for all the parking spots within a mile radius. That sort of query happens because you're within the bounds of the building, right?

You're within that one-mile radius that was defined. You can also imagine a first responder showing up to the building, with an app that, when they first come into the building, can give them all the details of the building simply because they're in the building, right? The fact that you've now set foot inside of it, or you're right in the parking lot, and you can start to see, what's the current occupancy of the building?

Where was the fire alarm coming from? How do I most quickly get there? Can I get floor plans to help me get through the space? Those are, again, fourth party uses; there is no direct contractual relationship with that first party that actually owns the data.

It's just because of the fact that you're within proximity, or within the bounds of the ownership of that first party. And we think there are going to be a lot of use cases on that side as well.

James Dice: [01:11:10] Totally. I feel like that's how people should explain data models. Like, as a first responder, I want to come in and figure out where the fire is, and how would you do that if you didn't have a data model, anyway?

Cool. So as we kind of wrap up here, that was fascinating, by the way; I haven't heard a lot of people explain it at that level before. How are these problems similar in other industries? Because you guys are not limited to just commercial buildings in your scope and what you're trying to approach.

So, how are we different versus how are we similar to other industries out there?

Shaun Cooley: [01:11:47] Yeah. I think that, if you look at our website, we also target industrial, which includes energy production, oil and gas, and manufacturing, and even retail. I think a lot of times it's easy to forget that retail is still a commercial building.

There's still all these systems inside of it. And if you're a large retailer that's got a thousand locations, you're dealing with the exact same headache that a CBRE or a Boston Properties is dealing with, just at a much more distributed scale and with far fewer engineers to manage all those spaces.

Yeah. And so I think that when we look across the other spaces, the reason that we're so focused on CRE to start with is that the CRE shell that you get really exists in a lot of the other spaces as well. If you look at a manufacturing floor, it still has HVAC, it still has lighting.

It still has safety and security type functionality, fire safety. It still has access control and surveillance and all the other things that you would get in a proper CRE building. But then it adds extra things like robotic arms and conveyor belts and other stuff that you need to integrate with as well.

From a similarity standpoint, there's lots of overlap in the systems, and also a lot of overlap in the way that those environments came to be. They do use different protocols; like, I don't know of any buildings that use OPC UA or OPC, right?

Or some of these other protocols that are used in manufacturing. But you get the same sort of way that systems were built: an integrator at some point over the last 50 years showed up with their bag of tools and managed to put together a thing that kept the line moving, parts going in one end and product coming out of the other end.

And so the problems that they now deal with, even if you take a very large automotive manufacturer that's got 10 or 12 factories within a block of each other, those factories were all built at different times, they were all retooled at different times, and there were different vendors.

I think similarly they're dealing with a lot of the folks that really deeply understand those systems exiting the workforce as well. And so there's a lot of knowledge that's just going away day by day. The system integrators that originally built them oftentimes are out of business or got merged into some other system integrator.

Somebody else came in and made some tweaks and changed things. And so they're dealing with a lot of the exact same problems. I think where it differs, though, is the use cases that they're looking at. You can equate OEE, or overall equipment effectiveness, to some of the things that we look at like energy usage or energy optimization in commercial real estate.

But I think you'll find a lot more human safety type applications coming into play. You find a lot more quality type applications. And so you're looking at, what is the quality of the widget that I'm producing? Or in an oil refinery, what's the quality of the mix that I'm producing at the moment?

If we look at a major oil company, there are little things like when they start producing a different mix, like they switch from 89 octane to Jet A, right? They go around the refinery and twist the knobs or the valves that allow them to control how they're producing that mix.

There are many times where the first bit that they produce was inaccurate because one knob wasn't turned right. And so what you're finding is more and more of these refineries now have a central view to see the positioning of all of those, whereas before they would radio out to people who oftentimes would ride a bicycle out to where it needed to be turned.

And so it really is a lot of the same problems, right? Like misconfiguration, FDD, energy optimization, safety, security, those sorts of things. I think when we look at the other industries, though, we get into a lot of regulatory concerns around how you deliver products into those spaces.

There's not a whole lot of regulatory concern in commercial real estate, right? If you have a UL certification or a TÜV certification on the product that you're putting into it, you're usually pretty good to go. But you go into a refinery with a tiny little box that could potentially spark, and the amount of explosive gases in the air at any given time might be enough to level an entire city.

And so you've got to meet HazLoc certification. At Cisco, some of these HazLoc boxes that we had looked like nesting dolls, right? You've got your box in the middle, in a box, in a box, and before long it's the size of a car, because if any of those gases get in there, you've got really big problems.

And so I think there are a lot of other complications to entering these spaces, also from an equipment standpoint. If we put equipment in a building and it has a fan in it, not a big deal; but if you put a fan down in a mine, you've got a lot of problems. If you want to put something on a launchpad, all your electronics need to be potted, right?

They need to be covered in a silicone that holds them together, because the 90 or 100 decibels of vibration while the thing is taking off is going to shake every component off of your circuit board. But if you get past those sorts of things, the discovery and ingestion and normalization all start to be very similar to what we're doing, right?

We just need different device profiles. We need different protocols that we speak inside of those environments. And for us, Brick needs to continually be extended to all of these other types of equipment and the types of relationships that can exist in these other areas.
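Shaun's point about extending Brick is easy to sketch in RDF. Below is a minimal, hypothetical example using Python's rdflib: the ex: namespace, the Conveyor_Belt class, and the instances are invented for illustration and are not part of the published Brick schema or of Mapped's production model; only brick:Equipment and brick:feeds are existing Brick terms.

```python
# A minimal sketch of extending a Brick-style model to industrial equipment.
# The ex: namespace, the Conveyor_Belt class, and the instances below are
# invented for illustration; they are not part of the published Brick schema
# or of Mapped's actual data model.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

BRICK = Namespace("https://brickschema.org/schema/Brick#")
EX = Namespace("http://example.com/factory#")  # hypothetical site namespace

g = Graph()
g.bind("brick", BRICK)
g.bind("ex", EX)

# Declare a new equipment class as a subclass of brick:Equipment.
g.add((EX.Conveyor_Belt, RDF.type, RDFS.Class))
g.add((EX.Conveyor_Belt, RDFS.subClassOf, BRICK.Equipment))
g.add((EX.Conveyor_Belt, RDFS.label, Literal("Conveyor Belt")))

# Model one conveyor feeding a packaging line, reusing Brick's existing
# relationship vocabulary (brick:feeds) rather than inventing new predicates.
g.add((EX.conveyor_1, RDF.type, EX.Conveyor_Belt))
g.add((EX.packaging_line_1, RDF.type, BRICK.Equipment))
g.add((EX.conveyor_1, BRICK.feeds, EX.packaging_line_1))

print(g.serialize(format="turtle"))
```

The design point is that new industrial asset types can slot in as subclasses of existing Brick concepts, so downstream data consumers keep using the same relationship vocabulary they already understand.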

Cool. That's so fascinating.

Yeah, it's good fun. Hopefully we will get some of those certifications soon and can really start driving into some of those environments as well. These are things that you have to take very seriously, right? Because if we go into a commercial building and we break the air conditioning for a day, there's some mildly annoyed people, right?

The building engineer might be yelling at me, but nobody died in the process of that, right? If we screw up something in a refinery or in an energy plant, there are actual lives on the line. Again, when we decided to be read-only at the beginning, a big part of it is that in some of those other industries, write capabilities can be pretty dangerous. And so we have to take it very seriously.

James Dice: [01:17:41] All right. Well, depth is a good place for us to end off today. Thanks so much for coming on the show.

Shaun Cooley: [01:17:46] Just a serious note at the end.

James Dice: [01:17:48] Well, I appreciate it. This has been super educational, so thanks for coming on.

Shaun Cooley: [01:17:52] Yeah. Thanks for having me.

James Dice: [01:17:57] All right, friends. Thanks for listening to this episode of the Nexus podcast. For more episodes like this and to get the weekly Nexus newsletter, which, by the way, readers have said is the best way to stay up to date on the future of the smart building industry, please subscribe at nexuslabs.online. You can find the show notes for this conversation there as well. Have a great day.

