On the merits of an independent data layer

James Dice · January 27, 2020


My conversation with KGS Buildings’ Nick and Alex continued last week. I’ve had a lot of fun nerding out with them and others who have reached out in the last few weeks. I want to share a brief nugget from our conversation on the merits of an independent, open data layer.

Here’s a quick summary of the independent data layer concept:

When designing your smart building stack, you separate the Integration and Historian Layers from the Application Layer rather than choosing one vendor’s solution for the whole stack. See my What is EMIS? essay to understand this delineation in more depth. You may also see it referred to as a data lake or middleware. I’m sure there are new acronyms for it—our industry sure loves acronyms.

The proponents of this approach tout the following primary benefit, which can sound pretty great from the building owner’s perspective:

You simply need to tag your data, put it in a data lake, and then plug in any application, like fault detection and diagnostics (FDD). Then, if you don't like the FDD, you still own the tagged points (the information model) and can simply plug in a different FDD vendor.
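The pitch above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's real API: points get Haystack-style marker tags once, and every FDD application queries the same shared model, so (in theory) swapping vendors preserves the tagging investment.

```python
# Hypothetical sketch of the "tag once, plug in any app" pitch.
# The point structure and find_points() interface are illustrative
# assumptions, loosely modeled on Project Haystack marker tags.

points = [
    {"id": "ahu1-dat", "dis": "AHU-1 Discharge Air Temp",
     "tags": {"point", "sensor", "temp", "discharge", "air"},
     "equipRef": "ahu1", "unit": "°F"},
]

def find_points(points, required_tags):
    """Query the shared model by tag subset -- the same interface any
    FDD vendor would use, so the owner keeps the model when swapping."""
    return [p for p in points if required_tags <= p["tags"]]

# Either vendor's FDD asks the same question of the same model:
matches = find_points(points, {"discharge", "air", "temp"})
```

In this idealized picture, the owner's integration work lives entirely in the tagged points, and the application layer is interchangeable on top of it.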

And if you were to (crudely) draw it up, it might look something like this…

This sounds pretty compelling, right? I know it does… because I’ve made this argument once or twice in my past life as a consultant. The argument looks something like this:

It’s a risk-free first step on the journey to a smart building: unlock and model the data that’s currently locked away in proprietary and siloed systems.

It creates a single source of truth by enforcing one data model (e.g. Project Haystack or Brick Schema) for all applications and promotes interoperability.

It reduces dependence on one vendor and promotes a cooperative ecosystem. Depending on the building owner’s needs, it may be most beneficial to select multiple vendors to fulfill all the capabilities desired. If the data layer platform is designed as such, it could start to look like an app store for the building.

Similarly, it de-risks the investment by allowing the owner to trial, test, and compare multiple smart building applications without needing to restart the costly integration from scratch.

But, but, but:

Once you peek under the hood of this approach, as Alex and Nick helped me do, it might not be so pretty. Here’s their take:

While this sounds great to an owner, it's simply not true. In a perfect world with perfectly understood points and metadata about those points, as well as metadata about the equipment and system interactivity (aka sequences), this would be possible. But this is not the world we live in, and it's hard to imagine living in it for quite some time.

As I unpacked this further after our conversation, it started to look worse. Here’s why:

De-risking strategies like this perpetuate the myth that these technologies aren’t quite ready for primetime. Some vendors in this space have proven their solutions in real buildings over and over again—they’re already primetime.

It might actually increase risk for the owner by adding complexity, increasing the timeline, delaying results (e.g. energy savings), and involving more vendors that need to work together.

It probably won't work. If you don't understand and plan for the applications that will use the data, you'll struggle to model it appropriately. Today's applications accommodate, and often require, vastly different types of metadata, so even standardized tagging is bound to fall short for applications that need richer metadata or need it in a different format.
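That failure mode can be made concrete with a small sketch. The two vendor names and their required tags below are invented for illustration: a model built before the consuming applications were known satisfies one hypothetical vendor but leaves a gap for another that needs metadata (here, a sequence reference) the tagging effort never captured.

```python
# Hedged sketch of the metadata-mismatch problem. Vendor names and
# required tags are hypothetical, not real product requirements.

shared_model = {
    "ahu1-dat": {"tags": {"point", "sensor", "temp", "discharge", "air"}},
}

# Vendor A only needs basic tags; Vendor B (hypothetically) also needs
# control-sequence context that was never modeled.
requirements = {
    "vendor_a_fdd": {"point", "sensor", "temp"},
    "vendor_b_fdd": {"point", "sensor", "temp", "sequenceRef"},
}

def gaps(model, reqs):
    """Return, per application, the tags the shared model is missing."""
    have = set().union(*(p["tags"] for p in model.values()))
    return {app: needed - have for app, needed in reqs.items()}

missing = gaps(shared_model, requirements)
# Vendor A finds everything it needs; Vendor B does not -- the
# "plug in any vendor" promise breaks for exactly this reason.
```

The point of the sketch: the gap only becomes visible when a specific application's requirements are checked against the model, which is why modeling data without planning for its consumers tends to fail.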

Complex applications like FDD are not an undifferentiated commodity. Here’s Alex and Nick again:

For the foreseeable future it is not a commodity. There are enormous differences in the complexity of information models vendors are employing and therefore the usefulness of the FDD results.

As we discussed last week, these guys know a thing or two about FDD results.

Just because your data is in full-stack software doesn't mean it's not open and usable. The best vendors (but certainly not all vendors) can provide the full stack and still serve as the data layer for other applications.

Where do you stand on this?
