In mission-critical facilities, there's one KPI that matters most: uptime. Whether it's a hospital operating room, a pharmaceutical cleanroom, a research vivarium, or a high-tech manufacturing line, a single failure can risk lives, derail R&D, spoil product batches, and incur massive costs.
Facility managers (FMs) responsible for these environments live in a world of near-zero tolerance for outages. As Michael Conway of GMC Commissioning (a commissioning service provider for laboratory and pharma clients) told us, "resilience and uptime are number one, safety is number two," and everything else falls in line after that, including energy savings.
This mindset makes sense when you consider the stakes: unplanned downtime in pharmaceutical manufacturing can cost $100,000 to $500,000 per hour, and the average hospital loses $7,900 per minute when critical systems go down.
Yet keeping complex facilities online 24/7 is a daily battle. FMs are pragmatic, busy professionals, often firefighting problems before they snowball. They are also skeptical of hype: if a new technology can't tangibly reduce their headaches, they have no time for it. In this article, we explore how technology can genuinely improve uptime in mission-critical spaces across healthcare, pharma, higher ed, and manufacturing.
We spoke with experts on the front lines: Conway, Alex Grace of Clockworks Analytics (who provides fault detection software to critical facilities), and Jim Meacham's team at Altura Associates (a firm delivering commissioning and master systems integration in hospitals and campuses).
They told us technology is helping FMs in critical environments in three powerful ways: proving compliance, identifying and diagnosing issues proactively, and minimizing downtime during mandatory upgrades.
Below, we walk through each of these, with real examples, to see how the right tech stack is boosting the #1 KPI these FMs care about.
In highly regulated industries, FM isn't just about keeping equipment running; it's also about maintaining environmental conditions within strict parameters and proving it.
In healthcare, that proof is demanded by the Joint Commission's surprise inspections. In pharma facilities, the FDA conducts regular audits of production environments: if a sterile cleanroom "goes negative" on pressure even briefly, the facility must stop production, document the incident, and prove that no contaminated product was released. Rather than scrambling after the fact, facilities are investing in building automation and fault detection and diagnostics (FDD) that alert staff the moment conditions drift out of spec.
This means compliance and uptime are two sides of the same coin. FMs have traditionally managed this with clipboards and log sheets, manually recording temperatures, humidity, pressure, and other parameters during routine rounds. As Alex Grace of Clockworks Analytics shared, a surprising amount of this is still done with pen and paper, a labor-intensive, error-prone process.
To automate this work, Clockworks has developed digital compliance dashboards for hospital pharmacies (and other regulated spaces) that replace the old clipboards.
"Every box on that heat map is an hour of temperature, humidity, pressure... Every box that's green... shows [everything is] good. And if it's a box that's red, it shows they were out of compliance for that hour and they need to do something," Grace explains.
Tolerance thresholds are built in, so the moment any parameter goes out of range, it's flagged. All of this creates an automatic record for regulators, essentially a permanent digital logbook. Rather than hoping paper logs are filled out and filed correctly, an FM can pull up a living dashboard that proves to any auditor or higher-up that, say, the negative air pressure in a lab was maintained 99.98% of the time last month, and pinpoint the exact 10 minutes it wasn't. Each deviation typically comes with the real root cause attached (more on that in the next section), so there is a ready explanation of why it happened and how it was resolved.
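To make the mechanics concrete, here is a minimal sketch of the hourly roll-up such a compliance dashboard performs: bucket time-stamped readings by hour, flag any hour with readings outside their tolerance bands, and compute the in-spec percentage an auditor would ask for. The parameter names and limits below are illustrative assumptions, not Clockworks' actual schema or thresholds.

```python
# Illustrative tolerance bands for a regulated space; real limits come from
# the facility's compliance requirements, not from any vendor's defaults.
LIMITS = {
    "temp_f": (64.0, 70.0),
    "rh_pct": (30.0, 60.0),
    "dp_in_wc": (0.02, 0.10),  # room-to-corridor differential pressure
}

def hourly_compliance(readings):
    """Roll time-stamped readings up into green/red cells, one per hour.

    `readings` is an iterable of (datetime, {param: value}) tuples.
    Returns {(date, hour): {"compliant": bool, "violations": [...]}}.
    """
    buckets = {}
    for ts, values in readings:
        buckets.setdefault((ts.date(), ts.hour), []).append(values)

    heat_map = {}
    for key, rows in buckets.items():
        violations = []
        for param, (lo, hi) in LIMITS.items():
            out_of_range = [r[param] for r in rows
                            if param in r and not lo <= r[param] <= hi]
            if out_of_range:
                violations.append(
                    f"{param}: {len(out_of_range)} readings outside [{lo}, {hi}]")
        heat_map[key] = {"compliant": not violations, "violations": violations}
    return heat_map

def uptime_pct(heat_map):
    """Share of hours fully in spec -- the headline number for an auditor."""
    good = sum(1 for cell in heat_map.values() if cell["compliant"])
    return 100.0 * good / len(heat_map) if heat_map else 100.0
```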
An FM can now spend 5 minutes checking a dashboard instead of many hours compiling paper log binders, and many more hours hunting down the "why" behind the problems.
For critical facilities, preventing downtime is better than scrambling to fix it. FMs are increasingly turning to FDD tools to catch problems early, ideally before spaces drift out of compliance or systems shut down.
The idea is straightforward: use analytics to spot degradation early and fix it before it becomes a 2 AM crisis.
Clockworks now lets users tag equipment and zones as "critical," so an FM can automatically prioritize any faults affecting those areas. An HVAC issue threatening, say, an isolation room will jump to the top of the list, whereas a fault in an office might be ranked lower.
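The underlying idea is simple enough to sketch: fold a criticality tag into whatever score the fault list is sorted by. The zone names, severity scale, and weights below are hypothetical, not Clockworks' actual ranking model.

```python
from dataclasses import dataclass

@dataclass
class Fault:
    equipment: str
    zone: str
    description: str
    severity: int        # e.g. 1 (nuisance) .. 5 (equipment down)
    hours_active: float

# Zones the FM has tagged as critical -- illustrative names only.
CRITICAL_ZONES = {"isolation_room_3", "pharmacy_cleanroom", "or_suite_1"}

def prioritize(faults):
    """Sort faults so anything touching a critical zone floats to the top."""
    def score(f):
        critical_boost = 100 if f.zone in CRITICAL_ZONES else 0
        return critical_boost + f.severity * 10 + min(f.hours_active, 24)
    return sorted(faults, key=score, reverse=True)

faults = [
    Fault("AHU-7", "office_wing_b", "economizer damper stuck", 2, 30),
    Fault("EF-12", "isolation_room_3", "low exhaust airflow", 3, 2),
]
print([f.equipment for f in prioritize(faults)])  # EF-12 ranks first
```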
Crucially, these technologies can detect subtle anomalies that a human might miss during routine rounds. One great example Grace shared involved something as mundane as a bad sensor. In a pharmacy cleanroom, a drifting relative humidity (RH) sensor could spell trouble: if it starts reading inaccurately low, the control system might over-humidify or simply report false compliance data. Clockworks caught a critical RH sensor that read zero for 1.5 hours one day, a blip that staff likely overlooked.
"Sensors don't go from working perfectly to not working at all; they start to fail," Grace noted. By catching that early erratic behavior, the team could replace or recalibrate the sensor before it flatlined entirely or caused an out-of-spec condition.
Multiply that by hundreds of sensors (for temperature, pressure, airflow, etc.) and you get a sense of how FDD systems act as tireless sentinels for an FM, watching trends and patterns that would be impossible to manually track.
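Rules like this are straightforward to express once trend data is flowing. Below is a minimal sketch of two checks an FDD engine might run against an RH sensor: a flatline detector (the stuck-at-zero pattern Grace described) and a drift check against a redundant reference sensor. Sample intervals, tolerances, and thresholds are illustrative assumptions, not any vendor's actual rules.

```python
def detect_flatline(values, tol=0.01, min_run=6):
    """Yield runs of >= min_run consecutive samples that barely change
    (including sticking at zero). Assuming 15-minute samples, six samples
    is roughly the 1.5-hour blip described above."""
    run_start = 0
    for i in range(1, len(values) + 1):
        if i == len(values) or abs(values[i] - values[run_start]) > tol:
            if i - run_start >= min_run:
                yield (run_start, i - 1, values[run_start])
            run_start = i

def detect_drift(sensor, reference, max_offset=5.0):
    """Compare a sensor against a trusted reference; a growing offset
    suggests the sensor is failing before it flatlines entirely."""
    offsets = [abs(a - b) for a, b in zip(sensor, reference)]
    return [i for i, d in enumerate(offsets) if d > max_offset]

rh = [44.8, 45.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 46.2]
print(list(detect_flatline(rh)))  # [(2, 7, 0.0)] -> sensor stuck at zero
```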
And it's not just FMs. Grace mentioned that some of Clockworks' service provider partners are under performance-based public-private partnership (P3) contracts where any downtime incurs hefty fines; one partner has budgeted $9.5 million in downtime fines for this year. FDD pays for itself each time it prevents or shortens equipment downtime.
The Altura team shared a case at the University of Washington where a vivarium (lab animal facility) was served by a dedicated central utility plant. The plant's large chillers were intermittently failing. A traditional approach might be to call the chiller vendor and reactively replace parts; instead, the Altura and facilities teams, armed with analytics (SkySpark integrated with the plant's PLC controls), hunted for root causes in the data. They discovered that during low-load conditions, the sequencing was causing the chiller's inlet guide vanes to rapidly cycle, essentially wearing them out.
Armed with this insight, they worked with the manufacturer to tweak the onboard chiller controls and adjusted the waterside economizer logic to keep the chiller out of that unstable low-load zone, ending the downtime-causing failures. The same project surfaced a trove of other findings, including a faulty dehumidification sequence in a downstream lab building and a closed 6-inch bypass valve wasting 30% of pumping energy.
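A finding like the guide vane problem typically falls out of a simple pattern check on trend data. Here is a rough sketch of one way to flag it: count actuator direction reversals during hours when the chiller is lightly loaded. The point names, load threshold, and reversal limit are assumptions for illustration, not the rule Altura actually wrote in SkySpark.

```python
def count_reversals(positions, min_move=2.0):
    """Count direction changes in an actuator position trend, ignoring
    moves smaller than min_move (%) to filter out signal noise."""
    reversals, last_dir, last_pos = 0, 0, positions[0]
    for p in positions[1:]:
        delta = p - last_pos
        if abs(delta) < min_move:
            continue
        direction = 1 if delta > 0 else -1
        if last_dir and direction != last_dir:
            reversals += 1
        last_dir, last_pos = direction, p
    return reversals

def flag_guide_vane_cycling(vane_pos, chiller_load_pct,
                            low_load=30.0, max_reversals_per_hr=6,
                            samples_per_hr=60):
    """Flag hours where the chiller is lightly loaded and its inlet guide
    vanes reverse direction more often than a wear threshold allows."""
    flagged = []
    for start in range(0, len(vane_pos) - samples_per_hr + 1, samples_per_hr):
        window = slice(start, start + samples_per_hr)
        load = sum(chiller_load_pct[window]) / samples_per_hr
        if load < low_load:
            reversals = count_reversals(vane_pos[window])
            if reversals > max_reversals_per_hr:
                flagged.append((start, load, reversals))
    return flagged
```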
As Altura's Tom Pine put it, "you never know what you're going to find" once you start digging into the data. What began as an uptime project also ended up improving efficiency and performance. The UW plant manager, initially focused only on reliability, became a data evangelist and spread these practices to the main campus plant, seeing that better uptime and energy optimization can go hand in hand once you have good data.
Most issues threatening uptime in these facilities are not exotic or headline-grabbing; often, they're mundane things like stuck dampers, fouled filters, drifting sensors, or misprogrammed sequences. Grace underscored this, saying that the faults that bring down critical equipment are usually "nothing special."
Conway emphasized that before you can detect a fault, the system actually has to be monitoring the right things. On new construction projects, GMC often works backwards from the owner's operational priorities to identify gaps in the controls design. "If a facility wants to know whether an actuator failed, we ask: is there a feedback point in the design?" says Conway.
In most designs, the monitoring points needed for diagnostics, like valve feedback, differential pressure, or humidity trends, are omitted to save costs or simply because nobody asked for them. Conway calls this approach "tailor-fitting the BAS to the needs of the space."
The playbook: focus on the fundamentals first (ensure you have the right sensors, valves, and actuators) and use analytics to continuously watch them.
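That design-review step can be as mechanical as diffing a controls submittal against a checklist of the points needed for diagnostics. The sketch below assumes a simple points list per piece of equipment; the required-point sets are illustrative, not GMC's actual standard.

```python
# Points we'd expect on each equipment type if the owner wants to diagnose
# failures remotely -- an illustrative checklist, not a published standard.
REQUIRED_POINTS = {
    "ahu": {"supply_air_temp", "duct_static", "sf_status", "sf_speed_cmd",
            "cooling_valve_cmd", "cooling_valve_feedback", "filter_dp"},
    "exhaust_fan": {"fan_status", "fan_speed_cmd", "airflow"},
}

def find_monitoring_gaps(equipment_list):
    """Compare a controls submittal's points list against the checklist and
    report what's missing. `equipment_list` maps name -> (type, set_of_points).
    """
    gaps = {}
    for name, (eq_type, points) in equipment_list.items():
        missing = REQUIRED_POINTS.get(eq_type, set()) - points
        if missing:
            gaps[name] = sorted(missing)
    return gaps

submittal = {
    "AHU-3": ("ahu", {"supply_air_temp", "duct_static", "sf_status",
                      "sf_speed_cmd", "cooling_valve_cmd"}),
}
print(find_monitoring_gaps(submittal))
# {'AHU-3': ['cooling_valve_feedback', 'filter_dp']}
```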
Even with perfect maintenance and compliance, every critical space and system eventually needs planned downtime for upgrading or replacing major systems. Managing retrofits, expansions, and cutovers in a mission-critical environment is a high-wire act for FMs, but technology can help reduce the downtime required.
In this realm, it's not just the software or analytics, but also the design of the control systems and networks that determines how flexible you can be when making changes. Our conversation with Altura's team about a long-term BAS upgrade at Kaiser Permanente's Baldwin Park hospital was eye-opening.
Baldwin Park is a 1 million sq. ft. medical center with dozens of operating rooms, pharmacies, and labs. A few years ago, the facility embarked on a multi-year project to replace an old Johnson Controls BAS with a new Distech system, wing by wing, while the hospital remained 24/7 operational.
The team is minimizing downtime with an aggressive strategy of phased overnight cutovers backed by extensive preparation. Altura's Sia Dabiri described it: "everything... from switching the equipment to installation and commissioning and TAB [testing, adjusting, and balancing] and sign-off... is all done within 12 hours." Achieving this is an enormous technical and logistical feat, and it only works with the right technology backbone and process in place.
One key enabler is having an open, modular BAS architecture, as detailed in our 2020 article The BAS Architecture of the Future. That future is now here: Kaiser Permanente's standard is to run a Niagara-based supervisory server on a virtual machine, with open-protocol (BACnet/IP) controllers beneath it.
This openness paid off in a big way when, about a year into the project, the original controls installer (the Distech vendor) couldn't keep up with the schedule and had to bow out. In a proprietary system world, that would be a nightmare scenario: switching BAS vendors mid-project could mean starting from scratch (since Vendor B's system wouldn't talk to Vendor A's). But because of the open architecture, the general contractor was able to hire a new controls firm to step in, and Altura's team (as the MSI) could slot them in with minimal disruption.
All the front-end graphics and supervisory logic were on Niagara (standardized), and Altura even had the capability in-house to program and bench-test the Distech controllers themselves. This meant the handover from one vendor to the next was smooth: no rip-and-replace, no extended downtime, just a swap of contractors. The BAS architecture gave the owner leverage and flexibility, ultimately keeping the upgrade timeline on track and avoiding prolonged outages.
Technology also shines in how the team minimizes the actual cutover downtime. Long before any switch is flipped, the team rebuilds databases and graphics offline, bench-tests controllers with the new system's sequences, and performs pre-functional testing in a sandbox environment. Weeks ahead of a scheduled cutover, they use SkySpark to run automated functional tests on the equipment in question, generating reports of any control issues or calibration problems that would trip up the commissioning. The facilities staff and contractors can then fix those issues in advance.
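We can't reproduce the team's SkySpark rules here, but the shape of an automated pre-functional check is roughly the following: command a point, wait, and verify that feedback and downstream conditions respond as expected. The `bas.read`/`bas.write` calls and point names below are hypothetical stand-ins for whatever BAS or analytics API a project actually exposes.

```python
import time

def check(name, passed, detail=""):
    return {"check": name, "passed": passed, "detail": detail}

def prefunctional_test(bas, ahu):
    """Run a handful of automated checks against one air handler and return a
    punch-list style report. `bas` is a hypothetical handle into the control
    system, not any specific vendor's client library."""
    results = []

    # 1. Sensor sanity: supply air temp should be physically plausible.
    sat = bas.read(ahu, "supply_air_temp")
    results.append(check("SAT plausible", 40.0 < sat < 90.0, f"read {sat} F"))

    # 2. Actuator stroke: command the cooling valve and confirm feedback follows.
    bas.write(ahu, "cooling_valve_cmd", 100.0)
    time.sleep(120)  # give the valve time to stroke
    fb = bas.read(ahu, "cooling_valve_feedback")
    results.append(check("cooling valve strokes", fb > 90.0, f"feedback {fb}%"))
    bas.write(ahu, "cooling_valve_cmd", 0.0)  # restore normal control

    # 3. Calibration: compare duct static against a recently calibrated reference.
    dp, ref = bas.read(ahu, "duct_static"), bas.read(ahu, "duct_static_ref")
    results.append(check("static pressure calibrated", abs(dp - ref) < 0.1,
                         f"{dp} vs {ref} in. w.c."))
    return results
```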
By the time the night of a cutover arrives, 80-90% of the "unknowns" have already been vetted, making it far likelier that when the new system comes online, everything works on the first go. This approach drastically reduces the need for trial-and-error during the outage window and avoids repeated disruptive testing.
"They want to avoid all these overnight testings, because every time they have to do overnight testing, they have to cancel surgeries, they have to cancel patients," Sia explained. By leveraging analytics and thorough planning, they ensure that most verifications are done virtually or in a controlled way, leaving only the absolute must-do tasks for the short cutover window.
During the cutover itself, technology provides an extra safety net. In such a project, there is always the fear: what if the new system doesn't come up clean by morning? To mitigate that, the team does a few clever things. The analytics "data pipeline" is already in place, watching the data from the moment of switchover.
"You can do a lot of pre-testing with analytics, and then you can do immediate testing once you're cut over because that analytics pipeline is already in place," Meacham explains. In other words, the second the new BAS starts controlling an operating room, the FDD platform is reading the sensors and verifying that temperature, humidity, pressurization, and the rest are all as required.
All these layers of technology (open architecture, virtualization, parallel systems, analytics-driven testing) combine to make an incredibly difficult upgrade successful without any unplanned downtime.
Maintaining uptime during major changes requires aligning your technology stack with resilience from the get-go. From our discussions, a few key ingredients emerged as must-haves for mission-critical facilities: an open, modular BAS architecture; the right monitoring points designed in from the start; analytics continuously watching critical equipment; and exhaustive offline pre-testing before any cutover.
Facilities managers of mission-critical spaces have a singular mandate: keep it online. All the energy savings, occupant comfort, or other nice-to-haves mean little if the surgery gets canceled, the experiment is ruined, or the production line stops. FMs are skeptical for good reason: they've seen "smart" tech make things dumber when it's deployed without regard for resilience.
In the end, mission-critical facilities will always be high-wire acts, but with the right tools, FMs don't have to work without a net. The fire-fighting mentality is giving way (gradually) to a data-driven, proactive ops culture.
Uptime is becoming a science, not just an art. The surgeons, scientists, engineers, and students depending on your facility don't care how fancy your analytics look or how "open" your BAS is; they care that the lights stay on, the air stays clean, and the mission never stops.