CS-Light: a new energy-saving smart lighting system for optimised light control

 

LED lighting provides brighter, lower-power, longer-lasting illumination than traditional lighting systems, and can be programmed to change colour and brightness instantaneously. Despite this, lighting in smart buildings still consumes a significant amount of energy.

 

Professor Archan Misra and fellow researchers at Singapore Management University look at the development of a smart lighting control system, CS-Light, designed specifically for open-floor indoor layouts, such as offices, collaborative workspaces and campuses.

 

Read the original article: https://doi.org/10.1145/3486611.3486657

 

Image source: Adobe Stock / Oleksandr

 

 

Transcript:

 

Hello and welcome back to another instalment of Research Pod.

 

In this episode, we will be exploring the work of Professor Archan Misra and fellow researchers Anuradha Ravi, Kasun Gamlath and Siyun Hu from the School of Computing and Information Systems at Singapore Management University. The researchers’ work details the development of a smart lighting control system, CS-Light, designed specifically for open-floor indoor layouts. The study also demonstrates the system’s efficacy and advantages over traditional lighting control technologies.

 

Light-emitting diode or LED lighting has been developed to provide brighter, lower-power illumination than traditional lighting systems. LEDs last longer, often consume only a few watts of power, and can be programmed to change colour and brightness instantaneously. Despite this, lighting in smart buildings still consumes a significant amount of energy, second only to heating, ventilation, and air conditioning (or HVAC) systems.

 

Traditional methods of lighting control in buildings include coarse-grained and binary control. Coarse-grained lighting control groups multiple light fixtures together and operates them as a single unit, rather than controlling each fixture individually. In an open-plan office, where occupancy may be high in one area and low in another, a motion sensor registering low occupancy may dim all the lights at once, because they are linked together as a single unit. Conversely, if the sensor registers high occupancy, it will brighten the LEDs throughout the space, needlessly illuminating even unoccupied tables or pods. Binary lighting control simply switches each fixture on or off, even though LEDs allow lighting intensity to be calibrated very precisely, in increments of 1 percent or finer. This, again, is not optimal: the actual brightness at any point in an open space is the sum of the light emitted by LEDs both nearby and farther away, and straightforward on/off controls don’t make the most of the full range of intermediate settings available.

 

The research team has developed ‘CS-Light’ as a means of overcoming these disadvantages, and thus reducing energy consumption while maintaining the comfort of occupants in open-floor work and other communal spaces.

 

CS-Light enables fine-grained and non-binary LED control—that is, individual LED luminaires can be adjusted in very fine steps, down to 1 percent at a time. Moreover, the required real-time sensing of lighting intensity is done using only the existing ceiling-mounted security cameras that already monitor such spaces.

 

The team tested the CS-Light system in two separate real-world scenarios: a collaborative workspace, divided into 10 distinct zones, situated within Singapore Management University’s first zero-energy building, and a research lab housing approximately 10 to 12 cubicles.

 

Achieving accurate pixel-to-LUX estimation was one of the critical technical challenges the researchers faced in this project. What is it, and how did they get past this hurdle?

 

Pixel-to-LUX estimation is a method for estimating the illuminance levels of a space (a technical term for its brightness, measured in a unit called lux) from a camera’s captured images. The method involves analysing the brightness values of the pixels in the captured image and converting them into corresponding illuminance values using a calibrated conversion function. This conversion is, of course, not trivial. In a series of experiments at the test sites, the researchers demonstrated that it depends on, among other factors, the colour and reflectiveness of objects in the space (for example, a dark-coloured, matte-finish backpack produces much lower pixel values than a shiny, silver laptop) and the amount of incidental daylight, since daylight has a colour palette that varies and is distinct from that of the electronic LED lights. The team, however, figured out a way to use machine-learning, or ML, models to capture such variations: automatically infer the colour of objects within a space, and then use these ML models to convert the camera’s pixel values into an illuminance reading.

 

Professor Misra and colleagues implemented a fingerprinting approach during a training phase to create such ML models. The team placed between 8 and 25 black and white paper strips onto tables in a designated workspace while LED light intensity was varied from 1 to 100 percent. The images captured by the video surveillance cameras were divided into a collection of ‘patches’ (each roughly 25 by 25 pixels in size), each corresponding to the view of one of these paper strips. The pixel intensity of each of these image patches was recorded, together with the actual illuminance, measured by a physical light sensor placed in the space during the training phase.

 

This collected data triple, consisting of the patch colour, illuminance, and patch pixel intensity, was then used to build a set of ML models, such that, given pixel intensity readings and patch colour estimates, a model would predict the current illuminance value.

 

The researchers also employed a machine learning-based approach to build a patch classifier—a model that classifies the colour of each 25 by 25 pixel region—and used additional statistical techniques to improve the accuracy of illuminance estimation.

 

Study results showed that the patch classifier was 84.6 percent accurate in identifying light vs. dark coloured areas, under varying ambient lighting conditions and levels of occupancy.
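As a toy stand-in for such a patch classifier (not the authors’ actual model), a nearest-centroid classifier over simple pixel statistics might look like this; the features, training patches, and labels are all invented for illustration.

```python
import numpy as np

def patch_features(patch):
    # Mean and standard deviation of pixel intensity within the patch.
    return np.array([patch.mean(), patch.std()])

# Hypothetical labelled training patches (pixel values in 0-255).
rng = np.random.default_rng(0)
light_patches = [rng.normal(200, 10, (25, 25)) for _ in range(20)]
dark_patches = [rng.normal(60, 10, (25, 25)) for _ in range(20)]

# One feature centroid per colour class.
centroids = {
    "light": np.mean([patch_features(p) for p in light_patches], axis=0),
    "dark": np.mean([patch_features(p) for p in dark_patches], axis=0),
}

def classify_patch(patch):
    """Label a 25x25 patch with the colour class of the nearest centroid."""
    f = patch_features(patch)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

A real deployment would face the harder cases the paper measures—varying daylight and occlusion by occupants—which is why the reported accuracy is 84.6 percent rather than near-perfect.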

Following this offline training phase, the researchers transitioned to the online, or real-time, phase for illuminance estimation with CS-Light. The first step involves image capture in real-time and then the identification and localization of human occupants in various workspace zones. The researchers then used micro-regions from images to estimate illuminance values, using the ML models for colour estimation and pixel-to-LUX mapping mentioned earlier.

 

These estimates, along with values for the level of human occupancy and the brightness of the LED luminaires, were fed into the Smartlight Optimizer software, which calculates the optimal lighting level for the LED luminaires. Here, ‘optimal’ means that the cumulative intensity of the LEDs should be just enough to ensure that all occupied areas and tables are comfortably lit, and no higher, so as not to waste energy, while unoccupied areas can remain darker. These settings were then passed to the SmartLight Controller software, which adjusts the LED luminaire settings to provide the optimal lighting level for each workspace zone. Of course, any such LED adjustments were made gradually (no faster than once every 30 seconds) to avoid drastic changes in lighting that might distract human occupants.
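The flavour of this optimisation can be sketched as follows: pick per-LED dimming levels so that every occupied zone reaches a comfort threshold while total light output stays low. This greedy sketch, the light-transfer matrix, and all its numbers are assumptions for illustration, not the Smartlight Optimizer’s actual algorithm.

```python
import numpy as np

# Hypothetical light-transfer matrix: lux contributed to each zone (rows)
# per percent of brightness of each LED (columns).
transfer = np.array([
    [2.0, 0.5, 0.1],   # zone 0 is lit mainly by LED 0
    [0.5, 2.0, 0.5],   # zone 1 sits between LEDs
    [0.1, 0.5, 2.0],   # zone 2 is lit mainly by LED 2
])
occupied = np.array([True, False, True])
target_lux = 150.0

def optimise_levels(transfer, occupied, target_lux, step=1.0):
    """Greedily raise the most effective LED for the dimmest occupied zone
    until all occupied zones meet the target; unoccupied zones impose no
    constraint, so LEDs serving only them stay dim."""
    levels = np.zeros(transfer.shape[1])
    for _ in range(10000):  # safety bound for the sketch
        lux = transfer @ levels
        if np.all(lux[occupied] >= target_lux):
            break
        # Find the dimmest occupied zone and its most efficient LED.
        worst = int(np.argmin(np.where(occupied, lux, np.inf)))
        best_led = int(np.argmax(transfer[worst]))
        levels[best_led] = min(levels[best_led] + step, 100.0)
    return levels
```

In practice this kind of problem is naturally posed as a small linear program (minimise total LED output subject to per-zone illuminance constraints), but the greedy loop conveys the same trade-off: occupied zones are driven to the comfort threshold and no further.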

 

Within a collaborative workspace of around 100 square metres, the researchers demonstrated that the CS-Light system could achieve energy savings of 63.5 percent at low occupancy and 37 percent at high occupancy. When workspaces were unoccupied, CS-Light could save as much as 79 percent of the energy consumed by lighting. The ML-based method for estimating lux was demonstrated to be accurate, with a median error of 8.5 percent across different real-world conditions. The researchers also showed that using CS-Light reduced lighting energy consumption by 53 percent under normal levels of human occupancy during daylight hours.

 

To ensure that CS-Light wasn’t disruptive to the occupants, the team also asked 15 individuals to respond to a survey on comfort and operability while CS-Light was in operation. One hundred percent of occupants felt comfortable undertaking their day-to-day activities while CS-Light was in operation during daylight hours and at high occupancy. The same level of comfort was also expressed during times of low and medium occupancy.

 

According to the researchers, “in the absence of daylight (i.e., with only LED-provided illumination), 13 of the 15 surveyed occupants felt the area was perfectly lit under high occupancy conditions, while 2 felt the area could have been slightly brighter”.

 

To further test the efficacy of using CS-Light to adapt lighting conditions and save energy, the system was deployed in the collaborative workspace (zones 1 to 10) for six days, between 8:30 am and 11:30 pm. Even though the occupants often changed the layout of the workspace (by rearranging chairs, tables and large monitors in unpredictable ways), the researchers showed that the CS-Light system was still able to provide average energy savings of 39 percent.

 

One limitation of the CS-Light system is that it does not account for variations in LED colour (for example, if all the lights turned pink or green). Another limitation is that it focused on static cameras and did not consider cameras that pan, tilt or zoom to sweep an area. The researchers, however, believe that this limitation could be overcome if the real-time status of the cameras were provided as an input to an appropriately calibrated CS-Light system.

 

Overall, this study demonstrated the capability of CS-Light to generate notable energy savings, while assuring human comfort, both during daylight hours and in the absence of daylight.

 

That’s all for this episode, thanks for listening. You can find the paper on the CS-Light system for smart lighting control in the show notes for this episode and, as ever, be sure to subscribe to ResearchPod to hear more of the latest science news.

 

See you again soon.
