Benchmarking the Electro-Energetic Performance of Industrial Systems and Processes
Over 40% of the world’s electricity consumption (20,900 TWh in 2012) is consumed by Industrial Systems and Processes (ISP), with overall efficiencies ranging from 80% down to as low as 25%, for an average efficiency of 55%. The energy wasted by industrial systems and processes is estimated at 3,921 TWh/year (almost the entire 2012 consumption of the United States alone).
In the current economic environment, business sustainability requires high-efficiency technological processes that increase competitiveness. In other words: “Utilization of energy in the most cost-effective way to carry out a process whereby waste energy is minimized and energy consumption reduced”.
The main drivers of energy efficiency are considered to be:
Rising energy prices
One of the main barriers to energy efficiency programs is the lack of a real Benchmarking methodology. It is unanimously considered that a benchmarking methodology should be based on technical and scientific principles.
Traditional benchmarking methodologies used for industrial systems and processes are inspired by those used in the commercial and residential sectors. In many ways, the benchmarking process was reduced to “clerk work”: endlessly recording numbers in Excel sheets. This is because conventional benchmarking is based on mimicking “the best practice” in industry.
It was demonstrated that results based on using “best practice” as a reference are not reliable, due to the large variability of benchmarking factors (i.e., parameters defined by the choice of a system boundary). These parameters (which require “Adjustments” according to the IPMVP) cause baseline inaccuracies that demand permanent and tedious normalization activities.
Demand Side Management (DSM) industry programs consider the total Energy Consumption of an Industrial System or Process as a whole (E.Used), while the proposed concept splits the energy in two specific components:
Ideal Energy (E.Ideal)
Waste Energy or Energy-at-Risk (E@R)
The Ideal Energy can be considered to be the productive energy, which represents the theoretical energy (or power) required to accomplish a given task by an ISP. The value of this energy is obtained by using a Mathematical Model (MM) which is based on physical laws specifically pertaining to the actual task being accomplished.
This Ideal Energy is considered to be independent of any technology, as it is based purely on scientific facts that govern the physical world. An ideal system uses only the energy that is required to obtain the result with zero losses, while the real system uses more energy to overcome losses embedded in the system itself.
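As a concrete illustration, the Mathematical Model for a simple water-lifting task can be written directly from physics. This is a minimal sketch using an assumed example task (the function name, quantities, and figures are illustrative, not from the source); the point is that E.Ideal depends only on the task, never on the technology performing it.

```python
# Ideal Energy (E.Ideal) of a hypothetical water-lifting task,
# derived purely from physical law (potential energy m*g*h),
# independent of the pump or motor technology actually used.
G = 9.81  # gravitational acceleration, m/s^2

def ideal_lift_energy_kwh(mass_kg: float, height_m: float) -> float:
    """Theoretical minimum energy (kWh) to raise a mass through a
    height with zero losses: E.Ideal = m * g * h."""
    joules = mass_kg * G * height_m
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# Example: lifting 100 m^3 of water (about 1e5 kg) by 50 m
e_ideal = ideal_lift_energy_kwh(1e5, 50.0)  # -> 13.625 kWh
```

Any real pumping system performing this task will meter more than 13.625 kWh; the excess is, by the definitions above, its Waste Energy.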
Within the total energy consumption of an ISP, the Waste (or non-productive) Energy is the conjugate of the Ideal Energy. This Waste Energy is the portion of the total energy consumed by the ISP that DSM programs focus on.
Attempting to predict energy savings against Ideal Energy is like attempting to violate the physical laws of the universe: since Ideal Energy represents the “Minimum Minimorum” energy required to accomplish a task, no energy savings can be obtained from it.
The Energy-at-Risk (E@R) of an industrial system or process is defined as the “non-productive” energy: the Waste Energy spent by the ISP, beyond the ideal minimum, to accomplish the task it was originally designed for.
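The split described above reduces to simple arithmetic once E.Used is metered and E.Ideal is obtained from the Mathematical Model. A minimal sketch (function name and figures are illustrative assumptions):

```python
def energy_at_risk(e_used_kwh: float, e_ideal_kwh: float) -> float:
    """E@R = E.Used - E.Ideal: the non-productive share of consumption.

    E.Used  -- total metered energy of the ISP for the task
    E.Ideal -- theoretical minimum from the Mathematical Model
    """
    if e_used_kwh < e_ideal_kwh:
        # A real system cannot consume less than the physical minimum;
        # this usually signals a metering or boundary-definition error.
        raise ValueError("E.Used cannot be below E.Ideal")
    return e_used_kwh - e_ideal_kwh

# An ISP metered at 100 kWh for a task whose MM gives E.Ideal = 55 kWh:
e_at_risk = energy_at_risk(100.0, 55.0)  # -> 45.0 kWh of Waste Energy
```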
Considering the two proposed components of energy as described earlier, the Benchmark Energy Factor (BEF) can be defined as the ratio between the total energy (E.Used) and the Ideal Energy (E.Ideal).
By definition, the BEF compares the overall invested energy (E.Used) to the energy required to obtain the desired output. Its value expresses how many multiples of the reference (Ideal) energy the ISP consumes to accomplish a given task. In other words: how many times the minimum energy is being consumed to accomplish the task at hand?
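The ratio defined above can be sketched in a few lines (figures are illustrative; the 55% average efficiency is taken from the introduction):

```python
def benchmark_energy_factor(e_used_kwh: float, e_ideal_kwh: float) -> float:
    """BEF = E.Used / E.Ideal.

    BEF = 1.0 is the physical lower bound (an ideal, lossless system);
    larger values mean more of the consumption is Energy-at-Risk.
    """
    return e_used_kwh / e_ideal_kwh

# The "average" ISP at 55% efficiency: 100 kWh metered for a task
# whose Mathematical Model gives E.Ideal = 55 kWh.
bef = benchmark_energy_factor(100.0, 55.0)  # -> ~1.82
```

A BEF of about 1.82 means the ISP consumes nearly twice the theoretical minimum energy for the task, and the gap (BEF − 1 times E.Ideal) is exactly its Energy-at-Risk.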