*The Structure of Concern Project compares many theoretical models from many disciplines to the Adizes PAEI model, arguing that they must all be reflecting the same underlying phenomenon. One concern structure model is described below.*

The code size optimization problem affects the design of many small, portable electronic devices in everyday use, such as cell phones and personal digital assistants. There is a set of interlocking constraints on the design of the real-time embedded logic systems in these devices, and determining the right balances and tradeoffs among these constraints is quite difficult.

In these small portable systems, processor chips need to be very energy-efficient, since battery life is a major performance factor for consumers. Speed is also a factor, because these are real-time applications that interact with people and with signals from other machines. Cost is a third critical factor, and component cost can be reduced by making the chips smaller, which in turn requires smaller code. However, one cannot optimize all three of these factors at once.

One way to reduce energy consumption is Dynamic Voltage Scaling (DVS), a technology that reduces the supply voltage by reducing the chip's clock speed. This is a tradeoff: energy is saved at the cost of speed. Reducing cost by reducing size can also sacrifice speed, and energy efficiency as well. Code size can be reduced using dual instruction sets, in which compact 16-bit instructions are decompressed into 32-bit instructions prior to execution. This shrinks the code, but increases the number of instructions to process (decompression instructions are added), sacrificing speed and requiring more clock cycles and hence more energy to run. Thus, a triple tradeoff constrains design decisions for embedded systems. Shin, Lee & Min (2004) demonstrate that the optimization problem is NP-hard and hence computationally intractable.
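The triple tradeoff can be made concrete with a toy numerical model. The sketch below uses the standard first-order CMOS approximations (dynamic energy per cycle scales with voltage squared; achievable clock frequency scales roughly linearly with voltage); all parameter values are invented for illustration and are not from Shin, Lee & Min.

```python
# Toy model of the size/speed/energy tradeoff. All numbers are
# illustrative assumptions, not measurements from the paper.

def run_stats(cycles, voltage):
    """Return (execution_time, energy) for a task, in normalized units.

    First-order CMOS relations: clock frequency scales roughly linearly
    with supply voltage; dynamic energy per cycle scales with voltage^2.
    """
    freq = voltage                    # f ~ V (normalized)
    time = cycles / freq
    energy = cycles * voltage ** 2    # E ~ N_cycles * V^2
    return time, energy

# Full 32-bit version of a task: fewer cycles, but bigger code.
t32, e32 = run_stats(cycles=1000, voltage=1.0)    # (1000.0, 1000.0)

# Compressed 16-bit version: smaller code, but ~25% more cycles for
# decompression -- slower and more energy-hungry at the same voltage.
t16, e16 = run_stats(cycles=1250, voltage=1.0)    # (1250.0, 1250.0)

# DVS claws the energy back by lowering voltage and clock speed,
# at a further cost in execution time.
t16_dvs, e16_dvs = run_stats(cycles=1250, voltage=0.8)
print(t16_dvs, e16_dvs)   # slower still, but energy now below the 32-bit case
```

Even in this crude model, no configuration wins on code size, speed, and energy at once, which is the bind the designers face.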

The intractability of the problem invites heuristic approaches for exploring different solutions. Shin et al. define four such heuristics, each of which reduces code size and increases the number of execution cycles while reducing clock speed. These algorithms iterate so long as they do not violate the system's real-time deadline and energy requirements, crunching through the task set and applying their adjustments to one task after another until there is nothing left to alter within the allowable constraints. Each algorithm targets tasks amenable to a different kind of adjustment. They are described below in PAEI order:

P – TM-ONLY: This algorithm will not sacrifice speed. It will reduce code size on tasks only when doing so does not slow them down appreciably. It will expend energy to do so, within the system’s outer limits. Tasks that would be slowed by code reduction are left alone. Economy yes, but never at the cost of speed. This is very much a P heuristic.

A – EZ-ONLY: EZ-ONLY favors tasks whose code can be reduced without increasing energy demands. Slowing down processes is not a problem so long as energy economy is being maintained or enhanced. Smaller, simpler code is good, so long as it does not require expending additional energy resources. This heuristic favors economy, in an A-like fashion.

E – DYN-MIX: The dynamic-mixed approach respects whichever of the two constraints, speed or energy use, is tighter on that task. A concession is made to that tight constraint, but there can be heavy impact on the looser constraint in the effort to reduce code size. The looser constraint becomes the opportunity horizon where the algorithm makes gains, as it pushes at system limitations.

I – FIX-MIX: Both constraints are respected, so code size is reduced only to the point where existing boundary values are preserved intact. If reducing code size any further would disrupt either speed or energy efficiency, this algorithm is happy to leave well enough alone. But if code size can be reduced with all constraints respected, then so much the better: it is done.
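One hypothetical way to render the four selection rules in code is as predicates over a candidate task. The field names below (per-task deltas and slacks) are invented for illustration; the paper's actual data structures are not shown here.

```python
# Hypothetical rendering of the four heuristics as selection predicates.
# Each task dict carries the deltas its code-size reduction would cause:
# d_time > 0 means the task would run slower, d_energy > 0 means it would
# burn more energy; slack_time/slack_energy say how far the task sits
# from each system limit. All field names are illustrative assumptions.

def tm_only(t):
    """P: never sacrifice speed; spending energy is acceptable."""
    return t['d_time'] <= 0

def ez_only(t):
    """A: never spend more energy; slowing down is acceptable."""
    return t['d_energy'] <= 0

def dyn_mix(t):
    """E: protect whichever constraint is tighter for this task,
    pushing gains into the looser one."""
    tighter = 'd_time' if t['slack_time'] < t['slack_energy'] else 'd_energy'
    return t[tighter] <= 0

def fix_mix(t):
    """I: reduce code only when neither speed nor energy is hurt."""
    return t['d_time'] <= 0 and t['d_energy'] <= 0
```

Read this way, the PAEI contrast is visible in the boolean logic itself: TM-ONLY and EZ-ONLY each guard one constraint, DYN-MIX chooses its guard per task, and FIX-MIX guards both.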

Shin et al. also ran a RANDOM algorithm for comparison, which selected tasks at random for code-size reduction within system limits, without considering the internal tradeoffs. Simulation runs where the other algorithms merely matched RANDOM's solutions were excluded: the speed and energy constraints in those runs were evidently so loose that any greedy optimizing algorithm would find the same solution.
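The shared iteration pattern described above, applied by all of the selection rules, can be sketched as a greedy loop. The task fields, the one-reduction-per-task simplification, and the RANDOM coin flip are all illustrative assumptions, not the authors' implementation.

```python
import random

def greedy_reduce(tasks, candidate_ok, time_limit, energy_limit):
    """Shrink tasks' code while system-wide time/energy limits hold.

    Each task is a dict with current 'size', 'time', 'energy' values and
    the deltas a code-size reduction would cause ('d_size', 'd_time',
    'd_energy'). candidate_ok is the per-heuristic selection predicate.
    """
    while True:
        progress = False
        for t in tasks:
            if t.get('done') or not candidate_ok(t):
                continue
            # Check the proposed reduction against the system-wide limits.
            new_time = sum(x['time'] for x in tasks) + t['d_time']
            new_energy = sum(x['energy'] for x in tasks) + t['d_energy']
            if new_time <= time_limit and new_energy <= energy_limit:
                t['size'] -= t['d_size']
                t['time'] += t['d_time']
                t['energy'] += t['d_energy']
                t['done'] = True        # one reduction per task, for simplicity
                progress = True
        if not progress:
            break   # nothing left to alter within the constraints
    return tasks

def random_pick(task):
    """RANDOM baseline: pick tasks by coin flip, ignoring the tradeoffs."""
    return random.random() < 0.5
```

The loop terminates exactly as the text describes: it keeps applying adjustments until no remaining candidate fits within the deadline and energy budget.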

DYN-MIX produced the best results against this particular set of tradeoffs, followed by FIX-MIX and TM-ONLY. EZ-ONLY did not greatly outperform RANDOM on this type of problem.

Shin, Lee & Min (2004). *Proceedings of the International Conference on Real-Time and Embedded Computing Systems and Applications (RTCSA)*.