While this topic would make sense to post in the aftermarket ECM section, the latest developments in "SD" chips are going to bring up "howzit work" questions. So...
An ECM's job (among other things) is to fire the injectors such that the correct amount of fuel is injected based on the current operating conditions of the engine. How much to inject becomes the task. The way it is usually done is to calculate the amount of air that made it into the cylinder, factor that by the air/fuel ratio, convert to pulsewidth based on the injector size, and there you have it.
But wait, how do you know how much air "made it in" to the cylinder?
With a MAF (Mass Airflow) system, the MAF sensor measures the airflow rate. Divide that by the engine RPM and by the number of cylinders that fire in one rev, and you arrive at the amount of air that gets into each cylinder on its intake stroke. Easy math.
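The MAF math above can be sketched in a few lines. This is just an illustration (the numbers and function name are mine, not from any particular ECM), assuming a 4-stroke engine where half the cylinders fire per revolution:

```python
def maf_air_per_cylinder(maf_g_per_s, rpm, cylinders):
    """Grams of air per cylinder per intake stroke, from a MAF reading."""
    revs_per_s = rpm / 60.0
    # On a 4-stroke, each cylinder fires once every two revs,
    # so half the cylinders fire in any one revolution.
    firings_per_rev = cylinders / 2.0
    return maf_g_per_s / (revs_per_s * firings_per_rev)

# Example: 20 g/s of airflow at 2000 RPM on a V8
# -> 20 / (33.33 rev/s * 4 firings/rev) = 0.15 g per cylinder
print(maf_air_per_cylinder(20.0, 2000, 8))
```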
With a Speed Density system, we calculate the air based on the current operating conditions and the pumping characteristics of the engine (the VE table). If the engine were 100% efficient, then every cycle the cylinder would fill with enough air that the pressure in the cylinder (or vacuum) would match what is in the manifold. Then the intake valve would close, and this air would get compressed and used for combustion.
But no engine (that we will ever see) is 100% efficient, so we use a big table of percentages to help us calculate exactly how much air got into the cylinder. Since the efficiency changes with throttle/MAP and RPM, the table is 3D: MAP and RPM are the axes, and VE is the value in each cell.
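Since the engine rarely sits exactly on a table cell, the ECM interpolates between the four surrounding cells. Here's a minimal sketch of that bilinear lookup (table layout and axis values are made up for the example, not from any real tune):

```python
import bisect

def _cell_frac(axis, x):
    """Find the cell index and fractional position of x along an axis."""
    if x <= axis[0]:
        return 0, 0.0              # clamp below the table
    if x >= axis[-1]:
        return len(axis) - 2, 1.0  # clamp above the table
    i = bisect.bisect_right(axis, x) - 1
    return i, (x - axis[i]) / (axis[i + 1] - axis[i])

def ve_lookup(table, map_axis, rpm_axis, map_kpa, rpm):
    """Bilinear interpolation in a VE table indexed [map_row][rpm_col]."""
    i, fm = _cell_frac(map_axis, map_kpa)
    j, fr = _cell_frac(rpm_axis, rpm)
    lo = table[i][j] * (1 - fr) + table[i][j + 1] * fr
    hi = table[i + 1][j] * (1 - fr) + table[i + 1][j + 1] * fr
    return lo * (1 - fm) + hi * fm

# Tiny 2x2 example table: dead center should average the four corners.
ve = [[60.0, 70.0],   # 50 kPa row
      [80.0, 90.0]]   # 100 kPa row
print(ve_lookup(ve, [50.0, 100.0], [1000.0, 2000.0], 75.0, 1500.0))  # -> 75.0
```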
So, with the engine displacement, MAP, and VE (plus a density factor based on air temperature), we can calculate the mass of air in the cylinder.
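Putting numbers to that: the density factor falls out of the ideal gas law, so the air mass is VE times cylinder volume times MAP, divided by the gas constant times intake air temperature. A sketch (my own function and units, just to show the arithmetic):

```python
R_AIR = 287.05  # J/(kg*K), specific gas constant for dry air

def sd_air_per_cylinder(disp_l, cylinders, map_kpa, iat_c, ve_pct):
    """Grams of air per cylinder per intake stroke, via speed density."""
    vol_m3 = (disp_l / cylinders) / 1000.0   # one cylinder's volume, m^3
    map_pa = map_kpa * 1000.0
    temp_k = iat_c + 273.15
    # Ideal gas law: m = P*V / (R*T), scaled by volumetric efficiency
    mass_kg = (ve_pct / 100.0) * map_pa * vol_m3 / (R_AIR * temp_k)
    return mass_kg * 1000.0                  # grams

# Example: 5.7L V8 at 100 kPa, 20 C intake air, 100% VE
# -> roughly 0.85 g of air in the cylinder
print(sd_air_per_cylinder(5.7, 8, 100.0, 20.0, 100.0))
```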
We then use our regular math, factor in the air/fuel ratio and injector characteristics, and we have injector pulsewidth.
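That last step looks like this, roughly. The injector flow rate and deadtime (the time the injector spends opening before it actually flows) are placeholder numbers I picked for the example:

```python
def pulsewidth_ms(air_g, afr, inj_flow_g_per_s, deadtime_ms):
    """Injector pulsewidth for one firing event, in milliseconds."""
    fuel_g = air_g / afr                      # fuel needed for this air mass
    open_time_s = fuel_g / inj_flow_g_per_s   # time the injector must flow
    # Add the injector's opening-delay offset (deadtime)
    return open_time_s * 1000.0 + deadtime_ms

# Example: 0.06 g of air at 15:1 AFR, a 2 g/s injector, 1 ms deadtime
# -> 0.004 g fuel, 2 ms flow time, 3.0 ms total pulsewidth
print(pulsewidth_ms(0.06, 15.0, 2.0, 1.0))
```

Whether the air mass came from the MAF math or the speed-density math above, this final conversion is the same, which is why the two systems can share so much of the rest of the fuel code.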
It's a lot of words, but does the above make sense?
Reply with questions, comments, corrections, clarifications.
Bob