IR Sensors

One of the more useful things to connect to an Arduino, at least when working with model trains, is an infra-red optical sensor. This is a device that reports a number that varies depending on how much infra-red light is falling on it. Infra-red (literally “below red”) is light that the human eye can’t see. Some normal light sources, like sunlight and light bulbs, contain infra-red as well as visible light, but for sensors usually a specific source is supplied so that there is a bright source focused on the sensor. When a train, or some other object, passes between the light and the sensor, the number reported drops by a substantial amount, and this can be used to detect the presence or absence of a train.

As with all of my Arduino projects, I’m using Arduinos based on the Atmel processors, i.e., the original, simple, Arduinos like the Uno and Mega, not the newer computer-like ones. I’m also using ones that operate on a 5 Volt power supply, rather than the 3.3 Volt kind. The latter can be handy for some kinds of circuits, but I prefer having 5 volts to work with, even though that may require more resistors and wasted power. The low-voltage kind are more useful if you need to operate off battery power, which I don’t.

Phototransistor Sensors

I’m using the Digikey 160-1065-ND & 160-1063-ND, which are Lite-on LTR-301 phototransistors and LTE-302 infrared LEDs, respectively. These are less than fifty cents each.

Types of Sensors

The fact that the detector is a phototransistor, rather than a photocell, is important. A photocell produces voltage from light, but only a small amount, and it needs to be amplified to be read reliably. It also doesn’t need a voltage source; it IS a voltage source, and that takes away one way to control it. A phototransistor, on the other hand, uses the small current produced by the received light to switch another circuit on. This means that there must be a voltage source. The following diagram shows two typical ways to use a phototransistor in a circuit, where Vcc is the voltage source (+5V, supplied by the Arduino itself for the Arduinos I’m using), Vout is the point read by the Arduino, and ground at the bottom of the diagram is the Arduino’s ground.


In the “Common-Emitter Amplifier” circuit, the output is read above the phototransistor. If the transistor is off (not illuminated), voltage will build up, and Vout will be at or close to Vcc, meaning about +5V. When it’s on, current flows through the phototransistor, and Vout will be lower (but not zero, due to R1 and the inherent current limit of the phototransistor). R1 is there to give the current a path to ground that limits the amount of current that can flow, preventing damage. The lower the resistance of R1, the lower Vout will read when it’s in its “low” state.

In the “Common-Collector Amplifier” circuit, things work similarly except that the pin being read is below the transistor. Thus when the transistor is off, R1 will pull the voltage to ground. When the transistor is on, voltage will “back up” due to the resistor, and Vout will read a non-zero voltage. Again the behavior of Vout depends on the value of R1: a smaller R1 will yield a lower “on” reading.

For the phototransistors I’m using, 2 KOhms is probably a good minimum (lower than that, and too much current will flow, although since this is a transistor, not a LED, that’s not as serious a problem as in LED circuits). That’s also roughly the value needed to make the sensor work in “Active” mode, where the value read will vary proportionally with the strength of the light.

Note: this number is calculated from the typical current through the transistor, which for these parts is between 0.6 and 2.4 milliAmps. In practice I found improved sensitivity up to about 5K Ohms, and I measured R1/R2 as working with 5K Ohm resistors (slightly better than with 2K Ohm ones). However, to be sure of the circuit working at its best over all normal photodetector variation, the size of these resistors probably shouldn’t exceed 2K Ohms (R = V/I = 5V / 0.0024 A = 2,083 Ohms).
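The bound can be checked with a couple of lines of arithmetic (the 5V supply and 2.4 mA maximum are the figures quoted above; the function name is just for illustration):

```cpp
// Largest R1/R2 value guaranteed to work across the phototransistor's rated
// current range: R = V / I_max. With 5 V and 2.4 mA this is about 2,083 Ohms,
// which is why 2K is a safe upper bound for these resistors.
double maxSafeResistor(double vcc, double iMaxAmps) {
  return vcc / iMaxAmps;
}
```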

This mode of operation (common-collector) counts on voltage backing up behind R1 whatever the current through the phototransistor. If the transistor is off, nothing flows and the Vout pin is pulled down to ground. If it is on at all (light is present), current flows, the high resistance makes it harder for that current to reach ground, and a voltage builds up. The more light, the more current (charge per unit of time), the higher the voltage that builds up.

This isn’t perfect, but with the common-collector design, the resistor can be calibrated to the level of light you want to detect, and used to craft a simple on/off control. In practical terms, that’s probably more bother than it’s worth, at least if you have the option of reading Vout with an Analog/Digital converter, which is what the Arduino’s Analog pins do, but it’s something I thought I’d want to look into. For these transistors a value of 8 KOhms or greater should be enough to make it work this way. In practice I found this unworkable, as the resistor value needed to be calibrated for ambient room lighting (incandescent lights have an IR component), and since that can vary, there was no way to do it when designing the circuit.

So, I ended up using Common-Collector (I could have used the other), and was forced into using it in Active Mode, using software to compensate for variations in room lighting.

Multiplexing Sensors

Analog pins are a limited resource on an Arduino, so it’s important to use them as efficiently as possible. While I could connect each sensor to its own pin, there’s a better way: multiplexing.

This is the basic concept: banks of LED/phototransistor pairs share analog pins, so each analog pin can be used to read more than one sensor. A digital pin serves as the voltage supply: turn it on, and all of the phototransistors connected to it begin to build up charge, either above the phototransistor (if off) or between the phototransistor and the lower resistor (if on). The latter is the voltage that is read by the analog pin. The current requirements for the phototransistors are under 2.5 mA each, so you aren’t likely to run out of power, assuming the LEDs are driven from a different supply.

My test circuit replicated one set of four sensors, arranged as two banks of two sensors each, as shown below. The LEDs were driven from a separate 12V DC supply and consumed about 30 mA total. I went through several iterations of the resistor sizes before settling on these values, which seemed to produce good results.

The LEDs and sensors are the Lite-On ones mentioned at the top of the page. The LEDs are infrared LEDs (marked L1 to L4 in the diagram below). The phototransistors are infrared-sensitive, and designed to work with these LEDs. Both are lensed and have sideways-oriented active areas, meaning that if the pins point down toward the ground, the light from the emitter will shine in a narrow beam across the tracks into the sensitive area of the phototransistor, which will turn on, allowing current through. In theory, there will be a substantial variation in output when a train passes in front of the emitter. In practice, very bright room lighting reduced the range of variation, because scattered light from the room lighting was also sensed.


There are four resistors in the Arduino portion of the diagram, not counting the one on the LED supply. R1 and R2 are the ones needed to limit the current to ground, which were discussed above, and Rs1 and Rs2 are “safety” resistors to protect the Arduino.

Arduino pins are driven by output transistors, which can be damaged if you short them to ground. That can leave you with an Arduino with a pin that will never work again. Resistors Rs1 and Rs2 prevent that from happening. A value of 330 Ohms will limit current from a +5V pin to about 15 mA, which is more than the phototransistors need to operate, but well below the 40 mA threshold where the Arduino risks damage. Because this circuit will eventually get installed on a layout, where wiring errors are quite possible, these provide a safety net to prevent me from damaging the Arduino. They won’t avoid every problem, but they’re a cheap fix for a likely one.

Note: the size of Rs depends on the maximum current, which depends on the number of phototransistors and how much they draw (which varies in practice), so you need to make the resistor a size that will allow the needed current through without dropping too much voltage. As only one bank is read at a time, and very little current leaks through R1 or R2 once charge has built up, you aren’t going to require more than 2.4 milliAmps. At that current, the resistor is dropping about 0.8 volts, leaving ~4.2 (or a bit less) available for the phototransistor to switch. This will limit the maximum reading from a sensor to about 4.2/5*1023 = 859, rather than 1023, but in practice that won’t be a problem. You could make the resistors smaller, allowing a higher sensor voltage at the cost of more current through the pin in a short, as long as you keep the value above 125 Ohms (which limits current to 40 mA). I may experiment with this to allow faster operation of the sensors, but I’ve found 330 Ohms to work well in practice.
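The arithmetic in that note can be sketched as code (a check of the numbers, not part of the sensor sketch; the function names are illustrative):

```cpp
// Voltage dropped across the safety resistor at a given current (Ohm's law),
// and the resulting ceiling on a 10-bit analogRead value (0-1023 on 5 V).
double voltsDropped(double ohms, double amps) { return ohms * amps; }

int maxAdcReading(double vcc, double vAvailable) {
  return (int)(vAvailable / vcc * 1023.0);
}
```

At 330 Ohms and 2.4 mA the drop is about 0.79 V, capping readings around 860, in line with the ~859 figure above.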

The LEDs are always on. This means that they can’t be located where a phototransistor can “see” two LEDs, but that shouldn’t be a significant problem, as the angle of the output light is 40 degrees. I found the sensors to be very narrowly focused, and sensors an inch (25mm) apart (side to side) did not pick up significant light from an adjacent LED on the other side of an N-scale track (about 1”, or 25mm, away).

Because the LEDs are always on, this means that I don’t waste any pins controlling them, and their power can come from a separate supply and not subtract from the Arduino’s power budget.

The “multiplexed” part is that D1 being raised will cause P1 and P2 to become active. If A1 is then read, the state of P1 will be known, and if A2 is read, the state of P2 will be known. If D1 is set low, and D2 is raised, then P3 and P4 will become active, and reading A1 will report the status of P3, while reading A2 will report the status of P4. This will let me read four sensors using two Analog pins (or 8 with four).

One thing I discovered was that it was necessary to wait a short time after turning the digital pins on, before reading the analog pins. This allows charge to build up at the analog pin. The required delay is only a few tens of microseconds (I’ve used 100 microseconds to be conservative), but it makes a big difference in the readings.
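Putting the bank-scan sequence and the settling delay together, the read loop looks something like the sketch below. The pin numbers and fake sensor values are illustrative only, and the Arduino calls are stubbed so the control flow can be followed (and compiled) on its own; on a real board the stubs disappear and only the scan logic remains.

```cpp
#ifndef ARDUINO
// Desktop stand-ins for the Arduino calls, so the scan logic can be compiled
// and traced off-target; a real board provides these in its core library.
enum { LOW = 0, HIGH = 1 };
static int activeBank = -1;
void digitalWrite(int pin, int level) {
  if (level == HIGH) activeBank = pin;
  else if (activeBank == pin) activeBank = -1;
}
void delayMicroseconds(unsigned int) {}
// Fake phototransistors: the bank on pin 2 reads "lit", any other "dark".
int analogRead(int) { return (activeBank == 2) ? 700 : 15; }
#endif

const int NUM_BANKS = 2, SENSORS_PER_BANK = 2;
const int bankPin[NUM_BANKS] = {2, 3};           // digital pins driving D1, D2
const int sensorPin[SENSORS_PER_BANK] = {0, 1};  // analog pins A1, A2
int reading[NUM_BANKS][SENSORS_PER_BANK];

// One full scan: raise a bank's digital pin, wait for charge to build at the
// analog pins, read each sensor in the bank, then drop the pin and move on.
void scanBanks() {
  for (int b = 0; b < NUM_BANKS; b++) {
    digitalWrite(bankPin[b], HIGH);
    delayMicroseconds(100);  // conservative settling time, per the text
    for (int s = 0; s < SENSORS_PER_BANK; s++)
      reading[b][s] = analogRead(sensorPin[s]);
    digitalWrite(bankPin[b], LOW);
  }
}
```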

This design can easily be generalized to handle more sensors. It helps to think of these as “banks” of sensors, with each bank connected to one digital pin. There can be as many banks as there are available digital pins. Each bank can contain as many sensors as there are analog pins. Each sensor could require 2.4 mA, and that adds up, so there is a limit of around 16 sensors per bank (and it’s safer to limit yourself to a bit less than that), and probably 40 or 80 (depending on Arduino model) total if the Arduino isn’t doing anything else. You’re likely to hit other limits (like time available to read them) first.

Reading Sensors

Actually reading the sensors requires software to switch the digital pins on and off, and to read the corresponding analog pins. And, because the real world is a messy place, simply reading a number from the sensor may not be enough. It can be, depending on your needs, but for more reliable operation you’ll need to average out several readings, through a method called “smoothing”, to make the sensor numbers more reliable, and in most model railroad operations you’ll also want to avoid misreading them, through a method called a “hold-down timer”. I spent some time developing software to do just that, and have packaged it up as an Arduino library.

Design Issues

When the phototransistor is exposed to light, it takes some time to turn on. This is measured in microseconds, and not many of them (10-15 per the datasheet). A “read” on the analog pin takes a relatively long time (~110 microseconds), so I didn’t think I needed to worry about that part. I was wrong, and I think the reason is the current limiting done by the safety resistor (allowing about 15 mA max), plus the leakage allowed through the main resistor at the bottom of the circuit (2.4 mA). It takes time for this limited current to build up a charge (voltage) to be read by the analog pin.

A phototransistor isn’t just an on/off device. It’s “partly on” with a little light, “more on” with more light, and “really on” when brightly lit. The more “on” it is, the more current it lets through, but if any is getting through it’s going to build up on the output side. And that means that even a “dark” sensor is probably getting enough infrared from room lights to cause a “false positive” when it gets read if enough time has passed. One fix for this is to read the pin twice (which obviously takes twice as long) and discard the first reading, which will be the potentially false one. I tried that, and discovered that it wasn’t really necessary. As long as I was reading the sensors fairly often (once a second) the two values were almost identical in most cases.

But the difference between high and low varied quite a bit. With room lights off, and just the LEDs, “off” was 0-20 or so (on a 0-1023 scale). On was around 600 – 800 (except for one phototransistor, which seemed to be defective, and was around 300 when “on”; I think its lens is bad and not focusing on the LED). When I bathed the test area in light from a halogen desk lamp (a worst-case incandescent-light test), “off” was around 400 – 600 and on was around 750. So I needed to make my software sophisticated enough to adapt to changing room lighting and figure out what “off” is as lighting varies (from people leaning over the layout, etc).

Finally, because this is being used in a control program, I want to be able to do other things that require fairly close timing (on the order of a millisecond) while also working with the sensors, so I can’t just read every sensor in one pass. The program needs to allow reading things on a more granular level, to give me time to do other things. As long as I keep the number of sensors in a bank small, this is relatively easy to do since I can just read one bank, go do other things, then come back and read the other. If I wanted to break banks up into subsections, it would get a lot more complicated, but I don’t need to do that for the quantity of sensors I’m going to be working with.

Sensor Smoothing and Adaptation

Smoothing is a complex enough topic that I have a whole page dedicated to discussing it in detail, so I’ll just summarize the important parts here. The basic idea of smoothing is that any new reading is averaged with past readings, through a technique known as an “Exponential Moving Average”. This causes big changes to take effect slowly, and if the big change doesn’t last, it mostly gets ignored. Thus if some electrical noise caused a sensor to go from 200 to 400 for a millisecond or two, then fall back, the sensor reading might go from 200 to 250, but it wouldn’t change any more than that, avoiding a “false positive” report that some real-world event had happened.
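To put numbers on that example: with a weight of 1/4 (my assumption here; the text doesn’t fix the weight at this point), a one-read spike from 200 to 400 moves the average only to 250, and it starts decaying back as soon as the input returns to normal:

```cpp
// Plain floating-point Exponential Moving Average:
//   newAvg = weight * sample + (1 - weight) * oldAvg
double emaUpdate(double avg, double sample, double weight) {
  return weight * sample + (1.0 - weight) * avg;
}
```

With weight = 0.25, feeding 400 once from an average of 200 gives 250; feeding 200 again gives 237.5, on its way back down.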

A potential downside is that the program won’t “see” real changes immediately, but if you’re reading the sensor every few milliseconds that doesn’t really matter. An N-scale bullet train moving at 300 scale kph (186 mph) is moving about half a meter (18”) per second, which works out to half a millimeter per millisecond. You can take tens of milliseconds to detect that the front has arrived, before even the tail end of the first car has moved past the sensor. I’m using a smoothing algorithm that will allow changes to be detected after about 3 read cycles, so if I read these every three milliseconds (which is reasonable) then the train has moved at most 4.5mm (~3/16”) before I detect it. And most trains move a lot slower than bullet trains.

Smoothing avoids problems caused by sudden changes. For example, a bit of electrical noise on the wire, or a flash from a camera. Electrical noise is likely injected from AC sources, and the slowest of these is probably going to be wall current, at 50 or 60 Hz depending on where you live. This causes a rise and fall over about 8 or 10 milliseconds, so if you wait 8 milliseconds to react, the change is gone. If you were to wait only four, the change might be detected (depending on how synchronized your reads were with the cycle). This means I should read a specific sensor no more often than once every 2.7 milliseconds if I’m using a 3-sample smoothing approach. Camera flash is much less of a problem. The longest lasts about 1/500-second, or 2 milliseconds, so if I read even once a millisecond, the change is gone before it can be reported.

Another, and likely more serious, source of electrical noise is the solenoid-based turnout. When one of these is thrown, voltage goes from zero to a high of 12-16 volts very quickly, and this can induce a current spike in nearby wires. It will be short-lived, as the voltage applied to the solenoid is DC, and only a changing voltage can induce a current. Although sensors are often located near turnouts, it’s a good idea to try to keep the wires run separately, and cross at right angles where necessary. But there is likely to be some induced noise when a turnout is thrown, and smoothing can minimize the impact of that.

There can be longer-scale changes: a person leaning over the layout, or a door opening slowly that lets more light in. Most of these will be relatively distant, and thus affect multiple sensors, so having the software react to changes in room lighting over a longer time scale lets the sensors adapt to these. My light adaption uses a smoothing algorithm that takes 11 cycles to reach the half-way point (at 3 msec reads that’s 33 msec to get halfway). Most human motion happens on a scale of a few millimeters per millisecond (or meters per second), so things with human causes will probably be smoothed out by this (trying to make the smoothing take longer for slow-moving changes may ignore problems it needs to react to). If this ends up ignoring things it shouldn’t, I can switch to one that adapts twice as fast, but that may react too quickly.

Software Design

The software to read the sensors wasn’t trivial. This is analog circuitry, even though I’m using it to get a digital result. And analog is messy; the software needs to deal with that, handling the turn-on delays and the changing room lighting described under Design Issues above.

Sensor Smoothing

When working with analog inputs, sometimes one value read in a series will be significantly different from the preceding and following ones. This can be due to a number of factors, but usually these factors aren’t important (e.g., electrical noise, a collision sensor detecting vibration in itself, etc). The solution to this is called “smoothing”, a method for adapting to longer-term changes while limiting the impact of short term ones. These are usually based on some form of moving average.

There are a vast number of smoothing algorithms, and some sensor controllers implement these in hardware so you don’t have to program them in software. But sometimes it’s useful to have a simple one that you can apply in software, and I’ll describe the behavior of a set of algorithms based on “exponentially weighted averaging” where the weight is the reciprocal of a power of two. Using a power of two allows for more efficient execution of this smoothing on a simple processor like the Arduino’s.

This is based on an algorithm described by Alan Burlison on his website, which avoids floating point math and division operations, both operations that take “forever” on a chip this simple, given the speeds I’ll be working at.

Digital Filters

Specifically, the solution to a couple of my problems is a technique, or rather a set of techniques, called Digital Filtering. To even out the fluctuations in the sensor reading due to noise in the wires, people walking past the light, and similar things, a digital “low pass” filter is used. To detect large-scale changes when a tram blocks the light from the LED, and separate those from random variation of the sensor readings over time, a digital “high pass” filter is used. Both of these are software algorithms I can write (or rather copy and modify from people who have done similar things before).

Many people who set out to create low-pass filters don’t do this very efficiently. I’ve seen programs that stored past values and recomputed an average each time. That’s a lot of extra instructions for something you want to do often (it’s fine if you’re checking a sensor once a second, but not if you’re checking a half-dozen once a millisecond). Using an Exponential Moving Average only requires the current sensor reading and the last computed average to implement a low-pass filter. Done efficiently, this is very fast.

The basic idea of an Exponential Weighted Moving Average (see “Exponential Moving Average” in the wikipedia Moving Average link above) is that each new sample is multiplied by a “weight” between zero and one, the past average is multiplied by one minus the weight, and the two are added to form a new average. The closer the weight is to one, the more important recent samples are, and the more quickly the average will adapt. Conversely, the closer the weight is to zero, the slower the average will adapt.

With reciprocal powers of two (e.g., 1/2, 1/4, 1/8, etc.), the fastest-adapting one is 1/2, and higher powers of two take increasingly long times to adapt.

The Algorithm

The “time” to adapt depends on the frequency of samples collected and averaged. If you collect samples once a millisecond, an algorithm that converges in 10 steps takes ten milliseconds to converge, while if you collect samples every 20 milliseconds, the same algorithm takes 200 milliseconds (two tenths of a second) to converge. Choosing the right algorithm thus depends both on how often you are collecting and averaging new data readings, and how quickly you need to be able to respond to changes.

The actual calculation is quite simple:

newval = (alpha x sample) + (1 – alpha) x oldval

or to put it a bit closer to what we’re going to be doing, for power of two N (where N= 2, 4, 8, etc):

value[i+1] = ( (1/N) * sample ) + ( (1 – (1/N)) * value[i] )

Alan’s version of the algorithm makes some clever adaptions based on the fact that N is a power of two to perform this math in a fixed-point representation, using bit-shifting to avoid division (Arduino processors have no hardware divide, so division is very slow). His original implementation was for sensors scaled to a 0-100 range, which allowed him to work with ints (if you can do that, working with ints is better than what I’m doing). But analogRead returns 0 – 1023, and to work with that, the intermediate value needs to be kept in a long rather than an int.

Using alpha=1/2 yields an extra optimization, as one of the multiplies can be removed. It also uses fewer shifts (each bit shifted is one instruction cycle), so it’s faster there too. In general, the larger N gets, the slower the smoothing will be (although for most purposes it’s way too fast for the differences between the various values of N to matter in most applications).

Since I want to work on a number of sensors at once, I’m keeping them in an array of sensors, and breaking value[i] and value[i+1] into separate arrays, so I originally had three arrays the size of my number of sensors: sensorValues (the newly read readings), interimValues (my fixed-point moving average in a long), and smoothValues (my sensor readings after smoothing). Eventually I realized that all I needed to save was the “interim” value, which is the moving average in fixed-point form. When I need to use it as a normal int I can convert one value back to an int (the second line that generates “smoothNxxValues[i]”) where it’s needed.

I’ll leave the longer version here for reference, see the library for the new version.

int sensorValues[SENSORS];
int smoothN16Values[SENSORS], smoothN2Values[SENSORS];
long interimN16Values[SENSORS], interimN2Values[SENSORS];

// alpha = 1/16, with fixed point offset of five bits (multiply by 32)
// multiply by 32 divide by 16 is multiply by 2 (left shift 1),
// while multiply by 15/16 is multiply by 15, then right shift 4
// max val is 1024 x 32 x 15 = 491,520 so interim must be long
interimN16Values[i] = (long(sensorValues[i]) << 1) + ((interimN16Values[i] * 15L) >> 4);
smoothN16Values[i] = int((interimN16Values[i] + 16L) >> 5);

// alpha = 1/2, with fixed point offset of five bits (multiply by 32)
// multiply by 32 divide by 2 is multiply by 16 (left shift 4),
// while multiply by 1/2 is right shift 1 (eliminates an extra multiply operation)
// max val is about 1024 x 32 = 32,768, past the int limit, so interim must be long
interimN2Values[i] = (long(sensorValues[i]) << 4) + (interimN2Values[i] >> 1);
smoothN2Values[i] = int((interimN2Values[i] + 16L) >> 5);

Note that interimN2Values and interimN16Values are the smoothed values in fixed-point form. As mentioned above, you don’t actually need to keep both these and the “smoothed” equivalents around. Because of the rounding used to generate the int versions of the smoothed values, it’s best to store the long interim values and just generate the smoothed ones on the fly as needed.
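A sketch of what that looks like as code (my reconstruction, not the library’s actual source), for the alpha = 1/16 case:

```cpp
// Fixed-point EMA with alpha = 1/16 and five bits of fractional offset (x32).
// Only the long interim value needs to be stored per sensor.
long smoothUpdateN16(long interim, int sample) {
  // (sample * 32) / 16 is sample * 2; the old value is scaled by 15/16.
  return ((long)sample << 1) + ((interim * 15L) >> 4);
}

// Convert an interim value back to a 0-1023 int only when it's needed,
// rounding before dropping the five offset bits.
int smoothedInt(long interim) {
  return (int)((interim + 16L) >> 5);
}
```

Feeding a constant 300 from a cold start, the first step lands near 300/16 and the value converges on 300, matching the adaption behavior discussed below.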

Adaption Rates

I wrote a simple program to execute several algorithms against “before” and “after” values, so I could chart them. In all of these, at time=0 the sensor is returning the “before” value, and at time =1 and later, the sensor is returning the “after” value, so the chart is showing how long (in read-and-smooth cycles) it takes each algorithm to adapt to the new value. In the diagram below I’ve charted the rise from a sensor reading of 10 to one of 300 over 50 cycles, for N = 1/2 (blue), 1/4 (red), 1/8 (green) and 1/16 (purple).

Adaption from 10 to 300 over 50 cycles for A=1/2, 1/4, 1/8 and 1/16

What’s worth mentioning here is that for a new value of 300, at step 1 the average value is very close to 300/N (so 150, 75, 38, and 19 respectively). The actual values were (155, 82, 46 and 28). Counting “converged” as meaning “within 1% of the final value”, N=2 (alpha = 1/2) took 7 steps to converge, N=4 took 16, N=8 took 34 and N=16 took 70 (even after 100 steps, N=16 hadn’t quite reached 300; it was 299 from step 84 onwards). A more useful measure might be how long it takes to get halfway. For N=2 that was step 1 (pretty much by definition), while for N=4 it was step 3, for N=8 step 6, and for N=16 step 11.

Let’s consider a smaller change: going from 100 to 150. The curves look similar so I won’t bother with the graph. To get halfway (125) for the four algorithms took 1, 3, 6 and 11 steps. Seeing a pattern? Looking at 10 to 1010 yields 2, 3, 6, and 11. The only reason N=2 was different was a bit of rounding error: at step 1 it was 505, halfway from 0 to the high value, but the starting value was 10, so halfway to the end value would be 510 and it wasn’t quite there.
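That 1, 3, 6, 11 pattern can be reproduced with a few lines of simulation (floating point here for clarity; the fixed-point version behaves the same to within rounding):

```cpp
// Count read-and-smooth cycles for an EMA with alpha = 1/N to get at least
// halfway from `from` up to `to` (the rising case charted above).
int stepsToHalfway(double from, double to, int N) {
  double alpha = 1.0 / N;
  double half = from + (to - from) / 2.0;
  double v = from;
  for (int step = 1; step <= 1000; step++) {
    v = alpha * to + (1.0 - alpha) * v;
    if (v >= half) return step;
  }
  return -1;  // unreachable for 0 < alpha <= 1
}
```

For 10-to-300 this returns 1, 3, 6 and 11 for N = 2, 4, 8 and 16, and the same counts for 100-to-150: the halfway time depends on N, not on the size of the change.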

Let’s look at decay curves. Here’s the graph for 150 down to 100: it looks pretty much the same as the rising curve.

Adaption from 150 to 100 over 50 cycles for A=1/2, 1/4, 1/8 and 1/16

In terms of metrics it’s the same too: from 150 to halfway (125) took 1, 3, 6, and 11 steps again.

What this means is that the absolute rate of change varies depending on how big the change was, so if you’re looking for a 10-point change, it makes a big difference if the actual value jumped by 20 points or 200.

Smoothing Speed

The other difference between the algorithms besides how quickly they converge on a new value is how much processing time they take. Frankly, compared to the hundreds of microseconds it takes to read a value from a sensor, the difference between them is negligible.

Taking the four algorithms described here, packaging them as a function call that processed a set of 8 sensors (so the function call overhead gets spread over the eight of them) and then calling that 1,000 times (and subtracting some other measured overhead) gave me timings for a single sensor smooth of N=2: 7.17 microseconds (usec), N=4: 8.06 usec, N=8: 9.12 usec, and N=16: 9.38 usec.

So regardless of which is used, the variation between them is barely two and a quarter microseconds per sensor read. Even knowing I’ll be doing between 4,000 and 8,000 of these per second in my application, that’s a total difference of less than 18 milliseconds per second (or 1.8%).

A more useful number is how much delay this adds to a cycle around loop(). In my application, I’m probably going to take about 2 milliseconds to make one loop, and I’ll be processing at most 8 sensors. This means that the longest algorithm adds 75.04 microseconds per 2,000, or 3.752%. The shortest adds 57.36 usec, or 2.868%, so there’s just under a 1% variation in the effect on the application timing. This tells me that I can use any of these algorithms without needing to worry about this aspect, even for the fairly time-sensitive application I’m planning.

Using With IR Sensors

Now the reason I’m concerned with this is that I want to process the sensor readings coming from a bunch of Infra-Red Phototransistors that I’m using to detect moving trains. The values reported by these jump around quite a bit, due to environmental noise and random room lighting changes (unfortunately a lot of room light has an infrared component). In a dimly-lit room, “off” is very close to zero (below 10), while “on” varies from around 250 to 500 the way I have them set up, depending on the particular LED and sensor (there’s a lot of individual component variation).

So what this means is that if I want to detect an upward move of 100 points, and I’m sampling my sensor every two milliseconds, N=2 will do this in 2 msec (1 cycle), N=4 in 4 msec (2 cycles), N=8 in 8 msec (4 cycles) and N=16 in 14 msec (7 cycles).
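Those cycle counts fall straight out of the moving-average formula; a quick check (floating point, assuming the reading jumps from 0 to 300 and detection means a 100-point rise):

```cpp
// Cycles for an EMA with alpha = 1/N to rise by `threshold` points after the
// input jumps from `from` to `to`.
int cyclesToDetect(double from, double to, double threshold, int N) {
  double alpha = 1.0 / N;
  double v = from;
  for (int step = 1; step <= 1000; step++) {
    v = alpha * to + (1.0 - alpha) * v;
    if (v - from >= threshold) return step;
  }
  return -1;
}
```

This gives 1, 2, 4 and 7 cycles for N = 2, 4, 8 and 16, which at a 2 msec read interval is the 2, 4, 8 and 14 msec quoted above.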

For an N-scale model train, 40 kph (24 mph) is about 74 mm/sec, or 0.074 mm/msec. So even with N=16 the train will barely move one millimeter before I detect it, assuming I’m measuring an offset of 100 out of 300 as detection.

If I want to use the halfway mark, ((300 – 10)/2) + 10 = 155, then this becomes 1 step, 3 steps, 6 steps and 11 steps as described above, but even then, when using N=16 the train is still only moving 1.63 mm before it is detected.

So any of these algorithms will work fairly well. Higher values of N will ignore noise better, but even N=2 is going to ignore a single sample that’s less than twice my detection threshold. So if I’m looking for half the difference in sensor levels (150 in a dim room), then a random event would need to be 300 points out of 1024 (or about 1.5 volts) to cause a false positive. N=4 would double that, to about 600 points or 3 volts, and N=8 would quadruple it, which can’t happen on a 5 volt sensor.

Now if the range is tighter, the detection band will be smaller. In a really brightly lit environment (my workbench with a halogen task light a foot above the sensors), the difference between “off” and “on” was as low as 150 points, so detection would need to sense a 75-point shift. For my four algorithms, that works out to 1, 3, 6 and 11 cycles again (it’s still half the difference), but the absolute values are higher (600 for “off”, 750 for “on”), and while N=2 now needs just 150 points (0.73 volts) to trigger a one cycle false positive, N=4 needs 300 points or 1.5 volts, and no value possible with analogRead will cause N=8 or larger to change that fast (upwards).

Going downward in a brightly-lit room is pretty much the same case as going up from zero, except that I think it’s harder for noise to generate shifts that significant, as it would have to take voltage below that produced by ambient light (a loose wire or bad solder joint could do it, but if you’ve got one of those the sensor won’t be working reliably anyway).

To be perfectly safe, I should use N=8 (adaptation in 6 cycles). Using N=2 will be faster (detecting changes typically in a single cycle), but it is at risk if random variation exceeds the off/on sensor level difference. Normal variation without external noise sources is rarely more than +/- 2 (out of 1024), far below the several hundred I’d need to worry about.

But the bottom line is that unless there’s reason to expect a lot of induced noise in the sensor wiring (which there could be in a model railroad), I can choose the algorithm without needing to worry about the convergence time. (Note that if I cared about how long it takes to converge on a stable final value, N=16 with its >100 cycle convergence time would likely pose some problems.) I do actually care a little about convergence time, as it affects my ability to measure the full range of the sensor, which I need to do to adapt to variations in room lighting. But I’m doing that on a timescale of seconds, so practically speaking even there N=16 isn’t likely to be a problem.


Sensor Tests

The sensor behavior was complex enough that I had to run a number of tests before I was sure I was handling them correctly. This page records the results of the most interesting ones. For these tests, what I’d do is keep an array (or multiple arrays) and each time around loop (or on some other frequency) I’d put the relevant numbers into the next free cell of the array(s). Since there isn’t enough SRAM for the array to be as long as I’d like, I used short arrays and had recording start at some offset from an event, or I’d treat the array as a circular buffer (running off the end looped back to overwrite the beginning) and stopped recording some fixed time after the event of interest. Then I’d print out the numbers in the array(s).
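A sketch of the circular-buffer recording approach (names and sizes here are illustrative, not the actual test code):

```cpp
#include <cassert>

// Circular capture buffer: samples wrap around until recording stops,
// then dump() returns them oldest-first. SIZE is small because of SRAM.
const int SIZE = 8;
int buf[SIZE];
int head = 0;       // next free cell
int count = 0;      // total samples ever written

void record(int value) {
    buf[head] = value;
    head = (head + 1) % SIZE;   // run off the end -> overwrite the start
    ++count;
}

// Copy the captured samples into 'out' in chronological order;
// returns how many samples were copied.
int dump(int out[]) {
    int n = count < SIZE ? count : SIZE;
    int start = (head - n + SIZE) % SIZE;
    for (int i = 0; i < n; ++i) out[i] = buf[(start + i) % SIZE];
    return n;
}
```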

Printed numbers were saved as text, and imported into an Excel spreadsheet for processing and graphing.

Testing the Algorithm

The example I found uses an “alpha” of 1/16, or 0.0625. This takes a long time to adapt to changes in light levels (i.e., it’s doing more “smoothing” than is necessary for what I need). The algorithm can easily be changed to use any 1/N alpha, and I adapted it to use an alpha of 1/2 (0.5 or N=2), which works much better. The following is a graph of samples taken using a test program, showing the raw sensor values (blue), the lightly smoothed (N=2, red) and heavily smoothed (N=16, green) values:

IR Sensor Falling Response Time graphed over 80 cycles of 1.5 msec each

It’s a bit hard to read, but where the raw values dropped from 165 to 10 in 21 read cycles, the fast smoothing did the same drop in the same time (taking only one cycle longer to make the initial drop from 165 to below 65), and the slow smoother took 63 cycles to drop to 10, and 22 just to drop below 65. Each step recorded here took 1,470 microseconds +/- 10 microseconds (or just under 1.5 milliseconds per cycle).

Looking more closely, you can see how the blue line drops slightly below the red at first, representing how smoothing delays responding to sudden changes (this is probably while the obstacle is still moving in front of the sensor and light is only partially blocked), and then as the sensor value drops steeply, the fast-smoothed curve lags it only slightly, while the slow-smoothed curve lags far behind.

IR Sensor Falling Response Time over 20 cycles

Rising is similar: the raw value actually goes from 0 to 163 (its maximum) in just two steps. The lightly smoothed curve takes 8 steps to maximum and only two to rise above 100, and the heavily smoothed curve takes a whopping 79 steps to maximum, and 15 to exceed 100.

Oddly, during my testing I observed some situations where the fast-smoothing algorithm took more than ten cycles to rise 100 points. I’m not entirely sure what caused that, but there would appear to be some variation in sensor response from one time to the next.

IR Sensor Rising Response Time over 80 cycles

IR Sensor Rising Response Time over 20 cycles

If you’re curious, I generate these graphs by storing each cycle’s current values in an array, the index for which resets back to zero when it gets to the end. When my trigger event (going up by 10 in one cycle) occurs, I print the whole array out to the serial monitor. Then I cut/paste the numbers into a text file, import to Excel as CSV values, and use Excel’s graphing function to draw a picture of a subset of the data points.

Room Light Test

Let’s look at some real sensors and real lighting. For these, I’ve updated my code to use N=4 for the main smoothing (so the red line below is adapting to half the difference after 3 cycles), and I’ve also added a number, called “Adjusted”, that tracks the average over all smoothed sensors with additional N=16 smoothing, which is used to reset the definition of HIGH when ambient lighting changes. This means that if room light jumps by 500 points (a very extreme change), the definition of HIGH won’t change by more than 31 points in a single cycle, and that means that the difference between LOW and HIGH would need to be less than 62 points for a false detection of an occupancy change.
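The arithmetic behind that 31-point cap is just the N=16 update applied to the tracking value; a minimal sketch (the two-stage structure and names here are my paraphrase of the description above, not the actual code):

```cpp
#include <cassert>

// The "Adjusted" HIGH reference gets an extra N=16 smoothing stage, so
// even an extreme 500-point jump in room light moves it by at most
// 500/16 = 31 points in one cycle.
int adjusted = 100;   // current HIGH reference (hypothetical start value)

int updateAdjusted(int averageSmoothed) {
    adjusted += (averageSmoothed - adjusted) / 16;
    return adjusted;
}
```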

The following chart is from one sensor in a set of four. The sensors are LTR-301 phototransistors lit by LTE-302 IR LEDs (set up per my Sensors design), with values recorded for the first sensor in the set. With the IR LED alone illuminated in a room lit with dim LED lighting (no IR component from the room LED lights, sensor readings under 10 with the IR LEDs off) the sensor reported values around 285 for HIGH, and under 10 for LOW, roughly a 140-point detection threshold. With a lamp placed so that its exposed 60W bulb was 22” from the sensor, on-axis but up 45 degrees, the sensor reading went up to 450 for HIGH and 50 for LOW, for a 200-point threshold.

Using a program that recorded readings for each cycle (roughly once per millisecond) for a full second, I then placed a solid object blocking the direct light from the bulb (some indirect light still made it to the sensor, and HIGH readings went up to about 340) and recorded the following graph. The X-axis is time in milliseconds, and the Y-axis is the result returned by analogRead, either unprocessed (raw) or after being smoothed.

Raw IR sensor (blue), N=4 smoothed (red) and N=16 smoothed (purple) values as sensor goes clear

The above chart shows the raw sensor value (blue wiggly line), the average HIGH value over all sensors (red line), and a line (purple) created from the heavily-smoothed “difference from average” value used in my sensor processing logic added to the average. Note: the blue line is wiggly because it’s showing the effect of 60Hz AC power on an incandescent lightbulb (the filament temperature varies over time), although at worst this was about a 16-point swing and the smoothing removed almost all trace of it.

The intent here is that the purple line represents the value actually compared with a smoothed sensor reading each cycle to determine if the sensor is occupied or not. It needs to track room light changes over all sensors, but smooth them out so that the cycle-to-cycle difference between the actual sensor value and the high value is never too large, even on a sudden change.

Now what this shows is interesting. It takes a long time for the light level to rise, and I think that’s mostly due to how quickly I could remove the obstruction (75 msec to full brightness is a long time for a sensor, but a short time for human muscles). This could reflect a variation in room light from someone moving an arm in front of a nearby table-lamp, for example.

As the room light rises, you can see how the purple line initially falls below the blue line, and then about halfway up starts to rise above it. That’s exactly the behavior expected. Additionally, while the maximum cycle-to-cycle variation of the sensor was 10 points, the maximum of the purple line number was just 4.

BTW, I think the reason the red line ends up well above the blue is that some of the detectors are much more sensitive to off-axis light than the one I charted in blue, so the average ended up higher after increasing the off-axis light.

This test showed that my idea of using N=16 smoothing was producing usable results, and gave me an idea of how quickly it would adapt and how closely it would track the N=4 smoothed number I was using for sensor results. Having this as a reference both confirmed that my approach was good, and let me structure the adaptive code that keeps the sensors returning useful results even as room light changes.

Early Testing

My first tests were simply to prove that I could read the sensors accurately and to work out initial bugs in the software. The following records the testing I did at that time.

I set these up for initial testing on a simple breadboard (below) which just happens to be wide enough for the track to fit between the two rows. In addition to the four pins on the Arduino, the black wire (top center) connects to the logic ground pin on the Arduino, and the two connectors on the lower left (orange and blue jumpers sticking off the edge) connect to the 12V supply powering the LEDs.

Test rig (this version was using Digital pins 0 and 1, which was a bad idea)

The test rig has LEDs along the bottom (those are the ones with yellow dots on the top; they’re hard to see), with power supplied by two connections on the bottom left. The phototransistors are arranged along the top (the ones with red dots), and these have the wires to the Arduino. The jumpers on the board are a bit of a mess because I had to use ones that would lay flat, since I’m going to put track on top of them, and those only come in specific lengths.

The two things that look like multimeter probes (because they normally are) are connections to my 12V bench power supply, which happens to use the same shielded jacks as a multimeter, so I can use multimeter cables with clip ends to connect to things.

Here’s another picture:

Side View

It may not be obvious, but if you number the sensors from left to right, they’re actually read in the order: 2, 4, 1, 3 or 1, 3, 2, 4 (assuming I vary the digital pin only after reading both analog ones, which is the way my program works). That wasn’t intentional, I just hadn’t thought about how the software would work when I wired them up. The order doesn’t really matter, since it takes about half a millisecond to read all four (at least in the simplest form of my program), and at that speed a train passing in front of them isn’t going to move more than about a tenth of a millimeter between reads, and probably much less.

Test 1

Now my first several attempts didn’t have a running train, I simply wanted to read sensors and print out their values, repeating once a second, so I could get a feel for what the program was doing and how sensitive the sensors were. This used a very simple program that read the four sensors into an array along with the times they were read, printed out the two arrays (time and value) and then paused for a full second before repeating. This let me do some basic testing with ambient light changes (turning lights on and off) and resistor values, to get a feel for how things worked before I spent time on anything more sophisticated.

My initial testing was done with 12K Ohm pull-down resistors, because I misplaced my pack of 10K ohm ones. The LEDs were driven with a 330 ohm resistor, drawing about 30 mA of current off the power supply, so they weren’t as bright as they could be (they’re rated up to 50 mA), but they were at a good level for a long service life and a reasonable light level.

With the LEDs active in a darkened room, all sensors reported values in the 900s (on a 0-1023 scale) in this test. With the LEDs off, they reported values around 300. But with the room lights on, the “off” values climbed to around 700. Part of the problem was that the “room light” was a halogen desk lamp, putting out lots of IR on its own. Unfortunately, that’s also going to be true of any incandescent lamp. There’s enough IR out of a normal bulb that these sensors are going to report on it. The layout lighting is mostly fluorescent, so it may not be as big of a problem, but I do need to worry about it. A 700-to-900 range isn’t a broad high/low range, and the sensitivity to room light levels was a problem I’d ultimately have to deal with in software.

Test 1b

After another trip to the local electronics store, and returning with a really big pile of resistors (I was determined not to be missing the right size when I needed one, but I still didn’t have a 470 ohm one when I realized I needed safety resistors, so I used 680 ohms for those on the test rig), I swapped out the pull-down resistors for some 2K ohm ones.

I’m tired of running out of resistors in mid-project

This gave much lower values when the LEDs were off, and suitably high ones when on. It’s possible that tuning the 12K Ohm resistors down closer to 8 KOhms would work better, but it looks like “Active Mode” (1-3K Ohm resistors) is going to be of more use for me (further testing confirmed that).

However, it was still very sensitive to room light. With the LEDs turned off, in a dark room with distant incandescent light they read 19, and it’s quite possible that was coming from me, rather than the lights, since I was sitting in front of the sensors. With a desktop incandescent bulb about three feet (1m) away, behind the light sensors (so it wasn’t shining on the sensitive front part), the sensors read numbers around 70. However, with a desktop halogen light about 18” (0.5m) away, and above them, readings shot up to 300. So these things aren’t very precisely tuned to the 940nm wavelength they’re specified for (or else the halogen puts out a lot at that wavelength).

With the LEDs activated, and the room darkened, the sensors read about 500. Turning the halogen on at that point made them read 600. So distinguishing the light from the LEDs from ambient room lighting gives me about a 600-to-300 swing (I tested that with a solid object in front of the LED), which is less than I’d like to have, although it could be workable.

Unfortunately the big problem came when I tried to test individual sensors. With light-blocks between each pair, so P1 could only see light from L1, when I blocked P1, the Arduino read a lower value for both P1 and P3 (the two sharing the same output pin, pin numbers #1 and #2 on the test rig). Clearly I did not have a usable multiplexed sensor at this point.

The problem was a stupid error: I had used digital pins 0 and 1, because they were the closest ones. I overlooked the fact that these are “special” digital pins, and you shouldn’t use them if you’re also using USB (as I was), because they’ll give erroneous results due to their connection to the serial controller. Once I moved to pins 2 and 3, things worked much better.

Test 2

With the Digital pin problem discovered and fixed, I made my program a bit more sophisticated, so I could test multiple reads. The goal here was to see if doing two reads and discarding the first would improve accuracy. What I found was that after a long delay (tens of seconds), the very first read of each sensor could be bad. But once I started cycling and reading sensors once a second, that went away, and the “discard” value was almost always only a point or two (out of 1023) different. I removed the “read twice” code at that point, to speed up the algorithm.

And that broke it. It turns out that there does need to be a delay after the digital pin is turned on to energize the bank. This looks like it could be as short as 50 microseconds, but I found slightly better results with 100 microseconds (and no additional benefit from 200). Now with four sensors in two banks I can read all four sensors in under 700 microseconds.
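Put together, a bank read ends up looking something like this (pin numbers are placeholders, and the Arduino calls are stubbed out here so the sequencing can be checked off-hardware; on the real board these are the standard Arduino functions):

```cpp
#include <cassert>
#include <string>

// Stubs standing in for the Arduino calls, recording the call sequence
// so the ordering can be verified without hardware.
std::string callLog;
void digitalWrite(int pin, int level) { callLog += level ? "H" : "L"; }
void delayMicroseconds(unsigned us)   { callLog += "d"; }
int  analogRead(int pin)              { callLog += "a"; return 0; }

const int HIGH = 1, LOW = 0;

// Read one bank of two sensors: energize the bank's digital pin, wait
// ~100 usec for it to settle, read both analog pins, then de-energize.
void readBank(int digitalPin, int analogPinA, int analogPinB,
              int &valueA, int &valueB) {
    digitalWrite(digitalPin, HIGH);
    delayMicroseconds(100);        // without this, the first read is bad
    valueA = analogRead(analogPinA);
    valueB = analogRead(analogPinB);
    digitalWrite(digitalPin, LOW);
}
```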

With that, I started testing different values of R1 and R2, and found 8K+ was too erratic for my use, but values from 2K to 5K worked fairly well, with 5K slightly better than 3K, and 1K significantly worse than 3K. I didn’t test 6K as I didn’t have the right size resistors to easily test that. In theory, the appropriate value will vary based on the sensors in use, from about 2K to about 8K. Later on I reverted to 2K so I was testing a circuit that would work for any normal variation of phototransistor (see test 4).

With bright ambient lighting, “on” was about 750, and “off” was about 500 – 670. With less-bright ambient lighting the difference improved, and in a dimly-lit room with a 50W bulb about 6’ (2m) away behind a lampshade, “off” was 50 or less and “on” was still 600 – 750 (except for my defective sensor, which was now 300 – 500 when “off”). I need to test these under the layout’s fluorescent lighting to see how that works. Although it’s bright, there should be relatively little infrared in that light, so if the detectors really are frequency-specific, that shouldn’t be an issue.

Test 3

The next part was to test my smoothing algorithm. I tried the original example I’d found online, but it took about 30 sensor reads to adapt to large changes, and while that was probably workable, it was much slower than I wanted or needed. This was using an alpha of 1/16, or 0.0625. I decided it would be easy enough to adapt this to an alpha of 1/2 (0.5), so I did.

This didn’t work the first time I tried it (I’d done the math wrong), but after a while of fiddling around and re-checking my math, I fixed it, and found that with alpha = 0.5 even under poor conditions it would adapt in about six cycles (and perhaps less, depending on how far I needed the count to move before considering it to be a state change). With that working, I turned my attention to creating the code that would determine that a sensor was in an OCCUPIED or EMPTY state.

Test 4

With the smoothing algorithm working (and I tested it rather thoroughly, I thought), I wrote some simple code to keep a couple of recent sensor levels (5 and 10 cycles back) and check against them to look for changes greater than 100 over a multi-cycle period, as normal variation seemed to stay under 75 and the difference between low and high was never less than about 120 in any of the cases I observed. And I’d noted from my testing that the algorithm would make a >100 transition relatively quickly. After the usual debugging, this worked fine to detect “going low” transitions (when a sensor became occupied), but just couldn’t seem to detect “going high” ones.
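A sketch of that detection idea (an illustrative reconstruction; the actual code isn’t shown here):

```cpp
#include <cassert>

// Detect a state change by comparing the current smoothed reading
// against snapshots from 5 and 10 cycles back. The 100-point threshold
// sits between the observed noise (<75) and the low/high gap (>~120).
const int THRESHOLD = 100;

// history[] holds smoothed readings, newest last (len entries).
// Returns -1 for "went low", +1 for "went high", 0 for no change.
int checkTransition(const int history[], int len) {
    if (len < 11) return 0;              // need 10 cycles of history
    int now    = history[len - 1];
    int back5  = history[len - 6];
    int back10 = history[len - 11];
    if (now <= back5 - THRESHOLD || now <= back10 - THRESHOLD) return -1;
    if (now >= back5 + THRESHOLD || now >= back10 + THRESHOLD) return +1;
    return 0;
}
```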

I made a number of changes at this point, including switching back to 2K ohm resistors (I’d been using 5K ohm up to now) and creating a second test board I could use at the desk where I do programming, as the electrical bench (i.e., the dining room table) wasn’t really suitable for extensive software debugging efforts, and it was clear I had some kind of software problem.

Leonardo with two-sensor test circuit

The second board, using a new set of LEDs and sensors, worked very similarly to the old one. This one was plugged into an Arduino Leonardo, simply to keep me from having to carry the Uno back and forth, and it was built with only two detectors, using the Arduino’s 5V supply to light the LEDs (the extra 30 mA is fine given how little this board is doing). I omitted the safety resistors to keep the small breadboard I was using cleaner. It’s a very simple circuit, and a copy of the one I’d already done, so I was less concerned with accidentally grounding a pin.

The room lighting was slightly different (two indirect 50W-equivalent LED bulbs rather than incandescent/halogen, giving a dimmer light than my electrical bench, suitable for working on a computer), and I was seeing highs around 180 – 200, and lows around 0 – 50 (each sensor was consistent, but one was consistently higher than the other). A “going low” transition occurred over about two or three cycles (and this could have partly been due to the speed my finger was moving when placed in front of the LED; I expect it takes two cycles to fully block the light). A typical reading sequence went: 177, 178, 178, 176, 140, 87 or 210, 211, 211, 200, 136, 68 (I stopped collecting history when the delta exceeded my threshold of 100). This actually made me wonder if my smoothing algorithm might be adapting too quickly, although it seemed to hold things nicely stable over normal variation.

And then I proceeded to dive into the program, to figure out just what was going on, millisecond to millisecond, by capturing a rolling set of samples each cycle for ten cycles, and printing them out for even a slight increase. Note that I don’t print them out each cycle, as that would add tens of milliseconds to the loop, and I want the timing between pin reads to be the real timing.

And there was the problem (or so I thought): the algorithm takes a very long time to rise from 0. In ten cycles, it rose only 13 points on one sensor, and 27 on the other. Here’s the snapshot from the second sensor (starting from zero at cycle 8):

cycle 4319, upwards trap.
0 =168, 1, .
1 =170, 1, .
2 =171, 4, .
3 =172, 8, .
4 =172, 13, .
5 =172, 18, .
6 =172, 23, .
7 =172, 27, .
8 =165, 0, .
9 =166, 1, .

Notice that sensor 0 has also fluctuated by seven points in the same period. I’m pretty sure that’s an artifact of the two of them using the same A/D chip, although it could be from sharing the same analog pin. Either way, that’s part of the “normal variation” I need the smoothing to compensate for and the “large transition” code to ignore; unlike sensor 1 it doesn’t keep on rising too much.

My mistake (or so I thought) was that in testing the smoothing algorithm, I only looked at values every 10 cycles, and saw it reach the final state in only a couple of 10-sample steps, where the high value was a couple-hundred points above the low. What I missed was that during the initial ten cycles, it was on the start of the exponential curve, and not rising much each cycle. After ten steps it was rising about 5 per step, so it only took another 10 or 20 to pass the 100 threshold. At that scale, I couldn’t really see that it fell faster than it rose (in an absolute value sense; as a relative value to the start I expect the rates were similar).

At this point I wrote a short program to capture several hundred cycles, and then print them out, allowing me to graph them (using Excel) on a step-by-step level, to see just what the sensors were doing and how my algorithms were adapting to those changes. I’ve posted those up above. Oddly, what I saw in the graphs were numbers that should have worked to detect a sensor going clear within my 10-cycle window.

Armed with that information, I went back to my main test program (the one with the code to detect high/low transitions that wasn’t working) and added a similar rolling history of the last 200 raw and smoothed sensor readings, and saw this:

Raw (blue) and smoothed (red) sensor readings in main test program

Now that’s what I should see. It’s not quite as pretty as the test program, since the raw values don’t leap all the way up in one step, but it’s still rising fast and the smoothed line is still following it well and also rising fast. But why wasn’t my trigger catching that? The smoothed values take 7 cycles to go from 14 to 138 and after 5 more are at 170. That’s more than enough to trigger my “up by 100 in 5 or 10 cycles” code, so why doesn’t it?

While I was working on that, another puzzle presented itself: I seemed to be seeing a synchronization between readings on two different pins (larger than the one noted above). But with a little checking, I realized that what I was seeing was the effect of off-axis light between my two sensors. They were less than an inch apart (about 2cm), with the LEDs about 1 inch (2.5 cm) from the sensors. When I blocked the light into sensor 0, I was also blocking some of the light from LED 0 into sensor 1. This caused a deflection of about 20 points out of 180 in the value of the sensor, which indicates how tightly focused the sensor is: I was blocking a bit less than 50% of the light on my test board that could be hitting it, and it was deflecting only a bit more than 10%. I stuck a business card between sensor 0 and sensor 1, and almost all of the synchronization went away. I did still see the other sensor move by a couple of points, which probably represents the effect of higher voltage above the phototransistor when the other phototransistor is off. But this was so minor as to be below the random fluctuations I’m seeing in a dimly-lit room.

And I finally tracked down the mystery of why my sensor wouldn’t detect an upward transition. I had a coding error that prevented my sensor history variables from ever being updated (memo to self: always test for X >= Y, not X == Y when incrementing up to a trigger point, because sometimes X starts out higher than Y). I can’t believe it took me four days to figure that out. Even looking at graphs that proved my snapshot history of the sensor variables wasn’t being compared to the current sensor properly, I never caught on to the basic idea that my snapshot wasn’t being captured at all. Some days I amaze myself, and not in a good way.
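The bug is easy to reproduce in miniature (an illustrative reconstruction, not the original code):

```cpp
#include <cassert>

// Run 'cycles' iterations, incrementing a counter toward a trigger
// point, and report whether the trigger ever fired. With '==' the
// trigger is skipped forever if the counter starts above the target.
bool triggers(int start, int target, int cycles, bool useEquals) {
    int counter = start;
    bool fired = false;
    for (int i = 0; i < cycles; ++i) {
        ++counter;
        bool hit = useEquals ? (counter == target) : (counter >= target);
        if (hit) { fired = true; counter = 0; }   // fire and reset
    }
    return fired;
}
```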
