
Experimenting with Buck Converters

If you’re like me, the first thing you do when wiring up a new circuit is to connect the power and ground rails (with the power source initially turned off… maybe).  And you’ve probably got at least one power supply that’ll do the job.  But what if you didn’t have the supply you needed?  Perhaps all you’ve got is a 12V battery, and you’re in need of a 3.3V source.  Well, if you’ve finished your chores, I hear Tosche Station will sell you some power converters.  Failing that, you could always build your own.  A simple buck converter will likely do the trick:

The Standard Buck Converter (via Microchip)

The buck converter takes a DC input voltage and reduces it by a controllable amount, much like a resistive voltage divider.  But unlike your average voltage divider, the buck converter can efficiently supply a substantial output current.  In fact, this circuit's output current should be greater than its input current (on average).  And no, that doesn't violate any laws of physics; the converter's output power will still be less than its average input power because its output voltage is lower than the voltage at its input.  In mathematical terms, (PIN = VIN·IIN) > (POUT = VOUT·IOUT), where IOUT = ILOAD above.  Make sense?

Now clearly this isn’t your typical linear regulator.  Not with that inductor sitting there anyways.  So just how does it work then?  Well, the key is in the switching action of the p-channel MOSFET – a fact that leads us to call such circuits switching regulators.

Think of the FET as a simple switch.  When this switch is on (conducting), the current through the inductor (L1) will  ramp up (since VIN > VOUT), as will the voltage at the output capacitor.  But we don’t want the output voltage to go as high as the input voltage.  So, after a very brief on-time, we turn the switch off again.  But once the FET stops conducting, the inductor’s current has to go somewhere.  Well fortunately we have diode D1 available – it provides a path for current to continue to circulate (out to the load, back to ground, up through the diode and back to the inductor).

Here’s the trick though: this on-off switching cycle happens over and over again, many thousands of times per second.  In fact, the more frequently we switch, the smoother our output voltage will become.  This is because we have an output capacitor picking up the slack (so to speak).  During the switch-off periods, as the inductor’s current drops, COUT supplies the bulk of the output current (ILOAD).  Once the FET is switched on again, the inductor’s current ramps back up and recharges the output capacitor.  Thus, we maintain a constant ILOAD while the capacitor absorbs the ripple current (IRIPPLE).

Now as you may have already guessed, the ratio of the output voltage to the input voltage is set by the relative lengths of the switch's on and off periods.  This is a method known as pulse-width modulation (and it's used in tons of other circuits):

Typical PWM Waveforms

If the switch were turned on 100% of the time, the output voltage would eventually equal the input voltage.  On the other hand, if the switch were on 0% of the time, the output voltage would be zero.  So it makes sense then that the output voltage is equal to our duty cycle (the percent on-time) times our input voltage.  In other words, if the switch is on for 50% of a cycle, in theory VOUT = (50%)(VIN).  Now in practice, non-ideal components will cause the required duty cycle to be higher than expected, but we’ll get to that later.

By the way, if you’re wondering how to go about picking component values for your own buck converter, there are equations for that.  But instead of going into all of the details here, I’m going to refer you to this excellent guide (and video) from Microchip.  It’ll walk you through an example design for a 12V to 5V, 2A buck converter.

The Experiments

So this past Sunday I was in my “lab” (aka the table in my basement) and decided to see how efficiently I could build myself a buck converter with the parts I had on hand:

yay for protoboards!

I’ve laid out my circuit almost exactly as shown in the schematic above (ignoring the four ceramic filtering capacitors connected across the four parallel supply rails).  On the left is my p-channel MOSFET, followed by diode D1.  The large green and black thing is, you guessed it, the inductor (which I scavenged from a broken battery charger).  To the right of the inductor are three capacitors in parallel (I’ve done this to get better output filtering characteristics at high frequencies).  The white and black wires you see leaving the right side of the board are connected to my resistive load.  The circuit is configured like so:

  • VIN = 10V
  • PFET = IRF9540
  • D1 = 1N4004
  • L1 = 100uH
  • COUT = 270uF (effective)
  • FSWITCHING = 52kHz
  • D = 62%
  • VOUT = 5V

I should also note that these component values were chosen based on an expected load current of 1A and a ripple current of 0.3A.  I only had the one good inductor to play with, so I computed my switching frequency based on its value.
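If you'd like to see how that sort of calculation shakes out, here's the back-of-the-envelope version in C.  These are the idealized continuous-conduction relations (essentially what the Microchip guide walks through), so don't expect the output to land exactly on my 52kHz figure – the real choice also has to account for the diode drop and how much ripple you're actually willing to live with.

```c
#include <stdio.h>

/* Back-of-the-envelope buck design math (idealized, continuous conduction).
 * Not necessarily the exact equations behind my 52 kHz choice above -- just
 * the textbook relations, using the target numbers from this build. */
int main(void)
{
    const double v_in     = 10.0;    /* input voltage [V]                  */
    const double v_out    = 5.0;     /* desired output [V]                 */
    const double L        = 100e-6;  /* the one good inductor I had [H]    */
    const double i_ripple = 0.3;     /* acceptable peak-to-peak ripple [A] */

    double duty = v_out / v_in;      /* ideal duty cycle (losses ignored)  */

    /* During the on-time (D/f seconds) the inductor sees V_IN - V_OUT,
     * so ripple = (V_IN - V_OUT) * D / (L * f).  Solve for f: */
    double f_sw = (v_in - v_out) * duty / (L * i_ripple);

    printf("Ideal duty cycle    : %.0f%%\n", 100.0 * duty);
    printf("Switching frequency : %.1f kHz\n", f_sw / 1000.0);
    return 0;
}
```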

Experiment #1 (Bad Diode)

For this first test, utilizing the 1N4004 general purpose diode, I measured the following:

  • PIN = 8.7W
  • POUT = 5.1W
  • Efficiency = 59%
  • TPFET = 120°F
  • TDIODE = 149°F
  • TINDUCTOR = 82°F

Alright, so while an efficiency of 59% isn’t terrible (particularly by comparison to the 50% you’d achieve using a linear regulator), it’s not great either.  Simple buck converters are typically 80-90% efficient.  So unless my measurements are way off, clearly something’s not right here.  Based on the temperatures I measured using my Kintrex IR thermometer, I suspect the diode may be our efficiency bottleneck (since it’s the warmest component).  To find out, let’s measure a few voltage waveforms while the circuit operates:

Buck Converter Test #1 Scope Trace

The top waveform, in pink, is our measured 5V output, shown at 5V/div.  Below that, in green, is the voltage measured at the connection between D1, L1, and the FET, shown at 10V/div.  At the very bottom, in yellow, is our gate drive signal.  When this signal hits 10V, the p-channel FET will be off; when it’s at 0V, the FET will be conducting.  Note that because of this inverse relationship, the duty cycle calculated by the scope (~38%) is incorrect; we need to subtract this value from 100% to get our true duty cycle (~62%).

So as I suspected, the diode is clearly having some issues.  See that big (~25V) negative voltage spike across the diode each time the FET turns off?  Yea, that’s not so good.  It means our diode isn’t turning on as quickly as it should.  This puts additional stress on the FET.  But when the diode finally does turn on, it’s showing a voltage drop of just over 0.9V.  During the switch-off period of a cycle, the diode has to conduct, on average, the full load current of 1A.  Since our duty cycle is 62%, the diode will be conducting 38% of the time, meaning it conducts an average current of 0.38A.  Using P = IV, we can determine that the diode is dissipating at least (0.38)(0.9) = 0.34W.  The diode’s turn-on delay probably accounts for more loss, but I’m not entirely sure how to calculate that.

Experiment #2 (Schottky Diode)

Well let’s just see what happens when we substitute a much better Schottky diode (15TQ060) in place of that lousy 1N4004 rectifier.  Here’s the data from test #2:

  • PIN = 5.5W
  • POUT = 5.1W
  • Efficiency = 93%
  • TPFET = 81°F
  • TDIODE = 81°F
  • TINDUCTOR = 80°F

Wow, that’s quite a difference in efficiency (from 59% to 93%)!  And it’s all thanks to the minimal (<0.3V) forward voltage of that Schottky diode, as well as its fast turn-on time.  Our waveforms are starting to look a lot cleaner as well:

Buck Converter Test #2 Scope Trace

Experiment #3 (More Current!)

So what happens if we now decide to turn up the heat a little?  What if we suddenly decide to supply a 2A load instead of 1A?  Well, I tried just that.  Here’s what happened:

  • PIN = 12.8W
  • POUT = 10.5W
  • Efficiency = 82%
  • TPFET = 96°F
  • TDIODE = 85°F
  • TINDUCTOR = 81°F

Hmm, it seems our efficiency has dropped again.  But not unexpectedly.  Based on the temperatures of the components, it seems like our FET may now be the limiting factor.  And no surprise; the p-channel device I've chosen has an on-state resistance of ~0.2Ω.  Using P = IV = I²R, and multiplying by the duty cycle (the switch on-time), I get:

P = I²RD = (2²)(0.2)(0.62) = 0.50W

And even at 0.3V, the diode is still dissipating substantial power as well:

P = IV(1-D) = (2)(0.3)(1-0.62) = 0.23W
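If you'd like to play with these conduction-loss numbers yourself, here's the same arithmetic rolled into a couple of C helpers (same simplifications as above: flat-topped currents, a constant diode drop, and switching losses ignored):

```c
#include <stdio.h>

/* Conduction-loss estimates for an asynchronous buck: the FET carries the
 * load current for D of each cycle, the diode for the remaining (1 - D). */
static double fet_conduction_loss(double i_load, double r_ds_on, double duty)
{
    return i_load * i_load * r_ds_on * duty;          /* P = I^2 * R * D   */
}

static double diode_conduction_loss(double i_load, double v_f, double duty)
{
    return i_load * v_f * (1.0 - duty);               /* P = I * V * (1-D) */
}

int main(void)
{
    /* Numbers from experiment #3: 2 A load, IRF9540 at ~0.2 ohm,
     * Schottky at ~0.3 V, 62% duty cycle. */
    printf("FET  : %.2f W\n", fet_conduction_loss(2.0, 0.2, 0.62));
    printf("Diode: %.2f W\n", diode_conduction_loss(2.0, 0.3, 0.62));
    return 0;
}
```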

Now unfortunately, I don’t have a better p-channel MOSFET to try out at the moment.  So I’m just going to have to accept those losses for now (shame on me).  However, there is a way to nearly eliminate the losses of the diode: replace it with another FET!

The Synchronous Buck Converter (via Microchip)

This time I'll be using an n-channel MOSFET which, as you may know, contains its own diode (called the body diode; the p-channel FET contains one as well, but I've omitted it in the diagrams above).  But we won't be relying on that diode to handle any current.  Instead, we're going to switch on the n-channel FET whenever the p-channel FET turns off (they'll be complementary).  In doing so, we'll create a low resistance path (<0.1Ω) through which the ripple current can continue to flow during the P-FET's switch-off period.

Sadly, I can’t claim credit for this brilliant idea.  I’m not sure who first thought it up, but it’s called the synchronous buck converter (as opposed to the asynchronous, or standard, buck converter).  I imagine this is because you have to operate the two FETs in sync with one another.  This makes life a little tricky, as you don’t want to accidentally turn on both transistors at once (thus creating a short from power to ground).  But it’s not bad.
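How you generate the two gate signals is up to you (I'm not claiming this is how I drove the board here), but just to illustrate the dead-time idea: an AVR's 16-bit timer can produce a complementary pair with a guaranteed gap by running phase-correct PWM and offsetting the two compare registers.  A rough sketch, assuming an ATmega328P and external gate drivers to handle the actual FET voltage levels:

```c
/* Sketch: complementary PWM with dead time on an ATmega328P (assumed part),
 * using Timer1 in phase-correct PWM with ICR1 as TOP.  OC1A (PB1) is the
 * "high-side on" signal; OC1B (PB2), run in inverting mode, is the
 * "low-side on" signal.  Setting OCR1B a few counts above OCR1A leaves a
 * window around each transition where neither output is asserted -- that
 * window is the dead time that prevents shoot-through.  Gate-driver
 * polarity and voltage levels for the P-FET/N-FET pair are left to
 * external circuitry. */
#ifndef F_CPU
#define F_CPU 16000000UL
#endif
#include <avr/io.h>

#define F_PWM      52000UL              /* ~52 kHz, as in the tests above  */
#define DEAD_TICKS 4                    /* ~0.25 us at 16 MHz              */

int main(void)
{
    uint16_t top  = F_CPU / (2UL * F_PWM);  /* phase-correct: f = F_CPU/(2*TOP) */
    uint16_t duty = (uint16_t)(0.62 * top); /* 62% duty, from the experiment    */

    DDRB |= (1 << PB1) | (1 << PB2);        /* OC1A, OC1B as outputs */

    ICR1  = top;
    OCR1A = duty;                           /* high-side on below OCR1A */
    OCR1B = duty + DEAD_TICKS;              /* low-side on above OCR1B  */

    /* Mode 10: phase-correct PWM, TOP = ICR1.
     * OC1A non-inverting, OC1B inverting, no prescaling. */
    TCCR1A = (1 << COM1A1) | (1 << COM1B1) | (1 << COM1B0) | (1 << WGM11);
    TCCR1B = (1 << WGM13) | (1 << CS10);

    for (;;) { }                            /* outputs run from hardware */
}
```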

Here’s a scope trace showing the n-channel FET’s additional gate drive signal (in purple):

Buck Converter Test #4 Scope Trace

Experiment #4 (More Current, More FET!)

Well I’m sure you’re just dying to know how much of an efficiency improvement this synchronous converter will provide.  Well fear not, here are the results of my last test:

  • PIN = 12.0W
  • POUT = 10.4W
  • Efficiency = 87%
  • TPFET = 96°F
  • TNFET = 84°F
  • TINDUCTOR = 82°F

Indeed, this is an improvement!  We’re not quite back to the 93% efficiency we saw at a load current of 1A, but 87% is still better than 82%, no?

So, lessons learned?  Use quality components.  That means FETs with as low an on-resistance as possible.  And if you don’t want to go with the synchronous converter, make sure you pick a fast diode with low forward voltage.

By the way, although I haven't done it here, you'll probably want to wrap your buck converter in a controller of some kind (unless your load current will be fairly constant).  If you don't, with a constant duty cycle, variations in load will cause your output voltage to change by a fair amount.  Fortunately, if you look around the interwebs, you'll find a number of ICs that provide buck converter control.  But if you're clever, you could whip up your own op-amp control circuit.  Or just program an AVR to do the job for you – they're great at PWM.  Give it a shot and let me know how you make out.
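If you do go the AVR route, the control loop can be as dumb as: read VOUT with the ADC, nudge the duty cycle up or down, repeat.  A bare-bones sketch – assuming an ATmega328P, the output sensed on ADC0 through a divider, and a gate driver between the PWM pin and the P-FET.  A real regulator would also want proper compensation, current limiting, and soft-start:

```c
/* Sketch: crude closed-loop duty control.  VOUT is divided down so the
 * target reads ~2.5 V at ADC0; the loop nudges the 8-bit PWM duty on OC0A
 * up or down until the measurement matches. */
#ifndef F_CPU
#define F_CPU 16000000UL
#endif
#include <avr/io.h>
#include <util/delay.h>

#define TARGET_COUNTS 512   /* 2.5 V on a 5 V ADC reference (10-bit) */

static uint16_t read_adc0(void)
{
    ADMUX  = (1 << REFS0);                      /* AVcc reference, channel 0 */
    ADCSRA = (1 << ADEN) | (1 << ADSC)
           | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);   /* /128 ADC clock */
    while (ADCSRA & (1 << ADSC)) { }            /* wait for the conversion   */
    return ADC;
}

int main(void)
{
    DDRD   |= (1 << PD6);                       /* OC0A feeds the gate driver */
    TCCR0A  = (1 << COM0A1) | (1 << WGM01) | (1 << WGM00);  /* fast PWM, OC0A */
    TCCR0B  = (1 << CS00);                      /* ~62.5 kHz PWM at 16 MHz    */
    OCR0A   = 128;                              /* start near 50% duty        */

    for (;;) {
        uint16_t v = read_adc0();
        if (v < TARGET_COUNTS && OCR0A < 250) OCR0A++;  /* output low: more duty  */
        if (v > TARGET_COUNTS && OCR0A > 5)   OCR0A--;  /* output high: less duty */
        _delay_us(100);
    }
}
```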

One last thing: if you’ve been counting, you’ll notice that the losses I’ve calculated don’t add up to the difference between input and output power.  Lest we forget, there are still resistive losses in the inductor and output capacitor, as well as switching losses on the transistor(s).  This switching loss has to do with the power dissipated in the FET’s gate capacitance, as well as resistive losses as the transistor ramps between on and off states (nothing happens instantly you know).  The solution?  Again, buy better parts. 🙂

Questions, comments, suggestions, requests?  Feel free to leave them below.  Thanks!

Review: TI’s High-Power LED Driver Evaluation Board

On my desktop, I keep a list of miscellaneous parts I'd like to buy at some point (e.g. power resistors, laser diodes, etc).  Parts not destined for any specific project, just things that I'd like to toy with.  Well for a long time, I've wanted to get my hands on some high-power LEDs.  I suppose I'm just a sucker for pretty lights.  But for some reason, I've never gotten around to ordering any – probably because I've never had a good means of driving said LEDs (and I'm too lazy to make my own driver circuit).

Well last week Farnell (Newark in the States) came to my rescue with an offer to send me any product from their site (within a certain price limit) for free.  All they asked of me was an evaluation (this post) and a link to the product on their site.  And which product did I pick?  The TPS62260LED-338, a three-color LED driver evaluation module:

TPS62260LED-338

This board hosts three 500mA LEDs (W5SM) from OSRAM.  Each LED is driven by a TPS62260 step-down DC-DC converter.  A low-cost MSP430F2131 microcontroller controls all three drivers via pulse-width modulation.

Out of the box, my first impression: these LEDs are painfully bright (especially that red one – my vision is still spotted as I type this).  They’re not kidding about protective eyewear.  But I wouldn’t want it any other way. 🙂 For most of my testing however, I simply covered the LEDs with about four sheets of paper.  That brought their intensity down to a comfortable level.

I must commend TI on making this board very easy to use and probe.  They’ve provided several nice wire-loop test points for connecting scope probes.  And they’ve even broken out the power and ground connections for people like me who don’t have the proper barrel connector power supply.  I was also pleased to see how they’d integrated heat sinks for each of the three LEDs into the PCB itself using a plethora of plated drill holes.  In operation, the board only just becomes warm to the touch.

But let's talk about the real highlight of this board: the LED driver circuits.  Because LEDs operate within such a tight voltage range (their operating voltage is actually assumed to be about constant), they're normally powered by some type of current controller (since the brightness of an LED is proportional to the current flowing through it).  And yet, this board features three DC-DC voltage converters – devices which take a high input voltage and convert it to a lower output voltage.  So how is this supposed to work?

Well, each converter IC provides closed-loop control over its switching output.  In other words, the TPS62260 measures a feedback voltage and uses this to adjust its output duty cycle.  So regardless of how much current (well, up to 600mA) is being drawn from the output, the converter is able to maintain a fixed output voltage.  But here’s the tricky part: you can attach the converter’s feedback measurement input pin to anything (within reason, of course).  In this case, TI has wired each feedback pin to a 2Ω current-sensing resistor (part R9, below) connected in series with each LED.  Each converter will adjust its output in order to maintain 0.6V at its feedback pin (as 0.6V is the internal voltage reference of the converter).  Using ohm’s law, and realizing that the current will be the same in both the sense resistor and the LED, since they are in series, we can determine the LED’s current to be I = V/R = 0.6/2 = 0.3A or 300mA.
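Flip that around and the sense resistor is really the current-setting knob: the same Ohm's-law arithmetic gives R = 0.6V / I_LED, so the 2Ω used here sets 300mA, while (for example) 4Ω would set 150mA.  Just remember the resistor itself dissipates 0.6V × I_LED – about 0.18W at 300mA – and that you're still bound by the converter's 600mA limit.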

LED Driver Schematic

But wait, the current-sensing resistor is fixed, the converter’s internal voltage reference is fixed… so how do we control the current delivered to the LED?  Simply put: we don’t.  Then how can we control its brightness?  Pulse-width modulation.  Imagine flipping a light switch on and off so rapidly that you can no longer detect a flicker.  Then, adjust the ratio of the on and off times.  The longer the on time, the brighter the light will appear.  This is precisely what the MSP430 microcontroller is doing to control the brightness of the LEDs.  In fact, you can see this happening if you wave the board around rapidly while one of the LEDs is being dimmed (in this case, the blue LED):

Pulse-width modulation in action!

That image was captured with a 0.1s shutter speed.  And actually, with that knowledge, we can calculate the frequency of the PWM signal.  I count about twelve blinks of the blue LED there – so twelve blinks in 0.1s yields a frequency of 12/0.1 = 120Hz (a result I confirmed with my IOBoard oscilloscope).  If you’d like to read more about pulse-width modulation, check out my previous post on the subject.

So out of the box, the microcontroller on this evaluation board is programmed to slowly turn on and off each LED in sequence, such that one LED is always fully on while another is being ramped on or off.  This produces a very pleasing color gradient.

Now, according to the manual that came with the board, you’re also supposed to be able to turn the knob on the board in order to manually adjust the color balance.  Unfortunately, this feature did not work for me.  When I turn the knob on my board, the automatic sequence stops and the LEDs hold their current brightness states.  However, they do not change brightness when the knob is turned further.  I’ve probed the knob (which is actually a digital encoder) and believe it to be working properly.  My guess is that somebody just botched up the software.  It happens.

This brings me to my final point of discussion: reprogramming.  The TPS62260LED-338 provides a JTAG header for the traditional four-wire JTAG programmer.  Unfortunately, I do not possess such a programmer.  I was hoping instead to use the MSP430 programmer which is integrated into my LaunchPad development board.  Sadly, I never checked into the details: the LaunchPad programs via the two-wire SpyBiWire (SBW) interface, not the standard JTAG interface.  And of course, the MSP430F2131 does not support SBW.  So for now, there will be no reprogramming.  Of course, thanks to all of the convenient test points, it’s fairly easy for me to just put the micro into reset and drive the LEDs using my own PWM waveforms.  If anyone out there has any tricks for reprogramming though, please let me know!

So in conclusion, I’d say the TPS62260LED-338 is a product worth checking out.  For just over $20, it’s a pretty good deal.  If they’d given it the USB programming interface of the LaunchPad, I’d probably be happier, but then they would’ve needed to lower the current draw of the LEDs, which would’ve been no fun, or required a separate power supply, which wouldn’t have been such a big deal.

The Mechatronics TVIP (+Video)

Mechatronics

Today's post is going to be a trip down memory lane for me.  The TVIP or Thrust-Vectoring Inverted Pendulum was my very first real engineering project in college.  My good friend Alex and I constructed it almost five years ago, during the second semester of our freshman year at RPI.  We actually started designing the system during our very first semester.  However, the bulk of our work was performed for an independent study in the [former] RPI Mechatronics lab during the spring of 2006.  By the way, in case you're wondering, the RPI Mechatronics lab closed down when Dr. Kevin Craig decided to leave RPI for Marquette University.  Of course, he took most of the lab with him, and now runs the Multidisciplinary Mechatronics Innovations lab at Marquette!

Why do I bring this up now?  Five years later?  Well as I’ve mentioned before, I’ve been doing a lot of job interviewing lately.  During interviews, the TVIP seems to come up pretty frequently.  It’s still a great example of the work I did while employed in the Mechatronics lab.  And I’m still pretty proud of this rickety old thing.  I consider it to be quite an accomplishment, particularly for a pair of college freshmen.  🙂

So what exactly is the Thrust-Vectoring Inverted Pendulum?  Well, like many Mechatronics lab projects, it’s a demonstration of control systems, mechanical dynamics, electronics, and modeling.  To be more specific (and simple), it’s a big aluminum pendulum with a scary-looking propellor attached at the end.  A diagram should help explain:

Thrust-Vectoring Inverted Pendulum (CAD Model)

What you see there is a dual A-frame, about three feet tall, supporting a horizontal shaft to which a pendulum is connected.  That shaft is suspended by a pair of bearings.  An optical encoder is used to detect the angular position (angle θ) of the shaft and pendulum.  At the end of the pendulum, the propellor and motor are attached to a geared DC servo motor, which is used to vary the direction (angle φ) of the propellor’s thrust.  Hence the term thrust-vectoring (plus, five years ago, it sounded really cool).

Now the purpose of this setup was to demonstrate the control of an unstable system.  Thus, our goal was to vary the direction and magnitude of the propellor’s thrust in order to swing up the pendulum and then balance it in its inverted, unstable position.

This was accomplished by using two separate control loops, both implemented within LabVIEW.  The first controller was responsible for swing-up (since the propellor could not produce enough thrust to pull the pendulum straight up).  As you'll see in the video below, the swing-up algorithm was essentially a proportional controller with an excessively high gain.  This actually caused the system to go into unstable oscillations.  That sounds pretty bad, doesn't it?  But it's exactly what we wanted.  Just like a child on a playground swing, our proportional controller caused the pendulum to swing higher and higher.  However, instead of continuing to let the system oscillate unchecked, once the pendulum started to close in on its balance point, LabVIEW switched the output over to a PID control loop.  The job of this second controller was to catch the pendulum and then hold it, as closely as possible, in its vertical, inverted position.  How about a video before I go on?

Now that video was taken at the conclusion of the Spring 2006 semester.  You’ll notice that the pendulum still oscillates slightly (plus or minus about eight degrees) when inverted.  This was due to an improperly tuned PID controller.  We did improve that performance somewhat over the next year or so.  During my later employment in the Mechatronics lab, I wrote up a paper that, unfortunately, never made it to publication.  It contains a bit of the more serious math on modeling and control I didn’t really understand when Alex and I first designed this system.  You’ll find a link to this paper at the end of my post.
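For anyone curious what that two-controller scheme looks like outside of a LabVIEW block diagram, here's the mode-switching logic sketched in C.  The gains and the hand-off window below are placeholders, not the values we actually tuned:

```c
#include <math.h>
#include <stdio.h>

/* The real controllers lived in LabVIEW; this just renders the
 * mode-switching logic in C.  All numbers are placeholders. */
typedef struct { double kp, ki, kd, integral, prev_err; } pid_state;

#define CATCH_WINDOW_DEG 20.0   /* hand over to the PID this close to upright */

/* theta_deg: pendulum angle measured from the inverted (upright) position.
 * Returns the commanded thrust effort (sign sets the vectoring direction). */
static double controller(double theta_deg, double dt, pid_state *pid)
{
    if (fabs(theta_deg) > CATCH_WINDOW_DEG) {
        /* Swing-up: a deliberately over-gained proportional controller,
         * which pumps the pendulum into ever-larger oscillations. */
        const double kp_swing = 5.0;               /* placeholder gain */
        return kp_swing * theta_deg;
    }
    /* Catch and balance: plain PID about the inverted position. */
    pid->integral += theta_deg * dt;
    double deriv   = (theta_deg - pid->prev_err) / dt;
    pid->prev_err  = theta_deg;
    return pid->kp * theta_deg + pid->ki * pid->integral + pid->kd * deriv;
}

int main(void)
{
    pid_state pid = { 2.0, 0.1, 0.4, 0.0, 0.0 };   /* placeholder gains */
    printf("effort near upright: %.2f\n", controller(5.0, 0.01, &pid));
    printf("effort mid-swing   : %.2f\n", controller(90.0, 0.01, &pid));
    return 0;
}
```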

The Details

So just what makes this thing tick?  Well, the motor and propellor are pretty standard fare for RC electric planes.  Of course that doesn’t make them any less scary.  Running this thing indoors produced a lot of noise, and you had to keep clear of the pendulum while it was operating (lest you lose a finger).  The propellor and motor were connected to a standard geared DC motor which was actually capable of rotating 360 degrees.  This presented another safety hazard; if the positioning motor rotated too far, the prop would begin biting into the aluminum pendulum.  Amazingly, in two years of operation, this never happened.  But it could have.  Control of this positioning motor was accomplished by a third PD control loop within LabVIEW.  Angular feedback was given by an optical encoder.

Now we really should have bought a commercial, heavy-duty hobby servo in order to vector the thrust of our prop.  This would have likely been safer and easier.  But what did we know back then?  Ultimately we solved the safety issue by replacing the propellor with a ducted fan.  It didn’t quite produce as much thrust, but you didn’t have to worry about how to grab the pendulum in the event of a control/electronics failure.

The New Thrust-Vectoring Inverted Pendulum

And speaking of electronics failures, we did have a couple of frightening moments during our initial testing.  The speed of the propellor motor was governed by a power MOSFET.  Twice, that FET failed during operation; in both cases it failed by shorting.  The result?

Instantaneous full-power thrust.

Pretty scary stuff, particularly since we first powered the prop motor using a 12V lead-acid battery.  However, we also eventually replaced that battery (which you see in the video above) with a dedicated high-current power supply (the box with the RPI sticker).

Now as I started to mention earlier, both the propellor and positioning motors were driven by microcontroller-produced PWM signals wired into power transistors.  We used a microcontroller (MCU) for PWM generation because the NI data acquisition (DAQ) hardware we had available at the time could not generate these signals.  Instead, it produced analog outputs which were fed into the ADCs of our MCU.  The MCU (an AVR ATTiny13) then produced PWM signals whose duty cycles were proportional to those analog inputs.  Fortunately, the DAQ hardware we had could easily process our two quadrature encoder inputs into angular measurements.
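The actual source is linked at the end of this post, but the gist of that ADC-to-PWM translation is only a dozen lines.  Here's a from-memory sketch for an ATtiny13 – double-check the register settings against the datasheet before trusting it:

```c
/* Minimal ADC-to-PWM pass-through in the spirit of what our ATtiny13 did:
 * read two analog channels and mirror them as PWM duty cycles.  This is a
 * from-memory sketch, not the code linked below. */
#include <avr/io.h>

static uint8_t read_adc(uint8_t channel)
{
    ADMUX  = (1 << ADLAR) | (channel & 0x03);   /* Vcc ref, left-adjusted */
    ADCSRA = (1 << ADEN) | (1 << ADSC)
           | (1 << ADPS2) | (1 << ADPS1);       /* /64 ADC clock          */
    while (ADCSRA & (1 << ADSC)) { }
    return ADCH;                                /* top 8 bits are plenty  */
}

int main(void)
{
    DDRB   |= (1 << PB0) | (1 << PB1);          /* OC0A and OC0B outputs  */
    TCCR0A  = (1 << COM0A1) | (1 << COM0B1)     /* non-inverting PWM      */
            | (1 << WGM01)  | (1 << WGM00);     /* 8-bit fast PWM         */
    TCCR0B  = (1 << CS01);                      /* clk/8                  */

    for (;;) {
        OCR0A = read_adc(2);   /* ADC2 (PB4) -> prop motor PWM        */
        OCR0B = read_adc(3);   /* ADC3 (PB3) -> positioning motor PWM */
    }
}
```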

Well I'll leave the rest of the details to the paper linked below.  If you're interested in how to model this system mathematically, as well as how to implement the various controllers described here, have a read!  I think you'll find my approach to characterizing the viscous damping of the ducted fan rather interesting.  If only I'd had Eureqa back then…  Oh, and I've also included MCU source code for the ADC-PWM conversion:

Paper: A Mechatronics Case Study: Thrust Vectoring and Control of an Unstable System
Schematic: GIF Image (Also available in the paper)

Feel free to make comments or ask questions in the comment section.  Thanks!

Pure Analog Servo Control

A Standard Hobby Servo

Hobby servos, such as the one pictured at right, are wonderfully useful little devices.  You'll find them moving control surfaces on model planes, in steering linkages on RC cars, and even in the feeding mechanism of an automatic ping-pong ball launcher (one of my simpler college design projects).

Anytime you need something to rotate to a specific position, think of the hobby servo.  They’re fairly low cost, and come in a variety of torque sizes, from tens to hundreds of ounce-inches.

So let’s say you’ve bought yourself a servo from Tower Hobbies (or wherever).  How are you going to control it?  Well, you could purchase a radio and receiver, but if you’re not planning on building your servo into a vehicle of some sort, that’s really overkill (and expensive).  You could program a microcontroller to generate the control signals, but that could get complicated if you’ve never worked with MCUs before.  Instead, what I’d like to discuss today is a purely analog circuit for PWM servo control.

First, let me give you a little background information.  Hobby servos are typically connected by three wires: power (red), ground (black), and signal (yellow/white).   The power and ground lines are typically hooked directly to your battery or power supply.  The signal line, however, is used to command the servo to move to a specific angular position.  This signaling is normally accomplished via pulse-width modulation (PWM).  That is, a digital pulse is sent to the servo on a routine basis (e.g. at 100Hz, or 100 times per second).  The width or duration of this pulse determines the position of the servo’s horn. For instance, a pulse width of 1ms commands a fully clockwise rotation, a width of 2ms commands a fully counter-clockwise rotation, and a width of 1.5ms will center the horn.

Now the question is, how do we generate such a signal?  Why, we simply use the following pulse-width modulator circuit (adapted from Maxim Application Note 3201):

PWM Generator Schematic

Alright, so maybe you're thinking, "Dude, that's a big circuit."  Well yea, it sortof is.

But then again, those three op-amps could actually all be housed inside a single 14-pin DIP/SOIC package.  And beyond that, all you need are eight resistors, one capacitor, and one potentiometer (a variable resistor).  So while this may be physically more complex than just plopping down a microcontroller, there’s no software required.

So just how does this circuit create our PWM signal?  Well let's start with the "Integrator" section.  This group of components (R1, C1, and U1) mathematically integrates, or sums, the voltage wired into the left terminal of R1 (line label #5).  Put simply, the capacitor C1 is summing up this input voltage over time.  To see how this happens, let's start by analyzing the node between R1 and C1 (label #2).  Now assuming all of our op-amps are ideal (a fair assumption in most cases), no current will enter or leave their inverting (-) and non-inverting (+) terminals.  Since the current flowing through a series connection of electrical components (R1 and C1) must be equal, we can write the following:

Integrator Equation Derivation

The first half of this formula may look familiar; it’s ohm’s law (V/R = I).  However, we’ve defined the voltage across R1 as (V5 – 2.5).  Why?  This is because the voltage at the input terminals (+ and -) of our ideal op-amp must be equal since we have a negative feedback path (a connection from output to inverting (-) terminal) through the capacitor.  And we know that the voltage at the non-inverting (+) terminal of the op-amp must be 2.5V because of the voltage divider at node #1.  Thus, since C1 is providing a feedback path for the op-amp, we can safely assume that the inverting terminal is also at 2.5V.  The second half of this equation comes from the I-V relationship for capacitors, I = C*dv/dt.

So if we solve this equation for V3, with an initial capacitor voltage of zero, we get:

Integrator Equation Derivation

If we keep the voltage V5 constant, we're left with just an equation for a straight line.  Basically, the output V3 starts at 2.5V, then ramps linearly up/down, depending on V5, as time (t) goes on.  If left unchecked, the output of U1 would eventually hit a supply limit (either 0V or 5V). However, the second half of the above circuit, labeled "Oscillator Comparator" ensures that this does not happen by switching V5 between 0V and 5V.
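In case the pictured equations are hard to make out, they boil down to (V5 − 2.5)/R1 = −C1·dV3/dt, which, starting from V3 = 2.5V, integrates to V3 = 2.5 − (V5 − 2.5)·t/(R1·C1) – a straight line whose slope flips sign with V5.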

Let’s take a look at U2, the second op-amp pictured above.  You’ll notice there’s no feedback path between its output and its inverting (-) terminal.  So what we’ve got is a comparator.  That is, if the voltage on its non-inverting (+) terminal is greater than that on its inverting (-) terminal, the output of U2 will be roughly 5V (our positive supply voltage). Otherwise, the output will be roughly 0V.  I say “roughly” because this op-amp (TL072) can’t operate “rail-to-rail”, which means its output can’t quite reach our supply voltages.

In order to understand this comparator a little better, let’s take a look at the point at which it switches between its high (5V) and low (0V) output.  Since the inverting terminal of U2 is fixed at 2.5V by the voltage divider at node #1, this switching must take place when node #4 passes 2.5V.  Let’s determine the voltage V3 necessary for this to occur.  To begin, I’ll equate the currents through R2 and R3 (since again, no current flows into the + terminal):

Switching Point Derivation

Now don't be confused about where that 2.5V is coming from.  This is the switching voltage for U2.  We're not saying that U2's non-inverting (+) terminal is fixed at 2.5V.  It's not, because we don't have negative feedback.  This voltage will vary based on V3 and V5.  Anyway, solving for the switching-point voltage V3 we obtain the following:
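Written out, that current balance at the switching instant (with node #4 sitting at 2.5V) is (V3 − 2.5)/R2 = (2.5 − V5)/R3, which rearranges to V3 = 2.5 + (R2/R3)·(2.5 − V5).  (Here I'm taking R2 as the resistor from V3 to node #4 and R3 as the feedback resistor back to U2's output – if the schematic labels them the other way around, just swap them in the formula.)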

Switching Point Derivation

So we're going to have two switching points, based on the two possible values of V5.  When V5 is 5V, V3 will be decreasing linearly, and a switch will occur at V3 = 1.325V. However, when V5 is 0V, V3 will be increasing linearly, and switching will occur at V3 = 3.675V. So this is how the oscillation happens: V3 ramps linearly in one direction until it reaches a switching threshold, at which point the integration reverses and V3 ramps backwards.  So what does this give you?  A triangle wave, as seen in green in this PSpice simulation:

PSpice Simulation (Red = Threshold, Green = Triangle Wave Oscillation, Blue = Output)

Of course, we can't just use a triangle wave to signal our servo.  What we need now is a third comparator to generate a PWM signal using this triangle wave and a variable threshold voltage (the red line pictured above).  This is where the components around U5 come into play.  Again, since U5 has no negative feedback path, it operates as a comparator.  Thus, its output can only be 5V or 0V (roughly).  So if we feed our triangle wave into its non-inverting (+) input, and a DC threshold voltage into its inverting (-) input, what we get at the output is a square wave (the blue line) whose pulse width is inversely related to our threshold (i.e. a higher threshold yields a shorter pulse).

The last trick here is that we can’t just hook up a potentiometer (pot) between power and ground.  That would give us a threshold voltage variable from 0V to 5V. What we actually need is a threshold voltage that varies from about 3.2V to 3.5V, for a pulse width ranging from 1-2ms (based on the form of triangle wave shown above).  Well in order to accomplish this, I’ve placed two additional resistors (Rx and Ry) in series with the potentiometer (Rpot).  In order to determine appropriate values for these resistors, I’ll start with two voltage divider formulae which are based on the two limits of the potentiometer:

Comparator Threshold Resistance Derivation

So when the pot's screw is turned fully clockwise, the pot's entire 10kΩ resistance will be placed between Rx and the inverting (-) input of U5.  This will produce our maximum threshold voltage, VH.  However, when the screw is turned fully counter-clockwise, the pot will act as a short between Rx and U5, yielding our lowest threshold voltage, VL.  If we now combine and solve these two formulae, we can determine values for Rx and Ry:

Comparator Threshold Resistance Derivation

Note: To determine resistances for different potentiometer values, you'd just need to replace the 10k in the first set of equations with your updated value and re-solve.

Now of course, all we really need here is a means of controlling the threshold voltage at U5’s inverting (-) terminal.  Back when I was working on my automatic ping-pong ball launcher, I wanted to use my laptop and a DAQ card to control my servo.  The DAQ card I had available at the time didn’t allow me to generate precisely-timed digital signals.  However, it did provide several analog outputs, which I could have connected directly to U5 in order to control this circuit’s pulse width.  But I didn’t know about this circuit back then, so I actually just used a microcontroller programmed to generate the appropriate signals based on an ADC input connected to my DAQ hardware.
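For the curious, the microcontroller version really is just a handful of lines.  This isn't the code I used back then – just the gist, assuming an ATmega328P-class part at 16MHz with the servo signal on OC1A and a 50Hz frame:

```c
/* Sketch: hobby-servo pulses from a 16-bit AVR timer.  Fast PWM with
 * ICR1 as TOP gives a 20 ms frame; OCR1A sets the 1-2 ms pulse width. */
#ifndef F_CPU
#define F_CPU 16000000UL
#endif
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB  |= (1 << PB1);                 /* OC1A = servo signal pin        */
    ICR1   = 39999;                      /* 16 MHz / 8 / 40000 = 50 Hz     */
    OCR1A  = 3000;                       /* 1.5 ms -> centered horn        */
    TCCR1A = (1 << COM1A1) | (1 << WGM11);              /* fast PWM, mode 14 */
    TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS11); /* prescaler 8       */

    for (;;) {
        OCR1A = 2000;  _delay_ms(1000);  /* 1.0 ms: full clockwise         */
        OCR1A = 4000;  _delay_ms(1000);  /* 2.0 ms: full counter-clockwise */
        OCR1A = 3000;  _delay_ms(1000);  /* 1.5 ms: center                 */
    }
}
```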

Finally, you may also be wondering, how can I calculate the frequency of this PWM signal?  (Or maybe you’re getting sick of all these equations?)  Well, given the above formulae, it’s actually quite simple to calculate.  We just need to set the integration formula equal to the switching voltage formula, like so:

Switching Frequency Derivation

We now solve for the time t, then multiply by four (since this equation gives you the time required by one quarter of a full cycle), and invert to find the frequency:
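If you'd rather skip the algebra, the answer works out to f = 2.5 / (4 × 1.175 × R1 × C1) ≈ 0.53/(R1·C1), since the triangle ramps at 2.5/(R1·C1) volts per second and each quarter-cycle covers the 1.175V between the 2.5V starting point and the 3.675V (or 1.325V) threshold.  That should agree with the formula in the pictured derivation.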

Switching Frequency Derivation

Alright, enough of these crazy formulae.  Pictures of the final circuit?  Yes, please!

Protoboard Closeup

You’ll notice that Rx is actually a series combination of three resistors, while Ry is a series combination of two resistors.  This is because I didn’t have suitable values for Rx and Ry just lying around.  Oh well, it just makes things a little messier!  Here’s the full setup:

Full Protoboard Setup

Finally, here's a screenshot of the IOBoard oscilloscope VI I used to test out my PWM circuitry.  You'll notice that, as I mentioned earlier, the comparator's output doesn't quite reach 0V and 5V because these op-amps (TL072) do not have rail-to-rail outputs:

IOBoard Scope - 2ms Pulse Width

One final note on the schematic above.  The resistor R10 should have been unnecessary.  I initially included it because PSpice wouldn't run my simulation with U5's output floating.  However, after constructing this circuit I found it necessary for reliable servo operation.  I'm not entirely sure why this was the case; perhaps the voltage levels without R10 were slightly outside of the servo's acceptable range?  The signal on the screen certainly didn't appear much different with or without it.  Perhaps if I had a higher frequency/resolution scope I'd see something more telling…  Oh well, it may not be an issue with your servo.

Anyway, if you have any questions on this circuit or would like to make suggestions, feel free to leave a comment.  I’d love to hear about your experience.

Also, please help yourself to my PSpice files (from the Orcad 16.0 student demo).  These were used to create the schematic shown here as well as to perform simulations.

PWM Generator Schematic 2

Here’s my final bill of materials (BOM):

  • 2x TL072 Operational Amplifier
  • 1x Hitec HS-81 Servo
  • 1x 4.7uF Capacitor (Can be electrolytic, despite the slight negative voltage)
  • 1x 10kΩ Potentiometer
  • 2x 20kΩ Resistor
  • 2x 1kΩ Resistor
  • 1x 470Ω Resistor
  • 1x 119kΩ Resistor
  • 1x 166kΩ Resistor

Update (11/2/2010): It was pointed out that the TL072 may require a minimum supply voltage of 7V, so 5V could be cutting it a little close here.  Now I’ve looked through the datasheet and don’t see a specific limit mentioned, but most of the graphs do only go down to ±3.5V (which I suppose you could interpret as 7V).  Regardless, the circuit works fine with a single 5V supply, although the outputs don’t go rail-to-rail (which is normal operation), as I mentioned earlier.  The real concern with most op-amps is their upper supply limit, as you don’t want to fry anything.  Also, you might be more interested in the TL074 for this project, which contains four op-amps in one package.  I didn’t happen to have a quad op-amp lying around when I built this circuit, hence the two duals.


So You Want to Use PWM, Eh?

PWM Waveform Captured on an Oscilloscope

Pulse-width modulation. It probably sounds a little confusing if you're new to electronics. Kindof a word mashup, really. What do pulses, width, and modulation have to do with each other anyway? I remember first learning about PWM during my freshman year of college at RPI. I was in a pilot course called "Foundations of Engineering" under the excellent instruction of Professor Kevin Craig (whom I later worked for). I remember thinking later, "Hey, this PWM stuff is pretty clever!" So let's take a look at PWM and see what we can learn. (If you're already familiar with the basics of PWM, skip down a few paragraphs for more advanced topics and experiments!)

Say you’ve got a light-emitting diode (LED) and a battery. If you connect the two directly, the LED should produce a lot of light (assuming the voltage of the battery isn’t too high for the LED). But what if you wanted to reduce the amount of light that LED produces? Well, you could add a resistor in series with the LED to reduce the amount of current supplied by the battery. However, this won’t allow for easily adjustable brightness and may waste a bit of energy. That loss may not matter for a single LED, but what if you’re driving several high-power LEDs or light bulbs? This is where pulse-width modulation comes into play.

PWM Graph - 30% Duty Cycle

Imagine you could connect and disconnect the LED and battery multiple times per second, causing the LED to flash or pulse (see graph above). If this ON-OFF cycle is fast enough, you won't even notice the blinking. In fact, the LED will appear to be continuously lit, but reduced in brightness. In addition, its brightness will be proportional to the fraction of each cycle that it spends on. In other words, if the LED is connected for 30% of a pulse cycle, it will appear to be producing about 30% of its full brightness continuously, even though it's actually turning completely on and off. So to adjust the brightness of the LED, all we need to do is adjust, or modulate, that ON-OFF ratio, also known as the pulse width – hence the name! The fraction of each cycle spent on is also commonly called the duty cycle.

Now in case you’re imagining yourself frantically flipping switches on and off, or tapping wires against battery terminals, you can stop. Just put a transistor in series with your LED! It can act as a switch which can be controlled by a microcontroller or some type of oscillator circuit (see links below).

Hobby Servo (Commanded via PWM)

So what's PWM good for, anyways? Well, dimming LEDs and other lights is just one of a number of applications (example). You'll also find PWM used in motor controllers. You can make a very simple DC speed control using a PWM generator and a single transistor (examples – notice the extra diodes in use here to prevent damaging inductive spikes). In addition, PWM is very important for some types of power supplies; specifically the aptly-named "switched-mode" PSUs. This technique can also be used to create a digital to analog converter (DAC) by low-pass filtering the square wave. Finally, pulse-width modulation is sometimes used as a means of digital communication. For example, to command the position of a hobby servo.

Now you may be wondering why I’m writing about PWM all of a sudden. Well, there’s actually a point to all of this background information. By now, you’ve probably seen a car or two with these new-fangled LED tail lights. They’re pretty easy to spot since you can typically make out the individual LEDs within the whole tail light assembly:

Ford LED Tail Light Upgrade - Ain't that a Fancy Photo?
But have you ever noticed that on some cars (e.g. Cadillacs), these lights tend to flicker? You may not see it if you’re looking straight ahead, but if you quickly move your eyes from left to right, you may catch a glimpse of the flicker created by a low-frequency PWM controller. Now, call me strange, but I find this really annoying and distracting. Maybe I just have fast eyes or something, but I hate flicker. Back in the days of CRT monitors I could usually tell the difference between 60Hz and 70Hz refresh rates. But in the case of these tail lights, it sounds like there’s danger for people with photosensitive epilepsy. According to the Epilepsy Foundation, flashing lights in the 5 to 30Hz range can trigger seizures. Obviously, having a seizure while driving would not be a good thing for anyone.

By the way, if you’re ever trying to determine the frequency of a blinking light, just snap a couple pictures while moving your camera (or the light). The one catch is that you need to be able to specify a known shutter speed. Then you just have to count the blinks and divide by the shutter speed (in seconds) to find frequency. Here’s an example:

LED PWM Frequency Comparison

This method can also give you a pretty good indication of duty cycle – in this case it looks to be about 60%. Here’s a second shot I took while on the road one night. You can tell the streetlights are running on 60Hz AC (although they’re not LEDs so they never go completely dark during a cycle), while the green stoplight is likely getting DC:

Pulsing Streetlights

I’m thinking this long-exposure shot might also pass as modern art in some circles.

The Advanced Stuff

So what's the deal with these awful low-frequency PWM tail lights? Well, one reason you might choose a lower frequency is to save on energy lost during switching. Both LEDs and the transistors used to drive them have parasitic capacitance. In other words, they store a very very small amount of energy (think nanojoules) each time you turn them on. This energy is consumed in addition to the steady-state power drawn by the LED to provide illumination. Furthermore, this stored energy is rapidly dissipated (and thus not recovered) each time the device turns off. Now if you're turning an LED on and off fifty times per second, it's probably no big deal. But what if you wanted to eliminate any possibility of flicker by driving the frequency up into the kilohertz range? Would this introduce substantial power loss? I was curious, so I set up a simple experiment to find out.

Test Setup
The heart of this test circuit is fairly simple – two bright red LEDs (Model OVLBR4C7) along with 92Ω current-limiting resistors controlled by a BS170 MOSFET. To measure the power consumed by this circuit, I've taken a non-traditional approach. Because I was worried that the cheap ammeters I have available would be thrown off by varying PWM frequencies, I decided to measure power consumption based on the voltage drop of a supercapacitor discharged over a fixed period of time. And who doesn't love supercaps, anyways?

The theory is pretty simple. The energy stored in a capacitor is equal to ½*C*V² (Joules). So all I had to do was charge up the cap, measure its voltage, let the circuit discharge it over a fixed period of time, then measure the final cap voltage. For my 2.5F capacitor (from NessCap), I chose ~60 seconds as my discharge period. Here’s a screenshot of the voltage logging application I used to collect my test data:

IOBoard Test Program
The white line in the graph above plots the capacitor voltage during discharge. The red line indicates the voltage measured across a phototransistor (L14C1). This was used to quantify the amount of light produced by the LEDs at each test point. To get a better measurement I covered the LEDs and phototransistor with an opaque plastic cup, then covered the whole setup with a shoebox and turned off the lights. I was trying to see if, for some reason, the intensity of the LEDs was non-linear with respect to duty cycle or was affected by PWM frequency. Unfortunately this data turned out to be rather boring, but I’ve still included it in my summary spreadsheet which you can download below.
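Turning those logged voltages into the power figures in the tables below is just the capacitor energy formula applied twice.  Something like this – the start/end voltages here are only illustrative; the real ones live in the spreadsheet linked at the end:

```c
#include <stdio.h>

/* Average power drawn from the supercap over a test: the energy difference
 * 0.5*C*(V1^2 - V2^2) divided by the discharge time. */
static double avg_power_mw(double c_farads, double v_start, double v_end,
                           double seconds)
{
    double joules = 0.5 * c_farads * (v_start * v_start - v_end * v_end);
    return 1000.0 * joules / seconds;
}

int main(void)
{
    /* 2.5 F cap, ~60 s test window; the voltages below are made-up
     * placeholders, not logged data. */
    printf("%.1f mW\n", avg_power_mw(2.5, 4.25, 3.70, 60.0));
    return 0;
}
```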

Now before I go on, you're probably wondering what sort of data acquisition hardware I'm using. Well I doubt you've heard of it as it hasn't yet been commercially released. Right now it's being called the RPI IOboard. It's a pretty impressive piece of hardware with dual 12-bit, 1.5MSPS ADCs, dual 14-bit, 1.4MSPS DACs, and a host of digital I/O all powered by a 400MHz Blackfin processor. For the past few years it's been developed at RPI and tested at a number of schools across the country. However since the project's lead professor, Don Millard, left RPI last year, I'm not exactly sure what will become of the board. The screenshot you see above is actually one of several executable VIs I developed as examples for use with the board. Further information on the hardware can be found here.

Test Setup Closeup
So back to the experiment at hand. For my first round of testing, I utilized the IOBoard to generate varying PWM signals for the MOSFET. Thus, the current required to drive the BS170 was not included in my first measurements. I varied both frequency and duty cycle for three pairs of LEDs: white (C513A-WSN), red (OVLBR4C7), and green (OVLBG4C7).

TABLE 1: Data for power consumption tests without gate-drive losses:

WHITE LED
Frequency    30% Duty    60% Duty    90% Duty
50 Hz        36.15 mW    62.08 mW    84.89 mW
300 Hz       36.26 mW    63.50 mW    85.12 mW
10 kHz       38.75 mW    64.25 mW    86.14 mW
100 kHz      38.52 mW    62.80 mW    86.59 mW

RED LED
Frequency    30% Duty    60% Duty    90% Duty
50 Hz        54.70 mW    93.82 mW    123.75 mW
300 Hz       57.76 mW    93.81 mW    125.35 mW
10 kHz       56.99 mW    94.00 mW    126.08 mW
100 kHz      56.61 mW    95.11 mW    125.47 mW

GREEN LED
Frequency    30% Duty    60% Duty    90% Duty
50 Hz        41.49 mW    71.29 mW    91.65 mW
300 Hz       41.93 mW    70.29 mW    91.69 mW
10 kHz       41.90 mW    69.96 mW    93.36 mW
100 kHz      42.57 mW    69.71 mW    93.58 mW

So if you look through the data above, you’ll notice that there is, on average, a slight positive correlation between power consumption and frequency. In other words, the higher the switching frequency, the greater the power consumption. This is just what we would expect. Again, this data does not include losses due to transistor gate capacitance, only losses due to the LEDs’ capacitance and the MOSFET’s output capacitance.

For my next test, I wanted to see what losses might be incurred in driving the MOSFET’s gate. Thus, I called on my trusted 8-bit AVR microcontroller (ATMega644P). I wrote a very simple program (which may be downloaded below) to produce a varying PWM output from one of the MCU’s timer/counter outputs. I then measured the power consumption of the entire circuit, AVR included. For this test I only used a 60% duty cycle:

TABLE 2: Data for the ATMega644 driving a BS170 and two green LEDs:

Test Frequency    Total Average Power (mW)    Calculated Switching Losses (mW)
50 Hz             91.741                      0.000
300 Hz            92.708                      0.000
10 kHz            92.622                      0.016
100 kHz           92.978                      0.157
1 MHz             95.789                      1.568

TABLE 3: Data for the ATMega644 driving a FDP8860 and two green LEDs:

Test Frequency    Total Average Power (mW)    Calculated Switching Losses (mW)
50 Hz             93.475                      0.004
300 Hz            95.809                      0.021
10 kHz            98.238                      0.710
100 kHz           114.526                     6.848
1 MHz             161.657                     60.914

TABLE 4: Data for the ATMega644 directly driving two green LEDs:

Test Frequency    Total Average Power (mW)    Calculated Switching Losses (mW)
50 Hz             69.278                      0.000
300 Hz            67.926                      0.000
10 kHz            68.778                      0.015
100 kHz           68.534                      0.147
1 MHz             70.708                      1.467

Discussion of Results

In Tables 2-4, we're starting to see a much clearer positive correlation between frequency and power consumption. For these tests I also added a fifth data point not gathered with the IOBoard: a frequency of 1 MHz. This should in theory increase our maximum losses by 10x. The results seem to support this prediction.

The tables above also include a rudimentary calculation for switching losses based on capacitances. I measured the capacitance of my green LEDs to be about 120pF (this value was not mentioned in the datasheet). The gate capacitance of the BS170 is given in its datasheet as 24pF. Finally, the input capacitance of the FDP8860 (a much beefier power MOSFET) is typically listed as 9200pF. To determine switching losses I again applied the formula for a capacitor’s stored energy (½*C*V²). At each switching interval, the parasitic capacitances in the circuit store and then dissipate this much energy. So to determine how much power is lost, we simply multiply this lost energy by the switching frequency (since 1 watt = 1 joule/sec). It appears that these calculated figures match the measurements fairly well. Isn’t it nice when math agrees with reality? Gives me a fuzzy feeling, that.
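Here's that estimate wrapped up as a function you can feed your own parts into.  The capacitances below come from the measurements and datasheets mentioned above; the 3.6V swing is just an illustrative drive level, not a measured one, so expect the output to land in the same ballpark as the table values rather than matching them exactly:

```c
#include <stdio.h>

/* Capacitive switching loss: the energy 0.5*C*V^2 parked on a parasitic
 * capacitance is thrown away every cycle, so P = 0.5 * C * V^2 * f. */
static double switching_loss_mw(double c_farads, double v_swing, double f_hz)
{
    return 1000.0 * 0.5 * c_farads * v_swing * v_swing * f_hz;
}

int main(void)
{
    /* ~120 pF per green LED, 24 pF for the BS170 gate, 9200 pF for the
     * FDP8860 (figures quoted in the text).  3.6 V is an assumed swing. */
    double v = 3.6;
    printf("Two LEDs + BS170 gate @ 1 MHz : %.2f mW\n",
           switching_loss_mw(2 * 120e-12 + 24e-12, v, 1e6));
    printf("FDP8860 gate alone    @ 1 MHz : %.2f mW\n",
           switching_loss_mw(9200e-12, v, 1e6));
    return 0;
}
```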

Now we can essentially think of the 50Hz test point as a baseline with zero switching loss. For the data in Table 4, the 50Hz power consumption is about 69.3mW. The calculation predicts that at 1 MHz, we'll lose 1.5mW to parasitic capacitance for a total consumption of 69.3 + 1.5 = 70.8mW. This isn't that far from our measured 70.7mW.

It’s also interesting to note the substantially higher losses incurred when using the FDP8860. This is largely due to its (relatively) enormous input capacitance of 9200pF. This is nearly 400x the capacitance of the tiny BS170. That’s the price you pay for the ability to sustain larger currents without overheating. For more information on power MOSFETs have a look at this IRF document called “Power MOSFET Basics.”

Summary

Well after all that, I’m going to say that whoever manufactures these tail lights can’t really use efficiency as an excuse for choosing a low switching frequency. Unless they need huge FETs to drive huge currents, switching losses really aren’t so much of an issue. I’m guessing that somehow it was just cheaper to go with a low frequency. I’m pretty sure the components themselves aren’t any cheaper, but perhaps the assembly was less expensive. It may be that some automakers already had a low-frequency module in place to drive old incandescent bulbs and then when LEDs came along they just kept using that same module. Anybody out there care to comment on this?

So my advice to those making LED dimmers: pick a frequency of about 300-500Hz to eliminate flicker while keeping switching loss low. Then find yourself a sufficiently large transistor with low capacitance and low on-resistance. And if you’re working on motor controls or power supplies, things get a lot more interesting, but as a start, try a frequency in the 20+ kHz range to avoid audible whine. Good luck!

  • For further reading on LED losses, try this NI article: Light Emitting Diodes.
  • For more accurate MOSFET switching loss formulae, try this MAXIM article.
  • Test code for the ATMega644P is available here.
  • A complete spreadsheet containing all data can be downloaded here.

Update (9/22/2010): In the comments below, Jas Strong pointed out that in my switching loss calculations, I’d also neglected the power lost in the MOSFET during turn-on. Jas is absolutely correct about that; I should have mentioned this previously. Essentially, while the gate capacitance of the MOSFET is charging, the resistance between drain and source will pass from very high to very low resistance as the conduction channel is formed. This time period, although short, includes a region of, shall we say, “moderate” resistance which briefly dissipates additional power.

Now, in the case of my two-LED test setup, I neglected the effects of resistive switching loss because they're quite small. Let's take a quick look at the numbers. First, we need to know how long it takes Vgs to reach the threshold voltage. For simplicity, I'm going to assume that my AVR drives the gate with a constant current of 40mA (the maximum an AVR will provide per I/O pin). Our worst-case turn-on time will occur with the FDP8860, which has a gate capacitance of 9200pF and a typical threshold voltage of 1.6V. Using the formula ic = C*(dv/dt), I find dv/dt = 4,347,826 V/s, which means we reach Vth in 1.6/4,347,826 = 368ns. At a switching frequency of 1 MHz, this represents about 37% of a switching cycle. However, we need to double this since we lose power during turn-on and turn-off. Thus, we're losing energy in the MOSFET's resistance over 74% of a single cycle at 1 MHz. That sounds like a lot, but just how much energy is actually lost?

To determine this loss, I'm going to make a big assumption and say that the MOSFET ramps linearly from 20kΩ down to 0Ω during turn-on. I'm also going to assume the voltage of the diode is constant at 3V and the power supply is constant at 4.2V. Remembering that I have 92Ω resistors in series with the LEDs, the instantaneous power dissipation in the FET becomes 2*Rmos*[(4.2-3)/(92+Rmos)]^2 (based on the fact that I have two LEDs and using the formula P = RI^2 and Ohm's law, I = V/R). Now I need to integrate to determine an average power dissipation over this interval. If my math is correct (feel free to check me), I get a loss of 0.632mW. Since this occurs during 74% of a cycle, the total loss at 1 MHz will be about 0.468mW. Not too serious in my opinion.

Now of course, the power required by my two-LED setup is piddly in comparison with that drawn by a couple brake lights. Once you start sinking more current into your LEDs, this resistive switching loss, as well as the on-resistance of your MOSFET, is going to start to make a bigger difference. So thanks very much Jas for pointing this out!
