Automated Amplifier Testing with Tractor

We previously went through a quick characterization of a tube guitar amplifier in the blog post located here, with a promise that we’d come back and automate the testing.

That is the topic of today's post.

Background

In automating the testing, there are a few things to keep in mind. Remember, factory testing during production is very different from lab testing during development. The former verifies the design was correctly assembled. The latter ensures the design itself is correct.

We can verify the design was assembled correctly by spot-checking the board at both nominal conditions and at the limits of operation. In other words, look closely at how it performs when a very quiet signal is presented, or a very loud one, or a signal at the upper or lower frequency limits of operation. There are a few things to look for. First, a part could have been inadvertently omitted on a particular board. That most often occurs when a part "tombstones" during reflow: if the amount of solder paste under the pads of a lightweight part isn't equal due to a placement error (e.g., the part partially missed the pads), then during reflow the pads will exert unequal forces on the part, and one side of the part can lift up off the board. Effectively, the part isn't present at all, because it's not connected on one side.

Next, consider that the wrong part could have been hung on the line. This could mean that where you expected a 1K resistor, a 12 pF capacitor was placed instead--on every single board. These are catastrophic problems that will be found on first inspection. But a more difficult problem to detect might be that the 1K resistors were swapped with the 1100 ohm resistors. A solid, repeatable test procedure ensures you can find that in minutes rather than hours. A comment such as "The first five amps we ran right off the line have all failed nominal gain at 1 kHz" is a clue the line was set up wrong.

Once we're sure the board was assembled correctly, we can move on to verifying the product was assembled correctly. This means ensuring all the switches work and the knobs have been wired correctly (e.g., the wiring isn't reversed).

Finally, we want to let a human listen to the amp, twist the pots while listening for scratchiness (which is very hard to pick up algorithmically), and verify the indicators are functioning.

It's better to collect more information than you think you'll need when you start. This allows you to build statistical confidence in your design. Collecting and storing test data is cheap, and there's nothing better when diagnosing a customer return than being able to re-run the factory test on the device and see what has changed. That can give insight into long-term failure trends that are otherwise tough to determine. If a device left your factory with a THD+N of -85 dB, but it came back because the customer claimed the playback was very noisy, and you now measure a THD+N of -40 dB, you might have a first indication of an ESD problem where an opamp was degraded by an ESD strike. That info would allow engineering to take a more proactive approach to improving the product.

Building the Amp Test in Tractor

For this effort, we're using version 0.9 of Tractor, available here. Tractor is our open-source software designed to automate testing. From capturing a serial number, to running the tests, to saving all the results to a database and letting you query stats on the production run--that is what Tractor does, without any coding required.

We start by opening Tractor:



Let's add a first test to capture a serial number. We click the “Add Test” button above, and we get a dialog for selecting the type of test we'd like to add.



The test we want to run is IdInputA00. This test will prompt the operator to complete an action; in this case, we want them to enter the serial number of the device being tested. Note that there are categories of tests we could have selected from. The "Operator" category contains the tests that require the participation of the operator--either to enter a serial number, verify an indicator, or change cabling.



Clicking OK takes us back to the home screen, and we can see the new test has been added. In the test plan pane on the left side we can see the test has been given a unique name, “IdInputA00-0”, and on the right side we see we can change some details about that particular test. Towards the bottom, we can see the test description and whether there are any issues that would prevent it from running right away.



We can fill in the Prompt Message in the Test Details pane to ask the user to enter a serial number, and then hit OK to save the changes.



Let’s hit the Run Tests button and see what happens:



At this point, the main screen goes away and is replaced with the Operator Screen. And we can hit “Start” to kick off the test we just created:



A prompt shows up asking us to enter the serial number. If you have a scanner, scan it now. Otherwise, just enter “Unit 1” and press Enter.




We're greeted with a big green “PASS” because we just passed our first test, which instructed the operator to scan a barcode.

Now, let's go back and add another test. This time, we'll use the PromptA00 test, which instructs the operator to complete an action.



Back at the main screen, we can see the test details for this test.

Above, we see we have a prompt message, which is set to “Set controls as shown and verify power illuminated”.

We've also specified a bitmap file, which we prepared in another program. This bitmap will be displayed to help the operator recall the correct positions for the controls.

Because we've asked the operator to “verify power illuminated,” we need to consider the possibility that the test might fail. That is, if the operator sees the indicator lamp isn't lit, they need to be able to flag the test as failed. For that reason, we want to display the Fail button.

Now, we can go and run the tests again, and this time after we’re asked to enter a serial number we see the second test that we just created:



Above, we’re instructing the operator to set the control positions as shown and verify the lamp is illuminated.

That was the second test. Now that the amp is set up as we expect, let's run the audio tests.

On to the Audio Tests

Let’s formulate a basic set of audio tests we’ll run to evaluate this amp:

Noise

First, let's measure the noise of the amp without any signal present at the input. From previous measurements, we might expect this to be between -60 dBV and -50 dBV in a 20 Hz to 20 kHz bandwidth. The noise test is called RmsLevelA01. It measures the RMS level of the signal in a given bandwidth.

For this test, we'll use a 16K FFT size. Larger FFT sizes resolve signals buried in noise a bit better, but for a broadband measurement we don't need to resolve tones from noise--speed matters more. We could probably go smaller here too, but we do want to clearly resolve the 60 Hz components in saved graphs. More on that later.
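As a quick sanity check on that choice, the bin width of an FFT is just the sample rate divided by the FFT size. A one-liner, assuming the QA401's 48 kHz sample rate:

```python
# FFT bin width = sample_rate / fft_size.
# The 48 kHz sample rate is an assumption (the QA401's native rate).
sample_rate = 48_000   # Hz
fft_size = 16_384      # the "16K" FFT selected above

print(f"bin width: {sample_rate / fft_size:.2f} Hz")  # ~2.93 Hz
```

At roughly 2.93 Hz per bin, the 60 Hz components stand well clear of their neighbors in a saved graph.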

We've opted to measure the left channel only (since the amp is mono), and we've specified test limits of -60 dBV to -50 dBV. In other words, a measurement of -65 dBV would fail, but a measurement of -55 dBV would pass. We can tighten this window up after we've run a few hundred amps and looked at the yields.

The Analyzer Input Range is 6 dBV, which means the attenuator relay will be off. If we had specified 26 dBV, the attenuator would be engaged.

And lastly, we're specifying that the RMS measurement is to be done from 20 Hz to 20 kHz.
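For the curious, here's a rough sketch in Python of what a band-limited RMS measurement like this is conceptually doing. This is not Tractor's code--the function name is ours, and it assumes the capture is a time-domain record in volts:

```python
import numpy as np

def band_rms_dbv(signal, sample_rate, f_lo=20.0, f_hi=20_000.0):
    """Band-limited RMS level in dBV, computed via Parseval's theorem."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)                    # one-sided spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)

    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Factor of 2 restores the power in the discarded negative frequencies
    # (valid here because the band excludes DC and Nyquist).
    mean_square = 2.0 * np.sum(np.abs(spectrum[band]) ** 2) / n ** 2
    return 10.0 * np.log10(mean_square)               # = 20*log10(RMS)

# A fake 1 mV RMS white-noise capture should land near -60 dBV.
capture = np.random.randn(16_384) * 1e-3
print(f"{band_rms_dbv(capture, 48_000):.1f} dBV")
```

A value returned by something like this is then simply compared against the -60/-50 dBV limits to decide pass or fail.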

Gain

Next, let's check the gain of the amp at 200 Hz (roughly the open G string on a guitar) with a very small signal: -80 dBV. From previous measurements, we'd expect the gain to be 40 to 50 dB.

Then we'll check the gain at a high signal level (for a guitar) of -25 dBV. From previous measurements, we'd expect that gain to be 35 to 45 dB. Some quick math: -25 dBV + 40 dB of gain = 15 dBV of output level, which exceeds the 6 dBV input range of the QA401. So, in this case, we specify the 26 dBV input range on the QA401.
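The bookkeeping behind that range choice is simple enough to script. A hypothetical sketch (the function is ours, not Tractor's; 6 and 26 dBV are the QA401's two input ranges):

```python
def pick_input_range(stimulus_dbv, max_gain_db, ranges_dbv=(6.0, 26.0)):
    """Return the smallest analyzer input range that won't clip at the
    worst-case (upper-limit) gain. Illustrative only."""
    worst_case_out_dbv = stimulus_dbv + max_gain_db   # dB math: just add
    for r in sorted(ranges_dbv):
        if worst_case_out_dbv <= r:
            return r
    raise ValueError(f"{worst_case_out_dbv:.1f} dBV exceeds every range")

print(pick_input_range(-80.0, 50.0))   # -30 dBV worst case -> 6.0
print(pick_input_range(-25.0, 45.0))   # +20 dBV worst case -> 26.0
```

Note the worst case uses the upper gain limit (45 dB), not the nominal 40 dB, so a hot-but-passing amp still won't clip the analyzer.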

Next, let's verify THD with a -25 dBV input at 1 kHz. We'd expect some gritty distortion (this is a tube guitar amp, after all); from previous measurements, that distortion should be between -10 and -5 dB. Verifying distortion at high levels is a very good way to confirm your internal DC rails are where they're supposed to be, and that they can deliver the current the amp needs at full power. And since we're testing against a required gain, we implicitly know we're achieving a certain level of output power. In other words, check the gain, not the power.
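To see why the gain check stands in for a power check, run the numbers: a known stimulus plus a minimum-gain limit pins down the output voltage, and V^2/R gives the power. Here's the arithmetic, assuming a hypothetical 8 ohm load purely for illustration:

```python
stimulus_dbv = -25.0    # input level used in the test above
min_gain_db = 35.0      # lower gain limit from the test above
load_ohms = 8.0         # assumed load, for illustration only

out_dbv = stimulus_dbv + min_gain_db      # 10 dBV at the output
v_rms = 10 ** (out_dbv / 20.0)            # ~3.16 V RMS
watts = v_rms ** 2 / load_ohms            # ~1.25 W minimum
print(f"{v_rms:.2f} V RMS -> {watts:.2f} W into {load_ohms:.0f} ohms")
```

So any amp that passes the gain test at this stimulus is necessarily delivering at least that much power.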

We want to verify the tone knob is working as expected at the max setting too, so we'd prompt the operator to set the tone control to max and then take a measurement at 1 kHz.

Previously, we'd characterized the tone control as follows, and you can see the gain at 1 kHz with the tone knob fully CW is around 54 dB at -60 dBV output.

To capture that gain measurement, we use the following:



And finally, we'd like to let the operator audition a WAV file of a guitar being played so that the operator can verify the knobs are free from scratchiness when rotated. That test is called Audition01.

Here’s our final list of tests:



Next, we'll quickly scan through the list and re-order the tests based on attenuator settings. Because a settling time of about a second is required after the attenuator state changes, grouping the attenuator-off tests at the beginning and the attenuator-on tests at the end saves a bit of time.
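Conceptually, that re-ordering is just a stable sort on the attenuator state. A quick sketch (the test names and tuple layout are illustrative, not Tractor's format):

```python
tests = [
    ("Gain 200 Hz @ -25 dBV", 26),   # (name, analyzer input range in dBV)
    ("Noise 20 Hz - 20 kHz",   6),
    ("THD 1 kHz @ -25 dBV",   26),
    ("Gain 200 Hz @ -80 dBV",  6),
]

# Python's sort is stable, so relative order within each group is kept:
# the 6 dBV (attenuator off) tests run first, then the 26 dBV tests.
tests.sort(key=lambda t: t[1])

relay_changes = sum(a[1] != b[1] for a, b in zip(tests, tests[1:]))
print(f"{relay_changes} attenuator change(s) -> ~{relay_changes} s of settling")
```

The interleaved list above would have toggled the relay three times; grouped, it toggles once.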

After re-ordering, we end up with this test plan.



And now, we rename the tests so that we can more easily see what each one does.



And a short video of the final test running on the amp:

 

Cloud Database Logging

Tractor will log your results to a SQL database you run yourself (Microsoft's free SQL Server Express is what was used for development). But the latest release of Tractor can also use a cloud database.

The benefit of a cloud database is that it's effectively infinitely scalable, backed up, and free: QuantAsylum runs the cloud database so that you can store your test data without having to worry about running your own SQL server. The cloud runs on Azure, Microsoft's cloud computing platform. We will store your test data for up to a year. If you want to store it even longer, just ask.

The drawbacks of using the cloud database exclusively are that you won't be able to save screen captures of all measurements, and your ability to query specific test data is more limited. More on that below.

Continuing our demo, let's make sure that we save the test results for all future runs to this cloud database. To do that, we go into the Settings menu and indicate we want to use the Audit Database. The audit database doesn't store screen captures or other graphics the way the SQL database does: it just stores test results, both pass and fail, as they happen.



With audit database logging enabled, tests are written to your local drive as they complete, and then a background thread uploads them to the cloud database at about one test per second. In the capture below, you can see the depth of the queue has grown to 3 tests; in one second, it will drop to 2 tests. The local queue ensures that if you lose internet connectivity, your device test data isn't lost--it will accumulate on the drive until the connection is restored.


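That store-and-forward behavior is worth a sketch. Below is a minimal illustration of the pattern in Python, using an in-memory queue where Tractor actually uses the local drive; upload_to_cloud() is a placeholder, not a real Tractor or Azure API:

```python
import json
import queue
import threading
import time

local_queue: "queue.Queue[dict]" = queue.Queue()

def upload_to_cloud(result: dict) -> None:
    # Placeholder: a real implementation would POST to the cloud service
    # and retry on failure, so losing connectivity never loses data.
    print("uploaded:", json.dumps(result))

def uploader() -> None:
    while True:
        result = local_queue.get()   # blocks until a result is queued
        upload_to_cloud(result)
        time.sleep(1.0)              # ~1 test per second, as noted above

threading.Thread(target=uploader, daemon=True).start()

# Each completed test just drops its result on the queue and moves on.
local_queue.put({"serial": "555", "test": "Noise", "dbv": -55.2, "pass": True})
time.sleep(2.0)                      # let the demo drain before exiting
```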

If you want to look up the results of a unit that was previously tested and saved to the cloud, you can do that via the Tools->Query Cloud option. In the capture below, on the left side a query has been run for a particular unit's serial number (555 in this case). On the right side, we've queried for stats on the measured noise floor across all units. The database has 5 data points, and reports a mean noise floor of -50.79 dB with a standard deviation of 0.09 dB. Very handy to know.



If you want a local chronological history of all the tests run by Tractor, they are stored in an HTML file on your machine, which you can access by going to Tools->Open Log in Browser.

That log makes it easy to see the results of tests, and you can click on the Screen links and see the actual screen capture from the test. Very handy if you are trying to track down why a failure occurred.


Summary

If you aren't capturing test data on every unit that leaves your factory, you should be. More and more, product liability insurers are interested in knowing you are delivering products with a measurable level of quality. Defect measurement is key here.

It should take under an hour to sit down and walk through the example above on your own amp. We’d love to hear what you are doing and what features you might need. Just let us know.

If you liked the post you just read, please consider signing up for our mailing list at the bottom of the page.