I have been fortunate enough to be selected as a reviewer for numerous types of test equipment over the years, and as a big advocate for SCPI automation, I have seen differences between products in terms of their remote-control capabilities.
When it comes to datasheets, SCPI remote-control performance is often boiled down to a single, vague figure. Whether it says <5ms, <30ms or <120ms typical, it is hard to know which instrument is actually faster. Of course, whether this matters at all depends on your application and needs – if you’re reading meter measurements that only update at a rate of 4Hz, and it is the only instrument you’re working with, then it probably isn’t so critical. But if you’re looking for fast sampling, have numerous configuration commands to upload between measurements, or are running sequences involving multiple instruments (especially with a single-threaded program) without the benefit of a global trigger but still want “near-simultaneous” measurements, then a faster command response time could really be beneficial.
I thought this would be rather simple to measure with a short program, but looking around, there doesn’t seem to be much in the way of benchmarks. The only thing I came across was the benchmark feature in lxi-tools, which is quite limited as it only supports LXI LAN instruments and only benchmarks the *IDN? command.
As it would be useful for an upcoming RoadTest, I decided to write a benchmark of my own, called scpibenchv1.
SCPI Benchmarking 101
Benchmarking SCPI performance can be quite simple – in some cases it just entails firing the same command at the instrument a hundred times, timing how long it takes to receive all the responses, and dividing that by a hundred to get the average time per command. This isn’t wrong, but there are some intricacies worth mentioning.
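As a minimal sketch of that naive approach using pyvisa (the resource string is only a placeholder – substitute your own):

import pyvisa
import time

rm = pyvisa.ResourceManager()
inst = rm.open_resource("TCPIP0::192.168.xxx.xxx::inst0::INSTR") # placeholder resource string
inst.timeout = 10000 # allow up to 10s for slow commands

LOOPS = 100
start = time.time()
for _ in range(LOOPS):
    inst.query("*IDN?") # send the query and wait for its response
elapsed = time.time() - start
print("Average: " + str((elapsed / LOOPS) * 1000) + " ms per command")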
The first is to understand what you are measuring – I have depicted this in the above diagram for one query (in green). The measured time can be thought of as consisting of transmission time in both directions (exaggerated, in purple) plus the instrument’s processing time. This means the results you obtain will not necessarily match the datasheet’s figures, as those may only account for the processing time (in red).
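Put roughly as a sum (my own shorthand, not a datasheet definition):

measured time ≈ transmit(command) + instrument processing time + transmit(response)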
This means that depending on your choice of VISA layer, your operating system and the physical interconnection between your PC and the instrument, the results are likely to vary. Deriving the true instrument processing time is not really possible, but that is probably not of great importance to a user anyway.
The second issue to consider is that for many write requests, there is no way of knowing when they have actually completed. If you just issue SOUR:VOLT 5, the instrument will not respond in any way. You could issue it hundreds of times in a row and, at best, get no useful timing out of the benchmark; at worst, you could overflow the instrument’s input buffer or cause it to lock up entirely.
Instead, such write requests must be followed by a query of some sort that elicits a response from the instrument. In my case, I prefer to use *OPC?, which asks the instrument whether all pending operations are complete. It returns 1 as soon as everything is done, so we have a firm indication that the previous request has completed.
Unfortunately, this creates a situation where the measured time (in green) includes both the write and the query. There is no perfect way to derive the actual processing time for the write alone, but we can make an estimate by timing how long *OPC? takes on its own and subtracting that from the time for the two-command sequence. This is not perfect, so the benchmark does not do it automatically as it may introduce new sources of error. Instead, it seems most sensible to report times for individual queries or “combinations” of commands.
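As a worked example using the NGM202 figures from the sample output further below: SOUR:VOLT 5 followed by *OPC? averaged about 1.72ms, while *OPC? alone averaged about 0.76ms, so the voltage-set command itself can be estimated at roughly 1.72 - 0.76 ≈ 0.96ms.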
The scpibenchv1 Program
I have called the program scpibenchv1 and the code is listed in the Appendix which follows this posting.
The code has been written to be somewhat user-friendly – it can be invoked with a single argument, the VISA resource identifier. If one is provided, it is used; otherwise, the script lists the available resources and asks the user to select one.
Once a resource is selected, the code probes the instrument to check for the presence of a few mandatory commands needed by the benchmark, then runs through each command I have defined in the code (a mix of common commands and commands mostly specific to power supplies), reporting the average execution time for each command (or command set) over a fixed number of iterations (1000).
An example of the benchmark’s output looks as follows:
SCPI Command Benchmark v1.0 by Gough Lui (goughlui.com)
----------
No SCPI resource string provided!
Invoke command with SCPI resource string as argument or choose device
List of devices:
0: TCPIP::192.168.xxx.xxxx::INSTR
1: USB0::0x0AAD::0x0197::3638.4472k03-100856::INSTR
2: TCPIP0::192.168.xxx.xxx::inst0::INSTR
3: TCPIP0::192.168.xxx.xxx::inst0::INSTR
4: ASRL3::INSTR
Which Device Number? 1
Probing required commands ... OK!
Connected to: Rohde&Schwarz,NGM202,3638.4472k03/100856,03.034 08E58D74C1E as USB0::0x0AAD::0x0197::3638.4472k03-100856::INSTR
----------
Benchmarking *RST+*OPC?
Average over 1000 loops: 98.57900190353394 ms
Benchmarking *IDN?
Average over 1000 loops: 1.034827470779419 ms
Benchmarking *OPC?
Average over 1000 loops: 0.7587606906890869 ms
Benchmarking *STB?
Average over 1000 loops: 0.7261652946472168 ms
Benchmarking OUTP 1+*OPC?
Average over 1000 loops: 26.44062042236328 ms
Benchmarking SOUR:VOLT 5+*OPC?
Average over 1000 loops: 1.715118169784546 ms
Benchmarking SOUR:VOLT?
Average over 1000 loops: 0.8286237716674805 ms
Benchmarking SOUR:CURR 1+*OPC?
Average over 1000 loops: 2.9131903648376465 ms
Benchmarking SOUR:CURR?
Average over 1000 loops: 1.1234571933746338 ms
Benchmarking MEAS:VOLT?
Average over 1000 loops: 1.1209640502929688 ms
Benchmarking MEAS:CURR?
Average over 1000 loops: 0.9687156677246094 ms
Benchmarking READ?
Average over 1000 loops: 99.98959684371948 ms
Benchmarking FETC?
Command not supported!
Benchmarking OUTP 0+*OPC?
Average over 1000 loops: 26.41445302963257 ms
Benchmark Complete!
If something goes wrong, you will likely receive the message below, followed by a full trace-back which might give a hint as to why it did not succeed.
Probing required commands ... Failure on a required command for this benchmark. Exiting.
This could be because the wrong device was selected (or input), or the device itself does not support one of the commands required by the benchmark – namely *OPC?, SYST:ERR?, *RST and *IDN?. Without these commands, it becomes difficult to benchmark the device, although the code could be modified to make certain assumptions in order to run with such a device.
The benchmark, however, does have some special logic. For devices with "SOCKET" in the resource identifier, it adds the read_termination = "\n" setting, which is the most common requirement. Another piece of special logic, which I am particularly happy about, is the probe for command support.
Given a device, you probably don’t know whether a command is implemented by the device or not. Ideally, if you knew ahead of time, you wouldn’t try to execute commands the device does not understand. In reality, you can just try the command, but one of a few things could occur if the command is invalid for the instrument.
My probing logic begins by clearing the error queue, repeatedly querying SYST:ERR? until it returns zero. Once the error queue is clear, we can try the requested command. If the command is a query, it can run on its own and will either return a value (completing the query) or time out (if the query fails). If the command is a plain write, we issue it and follow it with an *OPC? query to be sure it was executed. In both cases, we then check the error queue: if it still returns zero, the device is happy and we return True to say the command is supported. If the queue is non-zero or a timeout occurred, something went wrong, we return False to say the command is not supported, and the benchmark for that command is skipped.
That way, we can write one benchmark which includes commands that do not run on all types of instruments, and it will gather whichever results it can. The downside is that you might get a few beeps from the instruments as they error out on invalid commands, and the instrument must support the SYST:ERR? query for this detection mechanism to work.
To cater for slow instruments, I have set the timeout to 10s, whereas modern systems usually default to a 2s timeout. As a result, if the benchmark hits a timeout during the probe, a wait of up to 10s is to be expected. After probing, the command is looped and timed with no intervening status checks, to provide the most accurate command execution time. If a command fails while iterating, Python will exit on an exception, which is usually a sign of communication instabilities or faults.
Experiment
As I have a number of instruments on hand, I decided to test the following instruments, using the interfaces listed, on either my main desktop machine or a Lenovo E431 laptop, both running Windows 10 and NI-VISA 20.0:
- Aim-TTi QPX750SP [LAN SOCKET, USB CDC] (under review)
- Rohde & Schwarz NGU401 [LAN VXI-11, LAN HISLIP, USB TMC] (under review, on loan)
- Rohde & Schwarz NGM202 [LAN VXI-11, LAN HISLIP, USB TMC]
- Rohde & Schwarz HMP4040 [LAN SOCKET, USB CDC*]
- Rohde & Schwarz RTM3004 [LAN VXI-11, USB TMC]
- Keithley 2450 SourceMeter [LAN VXI-11, USB TMC]
- Keithley 2110 DMM [LAN BRIDGE*, USB TMC]
- Keysight E36103A [LAN SOCKET*, USB TMC]
- B&K Precision Model 8600 [LAN BRIDGE*, USB TMC]
Not all types of supported connectivity for each instrument have been tested. Preference was given to testing the best types of connectivity for each instrument, which means LAN HISLIP -> LAN VXI-11 -> LAN SOCKET and USB TMC -> USB CDC in order of preference, but some exceptions did occur. As HiSLIP support is not universal, VXI-11 was tested as well for instruments supporting HiSLIP. In the case of the HMP4040, there were some issues getting USB TMC to work stably so USB CDC was used instead. For the E36103A, it seems something is wrong with the LAN VXI-11 support resulting in communication hangs, so LAN SOCKET was used instead. In the case of the K2110 and BK8600, both are USB-only so LAN SOCKET tests were done via my Raspberry Pi-powered Python USB-TMC bridge script. My Rohde & Schwarz HMC8043 was not tested as it is not in my possession at this time, while the B&K Precision BA6010 Battery Analyser was not tested due to unstable communications (as noted in its review) and the Tektronix PA1000 Power Analyzer was not tested as it does not support the necessary commands and insists on having “:” at the beginning of each non-common command.
Results
The results from this benchmarking experiment form an extensive table of numbers. While they are specific to my test setup, they should be accurate to within about 1ms or thereabouts. Not all devices support all tested commands, so data is not available where a device does not support the given command. There are probably variations of the commands which would be worth including in future versions, especially targeting different types of instruments (e.g. INP 0 and INP 1 for DC electronic loads).
As I’m sure someone will ask – I have decided to release the raw data as a CSV file if you are interested.
Results (Per Command)
Amongst all the benchmarked commands, resetting the instrument to defaults (*RST) took most devices the longest. Times varied across a wide range: the quickest was the KE2110 at 13.4ms, followed by the HMP4040 at 26.5ms, then the KE2450 and E36103A at around 50ms, the NGM202 at about 100ms, the BK8600 at about 250ms, the NGU401 at about 305ms, and the QPX750SP and RTM3004 at around 365ms.
The only command tested by lxi-tools, and perhaps the most basic of commands, asks for the identity of the instrument (*IDN?). I’d expect this command to be fast and it is, with one exception – the HMP4040 in USB CDC mode, which lags significantly behind everything else. The bulk of the responses range from a lightning-fast 0.2ms for the KE2450 to a more pedestrian 10ms for the HMP4040 in LAN SOCKET mode. The penalty for commands passing through the Raspberry Pi Python USB-TMC bridge is about 7.5ms for the KE2110 and 4.7ms for the BK8600. The NGM202 and NGU401 show a common theme: HiSLIP cuts the VXI-11 time almost in half, and USB TMC cuts that down by another 70-80%. The KE2450 is still the stand-out, as its VXI-11 performance matches other instruments’ USB TMC performance and its own USB TMC is faster than anything else tested.
The next command is another simple one – checking whether everything has been done (*OPC?). As there was no actual command sent in between queries, it is expected to return just a solitary “1” almost immediately. In this test, the BK8600 showed some strange behaviour, taking about a fifth of a second to respond even though nothing should have been pending. The HMP4040’s USB CDC continues to lag, while most other devices managed responses in the sub-6ms zone, with the exception of the RTM3004 via LAN VXI-11.
Checking the status byte is another common operation. This time, aside from the HMP4040’s USB CDC performance, all instruments were able to respond in under 10ms, with the majority under 6ms.
This is the first command in the set that requires the instrument to actually do something – switch its output on (OUTP 1). Not all devices can do this, so the list of devices has dwindled somewhat, but the KE2450 maintains its lead over all others (in part because its output is in normal mode, so no mechanical switching is required). The remaining instruments mostly take between 16 and 40ms to complete the command and the operation-complete query, except the HMP4040 in USB CDC, which suffers a ~50ms penalty – a recurring pattern.
The next command requires more work – setting the output voltage (SOUR:VOLT 5). Many of the instruments could do this rapidly – under 6ms – except the HMP4040 which required 13.9ms, the QPX750SP which took a bit longer at 20ms, and the BK8600 which required a whopping 210ms.
In this command, we request a read-back of the set voltage (SOUR:VOLT?). This should be a simple command, and all tested devices managed to respond within 9ms, noting the obvious exception.
We try the same thing, except this time we are setting the current (SOUR:CURR 1). A similar trend is observed as for setting the voltage.
Similarly, for reading back the set current (SOUR:CURR?), the results are almost identical.
Moving into the measure subsystem, this command requests a voltage measurement (MEAS:VOLT?). How this is handled depends on how the instrument is designed, and thus key differences in performance begin to appear. Many instruments respond quickly – sub-9ms – and appear to simply return the measurement currently on screen. Others, such as the KE2450, E36103A and KE2110, take 60 to 100ms, as they appear to perform a fresh measurement conversion every time one is requested rather than returning what is on the screen. As a result, they are slower, but it is likely you will get a fresh answer every time!
A similar trend is seen with the current measurement command (MEAS:CURR?), although current measurement does take longer. The E36103A, with practically zero load, enters its low-current measurement range, which seems to add measurement time compared to higher currents. The KE2110 engages a relay during this request, as it implicitly requires a change of DMM function to measure current – this mechanical constraint adds to the command time.
Next is the test of the READ? command, which is usually used to return the reading on the screen. The NGM202 and NGU401 share the same software and behaviour – namely a blocking read of the front-panel values, which refresh at a rate of 10Hz, so the ~100ms result is to be expected. The KE2450 similarly returns a time of around 83ms based on its default settings, and the KE2110 responds in about 23ms, again at instrument defaults. In this case, the results are probably influenced more by the default instrument settings than by the capability of the internal microcontrollers.
A graph for FETC? has been omitted as it was only supported by the KE2450 which makes for no real comparison.
Finally, we make a request to turn the output off (OUTP 0). The trend is almost identical to that of turning the outputs on.
Results (Selected Instruments)
Instead of drawing a graph for each command, we can do the same for a single instrument across its different modes of connectivity.
The recognised “speed demon”, the KE2450, can be seen above. Most commands are swiftly handled – under 0.4ms for many of them in USB TMC mode. The measure and read commands take significant time as they involve conversions, while the reset command is swift compared to other instruments.
The NGU401, which I have on loan, is a little more leisurely with resets, and the READ? command is very consistent. Turning the outputs on or off takes a little longer than most other commands, but in USB TMC mode many commands can be taken care of in about 1ms, and setting commands in about 5.3ms.
Finally, the QPX750SP, which I am soon to deliver a review of, is similarly leisurely with resets, but otherwise dispatches ordinary commands in about 5.3ms and setting commands in about 20.7ms over USB TMC, which is fine considering it is a cost-conscious general-purpose power supply.
Conclusion
The SCPI command performance of an instrument is rarely well documented, with most datasheets offering only an unsatisfactory “typical” guideline value. Of course, the performance will depend on the mode of connectivity and the command being executed, but I thought this would be a great thing to measure, and it can be done simply (in theory) while recognising the limitations of not being able to exclude transmission time and of requiring a query command in each command set.
I thought such a benchmark would already exist, but a quick search turned up none, so I wrote my own very simple benchmark. The Python script uses pyvisa; it probes the instrument to see whether it supports a given command, then executes the command a fixed number of times while timing how long it takes, to determine the average execution time. The set of commands in this first version is mainly targeted at power supplies, but the code is easily extended to cover other commands as they become necessary, and further tests may be worth devising (as noted in the appendix).
The results were quite illustrative of how some commands take longer than others: some because of the way they are implemented (i.e. taking an active measurement versus returning what is on the screen), some because of mechanical constraints (e.g. the need to switch a relay), and some simply because of the type of connectivity in use. The time taken for a given command can vary significantly from instrument to instrument, so it is hard to generalise SCPI command response times overall.
This means that testing instruments for their performance may well be a worthwhile thing to do. While my raw data is available as a .csv file, perhaps you might want to run similar tests on your equipment to compare?
Appendix: scpibenchv1 Code Listing
The scpibenchv1 program code is listed below and can be downloaded as a ZIP file. It can be freely used and modified for your own needs, with appropriate acknowledgement.
The script will enable outputs, so be sure to disconnect everything from the instrument’s outputs before running the script. NO RESPONSIBILITY IS ACCEPTED FOR ANY DAMAGE CAUSED FROM THE USE OR MODIFICATION OF THIS SCRIPT! Use at your own risk!
It has been architected so that it is easy to add additional instrument-specific commands to the benchmark, with the possibility of defining command-specific repetition counts (as some commands are very fast and others very slow) – see the example below. Perhaps a new version will also add the ability to set random values within a range and granularity for outputs, set and read-back verify those values, and potentially test multiple instrument channels in case their performance differs.
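As an illustration (these lines are not part of the listing below), the loop count is already a parameter of cmd_time(), so a slow command can be given fewer repetitions and a fast query more:

cmd_time("*RST",100) # hypothetical: run the slow reset with fewer repetitions
cmd_time("MEAS:VOLT?",2000) # hypothetical: run a fast query more often for better averaging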
To run the benchmark, it is recommended that you have Python 3.x, pyvisa and NI-VISA installed on a Windows machine. The benchmark may also run on Linux using pyvisa-py as the VISA layer and pyusb for USB access, although results will vary and HiSLIP support will be lacking. I have tested it with LAN SOCKET, LAN HiSLIP, LAN VXI-11, USB CDC and USB TMC connections with no problems, but I cannot guarantee all SCPI instruments will be compatible out of the box because of potential configuration issues with termination characters, etc.
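As an example invocation (assuming the script has been saved as scpibenchv1.py – substitute your own resource string, or omit it to choose from a list):

python scpibenchv1.py "TCPIP0::192.168.xxx.xxx::inst0::INSTR"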
# SCPI Command Benchmark v1.0 by Gough Lui (goughlui.com)
# September 2021
# Free to use and modify for your needs with appropriate acknowledgement.
# No warranties expressed or implied.
# Results may vary depending on your computer, VISA layer and mode of connectivity.
# ENSURE EVERYTHING IS DISCONNECTED FROM OUTPUTS AS SCRIPT WILL ENABLE/DISABLE THE OUTPUT!
# DISCLAIMER: I cannot be held responsible for any damage which may be
# incurred from the use or modification of this script!
import pyvisa
import time
import sys
print("SCPI Command Benchmark v1.0 by Gough Lui (goughlui.com)")
print("----------")
resource_manager = pyvisa.ResourceManager()
if len(sys.argv) > 1 :
    # Use argument as resource ID if provided
    resource_id = sys.argv[1]
else :
    # List resources for selection
    print("No SCPI resource string provided!")
    print("Invoke command with SCPI resource string as argument or choose device")
    devids = resource_manager.list_resources()
    while (True) :
        print("List of devices: ")
        count = 0
        for device in devids :
            print(str(count)+": "+str(devids[count]))
            count = count + 1
        try:
            devidx = input("Which Device Number? ")
            if int(devidx) < len(devids) :
                resource_id = devids[int(devidx)]
                break
            else :
                raise # out-of-range selection: trigger the except below
        except :
            print("Invalid Selection.")
ins_bench = resource_manager.open_resource(resource_id)
if "SOCKET" in resource_id :
    # Most common termination requirement for SOCKET type resources
    ins_bench.read_termination = "\n"
ins_bench.timeout = 10000 # Set command timeout to 10s
cmdloops = 1000 # Number of loops to run the commands for
def test_cmd (command) :
    # Function tests for support of command by observing timeout and error log
    eresult = ins_bench.query("SYST:ERR?").split(",")
    while int(eresult[0]) != 0 :
        # Drain the error queue until it reports 0 (no error)
        eresult = ins_bench.query("SYST:ERR?").split(",")
    if "?" in command :
        try :
            ins_bench.query(command)
        except:
            # Query timed out - clear any errors it generated and report unsupported
            eresult = ins_bench.query("SYST:ERR?").split(",")
            while int(eresult[0]) != 0 :
                eresult = ins_bench.query("SYST:ERR?").split(",")
            print("Command not supported!")
            return(False)
    else:
        ins_bench.write(command)
    # Check the error queue - a non-zero code means the command was rejected
    eresult = ins_bench.query("SYST:ERR?").split(",")
    if int(eresult[0]) != 0 :
        print("Command not supported!")
        return(False)
    else :
        return(True)
def cmd_time (command,loops) :
    # Command benchmark function - times 'loops' executions and prints the average
    if "?" in command :
        print("Benchmarking " + command )
    else :
        # Plain write commands are paired with *OPC? to obtain a response to time
        print("Benchmarking " + command + "+*OPC?")
    if test_cmd(command) :
        startt = time.time()
        for i in range(0,loops) :
            if "?" in command :
                ins_bench.query(command)
            else:
                ins_bench.write(command)
                ins_bench.query("*OPC?")
        endt = time.time()
        print("Average over "+str(loops)+" loops: "+str(((endt-startt)/loops)*1000)+" ms")
# Probe Minimum Required Commands
try:
    print("Probing required commands ... ",end="")
    ins_bench.write("*RST")
    ins_bench.query("*IDN?")
    ins_bench.query("*OPC?")
    eresult = ins_bench.query("SYST:ERR?").split(",")
    count = 0
    while int(eresult[0]) != 0 :
        eresult = ins_bench.query("SYST:ERR?").split(",")
        count = count + 1
        if count > 100 :
            raise ValueError("Failed to clear error queue!")
except:
    print("Failure on a required command for this benchmark. Exiting.")
    raise
else:
    print("OK!")
# Begin Testing Commands
print("Connected to: " + ins_bench.query("*IDN?").strip() + " as " + resource_id)
print("----------")
# Each test command can be customised, loops can be changed
cmd_time("*RST",cmdloops)
cmd_time("*IDN?",cmdloops)
cmd_time("*OPC?",cmdloops)
cmd_time("*STB?",cmdloops)
cmd_time("OUTP 1",cmdloops)
cmd_time("SOUR:VOLT 5",cmdloops)
cmd_time("SOUR:VOLT?",cmdloops)
cmd_time("SOUR:CURR 1",cmdloops)
cmd_time("SOUR:CURR?",cmdloops)
cmd_time("MEAS:VOLT?",cmdloops)
cmd_time("MEAS:CURR?",cmdloops)
cmd_time("READ?",cmdloops)
cmd_time("FETC?",cmdloops)
cmd_time("OUTP 0",cmdloops)
# Announce Completion
ins_bench.close()
print("Benchmark Complete!")