Calibrated Verification with ADE XL
Copyright Statement
© 2012 Cadence Design Systems, Inc. All rights reserved worldwide. Cadence and the
Cadence logo are registered trademarks of Cadence Design Systems, Inc. All others are the
property of their respective holders.
Contents
Purpose
Terms
Audience
Why is Calibration so Popular?
The Designer's Perspective
Calibration examples
Understanding calibration limitations
An example calculation
ADE XL Features Supporting Calibration
Use of calcVal
The calculator – Your best friend
Programming with OCEAN
Emulating hardware
Miscellaneous
calcVal vs. OCEAN pre-run script vs. single testbench
A First Example
Testbench and DUT
From Testbench to ADE XL
Monte-Carlo is a Bit Special
Digital vs. Analog and a Few Enhancements
Advanced Examples and Techniques
Work like hardware using Verilog-A
Keep test setup compact with include files
Ensemble for an oscillator-filter replica-calibration
Pre-run Script Flow for Calibrated Verification
A simple pre-run example
Successive approximation calibration with OCEAN
More OCEAN examples
Multi-Step Calibration via OCEAN
Step-by-step guide to pre-run scripts
Guide, Checklist and Further Hints
Generally
Plan for your requirements
Further ground work
Have a naming convention
How to start?
Test the setup
Purpose
The verification of more complex blocks or systems often requires multiple testbenches. Frequently, expressions referencing different tests are needed, and/or the results of one simulation have to be re-used in subsequent tests. All of this is possible using Virtuoso® ADE XL [1].
This application note focuses on one of the most important types of advanced analog and mixed-signal verification: verification in the calibrated condition. Calibration techniques are becoming more and more popular because they are easy to implement in CMOS technologies and allow the design of flexible high-performance blocks needed in almost all SoCs [2]. With calibration, accurate blocks like voltage and current references, filters, oscillators, etc. can be created in standard silicon technologies without expensive high-precision components (like laser-trimmed resistors, NiCr resistors, or external high-accuracy resistors or resonators). Instead, easy-to-implement compact digital calibration blocks tweak the analog parts – the device under test (DUT) – to compensate imperfections inherent to chip technology, like mismatch, RC tolerances, temperature drift, etc.
Keywords: Calibration, Tolerances, Verification, Design, Analog, Mixed-Signal, Scripts
Remark on DUT: The term can be used with slightly different meanings. Imagine a filter needs to be calibrated, and its calibration is done based on a replica oscillator. We may call the filter the DUT, or the whole system performing both the filter operation and the calibration. With respect to ADE XL, the term DUT is important for the Monte-Carlo mismatch setup. Sharing the mismatch data for all blocks and among different tests is usually needed, especially for calibrated systems; i.e. the DUT is, here, the whole system.
Terms
ADC Analog-to-digital converter
ADE Analog Design Environment
CU Control Unit
DAC Digital-to-analog converter
DNL Differential nonlinearity
DUT Device-under test
GUI Graphical user interface
MC Monte-Carlo
OCEAN Open Command Environment for Analysis
Audience
This document is intended for system and block design and verification engineers working in the analog and mixed-signal domain. Knowledge of basic ADE XL usage and features is assumed.
Calibration examples
Calibration can be used in many ways, e.g.:
- The DUT block is measured in the lab or in production, and a digital control unit (CU) is set up to achieve the desired performance. The CU controls the DUT with a trim DAC; its setting can be represented by an integer number.
  This is a flexible method, but such lab-based calibration can be expensive, and the block needs good power supply rejection (PSRR), low temperature coefficients, etc., to remain accurate in the presence of supply or temperature variations.
- The DUT behavior is measured with on-chip calibration circuitry, and the calibration setup – i.e. the trim bus setting – is updated from time to time. As an absolute reference, an accurate crystal-controlled clock frequency and/or a precision reference voltage is often used.
  One difficulty here is the design of the calibration part, and that disconnecting the DUT may disturb the system operation.
- To avoid disconnecting the DUT for calibration, a replica cell of the DUT can be used and calibrated in the background. A further advantage is that the replica can often be made easier to calibrate than the actual DUT, e.g. using a simple RC oscillator as a replica for a complex RC filter.
  The disadvantage here is that mismatch between the actual DUT and the replica limits the calibration accuracy significantly.
Understanding calibration limitations
- There might be differences in frequency: e.g. maybe the filter is an RF filter, but for easier calibration the oscillator runs at lower frequencies – or vice versa (e.g. for speed reasons). This is not a perfect situation for obtaining the highest accuracy. It might be the case that the transistor models themselves are not very accurate over a very wide frequency range.
- Often designers want to lower the power consumption as much as possible. However, saving current might result in different operating points for key elements, which causes further deviations.
- Good matching requires large devices, but that limits high-frequency performance. Usually a compromise must be found.
- There might be different relations between filter/oscillator capacitances and transistor and wiring caps. Even in RC op-amp filters, transistor transconductances and capacitances may have a significant impact.
- The oscillator amplitude may impact the oscillator frequency as well, and the amplitude could depend on temperature, supply and process. This often makes the oscillator design harder than the filter design.
- The production has tolerances in everything (like R, C, wiring, etc.), but the calibration is, for example, only done by tuning a single parameter (like a bias current). This may lead to further systematic mismatch between oscillator and filter; e.g. the filter shape may change too.
- Even if the filter is re-used as part of the calibration oscillator, there are usually some additional oscillator parts. For instance, the filter may give 90° phase shift, and the remaining 90° to form an oscillator are implemented with an additional integrator block – which is of course also not fully ideal.
- A ramp generator is not ideal either; it suffers from delay in the comparator, offset voltages, non-constant charging current, wiring and transistor caps, etc.
On top of random and systematic mismatch errors, other non-idealities limit the calibration accuracy. Also, the calibrated DUT can never be more accurate than the references (like the crystal oscillator frequency) used for the calibration.
An example calculation
To get an overview of the different effects influencing the calibration, a simplified example calculation like the following should be done. Let us assume a replica calibration:
- Maximum calibration step size: ±0.5% (e.g. according to number of bits and DNL)
- Reference error: ±0.01% (e.g. crystal tolerance)
- Tracking error on process between replica & DUT***: ±2% => total error = 2.23%
* This value does not really matter much, but it is usually more difficult to obtain a small final error if the starting error without calibration is very large.
** There might be some correlation, e.g. due to shared bias blocks. This would help to get a better matching. If this is the case, do a full MC analysis on both DUT and replica together.
*** It is not so easy to minimize this error, for reasons explained on the previous page.
**** In a true worst-case calculation we would use |dx1|+|dx2| as the total resulting error, not the RMS.
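The RMS combination used above can be sketched in a few lines of Python. The function names and the three-item budget below are illustrative only; the text's full budget contains more contributions, so these numbers do not reproduce the quoted 2.23% or 3.6%:

```python
import math

def rss(errors):
    """Root-sum-square combination of independent error contributions (in %)."""
    return math.sqrt(sum(e * e for e in errors))

def worst_case(errors):
    """True worst-case combination: sum of absolute errors (in %)."""
    return sum(abs(e) for e in errors)

# Illustrative partial budget (NOT the full budget of the text):
budget = [0.5,    # maximum calibration step size
          0.01,   # reference (crystal) tolerance
          2.0]    # replica-to-DUT tracking error
print(round(rss(budget), 2))         # statistical estimate
print(round(worst_case(budget), 2))  # pessimistic bound
```

The worst-case sum is always at least as large as the RSS value, which is why footnote **** flags it as the more conservative choice.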
In ADE XL we expect to get the final number, i.e. 3.6% in this example. Some deviation can be expected if the MC run count is small, and due to ignoring correlations and distribution shapes in the simplified calculation. For instance, often the untrimmed performance histograms look near-Gaussian, but the calibrated end result is more uniform, because the calibration cuts the tails. However, in a replica calibration the tails coming from mismatch will still be present.
This calculation assumes that process variations are cancelled out. Over full
temperature and supply voltage range the DUT might still vary more than this, of
course. ADE XL allows doing further experiments e.g. to check after which temperature
change a recalibration makes sense.
ADE XL Features Supporting Calibration
ADE XL offers several features that support both analog calibration (e.g. finding the best value of a trim potentiometer) and digital calibration (finding the best bus value to achieve the desired performance).
Use of calcVal
For referencing an output expression of a specific test, the calcVal function can be used, e.g. inside the ViVA calculator or within ADE XL (e.g. in the variable setup or in output expressions).
This way, calibration data obtained in a calibration test ‘testcal’ can be stored and used in subsequent tests (like ‘testverif1’, ‘testverif2’, etc.) for further verification in the calibrated condition. This is useful because the verification setup is often quite complex and features multiple tests. Even multiple calibration steps can be implemented this way with calcVal.
Note: ADE XL requires the use of global variables if we want to assign a calcVal expression. Test-specific local variables are not suitable (see the ADE XL User Guide, chapter Creating a Combinatorial Expression). ADE XL automatically takes care of the correct execution sequence of the tests. Cyclic dependencies will be detected, but removing them is up to the user.
The calculator – Your best friend
Useful calculator functions for calibration include cross, value, ymax, ymin, xmin and xmax.
To find the correct calibration setting, often the calculator can simply be used. For instance, the bus value can be swept in test ‘testcal’, and the desired performance – e.g. zero offset voltage or a certain 3dB frequency – can be detected with the cross function.
The cross function can be used if the sweep of the calibration variable (like a trim bus value or a bias current) leads to a monotonic curve of the measure (like offset voltage, delay, bandwidth, etc.). The min or max functions are useful if the calibration condition is formulated as a kind of optimum. This is the case if you set the goal in the form (actual – wanted)². By using the sum of squares, even multiple goals can be defined in a single calculator expression; logical operators like and (&&) and or (||) can also be used. Here are some examples:
11||1 => 11
11&&1 => 1
1||nil => 1
1&&nil => nil
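The sum-of-squares idea can be sketched in plain Python; the sweep data, target values and measure names below are invented for illustration, not taken from the example design:

```python
# Hypothetical sweep data: measured bandwidth and gain per trim code.
codes = [0, 1, 2, 3]
bw    = [0.8, 0.95, 1.1, 1.3]   # normalized, wanted value 1.0
gain  = [1.9, 2.0, 2.1, 2.2]    # wanted value 2.0

def cost(i, bw_want=1.0, gain_want=2.0):
    # Sum of squares combines several goals into one optimum criterion.
    return (bw[i] - bw_want) ** 2 + (gain[i] - gain_want) ** 2

best = min(codes, key=cost)
print(best)
```

Picking the code with minimum cost is exactly what a min-based calculator expression on a swept calibration variable does.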
If we run the calibration in a transient analysis, then the end of the calibration can often be detected with the cross function, and the trim setting can be read out with the value function at the detected time point.
Of course the calculator can use calcVal too, e.g. to compare results from different
tests.
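What cross does on a monotonic sweep can be illustrated with a small Python stand-in; the function below is a simplified model of the idea (linear interpolation of the crossing point), not the actual ViVA implementation, and the sweep data are made up:

```python
def cross(xs, ys, level):
    """Linearly interpolated x where a monotonic curve ys(xs) crosses 'level'
    (a simplified stand-in for the calculator's cross function)."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if (y0 - level) * (y1 - level) <= 0:
            return x0 + (level - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("level not crossed")

# Hypothetical DC sweep: output current (µA) vs. input current (µA)
iin  = [18.0, 19.0, 20.0, 21.0]
iout = [18.4, 19.4, 20.4, 21.4]
print(round(cross(iin, iout, 20.0), 3))  # input current giving 20 µA output
```

Because the crossing point is interpolated between sweep points, the sweep does not need to hit the target value exactly.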
Emulating hardware
Another way of executing the calibration and verification in a calibrated condition is to run both tasks together, e.g. in a single mixed-signal simulation. This way even the analog-digital interface can be verified, presuming that the digital control unit design already exists to support such a setup. Usually the digital part is described in Verilog or VHDL and the analog part, as usual, at the transistor level.
analog part as usual at the transistor-level. If the calibration parts are represented by
Verilog-A blocks, even a full analog Spectre® simulation is possible. The application
note also describes behavioral blocks which are flexible enough to cover many different
calibration tasks.
Miscellaneous
An approach used more frequently in the past is saving results from a pre-run into a file and loading them in the post-runs. This is still a valid approach – also using SpectreMDL, especially for a pure command-line flow – but it leaves more work for the user, and it might be difficult to maintain and to extend to complex verification over corners and MC. The OCEAN XL based ADE XL pre-run flow can be regarded as a more modern replacement for older command-line-centric flows.
All the different techniques can be used exclusively or combined. Which solution is
preferable is a matter of taste, user skill, library availability and efficiency. Therefore, do
not look at one technique only, look at the whole picture. All methods will also work in
ADE GXL so even advanced tasks such as an optimization in the calibrated condition
are possible.
Of course, all techniques can also be used for other purposes. For instance calcVal
can be just used to create combined output expressions referencing different ADE XL
tests – without having a true calibration.
A First Example
ADE XL supports using results from another test with calcVal. This, together with a
global variable (being swept for calibration and then fixed to desired value), allows
passing the calibration information to subsequent verification tests. This method allows
compensation for process variations, mismatch or temperature effects. The testbench doing the calibration and the one doing the verification could be different or the same; a separation makes sense for replica calibrations, for more complex setups, and for better modularity.
The implementation is shown on a simple PMOS current mirror example. Such a mirror is part of almost all analog circuits. It should deliver an output current which is just a copy (or a multiple) of the input current. With purely analog techniques a current mirror can be made quite accurate, for instance by using cascodes and by making the transistors large enough to reduce mismatch. However, there are often constraints that limit the application. For instance, assume that the bandwidth should be maximized; this prohibits the use of large transistors (which would give low mismatch). A cascode might also create headaches (for example due to phase margin or voltage headroom problems). Thus – and for simplicity – we use no cascodes in our example DUT.
The first major step is to define and achieve the calibration. Often the goal of a calibration on a current mirror is to obtain a certain wanted output current like 20µA; another goal could be a certain desired output-to-input current ratio. A really accurate design (say within 10 bits or 0.1%) is usually only possible at typical process, voltage and temperature conditions; in the presence of PVT tolerances and device mismatch (to be checked in a Monte-Carlo analysis, MC), even such a mirror design is not simple anymore.
In our testbench we have the PMOS mirror itself as the DUT and the driving current source with a certain DC current. Usually this input current comes from another transistor block, like an NMOS current source output of a bias current generator. For simplicity, we mimic this driver by a fixed voltage source (as if we had a very good voltage reference in the system) and on-chip resistors (which have approx. ±20% absolute tolerance and some mismatch according to their sizes).
An analog calibration would be for example possible with a multiplier circuit (here a
current-controlled current source cccs is used), i.e. the multiplication factor is set (in
reality often in a control loop) to obtain the desired 20µA output. A digital trim is shown
later, and implemented with a switchable current mirror (a mirror with built-in trim DAC).
The analog trim has the advantage that in principle an ideal calibration can be achieved,
whereas digital calibration accuracy is always limited to the DAC resolution (±0.5 LSB).
Therefore we can easily demonstrate and debug an analog calibration, because if
calibration works it should be almost 100% accurate.
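The ±0.5 LSB limit of digital calibration can be illustrated with a short sketch; the full-scale range, bit count and target current below are hypothetical, not the values of the example design:

```python
def best_code(wanted, fullscale=40.0, bits=6):
    """Nearest code of an ideal trim DAC and the residual error; the error
    magnitude can never exceed half of the step size (0.5 LSB)."""
    lsb = fullscale / (2 ** bits - 1)
    code = round(wanted / lsb)
    return code, wanted - code * lsb

code, err = best_code(19.8)        # hypothetical wanted output current in µA
lsb = 40.0 / (2 ** 6 - 1)
print(code, abs(err) <= 0.5 * lsb) # residual stays within half an LSB
```

An analog trim has no such quantization floor, which is why it is convenient for debugging: a working analog calibration should land almost exactly on the target.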
(Testbench schematic: reference resistor, output current, trim bus setting)
Further notes:
- The pre-run test also works in ADE L if the variables are set accordingly.
- Vref has 5mV, and with the 4x 1k R0 we generate 20µA. This is fed via a current-controlled current source cccs FIin to the current mirror input. Source Iin is used for testbench reasons, like injecting a 1A AC current to verify the mirror bandwidth.
- The gain of the cccs is unity by default and is set by a global variable gain=Iin/20u.
- The TRIM VDC elements set the voltage according to the bus integer variable TRIM, multiplied by the supply voltage.
- The mirror output voltage is set to 800mV by V1, to have a value similar to most applications. This way the systematic mirror error becomes small.
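The way the TRIM VDC elements derive bus-line voltages from the integer variable can be sketched as follows; the bit count and supply value are assumptions for illustration:

```python
def trim_bus_voltages(trim, bits=6, vdd=1.2):
    """Voltages driven onto the trim bus lines (MSB first) for an integer
    TRIM setting; each line is either 0 V or the supply voltage."""
    assert 0 <= trim < 2 ** bits
    return [vdd * ((trim >> b) & 1) for b in reversed(range(bits))]

print(trim_bus_voltages(0b101001))  # bus lines MSB..LSB for TRIM = 41
```

Sweeping the single integer TRIM in ADE XL thus exercises all bus lines at once, without editing the schematic.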
The DUT: (schematic of the trimmable mirror; trim bus from MSB to LSB)
Note: To demonstrate best-practice and ensure flexibility we use a config view to control
the cell view setup.
Example database: library IP090Lib_gpdk, cell mirr_design, view adexl_ana
The output vs. input current ratio Iout/Iin should be very close to unity, but mainly due to channel length modulation this is not exactly the case. ADE XL test ‘testcal’ executes a DC sweep on the input current Iincal, and using the cross function we can detect for which input current we get the desired output current (like 20µA, set by Ioutwanted). This way we can calibrate the mirror circuit.
Remark: In our case we can do the calibration in a simple DC sweep. For other circuit types (like VCOs or filters) a more complex large-signal simulation may be needed, like a swept-pss run. For flexibility, we also set the calibration goal as a global variable Ioutwanted. Access is possible using VAR(“Ioutwanted”).
In a 2nd ADE XL test ‘testverif’ we can use this calculated current as input (via a global variable Iin) and check whether we really get the desired output value, or how big the error would be, e.g. in a supply sweep, etc. – in either the same or another testbench. Besides calculating the remaining current error after calibration, we can also do further verification tests (like checking the output impedance, the saturation voltage, bandwidth, noise, etc.).
The accuracy that can be obtained by calibration also depends on how frequently it is applied: for instance, just once during production and at typical conditions only (a partial calibration), or continuously whenever temperature and supply change (a full calibration). Some further thinking is needed to account for this. For flexibility, we simply use different global variables for supply and temperature in our calibration and verification ADE XL tests. This way we can decide later whether we want to verify the DUT in full or partial calibration mode.
Remark: All our variables in the calibration testbench end with ‘cal’. Generally it is highly
recommended to follow a clear naming convention.
Remark: Without any rounding errors, the current error in this case should be zero (at least if the DC sweep is dense enough). That might not be the case for a digital calibration using a trim bus with a finite number of bits, for calibration techniques based on a replica, or generally for non-perfect calibration algorithms (e.g. when reaching the calibration limit).
The next figures show how to implement this calibration technique in ADE XL using calcVal, together with a global variable (we name it Iin because it is used in the verification test; Iincal is used in the calibration setup):
Remark: The calcVal expression for a variable can be typed in manually or created via drag and drop from an output in the ‘Outputs Setup’ tab (see [1], chapter ‘Creating a Combinatorial Expression’).
This variable is calculated from the ‘testcal’ results and used in ‘testverif’. So in our case ‘testcal’ MUST be executed first; ADE XL automatically makes sure that this is the case (even for more than 2 tests).
If we force an error by creating a circular reference, we will get an error message when running the setup. In such cases, check your variables and calcVal expressions and decide which calcVal expressions need to be removed.
As mentioned, expressions on multiple tests are possible via calcVal, look at the
‘testverif’ output expressions for details. We can use either the VAR syntax or calcVal:
Here are the results of a corner run, which prove that the calibration works as expected:
We can see that the calibration is 100% accurate, e.g. Iref and the current mirror ratio
vary significantly, but after the calibration we always get desired 20µA for all corners.
Example database: library IP090Lib_gpdk, cell mirr_design, view adexl_ana_new
If we switch to Monte-Carlo, the calibration should also be maintained. For fully correct execution, the current implementation in IC615 needs a little dummy pre-run script for Monte-Carlo. To add such a script, go to the post-run test, bring up the context-sensitive menu via the right mouse button, and select ‘Pre Run Script’.
For releases before IC6.1.6 ISR3, just add a printf(“ ”) in the editor, save it with the .ocn extension, and click the Enable checkbox (only needed for testverif). In IC6.1.6 ISR3 and later, simply enabling the ‘Pre Run Script’ turns on the functionality.
MC on process variations only usually causes no headaches, but the mismatch setup is essential for correct verification, especially for calibrated circuits. If we share the bias blocks between the actual DUT and the replica for best accuracy, but treat the bias generator with different mismatch in the different ADE XL tests, the simulated behavior would be too pessimistic. So we have to take care of the mismatch setup, just as for normal non-cascaded tests.
In both our calibration and verification tests we have these devices with mismatch:
- R0 (leading to some variation in Iref and thus also in Iout)
- the DUT (leading to variation in current gain and thus in Iout)
With the default setup, i.e. specifying nothing in the ADE XL MC setup, we get this in an MC run:
- The DUT has the same mismatch in both tests, i.e. the current factor of the mirror is the same.
- R0 has different mismatch, i.e. Iref is not the same and the calibration will not be perfect.
However, both instances have to share the mismatch information among the tests, just to reflect the real-world situation (where we would execute the calibration and verification on the same instances).
ADE XL supports different kinds of mismatch setup. In our case, the best solution is to put all blocks into one big block – thus representing the whole system: just put the resistors into the PMOS mirror schematic and set it up as ‘master’.
As we are already using a config view, flexibility is easy to achieve, also for later
switching to post-layout verifications. This new setup can be found in adexl_ana_new
which uses pmirr_trim_bias as single DUT.
For understanding the ADE XL setup it is also good to know how Spectre® handles the
DUT correlation across tests (use spectre –h montecarlo for more help). For
mismatch-sharing in the Spectre® standalone netlists, a single instance must be
specified via dut montecarlo option. The dut must be an instance of the same
subcircuit in both netlists.
In our example, the pre-run design is different from the main ADE XL design for verification, but both share a common dut.
Remark: To get a full understanding run MC with this mismatch setup and without, then
check in the “Detail” results pane Iref, Iout and Iout/Iin across the tests. Make sure that the
dummy pre-run script is enabled.
All these subblocks – if they contain mismatch - should be put under one big cell (like
design-under-test). Insert this and the usual stimuli into the testbench schematics,
preferably create config views and do the according mismatch setup in ADE XL.
Example database: library IP090Lib_gpdk, cell mirr_design, view adexl
The database also contains the same mirror calibration example, but with digital calibration (view mirr_design/adexl, also using the single-DUT setup with pmirr_trim_bias and schematic_new). We simply have to change the sweep variables from Iincal to TRIMcal (in testcal) and from Iin to TRIM (in testverif), nothing else. This means there is no need to change anything in the schematic: for analog calibration we do not touch the trim bus settings, and for digital calibration we do not touch the analog variables. Untouched variables just stay at their defaults.
Out of interest, and for checking the new digital trim part, we have added some more outputs in ‘testcal’: e.g. we check the trim range and the minimum and maximum step size. The latter sets a limit on the calibration accuracy. In addition, we put a specification on the bus setting; for example we may add some design margin here (i.e. avoiding extreme bus settings like 0 or 63).
Calibration run result showing the DAC-based trimming characteristic (corner run):
Remark: The curve does not show much differential nonlinearity (DNL). DNL can be more critical for high-resolution DACs; it can be made worse by changing the PMOS widths inside the mirror trim part.
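DNL of such a trimming characteristic can be computed as the deviation of each actual step from the average step size. A minimal sketch, with an invented 3-bit characteristic (the real example uses 6 bits):

```python
def dnl(levels):
    """Differential nonlinearity of a trim/DAC characteristic, in LSB:
    deviation of each actual step from the average (ideal) step size."""
    steps = [b - a for a, b in zip(levels, levels[1:])]
    ideal = (levels[-1] - levels[0]) / (len(levels) - 1)
    return [s / ideal - 1 for s in steps]

# Hypothetical 3-bit trim characteristic (output current in µA per code):
levels = [10.0, 10.9, 12.1, 13.0, 14.0, 14.8, 16.1, 17.0]
print(max(abs(d) for d in dnl(levels)))  # worst-case |DNL| in LSB
```

A maximum |DNL| well below 1 LSB means the characteristic stays monotonic, which the sweep- or SA-based calibration relies on.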
Remark: Iout is near 20µA in ‘testverif’, but the accuracy is limited because we only use 6 bits. Also note that the required calibration TRIM varies from 1 to 54, so the design is not fully centered; centering is usually desired for maximum yield. The calibration works fine, but of course other specs (like on Rout) may fail. This indicates the need for some further design improvement, e.g. via the ADE GXL optimizers.
The pre-run scripts are explained in detail in the chapter on OCEAN-based calibration.
Analog calibration with sweep and cross run over process corners:
Example database: library IP090Lib_gpdk, cell mirr_design, view adexl_ana_prerun
Remark: equivalent number of bits set to 10 (inside the script), only the calibration part
is shown.
Note that the approach using a sweep and the cross statement is usually not slower for analog calibration, because the cross statement can interpolate, whereas successive approximation (SA) does not and just halves the step size in the next iteration. Therefore SA is only a real advantage for digital trimming with a large number of bits. It will be demonstrated in the next chapter.
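The SA principle can be sketched as a bit-wise binary search. The linear "DUT" below is a stand-in for a real simulation, and the sketch assumes the measured quantity increases monotonically with the trim code:

```python
def sa_calibrate(measure, wanted, bits=6):
    """Successive-approximation trim: decide one bit per iteration, MSB first,
    assuming measure(code) increases monotonically with the code."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)          # tentatively set the next bit
        if measure(trial) <= wanted:     # keep it while still below the target
            code = trial
    return code

# Hypothetical monotonic DUT: output current in µA as a function of trim code
measure = lambda code: 15.0 + 0.2 * code
print(sa_calibrate(measure, 20.0))  # 6 evaluations instead of a 64-point sweep
```

This is why SA pays off for many bits: n bits need n simulations, while a full sweep needs 2^n points but gains interpolation accuracy.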
In principle there are even faster algorithms available for both analog and digital
calibrations, e.g. using Regula Falsi, Newton-Raphson or look-up tables or high-order
interpolation algorithms. However, for verification purposes there is usually no real need
for them.
Other very useful Verilog-A blocks can be found in ahdlLib or can be created easily. Here are some examples:
- Stimuli blocks, e.g. a swept-sine source, can be helpful for filter calibration purposes.
- ADC and DAC behavioral blocks are expedient, because often it is not clear from the beginning what the parameters should be for the real ADC and DAC circuits which might be part of the calibration (number of bits, range, polarity, delay, etc.).
Remark: Originally this code was created for checking the offset voltage of a clocked comparator. Together with a little measurement module, calibration of other quantities is also possible, e.g. calibration on frequency or amplitude. As in our example, the module works together with an ideal ADC for digital calibration. For flexibility regarding the number of bits, only the ADC has to be adapted. For pure analog calibration, you may combine this module with a voltage-controlled current source.
The database also contains further modules, e.g. for ADCs and DACs. These are similar to the ones in ahdlLib, with small extensions, e.g. to support both clocked and non-clocked operation. The latter has the advantage of an easier testbench setup.
If many frequencies need to be evaluated, then a full frequency sweep (plus using the value function in the calculator) probably also makes sense, leading to a multi-dimensional sweep:
Example with trim on Vctrl (e.g. defining BW) and Filtertype (e.g. setting the Q-factor)
An alternative is to use a transient run and perform the gain calculation with a time-swept large-signal frequency source and Verilog-A amplitude detectors (as described in [4]). This is slower than a purely linear AC simulation, but allows directly using fast successive approximation calibration via a Verilog-A block.
Another way is again a transient run, but without a swept sine source, using ACTIMES of Spectre® to perform an AC analysis at different time points. This is faster than a full transient sweep with sine stimuli. However, in this case the proposed simple Verilog-A module would not work, so ACTIMES can be used in the verification part only, not to obtain the calibration data.
Remark: In such complex AC calibration cases using OCEAN pre-run scripts is the
preferred solution. Later we also provide an example of a multi-step calibration via pre-
run script.
Example database: library IP090Lib_gpdk, cell test_replicaCalRc, view schematic_flat_calOnly
Remark: A multi-step filter calibration on cut-off frequency and filter ripple is possible too, but here we focus on topics important for replica calibrations. Also note that this circuit is a very simple one; e.g. SC or RC op-amp filters are often much better in accuracy and matching than our simple example.
(Schematic: ramp generator driving the SA block.)
The goal is to obtain a certain fixed RC time constant (which also represents the filter
time constant), so the ramp speed should not vary much after calibration. At the slow
technology corner SS, the ramp speed would be more than 20% smaller at bus
center than at nominal NN, but the calibration part sets the bus so that at the end of the
calibration sequence (setting of the LSB) the speeds are nearly the same (limited
mainly by the trim step size).
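To get a feel for the trim-step-size limit, here is a back-of-the-envelope sketch. The ±20% span is taken from the SS-corner observation above; mapping it onto a 6-bit bus is our own illustrative assumption, not a figure from the testbench:

```python
# Illustrative only: residual error after an N-bit trim whose full range
# must cover a +/-20% correction span (assumed numbers, not design data).
n_bits = 6
correction_span = 0.40                 # -20% .. +20% total range
lsb = correction_span / 2 ** n_bits    # size of one trim step
residual = lsb / 2                     # worst-case error after calibration
print(f"one LSB = {lsb:.2%}, residual up to +/-{residual:.2%}")
```

So even a perfectly converging calibration leaves roughly a third of a percent of residual spread with these assumed numbers.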
The setup in test_replicaCalRC allows running the calibration very efficiently using
successive approximation in a single transient analysis run. The Verilog-A block SA sets
the bus (together with an ideal Verilog-A ADC) so that the RC ramp creates the desired
delay (until it reaches the comparator threshold voltage). Usually this delay is chosen so
that, at the nominal corner, the bus is approximately in the center of the trim range and
the corresponding filter has the desired cut-off frequency.
Result: at NN the 6-bit bus is trimmed to 25, with a comparator threshold of 0.8 V; the
speed of the ramp is calibrated.
Remark: A near-ideal ahdlLib comparator is used; usually a real circuit has to be
designed later. The testbench is very helpful to check the impact of key comparator
non-idealities like offset voltage and delay.
Next we can extend this pure calibration example to do both calibration AND
verification. The changes are quite minor (see test_replicaCalRC/schematic_flat):
in the Verilog-A module the autostop after calibration has been disabled; stopping of
the ramp generator is done using a 2nd shut-down transistor (an alternative would be
using an OR-gate and the existing NMOS transistor). This way the plain verification part
can start after 3.5 us, e.g. by starting a sine wave at the filter input (or by using
ACTIMES). Even a periodic steady-state analysis is possible, by adding
(*instrument_module*) to the Verilog-A code to avoid hidden states (see the
Cadence Virtuoso Spectre Circuit Simulator RF Analysis Theory manual).
This single-testbench approach is very fast and already possible with ADE L.
Results for a sweep on supply and temperature (using the ADE L Parametric Sweep Tool):
At the beginning of the calibration the ramp speed varies, but at the end all ramps are
very close together. Also the 3-dB bandwidth is stable within approx. ±1.3%. For this 2D
sweep the calibration span is 12 to 33, so there is some margin for technology
variations, aging, post-layout simulation, etc.
These tasks and other more complex ones (MC, sensitivity, optimization, etc.) can of
course be done much more easily in ADE XL or GXL. An initial direct migration of the
ADE L setup to ADE XL is available in view adexl_flat.
Using split tests for calibration and verification via calcVal. This is very useful
if the filter is much more complex than the calibration block (e.g. embedded in
a whole receive system) or if many verification tests are required. In this case
we would use just calcVal for calibrated verification, with no further special
ADE XL techniques.
In the database the setup is available under filter_design/adexl with tests testcal
& testverif. Also, all blocks are combined into one big DUT, and config views are
used.
Using no Verilog-A modules, relying on calcVal and ADE methods like plain
sweeps (instead of SA) and calculator cross expressions to perform the
calibration. This way we would use just the same methodology as in the 1st
simple DC non-replica calibration of our PMOS mirror example.
The implementation is up to the user; e.g. we can drive the DUT with an ideal
ADC fed by a pwl source at its input.
Using an OCEAN pre-run script to implement SA. This would result in a very
compact testbench and a fast setup.
There is no single best solution. For example, a non-SA, pure full-sweep-based
approach is easy to implement but can be inefficient for large trim busses, because the
number of trim settings grows exponentially as 2^bits. On the other hand, a sweep is
very robust and gives more insight, e.g. for checking the DNL of the trim DACs, the trim
range, etc.
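The exponential-versus-linear tradeoff is easy to quantify. This small sketch (ours, not part of the app-note database) counts the simulation runs each approach needs for an N-bit trim bus:

```python
def runs_full_sweep(n_bits: int) -> int:
    # One simulation per trim code: 2**n_bits runs, exponential in bus width.
    return 2 ** n_bits

def runs_sa(n_bits: int) -> int:
    # Successive approximation decides one bit per simulation: linear growth.
    return n_bits

for n in (4, 6, 10):
    print(f"{n}-bit bus: sweep needs {runs_full_sweep(n)} runs, SA needs {runs_sa(n)}")
```

For the 10-bit examples used later in this note, the sweep needs 1024 runs where SA needs only 10, which is why SA pays off for wide trim busses.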
As mentioned, the implementation of the first two variants is straightforward, so let us
move to the pre-run script method.
The best approach for beginners is probably to learn from examples. ADE makes this
easy, because ADE L simulation setups can be saved quickly as OCEAN scripts, and
ADE XL itself offers pre-run script templates. OCEAN essentially combines SKILL
programming constructs (like while, if, for, etc.) with system commands, analyses,
and calculator expressions. OCEAN XL is also the language working in the background
of ADE XL/GXL.
The following pre-run script reads the setup of the main test; then it defines and runs a
DC sweep with the input current as variable. Based on the simulation result, it updates
the value of variable Iin before running the main verification simulation for that point.
Library -- IP090Lib_gpdk
Cell -- mirr_design
View -- adexl_prerun
; Read the simulation setup for the test, but disable all analysis
ocnxlLoadCurrentEnvironment(?noAnalysis t)
; define sweep for calibration variable - Iin used acc. to testbench for verification
analysis('dc ?saveOppoint t ?param "Iin" ?start "5u" ?stop "40u" ?step "0.125u")
; Get target from ADE XL variable, hint: with desVar you can SET a variable
target = evalstring( desVar( "Ioutwanted"))
Remark: In principle we could also use the run() command, but as the calibration should
also work in a Monte Carlo analysis, use ocnxlSetCalibration() and
ocnxlRunCalibration(). This way the MC setup from ADE XL is inherited. There is
no performance advantage in using the plain run() command for non-MC runs.
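The selection step the script performs after the sweep can be pictured as follows. This is a plain-Python sketch of the idea, not the actual OCEAN code; the 0.95 mirror gain and the waveform lists are invented stand-ins for the simulator data:

```python
# Pick the swept Iin value whose measured output comes closest to the target.
def pick_calibrated_value(sweep_iin, sweep_iout, target):
    return min(zip(sweep_iin, sweep_iout), key=lambda p: abs(p[1] - target))[0]

# Toy sweep data: Iin from 5u to 40u in 0.125u steps, mirror gain ~0.95.
sweep_iin = [5e-6 + 0.125e-6 * k for k in range(281)]
sweep_iout = [0.95 * i for i in sweep_iin]
iin_cal = pick_calibrated_value(sweep_iin, sweep_iout, target=20e-6)
# iin_cal would then be written back, e.g. via desVar("Iin" ...), before
# the main verification simulation runs for that point.
```

The achievable accuracy is bounded by the sweep step: a finer step gives a better match at the cost of more sweep points.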
For modularity, we put this code into a separate file prerun_SweepCal.ocn and load it
from the actual pre-run script. This has the advantage that you can use your preferred
text editor (e.g. nedit, which supports SKILL/OCEAN syntax highlighting) and can
switch more easily between different versions.
To adapt our example for using SA, only the pre-run script needs modification, and we
can start programming using the ADE XL calibration template.
;; Simple but fully functional example for an initial understanding, no change of design
;; (assumes target, noOfBits, trimmin, trimrange and noOfSteps are defined earlier in the script)
ocnxlSetCalibration()
bitWord = 0
for( n 1 noOfBits
i = noOfBits - n
bitWord = bitWord + 2**i
Iin = trimmin+bitWord*trimrange/noOfSteps
desVar( "Iin" Iin )
ocnxlRunCalibration()
simResult = IDC("/V1/PLUS")
if( simResult > target
then bitWord = bitWord - 2**i
)
)
CalResult = trimmin+bitWord*trimrange/noOfSteps
; or just directly use Iin
;; Add this value as ADE XL output so that it can be viewed in outputs window.
ocnxlAddOrUpdateOutput("Iincal_from_Ocean" CalResult)
ocnxlAddOrUpdateOutput("EffectiveBitWord_from_Ocean" bitWord)
ocnxlAddOrUpdateOutput("EffectiveCalResolution_from_Ocean" trimrange/noOfSteps)
; The binary search works here on the full analog testbench - without trim bus!
; Therefore the bus value is something like the value obtained using an ideal DAC.
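For readers less familiar with SKILL, the loop above can be modelled in plain Python. Here measure() is a hypothetical stand-in for the IDC(...) simulator call (a mirror with an assumed gain of 0.95); the structure mirrors the OCEAN loop one-to-one:

```python
def sa_calibrate(measure, target, n_bits, trim_min, trim_range):
    """Successive approximation: try each bit MSB-first and keep it
    only if the measured result does not overshoot the target."""
    n_steps = 2 ** n_bits
    bit_word = 0
    for n in range(1, n_bits + 1):
        i = n_bits - n                      # MSB first, as in the OCEAN loop
        bit_word += 2 ** i                  # tentatively set this bit
        iin = trim_min + bit_word * trim_range / n_steps
        if measure(iin) > target:           # overshoot: clear the bit again
            bit_word -= 2 ** i
    return bit_word, trim_min + bit_word * trim_range / n_steps

target = 20e-6
bit_word, iin_cal = sa_calibrate(lambda i: 0.95 * i, target,
                                 n_bits=10, trim_min=5e-6, trim_range=2 * target)
# bit_word ends as the largest code whose measured output stays at or
# below the target (assuming measure() is monotone in the code).
```

Ten simulator calls settle a 10-bit word, which is the efficiency advantage over the 1024-point full sweep.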
;Load the pre run utility functions (SKILL path was set in .cdsinit)
load("prerun_functions.il")
(let (logfile debug resultsDir jobID pointID iterNum cornerName msg logfileName
Iin Iincal bitWord i noOfBits simResult target)
; Iin is design variable used in verif run testbench
; Iincal is design variable used in calibration run testbench
load("debugCode1.il")
;Initialize pre run session and inherit test setup from the current point; Call this first
ocnxlLoadCurrentEnvironment( ?noAnalysis t )
;Save resultsDir since calling design() resets it; reset after calling design
resultsDir=resultsDir()
;Define design vars specific to the calibration test bench and define analyses
desVar( "Vddcal" "Vdd" )
desVar( "tempcal" "temps" )
; no dc sweep anymore!
analysis('dc ?saveOppoint t )
envOption(
'analysisOrder list("dc" "ac")
)
saveOption( 'subcktprobelvl "2" )
saveOption( 'currents "all" )
saveOption( 'save "all" )
save( 'i "/V1/PLUS" "/V0/MINUS" "/Vref/PLUS" )
temp( "VAR(\"tempcal\")" )
(when debug
fprintf(logfile "resultsDir() = %s\n" resultsDir())
fprintf(logfile "modelFile = %L\n" modelFile())
fprintf(logfile "desVar() = %L\n" desVar())
ocnDisplay(?output logfile 'analysis)
fprintf(logfile "save options:\n")
ocnDisplay(?output logfile 'save)
fprintf(logfile "starting calibration\n")
)
noOfBits = 10
noOfSteps=2**noOfBits
; Get target from ADE XL variable, hint: with desVar you can also SET a variable
target = evalstring( desVar( "Ioutwanted"))
;target = 20u
trimrange=2*target
trimmin=5u
bitWord = 0
for( n 1 noOfBits
i = noOfBits - n
bitWord = bitWord + 2**i
Iincal = bitWord*trimrange/noOfSteps + trimmin
desVar( "Iincal" Iincal)
;Run
ocnxlRunCalibration()
simResult = IDC("/V1/PLUS")
if( simResult > target
then bitWord = bitWord - 2**i
)
(when debug
fprintf(logfile "Iincal = %L\n" Iincal)
fprintf(logfile "bitWord = %d\n" bitWord )
fprintf(logfile "simResult = %L\n" simResult)
)
) ; end of for loop
Iincal = bitWord*trimrange/noOfSteps + trimmin ; recompute final value from bitWord
;Add this value as an ADE XL output so that it can be viewed in outputs window
;An output will be added for each point
ocnxlAddOrUpdateOutput("Iincal_from_Ocean" Iincal)
ocnxlAddOrUpdateOutput("EffectiveBitWord_from_Ocean" bitWord)
) ; end of let
ocnCloseSession()
For releases IC6.1.6 ISR11 and above, you can refer to pre-run script
“prerun_SACalAnaExt_ISR11.ocn”. From IC6.1.6 ISR11, ADE XL has been enhanced
to identify the Monte Carlo mismatch DUT automatically.
Remark: Some parts, like the one on debugging, are very generic. For re-use we put
them into a separate file with extension ‘il’. Also some generic functions are put into a
separate file prerun_functions.il, which is loaded at the beginning. Calling design()
by default removes all the settings (as in ADE L), but with envSetVal("asimenv"
"retainStateSettings" 'cyclic "yes") the settings can be retained (see
http://support.cadence.com/wps/mypoc/cos?uri=deeplinkmin:ViewSolution;solutionNumber=11530352 ).
A next step could be applying successive approximation in the pre-run script to the real
digitally trimmed PMOS mirror. The code is even simpler than for the analog variant, as
we no longer need to calculate the trim current in the loop; the DUT does this for us
(see file prerun_SACalExt.ocn).
The remaining variant, a plain full sweep for a digital calibration, is usually of little
interest, because the SA code is not really more difficult but runs much faster.
Remark: The use of another trim variable works around the ADE XL limitation that
only global variables can be combined with calcVal.
Of course the pre-run script method can also be applied to our more complex filter
replica calibration. An example can be found in filter_design/adexl_prerun, which does
not use the Verilog-A block for executing the calibration but a simpler testbench with a
single ramp in time together with an OCEAN loop.
DUT:
Remark: The trim might be done in different ways. Here, binary-stepped capacitors
connected to the output node are switched via NMOS transistors.
The pre-run script (see prerun_2stepCal.ocn) has been created step-by-step:
First, a frequency calibration is done via bisection on CTRIM at a large trim current,
to make sure that the circuit acts as an oscillator.
Next we can reduce the current, again using bisection on ITRIM, e.g. with 10-bit
accuracy, to get the desired output amplitude.
After this we can check the frequency again and apply a small correction, e.g.
just try two LSB steps. Usually there is some up-shift in frequency because the
transistor capacitances depend on the operating point.
Last, we can reduce ITRIM by, for example, 30% to get a hopefully stable
amplifier and run the main test with these settings. How much current reduction
is needed is found by trial and error and depends on the circuit and the
oscillation amplitude in the 1st trim calibration parts. The best oscillator point for
calibration is at a small amplitude, because then the circuit becomes stable with
an already small bias reduction.
It might happen that the reduction for amplifier stabilization causes again a small
shift in frequency, so empirically we may add further trimming steps (e.g. reduce
the CTRIM bus by 2 LSB steps). However, with this low ITRIM the circuit would not
oscillate anymore.
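The multi-step structure can be sketched in plain Python. The bisection helper and the two monotone measure() models below are invented stand-ins, not the prerun_2stepCal.ocn code:

```python
def bisect_code(measure, target, n_bits, descending=False):
    """Binary search on an n-bit trim code; measure() must be monotone.
    For descending=True, measure() falls as the code increases."""
    word = 0
    for i in reversed(range(n_bits)):
        word += 2 ** i                        # tentatively set this bit
        too_high = measure(word) > target
        if too_high != descending:            # wrong side of the target
            word -= 2 ** i                    # clear the bit again
    return word

# Step 1: frequency trim on CTRIM (toy model: f falls 1 MHz per LSB).
ctrim = bisect_code(lambda c: 1.3e9 - 1e6 * c, 1.0e9, n_bits=10, descending=True)
# Step 2: amplitude trim on ITRIM (toy model: amplitude rises with code).
itrim = bisect_code(lambda c: 0.5 * c / 1023, 0.3, n_bits=10)
# Last step: back off ITRIM by ~30% for a hopefully stable amplifier.
itrim_run = int(itrim * 0.7)
```

Each call settles one n-bit code with n simulations; the empirical corrections (frequency recheck, LSB nudges) would then follow as a few extra runs around these codes.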
This combination of systematic and empirical calibration methods can lead to significant
circuit improvements. For example, at RF it is very difficult to obtain accurate low-noise
amplifiers and sharp filters with methods like feedback (critical for stability and noise) or
feedforward (critical for matching). Calibration therefore significantly helps to overcome
non-idealities of simple but RF-suited and power-efficient circuits. In our example,
without calibration the narrow-band filtering would never work, because the center
frequency is by far too sensitive to technology corners. Also the optimum bias current
depends highly on process, temperature, supply voltage, etc., and only some of these
sensitivities can be reduced by good analog design practices (like using cascodes,
PTAT bias currents, etc.). The verification and design of such circuits would be nearly
impossible without the advanced verification methods described here.
Flow chart:
I: Set up the verification test.
Decision: Is your pre-run similar to the verification run? (yes/no)
IV: Run and test the setup, first at nominal, then for corners, last in MC.
Make modifications and debug it.
Remarks:
The first yes-no decision is not critical as both approaches can be combined
anyway.
Currently it is not possible to do the calibration in an OCEAN pre-run script but
have it reference another existing ADE XL test (let us call it test2); inheritance is
only possible from the attached main test (test1). However, a simple workaround is
to attach the pre-run script to this other test test2 and use the calibration
setting found in test1 via calcVal. The technique is just the same as described
in step V.
(Diagram: a pre-run attached to testcal runs the calibration; testverif2 and testverif3
get the calibration data via calcVal and can run in parallel.)
The more complex the calibration, the greater the chance that advanced techniques
should be combined, at least for best efficiency.
ADE XL, especially with but even without OCEAN, supports such complex setups,
like multi-step calibrations, partial calibrations (e.g. recalibration only for certain
temperatures), etc., ending up with, for instance, multiple pre-runs and multiple
verification runs over different designs.
With OCEAN you would typically put all the calibration steps into a single pre-run script.
For the calcVal approach you would set up individual calibration tests first, then
cascade them by forming the calcVal expressions.
General
Not only for cascaded tests, you have to decide whether you want separate libraries
for the design and for the testbenches. Separate libraries have the advantage that the
testbench library is easier to reuse in other projects (maybe using another PDK). You
also need to decide which cell the adexl view should belong to; in our examples we
create a dedicated cell for adexl views.
Also consider using config views and the Hierarchy Editor (HED) from the
beginning; they offer the flexibility you often need in the end.
How to start?
If you want a cascaded test setup, it is best to start with non-cascaded tests, like
‘testcal’ doing the calibration and ‘testverif’ doing the verification for a given fixed
setting of the calibration variables. Then add the calcVals (or pre-run scripts) to
transfer the calibration data to the subsequent tests.
Work on a simplified setup first, with short runtimes, idealized blocks, small
corner sets, low MC count, etc. Then switch to the final setup.
Generally, keep the ADE L test setups saved and up to date as spectre-state
cellviews, for backup and debugging purposes.
Save the adexl view frequently to have a backup before doing complex changes.
Double-check extensions before going on to further verifications.
Document tricky features for best understanding. Work in a modular way for best
re-use in later projects.
Temperature is special
Temperature sweeps are special in ADE, because with default settings you get the
same global temperature in all tests. If you want a fixed temperature for calibration,
open the ADE L/XL test editor and enter VAR("tempcal") in the temperature entry
field, and use another variable like temps for later sweeps in the DUT verification
tests.
Mismatch setup
For fully correct calibration, the mismatch setup in MC is often important. Check it
separately, e.g. by inspecting several MC runs manually (in the Details view of the
Results pane, instead of looking only at the histograms). The best way is to create a
statistical corner and then check the mismatch setup again in detail. If the statistical
corner executes correctly but MC itself does not, check the pre-run script (or whether a
dummy pre-run script is active). When using the proposed single fixed DUT (containing
the whole design) approach, there should be no problems.
Examples:
Doing a true bandpass filter trimming in the frequency domain with successive
approximation: here the Verilog-A SA block trick would not help (unlike in transient-
analysis-based calibration); better use the pre-run script method (or a simple
sweep, which is no issue if the number of bits is not that high).
For non-ideal trim DACs with missing codes, etc., SA may fail, and using a cross
function with a full sweep might still be too slow. In such cases more complex
algorithms might be required. At least for final sign-off, it is best to use a true
AMS testbench for the calibration run to check the algorithm itself as well.
For correct mismatch sharing in MC runs, a common fixed DUT in all tests is
required. It is not yet possible to simplify the DUT circuit (e.g. by using different
config views), for example to speed up the verification or calibration; this would
result in incorrect mismatch sharing.
For result comparisons among different tests, calcVal can be used. However, to
access pre-run results from the main ADE XL tests, either a helper variable must
be introduced inside the pre-run script, or calcVal must be used in a 2nd
post-run test referencing the 1st main test's output results.
The pre-run script flow is compliant with the ADE XL PAD flow. However, if you
call design() inside the script, you would need to adapt the views manually.
There is a minor restriction in the OCEAN script export capability: if you want to
save OCEAN statements from an ADE XL test, you have to save the setup first as
an ADE L state, then open it with ADE L and export.
Currently there is no direct way to create a specification inside a pre-run script for
pre-run script expressions. However, there are two simple workarounds: you can
create a spec on a variable set by the pre-run script (using e.g.
ocnxlUpdatePointVariable("outvar" outvar) and creating an output
expression with spec in ADE XL, outvar_spec = VAR("outvar")), or you can
create a spec on the output indirectly using calcVal (e.g. using
ocnxlAddOrUpdateOutput("out1" out1) and then creating an expression
with spec in ADE XL, out1_spec = calcVal("out1" "testname")).
OCEAN and OCEAN XL are powerful programming languages forming the
backbone of ADE L/XL/GXL, but some features are not supported in OCEAN
pre-run scripts: scripts run in non-graphical mode, so plot commands are
disabled, and the GXL optimizers cannot be called there (optimizers can be
called in true OCEAN XL scripts). Pre-run scripts do work even with the AMS
simulator.
Summary
We presented a flow and the corresponding ADE XL features to implement simple and
advanced calibrated verifications. We have described how the user can apply the
different key techniques and tailor them for good performance and flexibility. The
methods are compliant with other design and verification requirements and further
Virtuoso® features like optimization with ADE GXL, etc.
References
[1] ADE XL User Guide
[2] L.L. Lewyn, T. Ytterdal, C. Wulff, K. Martin, “Analog Circuit Design in Nanoscale
CMOS Technologies,” Proceedings of the IEEE, vol. 97, no. 10, 2009,
pp. 1687–1714
[3] OCEAN Reference
[4] Cadence Verilog-AMS Training Behavioral Modeling with Verilog-AMS, Version
2.3, Module 8
Appendix
Database Instructions
The whole database for the application-note can be found here:
http://support.cadence.com/wps/mypoc/cos?uri=deeplinkmin:DocumentViewer;src=wp;q
=ApplicationNotes/Custom_IC_Design/adexl_appnote_dbMay2015.tar.gz
To run the examples, generally use mmsim11.1 (e.g. 11.10.214) and ic615 (e.g. isr8),
or newer versions.
Installation
Untar the tar file via gtar -zxvf adexl_appnote_db6Aug2012.tar.gz. It
contains the GPDK090 (under userLibs/90nm) and an OA working library
IP090Lib_gpdk under userLibs/IP090Lib_gpdk. Also look at the README.txt file in
the dfII directory.
cd to /adexl_appnote_db/dfII.
Start virtuoso
Open IP090Lib_gpdk/mirr_design/adexl_ana.
In case of model problems, check the model files and update the paths if needed.
Run Nominal and P corners. You should get correct results. Check especially
postrun/Iout. It should be calibrated to 20uA.
oscillator (just with more DC current in the gain stage). The calibration can be
performed first on oscillator frequency, then on its amplitude (as shown). The real
application might be as LC bandpass amplifier which should work with high gain and
narrow bandwidth, just with a DC current slightly below the oscillation point. This way
we use a kind of replica calibration, without duplicating any circuit blocks.