Friday, 26 July 2013

Technology focus - Future developments in On-Board Diagnostics

The latest generation of OBD is a very sophisticated and capable system for detecting emission related problems with the engine and powertrain. However, it still relies on the driver of the vehicle to actually do something about any problem that occurs.

In this respect, OBD2/EOBD is no improvement over OBD1 - some enforcement capability is still required. Currently under consideration are plans for OBD3, which would take OBD2 a step further by adding the possibility of remote data transfer. This would use remote transmitter/transponder technology similar to that already used for automatic electronic toll collection systems. An OBD3 equipped vehicle would be able to report emissions problems directly back to a regulatory agency. The transmitter/transponder would communicate the vehicle VIN (Vehicle Identification Number) and any diagnostic codes that have been logged. The system could be set up to report an emissions problem automatically the instant the MIL comes on, or alternatively, to respond to a query from a regulator about its current emissions performance status.

What makes this approach so attractive is its efficiency. With remote monitoring via the on-board telemetry, the need for periodic inspections could be eliminated, because only those vehicles that reported problems would have to be tested. The regulatory authorities could focus their efforts on the vehicles and owners actually causing a violation, rather than relying on random testing. With a system like this, the available enforcement resources could clearly be used far more efficiently, with a consequent improvement in the quality of our air.

An inevitable change that could come with OBD3 would be even closer scrutiny of vehicle emissions. The misfire detection algorithms currently required by OBD2 only watch for misfires during the driving conditions that occur in the prescribed driving cycles; they do not monitor misfires during other engine operating modes, full load for example. More sophisticated methods of misfire detection will become commonplace, able to feed back other information to the ECU about the combustion process - for example, peak cylinder pressure, detonation events, or cylinder work done/balancing. This adds another dimension to the engine control system, allowing greater efficiency and more power from any given engine design purely through a more sophisticated ECU control strategy.

Future OBD systems will undoubtedly incorporate new developments in sensor technology. Currently, the evaluation is done via sensors that monitor emissions indirectly. A clear improvement would be the ability to measure exhaust gas composition directly via on-board measurement (OBM) systems. This is more in keeping with the philosophy of emissions regulation and would overcome an inherent weakness of current OBD systems: they fail to detect a number of minor faults which do not individually activate the MIL or cause excessive emissions, but whose combined effect is to push emissions over the limit.

The main barrier is the lack of suitably durable and sensitive sensors for CO, NOx and HC. Some progress has been made, and some vehicles are now being fitted with NOx sensors, but there still appears to be a void between the laboratory-based sensors used in research and the reliable, mass-produced units that could form the basis of an OBM (On-Board Monitoring) system.

Fig 1 - NOx sensors are now in use! (Source: NGK)

Another development for future consideration is the further implementation of OBD for diesel engines. As diesel engine technology becomes more sophisticated, so do the OBD requirements. In addition, emissions legislation is driving more sophisticated after-treatment of exhaust gas. All of these subsystems must be checked via the OBD system, and each presents its own specific challenges - for example, the monitoring of exhaust after-treatment systems (particulate filters and catalysts) in addition to more complex EGR and air management systems.

Fig 2 - Current monitoring requirements for diesel engines

Rate-based monitoring will be more significant in future systems, allowing in-use performance ratio information to be logged. It is a standardised method of measuring monitoring frequency that filters out the effect of short trips, infrequent journeys and similar factors which could otherwise skew the OBD logging and reactions. It is an essential part of the evaluation where driving habits or patterns are not known, and it ensures that monitors run efficiently in use and detect faults in a timely and appropriate manner. It is defined as…

Minimum frequency = N/D

N = Number of times a monitor has run
D = Number of times vehicle has been operated
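The ratio above can be sketched in a few lines of Python (the function name and the example figures are illustrative, not from any regulation):

```python
def in_use_performance_ratio(monitor_runs: int, drive_cycles: int) -> float:
    """Rate-based monitoring ratio: N (times the monitor has run)
    divided by D (times the vehicle has been operated)."""
    if drive_cycles == 0:
        raise ValueError("no drive cycles logged yet")
    return monitor_runs / drive_cycles

# e.g. a monitor that completed 88 times over 230 drive cycles
ratio = in_use_performance_ratio(88, 230)
print(round(ratio, 3))  # 0.383
```

A regulator can then compare this ratio against a required minimum to confirm the monitor is running often enough in real-world use.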

A significant factor in the development of future systems will be the implementation of the latest hardware and software development technologies. Model-based development and calibration of systems will dramatically reduce testing time by reducing the number of test iterations required. This technique is already quite common for developing engine-specific ECU calibrations during the engine development phase.

Virtual Development of OBD
Hardware-in-loop (HIL) simulation plays a significant part in rapid development of any ECU hardware and embedded system. New hardware can be tested and validated under a number of simulated conditions and its performance verified before it even goes near any prototype vehicle. The following tasks can be performed with this technology:

Full automation of testing for OBD functionality
Testing parameter extremes
Testing of experimental designs
Regression testing of new designs of software and hardware
Automatic documentation of results

Fig 3 - HiL environment for OBD testing

However, even in a HiL environment, a target platform is needed (i.e. a development ECU). These are normally expensive and, in a typical development environment, scarce. In line with the general industry trend to 'frontload', it is now possible to run a complete virtual ECU and test environment for ECU functions, including OBD, on a normal PC with a real-time environment. The advantage is that no hardware is needed; more importantly, simulation procedures (drive or test cycles) can be executed faster than real time - a 20 minute real-time test cycle can be executed in a few seconds, which is a significant benefit in the rapid prototyping phase.

Sunday, 21 July 2013

Master your Multi Meter - A basic tutorial

A multi-meter is one of the most versatile pieces of kit in your workshop toolbox. The question is, though, how do you use it productively, and what can it tell you? Well, consider this blog a beginner's guide to finding your way round the most common features of a typical digital multi-meter. We'll look at how to make some typical measurements and how to interpret the readings. So, read on and master your multi-meter.

Multi-meter basics – what is a multi-meter?
A simple enough question, but worth answering in a bit of detail. In an electrical system there are a number of different aspects you may want to measure or monitor. For example, the voltage of the system is often of interest, as this effectively shows the electrical 'pressure' that pushes the current around to do the work (for example, heating a bulb filament until it is white hot and emits light).
In addition, you may want to know the current itself, as current is effectively the amount of flow in the system – more flow, more work done. So the pressure and flow are linked, more pressure, more flow, more work done. So voltage as well as current measurement is often important.
But you can probably appreciate that a flow meter is a completely different measurement device, with a different measuring principle, than a pressure gauge - and it's the same with electrical measurements. A voltmeter is a different type of meter from an ammeter, or indeed an ohmmeter (which measures resistance to current flow in part of the circuit). So you need a different instrument for each - and that is the beauty of a multi-meter! It's a single meter capable of measuring more than one thing in an electrical circuit - effectively several devices in one (hence the name 'multi'). Typically, multi-meters will always measure amps, volts and ohms (resistance), but they often incorporate other features that allow much more analysis and measurement of a circuit.
Originally, multi-meters were analogue, with a needle and dial (like a speedometer). However, these have been almost universally superseded by the digital multi-meter, also known as a DMM. Note that analogue meters are often known as AVOs or AVO meters (Amps-Volts-Ohms). DMMs are much more robust than analogue meters as they don't need a sensitive needle and dial mechanism; however, some users prefer to look at a dial, as it is easier to process visually and to detect trends (which is why digital speedometers haven't really caught on).

Fig 1 - Analogue and Digital type multi-meters (Draper)

What can you measure with a multi-meter?
Let’s concentrate on the basics – how do you connect a meter to the circuit to measure volts (pressure), amps (flow) and resistance (resistance to flow). As we mentioned, the measurement principle is different for each, and so is the way that the meter is connected in the circuit. Let us study each case:

To measure the voltage, you need to connect the meter ‘across’ the component or circuit section of interest, effectively in parallel. The meter then ‘sees’ the same pressure as the component and can measure and display the value. Note that the volt meter has a very high resistance; this ensures that no additional load is applied to the circuit by the meter, and hence the circuit itself is not disturbed by the meter whilst performing a reading. Take a look at the circuit diagram:

Fig 2 - Connection of the DMM for voltage measurement 

Note that most vehicle circuits will be earth return, hence one side of the voltmeter may often be connected to a convenient earth point when taking a reading. Another common, but perhaps underused, technique when measuring voltage - particularly useful on vehicle circuits where voltage is low but current is high - is to measure the voltage 'drop' across part of the circuit. This applies particularly if a high resistance is suspected of causing a problem, for example across a switch or connector. Measuring the voltage drop highlights a resistance in a working, loaded circuit and can easily show up bad connections. The diagram below shows the meter connection - as a rule of thumb, the voltage drop should be no more than 10% across any part of the wiring circuit to the component.
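The 10% rule of thumb is easy to apply in code. This is a minimal sketch (the function name and figures are illustrative only):

```python
def voltage_drop_ok(supply_v: float, drop_v: float, limit: float = 0.10) -> bool:
    """Check a measured voltage drop against the rule-of-thumb limit:
    no more than ~10% of system voltage lost across any one part of the run."""
    return drop_v <= supply_v * limit

# 0.9 V lost across a suspect earth strap on a 12.6 V system: within limits
print(voltage_drop_ok(12.6, 0.9))   # True  (limit is 1.26 V)
print(voltage_drop_ok(12.6, 1.5))   # False (excessive resistance somewhere)
```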

Fig 3 – Connection of the DMM for voltage drop measurement

To measure current, the meter has to be connected into the circuit - in series. The meter is connected in circuit so that it can measure the flow around the circuit in operation. When a multi-meter is measuring current, the resistance of the meter circuit is very low; this prevents the meter from adding circuit resistance and affecting the accuracy of the reading. The diagram below shows how the meter is connected:

Fig 4 – DMM connected for current measurement

The important thing to remember is that the meter itself has a limit to the amount of current it can measure; most DMMs are limited to 10 amps maximum. So, you must be careful not to overload the meter circuit, which is often protected by a fuse inside the meter.

Apart from voltage and current, it's often useful to be able to measure the resistance of a circuit or component - that is, the restriction it provides to the flow of current. According to Ohm's law, resistance (in ohms) multiplied by current (in amps) equals voltage (in volts). So, knowing the current through and voltage across a component, you can calculate its resistance (resistance equals volts divided by amps). However, it's not always convenient to measure both voltage and current in a circuit (you need two meters), and you may want to establish resistance without powering up the circuit (for example, measuring a component out of circuit).
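The Ohm's law rearrangement described above looks like this in Python (values chosen purely for illustration):

```python
def resistance(volts: float, amps: float) -> float:
    """Ohm's law rearranged: R = V / I (ohms = volts / amps)."""
    return volts / amps

# 2 V measured across a component passing 0.5 A gives 4 ohms
print(resistance(2.0, 0.5))  # 4.0
```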

For this application you can use an ohmmeter (the ohm being the unit of resistance). An ohmmeter measures resistance by passing a small current through the component or circuit and establishing the resistance from the measured voltage and current. The current is tiny, supplied by a dry cell battery inside the meter, so there is no danger of damage to the unit under test; also, the component is removed from the circuit completely for the measurement, so the circuit does not need to be activated. The connection for measuring resistance is shown below and always involves complete removal of the component, or isolation of the circuit, in order to make a measurement.

Fig 5 – DMM connected across component for resistance measurement

Note that it often makes sense to check or calibrate the meter before making a resistance measurement. To do this, connect the test leads of the meter together and check that the reading is zero ohms. Also note that the component or circuit must be completely isolated, or you will get false readings.

What else can you measure?
Now we have covered the basic measurements, it’s worth noting that most multi-meters have additional measurement modes or features, some of which can be quite useful and are worth understanding. Typical extra measurements that you may see, depending on the meter are:

Continuity test:
Very similar to a resistance test, often incorporating an audible signal to give a ‘go’ – ‘no go’ indication. Very useful for testing bulbs, fuses, switches etc. – components that are generally either open or closed circuit – this mode gives a quick indication if OK or not. The meter generally gives an audible signal (from a buzzer) if the resistance is below a certain value. Can also be used on wiring and connectors to detect open circuits, as long as the circuit is not live, and is isolated.

Diode test:
Another type of resistance check, but specifically for diodes (which act as electrical one-way valves). Diodes are semiconductor junctions and need a minimum voltage applied across them before they will 'switch on' and conduct. Most multi-meters will not provide this minimum voltage in resistance test mode, as they tend to use very low voltages to prevent circuit or component damage. In diode test mode, a small current is supplied by the meter. This is used to test the diode in the forward and reverse directions (by reversing the lead connections manually). A healthy diode should conduct one way and block the other. When the diode conducts, the voltage drop across it is shown on the display - generally about 0.5-0.7 volts, depending on the diode type. If the diode is blocking, no current flows and the meter shows an over-range indication.

In addition, special features are often added to multi-meters aimed at specific applications, for example, Electronics laboratory test meters may include:

An additional connector to allow temperature measurement via a thermocouple, generally supplied with the meter. The tip of the thermocouple is the sensing element and the temperature is shown on the meter display. Note that thermocouples are not incredibly accurate - typically around ±1 degree centigrade.

Transistor Test
This is an electronic component test feature, similar to diode test, but to test the gain factor of a transistor (i.e. the amplification ability of the transistor, also known as hfe). Unless you’re an electronics engineer or technician, you won’t need this.

Capacitor Test
As above, specifically for testing the capacitance value of a capacitor – again, unless you are into electronics, you won’t need this!

Multi-meters aimed at automotive diagnostics are also popular; these may include the following features:

Electronics Tacho
This mode allows measurement of engine speed via an inductive clamp generally supplied with the meter. The clamp picks up ignition pulses from the HT lead, the meter calculates the time difference between the pulses and converts this to engine speed. Sometimes the meter can be adjusted for 4 stroke or 2 stroke engines to give the correct reading, otherwise the reading has to be halved for a 2-stroke or wasted spark ignition system.

Dwell/Pulse width
This mode allows pulse evaluation - understanding the width of a pulse, or its duty factor (i.e. how long the pulse is active within a switching cycle). Typical automotive applications include points dwell measurement (how long the points are closed, normally given as an angle in degrees or a percentage); injector opening time (in milliseconds); and idle speed control valve duty (% on/off of the driver circuit), among others.
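The duty factor described above is simply on-time as a fraction of the switching cycle. A minimal sketch, with illustrative figures:

```python
def duty_cycle_pct(on_time_ms: float, period_ms: float) -> float:
    """Duty factor: percentage of the switching cycle the signal is active."""
    return 100.0 * on_time_ms / period_ms

# e.g. an injector held open for 3 ms within a 20 ms cycle
print(duty_cycle_pct(3.0, 20.0))  # 15.0 (%)
```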

Practical Measuring Tips
When using a multi-meter, bear in mind the following:
  • Note that most multi-meters will measure AC (Alternating Current) and DC (Direct Current). For vehicle applications, you will be measuring DC almost exclusively! 
  • Make sure that when you are measuring that the meter is set correctly before you connect to the test point. Select the correct mode (AC volts, DC volts etc.). 
  • Some meters are ‘auto ranging’ so they are able to automatically detect the correct range according to the input. However, with some meters you have to select the range manually – be careful when doing this, if you are not sure what range you need, start at the highest and work your way down the scale!
  • Make sure that you connect the test leads correctly for the mode you are in, there are normally several jack sockets, different ones for current and voltage. If you don’t get this right you will get no reading (best case) or worst case, you will damage the meter!
  • Most digital multi-meters will read a maximum of 10 amps current, and most are fitted with an internal fuse so that if they are overloaded the fuse blows before any damage occurs to the cables or the meter.

Typical Measurements
Let’s look at a typical measurement application - measuring voltage. Voltage is a measure of the system ‘pressure’, needed to push the current around. No volts, no current flow! To measure voltage, simply connect the leads to the appropriate jack sockets.

Picture1 – Connecting leads into jack sockets

Use the dial to select the correct measurement range: a 12 volt system will read somewhere between 10 and 20 volts, so for this meter we can select the 30 volt range - or, if the meter is capable of auto-ranging, just select DC volts.

Picture 2 – Selecting correct range on the meter

Now connect the leads: for negative earth cars, black lead to a good earth, red lead to the test point (for positive earth cars, the other way round).

Picture 3 – Connecting the leads

You’re now connected, so you can observe the reading; the display will show the voltage potential at the test point, which should be similar to battery volts.

Picture 4  – Meter reading battery volts from light switch supply

Before starting any voltage measurements, connect your meter across the vehicle battery - this confirms that the meter is working, and that the battery is not flat! Note that in the voltage ranges the resistance of the meter is ‘high’, so that it doesn’t affect the circuit being tested by becoming a significant additional current path.

A digital multi-meter is a useful piece of kit, particularly one with the extras useful for automotive work. However, you don’t need to spend a fortune: all meters will read volts, ohms and amps, and those are the basic functions you need for electrical system fault finding. The most important things to look for are a good quality, durable unit with a protective case or holder that will stand up to the working environment. Long leads are essential for use around the vehicle (look for leads of approx. 1 metre), as is a large, clear display (backlighting is also useful).

Sunday, 14 July 2013

Automotive Scope Basics - setting the time base correctly

Setting the correct sampling rate when using a scope is a decisive factor in getting a quality measurement. It ensures you get the information you need from the measured signal, so that you can make an informed diagnostic judgement! The right rate depends on the signal input channel and the quantity you are trying to measure. Let’s look closely at how you can get this right, and avoid under- or over-sampling.

The first thing to understand is the reasoning behind an appropriate sample rate. Of course, you could sample every channel as fast as possible! But you would end up with large, difficult-to-handle data files, carrying extra information (and possibly noise) that you don’t need. Conversely, if you don’t set the sample rate high enough, you will miss crucial components of the signal - this is known as aliasing! One of the first things to appreciate is the relationship between sample time (the interval between samples) and frequency - it is easy:

Frequency = 1/sample time

e.g. a sample time of 10 milliseconds => 1/0.01 => 100Hz

So, by measuring the cycle time you can calculate the frequency, and vice versa - but what does this mean, and how does it help? In signal processing there are sampling theorems (try Googling Nyquist and Shannon) which state that the sampling frequency must be at least twice the highest frequency component of interest in the signal. This means that if you can measure and establish the highest fundamental frequency, you can set the sample rate accordingly. There is a slight complication though: when using a scope to measure automotive signals, the signals are generally transient in nature, normally related to engine speed. So it is important to consider what the signal frequency will be at the highest engine speed that may occur during a measurement - let’s look at an example. The diagram (Fig 1) shows a typical inductive CPS (Crank Position Sensor) signal, in this case with two ‘gaps’ as reference points for the engine management system.

Fig 1 – CPS raw signal, 2 positions per revolution where there are missing teeth as reference points

In this case the cursors are measuring the time for one engine revolution (about 30 milliseconds), which converts to 2000rpm (which is correct).

e.g. 1/0.03 => ~33.33Hz - then multiply by 60 to convert from seconds to minutes...

 =>33.33 x 60 => 1999 rpm (approx. 2000rpm)
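The period-to-rpm conversion above is easy to wrap up as a small Python helper (names are illustrative):

```python
def rpm_from_period(period_s: float) -> float:
    """Convert the period of one engine revolution (seconds) to rpm."""
    freq_hz = 1.0 / period_s   # revolutions per second
    return freq_hz * 60.0      # revolutions per minute

# ~30 ms per revolution, as measured by the cursors in Fig 1
print(round(rpm_from_period(0.03)))  # 2000
```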

If we look in a bit more detail at the signal, we can examine the highest frequency part (Fig 2)

Fig 2 – CPS signal, zoomed in, cursors measuring the time difference of the high frequency part

In this case, the display shows the frequency directly in the bottom right-hand corner (I am using a Picoscope). Based on this (~4kHz), I know I have to sample at at least 8kHz to ensure that I don’t miss anything in the signal. As mentioned earlier, this measurement was taken at an engine speed of 2000rpm, so I need to consider an upper limit. In this case it’s a gasoline engine and I am not likely to exceed 6000rpm during my measurement task, so I can set the sample rate at 3 x 8kHz (24kHz) or slightly greater (according to the time base steps available on your measurement device). Now I know that I will sample with good digital resolution and conversion quality, without oversampling, right throughout my task. Let’s take a look at what happens when you under- or over-sample. The picture below (Fig 3) shows the effect of undersampling. In this screenshot it is not too extreme: the signal is sampled at half the value the sampling theorem requires, i.e. at its actual frequency - so you can see the basic shape of the wave, but also the loss of detail compared to the correctly sampled signal.
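The rate selection steps just described - scale the measured signal frequency up to the highest expected engine speed, then apply the Nyquist factor - can be sketched like this (function name and figures are illustrative):

```python
def min_sample_rate_hz(signal_hz_at_ref_rpm: float, ref_rpm: float,
                       max_rpm: float, nyquist_factor: float = 2.0) -> float:
    """Scale a signal frequency measured at a reference engine speed up to
    the highest expected speed, then apply the Nyquist factor (>= 2)."""
    worst_case_hz = signal_hz_at_ref_rpm * (max_rpm / ref_rpm)
    return worst_case_hz * nyquist_factor

# ~4 kHz tooth frequency measured at 2000 rpm; engine may reach 6000 rpm
print(min_sample_rate_hz(4000, 2000, 6000))  # 24000.0 (i.e. 24 kHz minimum)
```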

Fig 3 – Undersampling (brown trace at 8kHz – the minimum required, blue trace at 4 kHz)

In Fig 4 below we can see the effect of oversampling on the signal: basically a lot of noise appears which, in this case, adds no value to the evaluation and could be misleading. (That said, it’s worth noting that noise can sometimes be the root source of a problem, so occasionally it is necessary to sample at high frequency to capture it.)

Fig 4 – Oversampled signal (brown trace at 8kHz, blue trace at 1MHz)

What is not shown is the effect on the size of data files - large files are difficult to handle and store! Fig 5 shows the relative file size at the different sample rates used for the screenshots; file size grows in direct proportion to sample rate, so for measurements over extended periods at high resolution, the files will be very large!

Fig 5 – Data file size compared to sample rate

Experimenting with this signal showed that a 100kHz sample rate was a good compromise! Sampling theorems require a minimum of twice the highest frequency component; in my experience, any factor between 2 and 10 is fine, depending on the application and the task. Fig 6 shows the signal at 100kHz. This is a sampling factor of 25 at 2000rpm and 8.33 at 6000rpm - fine for this CPS signal, which will be appropriately sampled even at maximum engine speed.
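The sampling-factor check above is just a division, but it is worth making explicit (names are illustrative):

```python
def sampling_factor(sample_rate_hz: float, signal_hz: float) -> float:
    """How many samples land on each cycle of the fastest signal component.
    A factor between roughly 2 and 10 is usually a sensible compromise."""
    return sample_rate_hz / signal_hz

# 100 kHz sampling: ~4 kHz signal at 2000 rpm, ~12 kHz at 6000 rpm
print(round(sampling_factor(100_000, 4_000), 2))   # 25.0
print(round(sampling_factor(100_000, 12_000), 2))  # 8.33
```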

Fig 6 – Sampling at 100kHz, a good compromise in this case

Establishing the correct basic sampling frequency for a signal is good practice: it optimises the trade-off between good quality measurement data and manageable file sizes. Most importantly, it means that you have a good understanding of what you are doing, and that you know how to configure and use your scope effectively! It is well worth practising this aspect of the set-up - measure some signals with deliberate over- and under-sampling, then compare them against a correctly sampled capture of the same signal so you can see the difference in data quality.

Thursday, 4 July 2013

Oscilloscopes for Diagnostics - set-up fundamentals

Oscilloscopes were once the preserve of scientists and physicists, measuring complex signals in research labs, pushing the boundaries in the quest to extend human knowledge! But, as we all know, the cost of sophisticated technology falls over time, such that things that were once horrendously expensive are now practically throwaway items.

The consequence of this is that oscilloscopes are now relatively affordable as general measuring devices, and particularly useful for electrical system diagnostics - they're almost down to the price of a really good multimeter. But just because they are within the grasp of the average 'technician in the street', does that mean it's worth digging deep to get your mitts on one? And even if you did, would you be able to get any real benefit out of it?

The basics:
So, you've probably heard the term oscilloscope, but what are we actually talking about? Fundamentally, the 'scope' (as it's commonly known for short) is a voltmeter. However, a voltmeter gives its reading either via a needle on a scale or as a discrete value, whereas a scope displays the reading as a curve drawn on a screen. So what? Displaying the signal in this way means that fast-moving changes can be seen. Consider a signal in the form of a single voltage pulse, like a spike! On a digital multimeter it probably wouldn't even register; on an analogue meter you might, if you're lucky, see the needle twitch. Look at the same signal on a properly set up scope, however, and you will see much more of the detail: the profile of the signal shape, the rising and falling edges, the peak value, plus the duration of the pulse - much, much more detail than you'd get from a meter.

Of course, for some signals you don't need all that detail - generally, signals that change slowly over time, for example a temperature signal or a pressure sensor signal. These are fine viewed with a meter. However, signals which are dynamic in nature, in particular those related to crank position or fast-changing sensor values, make much more sense when viewed on a scope. To give a specific example, think of a crank position sensor signal. View it on a meter and you'll just see a constant or slightly wavering voltage; measure it with a scope and you will see all the detail of the edges relating to crank position, and the missing tooth providing the TDC reference mark. A signal such as this needs a scope measurement to really check its quality during operation - using a meter, you would be hard pressed to diagnose any run-time faults.

Scopes - Analogue, then Digital
Originally, scopes were used in labs for analysing signal waveforms. They were analogue devices, similar in many ways to the old-fashioned television, complete with a cathode ray tube. The basic operation was that an electron beam was swept across a phosphor-coated screen at regular intervals according to the scope set-up, the trace persisting briefly until it was redrawn - a cyclic process, with the screen acting as a kind of very short-term memory. The beam was deflected by the applied signal to be measured; so, imagining a pulse, the line drawn across the screen would show the pulse pattern over time. The disadvantage of this type of device is that it is difficult to store or capture a waveform - the only real method is literally to photograph the screen image. Not ideal.

Fig 1 - A typical analogue lab scope - at one time, state of the art kit!

The digital age overcame this by sampling the waveform and storing it digitally. With this technology, measurements can be stored and further processing and analysis carried out by a signal processor, either during or after the measurement. Waveforms can be archived as files on a hard disc drive, and these digitally captured traces can then be analysed in greater detail after the measurement. Digital scopes are now standard, with sampling speed and performance matching analogue scopes for all but a few very specific applications. Certainly for automotive use the digital scope is the norm - so no further discussion of scope history from this point on! It is important, though, to understand how a digital scope captures the signal, as this creates a few things you'll need to consider when setting up a scope for a measurement task - let's take a look at this in more detail...

Fig 2 - A typical 'scope' kit for Automotive Diagnostics (source: Pico Technology)

Scope settings for accurate sampling:
A digital sampling device (in our case a scope, though the same applies in many other applications) uses an analogue-to-digital converter (often abbreviated to ADC) to convert the target waveform or signal into something that can be stored or manipulated by a digital signal processor or computer. The ADC samples the applied input signal at fast, regular intervals; each sample creates a measurement value - a number - that is resolved into a binary value (1s and 0s) and stored in electronic memory. In this way we end up with a string of numbers in memory which represents the waveform. However, there is a critical consideration here! Between sample points the signal has not been measured, so between two samples the scope simply draws a straight line (known as interpolation) - we therefore have to be certain that we take enough samples to capture the full signal detail. If you don't sample quickly enough, detail between samples can be missed completely (a phenomenon known as aliasing), so an important consideration when setting up the scope is the sampling frequency: are you sampling fast enough to capture the detail of the signal? Of course, you could just sample every signal as fast as your device allows, but oversampling uses up valuable memory and adds no value, because the measurement files become unnecessarily large to manipulate and store. So this part of the set-up is a compromise that you need to consider and get right!

Fig 3 - It is important to sample fast enough to capture the high frequency signal components

In addition, you need to think about the vertical resolution. The scope has a certain input range (say -10 to +10 volts), and within this range there are a certain number of 'bits' (input steps) available for the digitisation process - effectively the minimum signal change the scope can record between samples on the vertical axis. This is important for similar reasons to the sample rate: you need to use as many of the available bits as possible to digitise your signal, otherwise the conversion will be poor and detail will be lost - the signal will appear 'blocky', with steps instead of a nice smooth curve during dynamic changes.
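The 'blocky' quantisation effect is easy to demonstrate numerically. This sketch (function name and figures are illustrative, not from any scope's API) snaps an input voltage to the nearest ADC step for a given bit depth and input range:

```python
def quantise(value: float, v_min: float, v_max: float, bits: int) -> float:
    """Snap an input voltage to the nearest ADC code, illustrating the step
    size that a given bit depth gives over a given input range."""
    levels = 2 ** bits                    # number of discrete codes
    step = (v_max - v_min) / (levels - 1) # volts per code
    code = round((value - v_min) / step)
    return v_min + code * step

# a 1.0 V signal captured on a +/-10 V range with an 8-bit converter:
# the step size is ~78 mV, so the stored value lands on the nearest code
print(round(quantise(1.0, -10.0, 10.0, 8), 4))  # 0.9804
```

Narrowing the input range to just cover the signal (say -2 to +2 volts) shrinks the step size and recovers the lost detail - which is exactly why matching the vertical range to the signal matters.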

Fig 4 - Quantisation error due to insufficient dynamic range of vertical axis

Once you have your sampling and input range settings correct, relative to the signal you are measuring, you should be able to enjoy good quality data - excellent information to assist with many diagnostic procedures. Remember always to have a good ground connection/reference on the input channel, to avoid signal noise and crosstalk. Make sure you use the correct type of probe for the signal you want to capture, and zero/calibrate the input channel before any real measurement - just to be sure that you'll capture what you want, with the correct amount of detail. A useful tip is to keep your scope on hand, primed and ready for use - not tucked away somewhere so that it's an effort to use. Then it is easy to measure and store signals from known good components or systems. In this way you can build up your own reference library of 'good' data to use in diagnostic procedures. Over time you'll accumulate a 'big data' set of information for comparison when you are looking to locate a real fault - an invaluable timesaver!