With the increase in interest in Doppler radar chronographs, I thought it might be useful to pass on some of the things I have learnt using these devices. Now I am not a radar expert or anything like an expert; I am just a bloke who has had to use them in dozens of different applications to track everything from 155mm shells down to 4.6mm bullets and explosively formed slugs from warheads. During this time I was lucky enough to work with someone who is almost certainly the most experienced operator of Doppler radars of all kinds for tracking gun-launched projectiles in the UK, and whose expertise is acknowledged in many other countries. I learnt a lot from him in a close working relationship based on mutual respect and a lot of traded insults about each other's expertise.

All Doppler radars work by measuring the frequency change in the signal reflected from the moving projectile. This gives an accurate measurement of the projectile's velocity relative to the radar. The radar does not know where the gun is; it does not care. The radar does not know where the projectile is either, and it has no way of measuring the distance from itself to the projectile. All it knows is the time since it started measuring, the time the projectile entered its beam, and the velocity of the projectile relative to the radar at any time since entering the beam. Any other data has to be derived from what has been measured. It does not measure the velocity of the projectile at the gun muzzle.

The expensive Doppler radars I was using were basically the same as the FX and LabRadar systems available today, just a lot more powerful, able to calculate more from their data and, in the case of tracking radars, able to work out where the projectile is in the sky thanks to a very narrow beam and a moving radar head which lets the beam follow the trajectory of the projectile.
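To put a number on that frequency change: for a projectile closing head-on with the radar, the Doppler shift is twice the velocity divided by the carrier wavelength. The sketch below works this out for the 775ft/sec example velocity. The 24 GHz carrier frequency is my assumption for illustration (K-band is common for this kind of unit); it is not something stated above.

```python
# Doppler shift for a projectile approaching the radar head-on:
# f_d = 2 * v / wavelength, where wavelength = c / f_carrier.
# The 24 GHz carrier is an assumption for illustration only.

C = 299_792_458.0   # speed of light, m/s
FT_TO_M = 0.3048    # feet to metres

def doppler_shift_hz(v_fps: float, carrier_hz: float = 24e9) -> float:
    """Return the Doppler frequency shift (Hz) for a projectile
    moving straight towards the radar at v_fps feet per second."""
    v_ms = v_fps * FT_TO_M
    wavelength = C / carrier_hz
    return 2.0 * v_ms / wavelength

# The 775 ft/sec muzzle velocity from the worked example:
print(round(doppler_shift_hz(775.0)), "Hz")   # a few tens of kHz
```

A shift of a few tens of kilohertz on a 24 GHz carrier is easy to measure precisely, which is why the velocity measurement itself is so accurate; the problems discussed below are all about geometry and data selection, not the frequency measurement.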
The fixed head radars like the ones you can buy use a wide beam to try to capture the projectile as soon as possible and keep it in the beam. Ignoring the tracking radars, the fixed head radars we used had the advantage of software which could calculate the velocity of the projectile relative to the gun, through being told where the gun muzzle was relative to the radar. So the best the sets you can buy can do is work out the velocity of the projectile as if it had started at the radar.

Suppose we had an ideal set up of radar and gun. The raw velocity data from the radar would look something like this. The actual muzzle velocity was 775ft/sec and each dot represents a data point. This would be regarded as an excellent set of data; most of the ones I have seen contain far fewer points than this, with a much larger spread of the random points. At the start of the data, close to zero time, you can see the points measure a much lower velocity. This is when the projectile is first entering the radar beam and the angle between the projectile's direction of travel and the direction of the beam is highest. The data soon climbs to the velocities we want, in the case above after about one foot of projectile travel.

To calculate the velocity at time zero, the software in the radar unit has to curve fit the data and extend the resulting curve back to zero. The question then is: what data does the software use? In the case above, if the radar uses all the data and fits a straight line to it we get one result at time zero, but if the software ignores the initial low-reading data points and fits a line to the rest of the data we get a different result, as shown below. The black line is the curve fit obtained if we use all the data and predicts a velocity of 773ft/sec at time zero. The red line is the curve fit if we ignore the initial data, and in this case we get a zero time velocity of 774.5ft/sec.
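The back-extrapolation described above can be sketched in a few lines: fit a straight line v(t) = a·t + b to the velocity points and read off b, the predicted velocity at time zero. The data below is invented to mimic the figure (early low readings as the projectile enters the beam, then a slow decay); it is not real radar output, and real units will use their own fitting methods.

```python
# Sketch of extrapolating Doppler velocity data back to time zero.
# Fitting all points versus trimming the early beam-entry points
# gives two different "muzzle" velocities, as in the article.

def fit_line(points):
    """Ordinary least-squares straight-line fit.
    Returns (slope, intercept); the intercept is the time-zero velocity."""
    n = len(points)
    sx = sum(t for t, _ in points)
    sy = sum(v for _, v in points)
    sxx = sum(t * t for t, _ in points)
    sxy = sum(t * v for t, v in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# (time s, velocity ft/sec) -- first two points read low because of
# the beam-entry angle; invented values around a 775 ft/sec true MV.
data = [(0.001, 768.0), (0.002, 772.0), (0.003, 774.6),
        (0.004, 774.5), (0.005, 774.3), (0.006, 774.2),
        (0.007, 774.0), (0.008, 773.9)]

_, v0_all = fit_line(data)        # fit using every point
_, v0_trim = fit_line(data[2:])   # fit ignoring the early low points

print(f"all data: {v0_all:.1f} ft/sec, trimmed: {v0_trim:.1f} ft/sec")
```

With this made-up data the trimmed fit lands very close to the true 775ft/sec while the all-data fit reads several ft/sec low, which is exactly the sensitivity to data selection being described.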
Now those two figures are not that far apart, probably an acceptable error for most purposes, but this is an ideal case, so what can go wrong? The point at which the radar detects the projectile will depend on where you have positioned the radar and your gun and the directions in which they are pointing. The distance to the start of the data can vary hugely; I have seen LabRadar data which did not start until the projectile was six yards away. In our trials we also found that the weather can affect the amount of data retrieved, with some days when no data could be obtained. So let's assume you are having a really bad day; you could easily end up with data like this (I have seen worse!). Now the zero time velocities are 763ft/sec for all the data and 766.5ft/sec ignoring the first data point, both significantly below the 775ft/sec true value. The problem for the user is that you won't know whether your unit is working on all the data, on selected data, or on how much data. The amount of data will vary with each shot and each setup. I have assumed a straight line fit to the data in all cases; some designs may use other curve fits, such as polynomials. This will introduce further changes to the predicted zero time velocity and to the accuracy of the prediction in different circumstances.

Based on many years of use of these devices, there seem to be a number of vital actions you have to take to ensure consistent, accurate use. The setup is all important. Get the unit as close to the muzzle of the gun as possible. Carefully align the direction of fire with the direction of the radar beam; in the example above with little data I built in a misalignment of 5 degrees, which reduced measured values by about 2ft/sec. Be aware that the weather may affect the amount of data you are getting; the worst results we got were when a weather front was approaching. Finally, be consistent in your positioning and line of fire relative to the radar for each shot.
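The misalignment effect can be estimated with the standard cosine relationship: a Doppler radar only measures the component of velocity along its beam, so the reading scales with the cosine of the angle between the beam and the line of fire. This is a simplified sketch assuming a pure angular misalignment with no lateral offset; the real geometry, with the radar sitting beside the muzzle and the angle changing as the projectile flies downrange, is what gives the roughly 2ft/sec figure quoted above, while this simple model gives about 3ft/sec at 5 degrees, the same order of magnitude.

```python
import math

# A radar only sees the along-beam component of velocity, so a
# misalignment of theta degrees makes it read low by a factor
# of cos(theta). Simplified model: constant angle, no offset.

def measured_velocity(true_fps: float, misalignment_deg: float) -> float:
    """Velocity the radar reads when the beam is misaligned with
    the line of fire by misalignment_deg degrees."""
    return true_fps * math.cos(math.radians(misalignment_deg))

true_v = 775.0  # the true muzzle velocity from the example
for angle in (1, 2, 5, 10):
    err = true_v - measured_velocity(true_v, angle)
    print(f"{angle:2d} deg off: reads {err:.1f} ft/sec low")
```

The error grows quickly with angle (cosine error is tiny at 1 degree but very noticeable at 10), which is why careful alignment and consistent positioning matter so much.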
Good results can be obtained from this type of chronograph if you take all the precautions.