Define the term Measurement

Measurement is the process of assigning a numerical value to a physical quantity or characteristic of an object or system using a defined standard or unit. It involves comparing the unknown quantity with a standard quantity of the same kind and determining the numerical value of the unknown quantity in terms of the standard unit. Measurement is essential in many areas of human activity, including science, engineering, commerce, and everyday life. It provides a means of quantifying and communicating information about physical quantities and helps in making informed decisions.

Recall the Significance of Measurement

Measurement is significant in many ways, and it plays a crucial role in various fields, including science, engineering, economics, and social sciences. The following are some of the key reasons why measurement is significant:

  1. Provides objective information: Measurement provides objective and quantifiable information that can be used to make informed decisions. It allows us to determine the size, quantity, or amount of a particular phenomenon or variable.
  2. Enables comparison: Measurement enables us to compare and contrast different objects or phenomena. We can compare the weight, height, length, or volume of different objects, or we can compare the effectiveness of different treatments or interventions.
  3. Facilitates prediction: Measurement allows us to make predictions about the future based on past data. For instance, measuring the performance of a company over several years can help us predict its future profitability.
  4. Improves accuracy: Measurement improves the accuracy of our observations and helps us avoid errors that might arise from subjective judgments or biases.
  5. Facilitates communication: Measurement provides a common language that enables us to communicate about the same phenomena or variables accurately. This facilitates the exchange of information and knowledge across different fields and disciplines.

Overall, measurement is a vital tool for understanding the world around us and making informed decisions. It enables us to quantify, compare, and predict various phenomena, and facilitates communication and knowledge sharing across different fields and disciplines.

Recall the Methods of Measurement

Methods of measurement are the methods of comparison used in the measurement process. In precision measurement, various methods are adopted depending upon the accuracy required and the amount of permissible error.

The methods of measurement can be classified as:

1. Direct method

2. Indirect method

  1. Direct method of measurement:

This is a simple method of measurement in which the value of the quantity to be measured is obtained directly, without any calculation, for example by using scales, vernier callipers, micrometers, bevel protractors, etc. This method is the most widely used in production. It is not very accurate, because it depends on the limitations of human judgement in reading the instrument.

  2. Indirect method of measurement:

In the indirect method, the value of the quantity to be measured is obtained by measuring other quantities that are functionally related to the required value, e.g. angle measurement using a sine bar, or measurement of the screw pitch diameter by the three-wire method.
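As a minimal sketch of the indirect method, the following Python snippet computes an angle from sine-bar measurements; the roller centre distance and slip-gauge height are illustrative values, not taken from any particular instrument:

```python
import math

# Sine-bar angle measurement (indirect method): the angle is never read
# directly; it is computed from two directly measured lengths using
# sin(theta) = h / L.
L = 200.0   # centre distance between the sine-bar rollers, mm (assumed size)
h = 34.73   # height of the slip-gauge stack under one roller, mm (example)

theta = math.degrees(math.asin(h / L))
print(f"Angle set by the sine bar: {theta:.2f} degrees")  # ~10.00 degrees
```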

Recall types and functions of the Instruments

There are various types of measuring instruments, each designed to perform a specific function. Some common types of instruments are:

  1. Thermometer – used to measure temperature.
  2. Voltmeter – used to measure voltage.
  3. Ammeter – used to measure electric current.
  4. Multimeter – can measure voltage, current, and resistance.
  5. Oscilloscope – used to visualise electronic signals.
  6. Barometer – used to measure atmospheric pressure.
  7. Hygrometer – used to measure humidity.
  8. Anemometer – used to measure wind speed.
  9. Tachometer – used to measure the speed of rotation.
  10. pH meter – used to measure the acidity or alkalinity of a solution.

Describe the Measurement System Performance

Measurement system performance refers to the accuracy, precision, and reliability of a system used for measuring various physical quantities. An example of a measurement system is a thermometer used to measure temperature.

The accuracy of a thermometer refers to how close the measured temperature is to the true temperature. For example, if a thermometer reads 20 degrees Celsius, but the true temperature is actually 22 degrees Celsius, the thermometer is not accurate. A more accurate thermometer would measure the temperature closer to the true value.

The precision of a thermometer refers to how consistent the measurements are. For example, if a thermometer always measures 20.5 degrees Celsius regardless of the actual temperature, it is precise but not accurate. If a thermometer measures the temperature differently each time, it is not precise.

The reliability of a thermometer refers to its consistency over time and under different conditions. For example, a thermometer that consistently measures the same temperature every time it is used and under different conditions is reliable. A thermometer that gives inconsistent measurements depending on the time of day or location is not reliable.

Therefore, the measurement system performance of a thermometer can be assessed by evaluating its accuracy, precision, and reliability. These factors are important in ensuring that the measurement system is suitable for the intended use and provides accurate and consistent results.
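As an illustrative sketch of accuracy and precision, the following Python snippet evaluates repeated readings from a thermometer against a known reference temperature; the readings and reference value are made-up example data:

```python
import statistics

# Hypothetical repeated readings of a bath held at a known 22.0 °C.
true_value = 22.0
readings = [20.1, 20.3, 19.9, 20.2, 20.0]  # made-up example data

mean_reading = statistics.mean(readings)
bias = mean_reading - true_value        # accuracy: closeness to the true value
spread = statistics.stdev(readings)     # precision: repeatability of readings

print(f"Mean reading: {mean_reading:.2f} °C")               # 20.10 °C
print(f"Bias (accuracy error): {bias:+.2f} °C")             # -1.90 °C: not accurate
print(f"Standard deviation (precision): {spread:.2f} °C")   # 0.16 °C: precise
```

Here the thermometer is precise (the readings cluster tightly) but not accurate (they sit almost 2 °C below the true value).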

Define the following terms in Measuring Instruments

  1. Accuracy – the degree to which a measurement is close to the true value of the quantity being measured.
  2. Precision – the degree of repeatability or reproducibility of a measurement.
  3. Calibration – the process of comparing a measuring instrument’s output to a known standard to determine its accuracy and adjust it accordingly.
  4. Repeatability – the degree to which a measuring instrument can produce consistent results when measuring the same quantity under the same conditions.
  5. Scale Range – the minimum and maximum values that a measuring instrument can measure.
  6. Scale Span – the difference between the maximum and minimum values that a measuring instrument can measure.
  7. Linearity – the degree to which a measuring instrument’s output is proportional to the quantity being measured.
  8. Hysteresis – the phenomenon in which a measuring instrument’s output depends on the direction of change in the quantity being measured.

Define the following terms used in Measuring Instruments:

i. Dead Time – the period of time during which a measuring instrument cannot respond to changes in the quantity being measured due to physical or technical limitations. Dead time can result in inaccuracies and measurement errors.

ii. Dead Zone – the range of values of the quantity being measured in which a measuring instrument cannot detect or respond to changes. Dead zones can also result in measurement errors and inaccuracies.

iii. Resolution – the smallest change in the quantity being measured that a measuring instrument can detect and display. It is usually expressed in terms of the smallest division on the instrument’s scale.

iv. Threshold – the minimum value of the quantity being measured that a measuring instrument can detect and respond to. Below the threshold, the instrument may not provide any output.

v. Sensitivity – the degree of responsiveness of a measuring instrument to changes in the quantity being measured. It is usually expressed as the ratio of the change in the instrument’s output to the change in the quantity being measured.

vi. Loading Effect – the impact of the measuring instrument on the system being measured. The act of measuring can sometimes introduce changes in the system, leading to inaccuracies in the measurement. Loading effects can be reduced by using instruments with high input impedance and minimising the amount of current or voltage drawn from the system.
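A minimal Python sketch of the loading effect, assuming an illustrative 10 V source and a 10 kΩ/10 kΩ divider, shows how a meter's input resistance disturbs the very voltage it is measuring:

```python
# A voltmeter with finite input resistance placed across the lower arm of a
# resistive divider changes the voltage it is trying to measure.
# All component values below are illustrative assumptions.

def divider_reading(v_source, r1, r2, r_meter):
    """Voltage across r2 when a meter of resistance r_meter loads it."""
    r2_loaded = (r2 * r_meter) / (r2 + r_meter)  # r2 in parallel with the meter
    return v_source * r2_loaded / (r1 + r2_loaded)

V, R1, R2 = 10.0, 10e3, 10e3        # 10 V source, 10 kΩ / 10 kΩ divider
true_v = V * R2 / (R1 + R2)         # 5.0 V with no meter attached

for r_meter in (10e3, 1e6, 10e9):   # low, typical, very high input resistance
    reading = divider_reading(V, R1, R2, r_meter)
    error = 100 * (reading - true_v) / true_v
    print(f"R_meter = {r_meter:.0e} ohm -> reads {reading:.3f} V ({error:+.2f} %)")
```

With a 10 kΩ meter the reading drops to 3.333 V (an error of about −33 %), while a 10 GΩ input resistance leaves the divider essentially undisturbed.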

Recall the Errors

There are different types of errors that can occur when using measuring instruments, including:

  1. Systematic Error: This type of error occurs consistently in the same direction, such as a measurement being consistently too high or too low. It can be caused by factors such as calibration errors, environmental conditions, or instrument bias.
  2. Random Error: This type of error occurs randomly and can be caused by factors such as human error, instrument sensitivity, or external disturbances. It leads to imprecision in the measurements, and multiple measurements may show different results.
  3. Gross Error: This type of error occurs due to mistakes in using the measuring instrument or incorrect data entry. It can cause a significant deviation from the true value and is easily noticeable.
  4. Drift Error: This type of error occurs due to changes in the calibration of the instrument over time, such as due to wear and tear or changes in the environmental conditions.

Measuring instruments can also have limitations and uncertainties that can affect their accuracy and precision, such as:

  1. Sensitivity: This refers to the ratio of the change in the instrument's output to the change in the measured quantity; a low sensitivity limits how small a change the instrument can usefully indicate.
  2. Resolution: This refers to the smallest change in the measured quantity that can be detected and displayed by the instrument.
  3. Linearity: This refers to how well the instrument can measure a range of values, without significant deviation from a straight line on the calibration curve.
  4. Hysteresis: This refers to the phenomenon where the readings of the instrument depend on the previous values measured, rather than just the current value.
  5. Response time: This refers to the time it takes for the instrument to stabilise and provide an accurate reading.

It is important to understand and account for these errors and limitations when using measuring instruments to ensure accurate and reliable measurements.

Define the following terms: True Value, Limiting Error, and Absolute Error

  1. True value: The true value refers to the exact value of a measured quantity. It is the value that would be obtained if the measurement were perfect and free from any error. The true value is often unknown, and the purpose of measurement is to estimate it as accurately as possible.
  2. Limiting error: The limiting error, also known as the maximum permissible error, is the maximum amount of error that is allowed in a measurement. It is often specified by the manufacturer of a measuring instrument or defined by industry standards. If the error of a measurement exceeds the limiting error, the measurement is considered unreliable or invalid.
  3. Absolute error: The absolute error is the difference between the measured value and the true value of a quantity. It represents the magnitude of the deviation or error in the measurement and is expressed in the same units as the measured quantity. The absolute error is always positive and can be calculated as:
    Absolute error = |Measured value – True value|
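A minimal sketch of these definitions with made-up numbers:

```python
# Made-up example: a resistor with a known true value of 100.0 ohm is
# measured as 98.5 ohm by an instrument whose limiting error is ±2 ohm.
true_value = 100.0
measured = 98.5
limiting_error = 2.0

absolute_error = abs(measured - true_value)     # |98.5 - 100.0| = 1.5 ohm
within_spec = absolute_error <= limiting_error  # True: within the permitted ±2 ohm

print(f"Absolute error: {absolute_error} ohm; within limiting error: {within_spec}")
```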

Recall the types of Errors

In the context of electrical measurements, the following are the types of errors that can occur:

  1. Systematic errors: These errors occur consistently in the same direction and are usually caused by a flaw or bias in the measuring instrument or the measurement process. Systematic errors can be reduced or eliminated by calibration, adjusting or repairing the instrument, or using a different measurement technique.
  2. Random errors: These errors are unpredictable and can occur in any direction, resulting from variations in the measurement conditions, the instrument, or the operator. Random errors can be reduced by taking multiple measurements and calculating the average or by using statistical techniques to analyse the data.
  3. Gross errors: These errors are caused by mistakes or malfunctions that result in a large deviation from the true value. Gross errors can be caused by incorrect use of the measuring instrument, misreading the measurement, or faulty equipment.
  4. Environmental errors: These errors are caused by the effects of the environment on the measurement, such as temperature, humidity, electromagnetic interference, or vibrations. Environmental errors can be minimised by controlling the measurement conditions and using appropriate shielding or filtering techniques.

It is worth noting that some errors may combine several of these types, and the appropriate correction or compensation method depends on the specific error's cause and characteristics.

Recall the combination of quantities with Limiting Errors

When combining quantities that have limiting errors, the resulting quantity will also have a limiting error. The limiting error of the combined quantity depends on the type of combination and the limiting errors of the individual quantities.

There are two types of combinations: direct and indirect. Direct combination involves adding or subtracting quantities, while indirect combination involves multiplying or dividing quantities.

  1. Direct Combination: When adding or subtracting quantities, the limiting error of the resulting quantity is the sum of the limiting errors of the individual quantities. For example, if a length is measured as 10.0 ± 0.2 cm and another length is measured as 5.0 ± 0.1 cm, the resulting sum of lengths is 15.0 ± 0.3 cm.
  2. Indirect Combination: When multiplying or dividing quantities, the relative limiting error of the result is the sum of the relative limiting errors of the individual quantities, since limiting errors are worst-case bounds. The relative limiting error is the limiting error divided by the value of the quantity. For example, if a length is measured as 10.0 ± 0.2 cm and a width is measured as 5.0 ± 0.1 cm, the area is length × width = 50 cm². The relative limiting errors of the length and width are 0.02 and 0.02, respectively, so the relative limiting error of the area is 0.02 + 0.02 = 0.04. The limiting error of the area is the product of the area and its relative limiting error, which gives 2 cm². Therefore, the area is reported as 50 ± 2 cm².

It is important to consider the limiting errors when combining quantities to ensure that the resulting quantity accurately reflects the precision of the individual measurements.
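The following Python sketch reproduces the worst-case combinations from the examples above:

```python
# Worst-case (limiting-error) combination of measured quantities.
length, d_length = 10.0, 0.2   # cm, value ± limiting error
width,  d_width  = 5.0, 0.1    # cm, value ± limiting error

# Direct combination (sum): absolute limiting errors add.
total = length + width
d_total = d_length + d_width
print(f"Sum:  {total:.1f} ± {d_total:.1f} cm")    # 15.0 ± 0.3 cm

# Indirect combination (product): relative limiting errors add.
area = length * width
rel_error = d_length / length + d_width / width   # 0.02 + 0.02 = 0.04
d_area = area * rel_error
print(f"Area: {area:.0f} ± {d_area:.0f} cm²")     # 50 ± 2 cm²
```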

Recall the following terms used in the Measurement

Here are some terms used in measurement and their descriptions:

  1. Accuracy: Accuracy refers to how close a measured value is to the true value. It is the degree of agreement between a measured value and the accepted or true value. The accuracy of a measurement can be affected by systematic and random errors.
  2. Precision: Precision refers to the degree of consistency or reproducibility of a measurement. It is the degree to which repeated measurements of the same quantity give the same result. The precision of a measurement can be affected by random errors.
  3. Resolution: Resolution refers to the smallest incremental change that can be detected by a measuring instrument. It is the smallest change in the input signal that can be detected and distinguished from noise or background. The resolution of a measuring instrument is determined by its sensitivity, noise level, and signal-to-noise ratio.
  4. Sensitivity: Sensitivity refers to the degree of response or output produced by a measuring instrument for a given change in the input signal. It is the ratio of the output change to the input change and is often expressed in units such as volts per unit input.
  5. Linearity: Linearity refers to the degree to which a measuring instrument’s output is proportional to the input signal over its entire range of operation. A linear instrument produces a straight-line relationship between the input and output, while a nonlinear instrument may produce a curved or distorted response.
  6. Hysteresis: Hysteresis refers to the difference in output produced by a measuring instrument for the same input signal, depending on whether the input signal is increasing or decreasing. It is often observed in instruments with mechanical or magnetic components and can affect the accuracy and precision of a measurement.
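As a toy illustration of the last term, the following Python sketch models hysteresis as mechanical backlash in an indicator; the dead-band width is an arbitrary assumption:

```python
# Toy hysteresis model: the indicator lags the input by a fixed dead band,
# so the reading depends on whether the input is increasing or decreasing.
BACKLASH = 0.5  # dead-band width in input units (arbitrary assumption)

def readings(inputs):
    pos = 0.0
    out = []
    for x in inputs:
        if x > pos + BACKLASH:      # input drags the indicator upward
            pos = x - BACKLASH
        elif x < pos - BACKLASH:    # input drags the indicator downward
            pos = x + BACKLASH
        out.append(pos)
    return out

print(readings([0, 1, 2, 3, 4, 5]))  # readings lag below the rising input
print(readings([5, 4, 3, 2, 1, 0]))  # readings lag above the falling input
```

At the same input value (for example 3), the rising sweep reads 2.5 while the falling sweep reads 3.5: the output depends on the direction of change, which is exactly the hysteresis behaviour described above.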

Overall, understanding these terms is essential for making accurate and reliable measurements and interpreting the results correctly.

Recall Units and Type of Units

Units are used to express physical quantities and are essential in scientific and engineering fields. There are different types of units, including:

  1. Base Units: These are the fundamental units of measurement for physical quantities and are used as the building blocks for derived units. The seven base units in the International System of Units (SI) are:
  • Metre (m) for length
  • Kilogram (kg) for mass
  • Second (s) for time
  • Ampere (A) for electric current
  • Kelvin (K) for temperature
  • Mole (mol) for amount of substance
  • Candela (cd) for luminous intensity
  2. Derived Units: These are units that are obtained by combining base units. Derived units are used to express more complex physical quantities. Examples of derived units in the SI include:
  • Newton (N) for force, which is derived from kg·m/s²
  • Joule (J) for energy, which is derived from N·m
  • Watt (W) for power, which is derived from J/s
  • Coulomb (C) for electric charge, which is derived from A·s
  3. Supplementary Units: These are units used to express plane angles and solid angles. The two supplementary units in the SI are (since 1995 the SI has reclassified them as dimensionless derived units, though they are still commonly listed separately):
  • Radian (rad) for angle
  • Steradian (sr) for solid angle
  4. Non-SI Units: These are units that are not part of the SI but are widely used in different fields. Examples of non-SI units include:
  • Fahrenheit (°F) for temperature
  • Pound (lb) for mass
  • Foot (ft) for length

It is important to use the correct units when expressing physical quantities to ensure accurate and consistent communication within scientific and engineering communities.
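As an illustrative sketch, the following Python snippet converts the non-SI units listed above into SI units using their standard definitions:

```python
# Conversions from the non-SI units above to SI units.
def fahrenheit_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

POUND_TO_KG = 0.45359237   # exact definition of the avoirdupois pound
FOOT_TO_M = 0.3048         # exact definition of the international foot

print(f"98.6 °F = {fahrenheit_to_kelvin(98.6):.2f} K")  # 310.15 K
print(f"10 lb   = {10 * POUND_TO_KG:.4f} kg")           # 4.5359 kg
print(f"6 ft    = {6 * FOOT_TO_M:.4f} m")               # 1.8288 m
```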

Recall Dimensions and derive Dimensions of Electrical Quantities

Dimensions express a physical quantity in terms of the fundamental (base) quantities from which it is derived. The dimensions of a quantity are typically represented using square brackets, such as [L] for length, [M] for mass, [T] for time, and [I] for electric current.

The dimensions of common electrical quantities can be derived as follows:

  1. Electric current – [I]
  2. Voltage – [M][L]²[T]⁻³[I]⁻¹
  3. Resistance – [M][L]²[T]⁻³[I]⁻²
  4. Capacitance – [M]⁻¹[L]⁻²[T]⁴[I]²
  5. Inductance – [M][L]²[T]⁻²[I]⁻²
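These dimensions follow from the defining relations P = V·I, V = I·R, Q = C·V, and V = L·dI/dt. The Python sketch below derives them by representing each dimension as a tuple of exponents (M, L, T, I); the tuple convention is purely illustrative:

```python
# Dimensions as exponent tuples (M, L, T, I): multiplying quantities adds
# their exponents, dividing subtracts them.
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    return tuple(x - y for x, y in zip(a, b))

TIME, CURRENT = (0, 0, 1, 0), (0, 0, 0, 1)

energy = (1, 2, -2, 0)              # [M][L]²[T]⁻², i.e. force × distance
power = div(energy, TIME)           # [M][L]²[T]⁻³
voltage = div(power, CURRENT)       # P = V·I  ->  V = P/I
resistance = div(voltage, CURRENT)  # V = I·R  ->  R = V/I
charge = mul(CURRENT, TIME)         # Q = I·t
capacitance = div(charge, voltage)  # Q = C·V  ->  C = Q/V
inductance = div(mul(voltage, TIME), CURRENT)  # V = L·dI/dt -> L = V·t/I

print("Voltage:    ", voltage)      # (1, 2, -3, -1)
print("Resistance: ", resistance)   # (1, 2, -3, -2)
print("Capacitance:", capacitance)  # (-1, -2, 4, 2)
print("Inductance: ", inductance)   # (1, 2, -2, -2)
```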

Define and classify Unit Standards

In measurement, a unit standard is a physical representation of a unit of measurement that serves as a reference against which instruments and other standards are calibrated. Unit standards are classified by their accuracy and by their position in the calibration chain:

  1. International standards: These are defined by international agreement and maintained at the International Bureau of Weights and Measures (BIPM). They represent the units of measurement with the highest achievable accuracy and are not available for ordinary calibration work.

  2. Primary standards: These are maintained by national standards laboratories (for example, NPL in the UK and NIST in the USA). They are calibrated by independent absolute measurements and are used to verify and calibrate secondary standards; they do not leave the national laboratories.

  3. Secondary standards: These are the reference standards of industrial and commercial measurement laboratories. They are periodically sent to the national laboratories for calibration against the primary standards and are then used to check working standards.

  4. Working standards: These are the standards used in day-to-day measurement work, such as calibrating instruments on the shop floor or in the test laboratory. They are checked periodically against secondary standards.

This hierarchy establishes traceability: every routine measurement can be related, through an unbroken chain of calibrations, back to the international definition of the unit, so that measurements made in different places and at different times remain consistent and comparable.