RM Module 4
Uploaded by Arvind Acharya

Research Methodology II

Module 4
Parametric and Non-Parametric tests
• Parametric and non-parametric tests are two major categories of statistical
tests used to analyze data. The choice between these tests depends on the
type of data and whether certain assumptions about the population (e.g.,
normal distribution) are satisfied.
• Both parametric and non-parametric tests are essential tools in statistical
analysis. Parametric tests are more powerful and preferred when the data
meets certain assumptions, while non-parametric tests provide flexibility for
non-normal data or smaller samples.
Parametric Tests

• These tests rely on certain assumptions about the population parameters, such
as the data being normally distributed and having homogeneous variance.
Characteristics:
• Assumes data follows a normal distribution.
• Used for interval or ratio-level data.
• Requires equal variances between groups (homoscedasticity).
• More powerful than non-parametric tests if assumptions are met.
Examples of Parametric Tests

• t-Test: Compares the means of two groups.
• Example: Checking if male and female students have the same average test scores.
• ANOVA (Analysis of Variance): Compares the means of three or more groups.
• Example: Comparing the performance of students across different teaching methods.
• Pearson Correlation: Measures the strength and direction of a linear relationship
between two variables.
• Example: Relationship between hours studied and exam scores.
• Z-Test: Used for large samples when the population variance is known.
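The parametric tests listed above can be run with scipy.stats. Here is a minimal sketch using made-up data for each example scenario; scipy is assumed to be installed.

```python
# Illustrative parametric tests with scipy.stats (all data is hypothetical).
from scipy import stats

# t-Test: do two groups (e.g., male vs. female students) have the same mean score?
group_a = [72, 85, 78, 90, 66, 81]
group_b = [70, 75, 68, 74, 72, 69]
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# ANOVA: compare mean scores across three teaching methods
method1 = [80, 85, 78]
method2 = [70, 72, 68]
method3 = [90, 88, 92]
f_stat, f_p = stats.f_oneway(method1, method2, method3)

# Pearson correlation: hours studied vs. exam score
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 64, 70, 74]
r, r_p = stats.pearsonr(hours, scores)
```

In each case a small p-value (conventionally below 0.05) is evidence against the null hypothesis of equal means (or, for Pearson, of zero correlation).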
Non-Parametric Tests

• These tests do not rely on specific assumptions about the population distribution.
They are more flexible and can be used when the data does not meet the
assumptions required for parametric tests.
Characteristics:
• No assumption of normality or equal variances.
• Suitable for ordinal data or when the sample size is small.
• Used when data contains outliers or is skewed.
• Less powerful than parametric tests if parametric assumptions are met.
Examples of Non-Parametric Tests

• Mann-Whitney U Test: Compares two independent groups.
• Example: Comparing customer satisfaction scores from two different stores.
• Wilcoxon Signed-Rank Test: Compares two related samples or repeated measurements.
• Example: Before-and-after scores from the same group of students.
• Kruskal-Wallis Test: Non-parametric equivalent of ANOVA for comparing more than two
groups.
• Example: Comparing employee performance across departments.
• Spearman’s Rank Correlation: Measures the strength and direction of a monotonic relationship between two ranked variables.
• Example: Association between age and job satisfaction.
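Each non-parametric test above also has a direct scipy.stats counterpart. A minimal sketch with hypothetical ordinal data:

```python
# Illustrative non-parametric tests with scipy.stats (all data is hypothetical).
from scipy import stats

# Mann-Whitney U: satisfaction scores (1-5) from two independent stores
store_a = [4, 5, 3, 4, 5, 4]
store_b = [2, 3, 2, 1, 3, 2]
u_stat, u_p = stats.mannwhitneyu(store_a, store_b)

# Wilcoxon signed-rank: before/after scores for the same students
before = [55, 60, 48, 70, 65]
after = [62, 66, 50, 75, 70]
w_stat, w_p = stats.wilcoxon(before, after)

# Kruskal-Wallis: performance ratings across three departments
dept1 = [3, 4, 2, 5]
dept2 = [1, 2, 2, 3]
dept3 = [4, 5, 5, 4]
h_stat, h_p = stats.kruskal(dept1, dept2, dept3)

# Spearman rank correlation: age vs. job satisfaction (ranked)
age = [25, 30, 35, 40, 45]
satisfaction = [2, 3, 3, 4, 5]
rho, rho_p = stats.spearmanr(age, satisfaction)
```

These tests operate on ranks rather than raw values, which is why they tolerate skew, outliers, and ordinal scales.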
When to Use Each Type

• Use Parametric Tests:
• When the data follows a normal distribution.
• When the sample size is large.
• When comparing means or relationships between continuous variables.
• Use Non-Parametric Tests:
• When the data is skewed or contains outliers.
• When working with ordinal or categorical data.
• When assumptions for parametric tests are not met.
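One common way to apply these rules in practice is to test normality first (e.g., with the Shapiro-Wilk test) and let the result pick the comparison. This is a sketch of that decision rule, not a universal recipe; the function name and threshold are illustrative.

```python
# Sketch of a decision rule: check normality, then choose a parametric or
# non-parametric two-group comparison accordingly (alpha is illustrative).
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Return (test_name, p_value), choosing t-test vs. Mann-Whitney U."""
    normal_a = stats.shapiro(a).pvalue > alpha
    normal_b = stats.shapiro(b).pvalue > alpha
    if normal_a and normal_b:
        return "independent t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

name, p = compare_two_groups([72, 85, 78, 90, 66, 81],
                             [70, 75, 68, 74, 72, 69])
```

Note that with small samples the Shapiro-Wilk test has low power, so failing to reject normality is weak evidence; many analysts also inspect the data visually.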
Chi-Square (χ²) Test

• The Chi-Square (χ²) Test is a statistical test used to determine whether there is a
significant association between two categorical variables.
• Example: A hospital administrator wants to know whether there is a significant association between patient satisfaction and the type of service provided (e.g., Inpatient vs. Outpatient).
• It evaluates how observed results compare to expected results under the
assumption that no relationship exists between the variables (i.e., the null
hypothesis).
Assumptions of the Chi-Square Test

• Data must be categorical (nominal or ordinal).
• The sample size should be large enough (expected frequency in each category should be at least 5).
• Observations should be independent (no participant contributes to more
than one cell).
• The data must represent counts or frequencies (not percentages or
continuous values).
Z-Test
• The Z-test is a parametric statistical test used to determine whether the
means of two datasets are significantly different or whether a sample mean
differs from a population mean.
• It is commonly applied when the sample size is large (n ≥ 30) or when the
population variance is known. The Z-test assumes the data follows a normal
distribution.
Assumptions of the Z-Test

• Large Sample Size (n ≥ 30).
• The population variance is known or the sample variance approximates it well.
• Normal distribution of the population.
• Observations must be independent.
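scipy.stats does not ship a plain z-test, so here is a minimal stdlib-only sketch of a one-sample z-test; the sample, population mean, and sigma are all hypothetical.

```python
# One-sample two-sided z-test using only the Python standard library.
# Assumes the population standard deviation (sigma) is known.
from math import sqrt
from statistics import NormalDist, mean

def one_sample_z_test(sample, pop_mean, sigma):
    """Test H0: the sample comes from a population with mean pop_mean."""
    n = len(sample)
    z = (mean(sample) - pop_mean) / (sigma / sqrt(n))
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p

# Hypothetical: 36 scores, known population mean 100 and sigma 15.
sample = [104] * 36  # toy data: every observation is 104
z, p = one_sample_z_test(sample, pop_mean=100, sigma=15)
# z = (104 - 100) / (15 / sqrt(36)) = 1.6
```

With z = 1.6 the p-value is about 0.11, so at the 0.05 level this hypothetical sample would not be judged significantly different from the population mean.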
T-Test
• The t-test is a statistical hypothesis test used to determine whether there is a significant
difference between the means of two groups. It is commonly used when the sample sizes
are small and the population standard deviations are unknown. There are several types of t-
tests, including:
• Independent t-test: Compares the means of two independent groups (e.g., test scores of
two different classes).
• Paired t-test: Compares means from the same group at different times (e.g., test scores
before and after a training program).
• One-sample t-test: Compares the mean of a single group to a known value or population
mean (e.g., testing if the average height of a group is different from the national average).
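All three t-test variants map directly onto scipy.stats functions. A sketch with made-up samples matching the examples above:

```python
# The three t-test variants in scipy.stats (all data is hypothetical).
from scipy import stats

# Independent t-test: two different classes
class_a = [78, 85, 69, 92, 74]
class_b = [81, 70, 66, 77, 73]
t_ind = stats.ttest_ind(class_a, class_b)

# Paired t-test: the same students before and after a training program
before = [60, 55, 70, 65, 58]
after = [66, 59, 74, 70, 63]
t_rel = stats.ttest_rel(before, after)

# One-sample t-test: group heights vs. a (hypothetical) national average of 165 cm
heights = [172, 168, 175, 170, 169]
t_one = stats.ttest_1samp(heights, popmean=165)
```

Each result object exposes `.statistic` and `.pvalue`; the paired statistic is negative here because every "after" score exceeds its "before" score.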
Difference Between t-Test and Z-Test
The t-test and z-test are both statistical methods used to determine if there are
significant differences between means.
• t-test: Used to compare means when the sample size is small (typically less
than 30) or when the population standard deviation is unknown.
• z-test: Used to compare means when the sample size is large (typically 30 or
more) or when the population standard deviation is known.
ANOVA
• ANOVA, or Analysis of Variance, is a statistical method used to compare the
means of three or more groups to determine if at least one group mean is
statistically different from the others.
• It is particularly useful in experiments and observational studies where
researchers want to understand the effects of different factors on a
continuous outcome.
Types of ANOVA

• One-Way ANOVA: Compares means across a single factor with multiple levels.
• Two-Way ANOVA: Examines the effects of two factors simultaneously, as
well as their interaction.
• Example: Analyzing the effects of teaching method and student gender on test scores.
Steps to Conduct One-Way ANOVA

• Collect Data: Gather data from all groups being compared.
• Calculate Group Means: Compute the mean of each group.
• Calculate Overall Mean: Compute the grand mean of all data points combined.
• Calculate Variances: Compute the between-group sum of squares (SSB) and the within-group sum of squares (SSW), then divide each by its degrees of freedom to obtain the mean squares MSB and MSW.
• Compute the F-Statistic: F = MSB / MSW.
• Draw a Conclusion: Compare the F-statistic with the critical value (or the p-value with the significance level) to decide whether at least one group mean differs.
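The steps above can be carried out by hand on three tiny illustrative groups and checked against scipy.stats.f_oneway:

```python
# One-way ANOVA computed step by step, then verified with scipy.
from scipy import stats

groups = [[80, 85, 78], [70, 72, 68], [90, 88, 92]]  # hypothetical data

# Steps 2-3: group means and grand mean
group_means = [sum(g) / len(g) for g in groups]
all_values = [x for g in groups for x in g]
grand_mean = sum(all_values) / len(all_values)

# Step 4: between-group (SSB) and within-group (SSW) sums of squares
ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
ssw = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)

# Step 5: mean squares and the F-statistic
k, n = len(groups), len(all_values)
msb = ssb / (k - 1)  # df_between = k - 1
msw = ssw / (n - k)  # df_within  = n - k
f_manual = msb / msw

f_scipy, p_value = stats.f_oneway(*groups)  # should equal f_manual
```

For these numbers SSB = 602 and SSW = 42, giving F = 301 / 7 = 43, which matches scipy's result.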
Example Scenario

• Suppose a researcher wants to compare the effectiveness of three different diets on weight loss. The researcher collects weight loss data from three groups of participants, each following one of the diets for a month.
• A one-way ANOVA can be performed to determine if there are significant
differences in weight loss among the three diets.
Levene's F-test
• Levene's F-test is a statistical test used to assess the equality of variances
across different groups.
• It is particularly useful in situations where the assumption of equal variances
is critical, such as in ANOVA (Analysis of Variance) and other parametric
tests.
• Levene's F-test is designed to test the null hypothesis that multiple groups
have the same variance. If the null hypothesis is rejected, it suggests that at
least one group has a different variance than the others.
Example

• Consider a study comparing test scores from three different teaching methods. Before conducting an ANOVA, you would use Levene's test to check that the variances of the test scores across the three groups are equal.

• If the test indicates significant differences in variances, you may need to use
a different statistical approach that does not assume equal variances.
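The pre-check described above takes one call with scipy.stats.levene; the scores below are hypothetical, with the second group deliberately more spread out.

```python
# Levene's test for equal variances across three groups (hypothetical data).
from scipy import stats

method1 = [80, 85, 78, 83]
method2 = [70, 95, 60, 88]  # deliberately more spread out
method3 = [75, 76, 74, 77]

stat, p = stats.levene(method1, method2, method3)
if p < 0.05:
    print("Variances differ; consider an approach that does not assume them equal.")
else:
    print("No evidence against equal variances; standard ANOVA is reasonable.")
```

scipy's default uses group medians as centers (the Brown-Forsythe variant), which is more robust to non-normal data than centering on means.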
