What is justification in research? 15 examples of justification
Scientific research is fundamental to producing advances and new knowledge that allow us to better understand the world and to manage and deal with all kinds of phenomena. But investigations are not a spontaneous phenomenon: they require planning, design and, above all, a reason that justifies carrying them out. This rationale must be particularly compelling when financial and other resources are required before the investigation can begin.
For this reason, before starting a scientific project, it is necessary to develop a justification for the research. Below we look at several examples of research justifications and the questions they must answer.

What is the justification of a research study?
The justification of a study is the part of a scientific project that sets out the reasons and arguments that led the person behind it to propose it and want to carry it out. The justification must be included when the work is written up, usually appearing at the beginning, both in the abstract and in the theoretical introduction. Its objective is to answer what, how, why, and for what purpose the investigation has been carried out.
The justification is therefore a fundamental part that all scientific work must include, since it provides the reasons that led one or more people to undertake the research presented in the article or book. These are the reasons considered to make the research useful and beneficial to the scientific community. It is very important to indicate what benefits the research can bring to common knowledge, both in advancing the understanding of a particular field and in its practical applications.
As its name indicates, the justification of a study is the part that justifies the work: it must present a series of arguments valid and powerful enough to prove the need to carry out the investigation. When it comes to demonstrating that the work will be useful, there are many ways to argue for and defend the research.
Among the most common is the claim that the research will allow science to advance in a specific field of knowledge, serving as a precedent for more complex and larger investigations in the future. It can also be argued that what is discovered can be applied as a solution to an important problem for society.
Another interesting argument used in a research justification is that the findings will allow a new, more economical method to be developed for a problem that could already be solved: the research will make it possible to tackle a given problem while lowering costs, improving efficiency, or reducing resource consumption, thereby improving the quality of life of people who could not afford the classical method, or promoting social and educational change without liquidity being an obstacle.
Several examples of research justifications
Now that we know what the justification of a study is and what questions it must answer using solid, valid arguments, let us look at several examples of research justifications from different areas. Most come from real investigations; only a summary of the introduction is presented here, outlining the background of the field under study and the reasons, objectives, and arguments that led the research team to explore the topic in depth.
1. The effects of television on the behavior of young people
“Television has become the most influential medium in the development of behavior and thought patterns in children and adolescents around the world, some of them quite disruptive (violence, aggressiveness, lack of respect towards teachers and other reference adults...). A relationship between television and youth behavior is suspected, but no clear causal link has been identified.
This article aims to review the evidence for the hypothesis that television has harmful effects, in order to understand more fully the effect of this medium on younger audiences and its repercussions at a social level, and to define what more responsible television should look like.”
2. Local development and microfinance as strategies to address social needs
“Today, states are involved in two important processes, economics and politics, but these are viewed too much at the global level. People often make the mistake of setting aside the local, a sphere that, in economic terms, cannot be understood without understanding small-scale social development (family, neighborhood, town...) and the small economic transactions that occur within it: microfinance. Although microfinance has been largely ignored, it undoubtedly influences socio-economic policies, albeit often in unexpected ways.
The development of a society cannot be approached only at the global level; special attention must also be paid to the local level, trying to understand microfinance in its multiple dimensions: economic, social, environmental, political, cultural, and institutional. The objective of this article is precisely to explore these dimensions, addressing the different theoretical approaches to the notions of local development and microfinance in order to establish them as tools for addressing the socioeconomic needs of people with fewer resources.
Since needs and the capacity to satisfy them are indicators of a society's poverty, these seemingly insignificant socio-economic aspects should be included in the political agenda in order to understand and design better intervention strategies for the most disadvantaged people.”
3. Expression of rabies virus G protein in carrots and corn
“Rabies causes great economic losses, both in treatment and in preventive vaccination. Current vaccines are difficult for the populations of developing countries to access and acquire, since these countries lack the logistical and economic resources to vaccinate their entire population against this pathology. It is therefore necessary to develop new rabies vaccine alternatives made with resources that can be obtained in countries with largely subsistence economies.
Among the advantages of plant-derived vaccines are lower costs in production, storage, transportation, and distribution. Furthermore, plant tissue can be administered to humans without the need to purify the protein of interest. For this reason, it is of interest to find out how the G protein of the rabies virus is expressed in vegetables, specifically in carrots and corn, plants widely cultivated throughout the world.”
4. Comprehensive use of crustacean waste
“The shrimp industry discards hundreds of tons of crustacean remains every year, specifically the exoskeleton (shell) and the cephalothorax (head). These parts contain a substance, chitin, which could have applications in the preservation of highly perishable foods, such as fresh fruit.
At present, several methods are used to preserve fruit, and not all of them are environmentally friendly. The objective of this research is to determine whether the application of a biofilm of chitin and chitosan, obtained through green chemistry, is beneficial in extending the shelf life of fruit, and to propose it as a new ecological method for preserving the harvest, since these two substances are neither harmful nor aggressive to the environment.”
5. Reduction of depression in old age through reminiscence therapy
“There is little work on the modification of autobiographical memories in different age groups. However, some research has suggested that life review based on the retrieval of autobiographical memories is effective in modifying such memories in people with depression.
This work builds on the results of several studies that report a significant reduction in depressive symptoms in elderly people who have completed a program of individual reminiscence sessions, a program that promotes the recovery of positive and negative events. The objective of the present study is to analyze the relationship between depressive symptoms in old age and the characteristics of autobiographical memories, that is, what role the retrieved memories play in explaining the reduction of depressive symptoms.”
6. Adherence to pharmacological treatment in patients with type 2 diabetes
“Diabetes mellitus is a disease strongly determined by genetics, in which the individual presents alterations in the metabolism of carbohydrates, proteins, and fats, with a relative or absolute deficit of insulin secretion. Between 85 and 90% of patients with diabetes mellitus have type 2 diabetes, which is chronic.
We understand adherence to a treatment as the patient's behavior when it coincides with the medical prescription: taking the prescribed drugs, following prescribed diets, or maintaining healthy lifestyle habits. Adherence to treatment is important for evaluating the clinical evolution of a pathology. Studies indicate that only 50% of people with chronic diseases comply with their treatment, and several risk factors are associated with non-adherence.
We consider it important to identify, in patients with type 2 diabetes mellitus, the frequency of therapeutic non-adherence and its relationship with metabolic control, and to detect more precisely the most common associated risk factors, in order to design programs aimed at changing these behaviors and encouraging patients to follow the treatment they have been prescribed.”
7. Relationship between family climate and school climate
“Classic studies, like that of Bernstein in the 1970s, point out that an adolescent's negative or positive attitude towards teachers can be determined by the perception their family has of the educational field. Both the family environment and the attitude towards authority in the classroom seem to be two very important factors in explaining violent behavior in adolescence in the school context.
Taking this into account, the main objective of this work has been to examine the relationship between both contexts through the adolescent's perception of the family and school climates, analyzing the role played by different individual factors in the interaction between these two contexts.”
8. Prevention of gender violence in universities
“University faculties are not places free of gender violence. As the social problem that it is, gender violence affects women of all social classes, ages, cultures, and economic levels, and it defies the classic stereotypes about who suffers it, why, and where it occurs. It does not matter whether the context is socio-economically disadvantaged or the most select private university: violence against women is everywhere.
For this reason, the purpose of this research has been to analyze the existence of gender-based violence in Spanish universities and to identify and develop measures that can help prevent it, detecting the main focal points, motives, and contexts in which it is most likely to occur in the university population.”
9. Linguistic study in children with Down syndrome
“This final degree project focuses on Down syndrome, specifically on defining the basic abilities possessed by people with this intellectual disability, focusing on the processes of literacy acquisition during primary education.
The purpose of the study is to obtain information that will help families who have a member with this syndrome, so that they can support that person's progress with their linguistic abilities in mind, and to develop resources that allow the acquisition of theoretical and practical skills for progressing at work, socially, and personally.”
10. Effects of the implementation of a VAT system in the United Arab Emirates
“The six member countries of the Cooperation Council for the Arab States of the Gulf (GCC) agreed to launch a common market to increase investment and trade among their members. To facilitate this proposal, the countries agreed to implement a value-added tax (VAT) system by 2012.
It is essential to evaluate the basic principles and the social and economic implications of this new measure before it is officially applied. The purpose of this work is to provide a comprehensive analysis of the proposed VAT system and the socio-economic repercussions it could have for the Gulf countries, in addition to identifying possible risks and developing preventive strategies.”
11. Study on the benefits of reading aloud to students
“One of the most traditional pedagogical techniques is reading aloud in class. One student reads aloud while the others follow the text in their own books, keeping track of which line the reader is on and, if the teacher so requests, taking over the reading in turn.
Although classic, the benefits of reading aloud and of listening for content acquisition in class have not been fully evaluated. Among the suspected benefits of this technique are that the student not only learns to control the volume of their voice and to project it in a public context such as the classroom, but also, when listening, improves their capacity for active listening while internalizing academic knowledge.
The objective of the present investigation is to find out to what extent these suspected advantages are real, and to see whether reading aloud, whether by the teacher or by a student, improves comprehension and feeds the student's critical thinking, helping them follow the class and ask themselves questions about the content while acquiring it.”
12. Project to increase production in Chino Winds
“Before 1992, the Yavapai ranch was exploited in a traditional way. About two-thirds of the ranch was not fenced and a rather simple irrigation system was used. The cattle roamed freely all year round within this portion of land, with little control over what they ate, leaving unexploited potentially fertile areas that could have been used for growing fruits, vegetables, and cereals. The livestock's favorite areas were those near water sources, while the rest went to waste because there was no irrigation system capable of watering the entire property.
The poor exploitation of the Yavapai ranch is surprising given its potential profitability: it represents a great wasted production opportunity. The purpose of this project is to improve the irrigation system and make better use of the land, in the hope of increasing production and consequently obtaining income that offsets the investment costs. In addition, by controlling grazing, the project is expected to improve, albeit passively, the vegetation cover of the historically exploited areas of the ranch.”
13. Teaching mathematics and understanding its usefulness in real life
“Until now, the teaching of mathematics has focused on giving the student a definition or a formula, showing them an example of how to use it, and hoping that they can imitate it, without explaining it or making sure they understand what they have to do. This approach does not promote the development of the student's creative and integrative capacity; memorization is emphasized over comprehension, and traditional methods do not provide the tools to investigate, analyze, and discern the problem.
The main objective and motive of this project is for students to learn to use mathematics in their day-to-day lives, discovering that it is useful in all kinds of areas beyond the mathematics class: economics, technology, science... To this end, we propose giving them real examples in which they themselves have to use their knowledge and problem-solving capacity to propose a resolution process, talking to each other and communicating their mental process as precisely as possible.
The justification for this project is the large number of students who, after being told what to do or what formula to apply, detach it from reality itself. Not a few students finish the mathematics course as if they had not learned anything, in the sense that they are unable to see the relationship between what they have learned in the subject and their real lives. Mathematics is not in the curriculum to teach useless content, but to make it easier for people to understand reality and solve real-life problems, like any other subject.”
14. Study on the reproduction of sockeye salmon in Canada
“The objective of this study is to observe and analyze the habits of the sockeye salmon of the Fraser River (British Columbia, Canada). The justification for this research is that, due to global environmental changes and the increase in water temperature, the population of this species in the area has been found to change; it is not certain that the species is out of danger, and it is even suspected that the sockeye salmon could end up being a threatened species.
The impact of human beings on this species is well known and historical, since the exploitation of natural resources in its habitat and other economic activities have already dramatically modified the ecological niche where sockeye salmon develop and reproduce. By understanding the adaptation and change processes of this species, more specific conservation programs can be developed, in addition to launching environmental projects that prevent the total disappearance of the sockeye salmon.”
15. Justification of the treatment and use of laboratory animals
“The use of animals in scientific research has historically been seen as necessary, since ethical codes protect people from taking part in experiments without their consent or from suffering any kind of physical or mental harm. Although necessary to a certain extent, animal research has opened many debates, since non-human animals are used to test techniques that would never be used on humans, such as inducing diseases, testing potentially dangerous drugs, or removing vital organs.
Although multiple ethical codes addressing the ethical treatment of laboratory animals have been developed throughout the 20th century and into the 21st, the simple fact of using animals without their consent is an aspect that animal rights movements do not overlook. Research should be carried out only if there is a clear scientific purpose and it involves minimal harm and suffering to the animal.
This last point is not the justification of an actual study, but rather what is deemed necessary to justify research using animals. The scientific purpose of the research must offer great potential benefit for scientific knowledge at the cost of, preferably, only mild suffering for the animal. The species chosen must be the most appropriate, must not be endangered or legally protected, and must be ones that researchers know how to handle in the least stressful way possible while still obtaining some kind of scientific benefit.”
What is Research Justification and How is it Done?
The justification of a study refers to the basis of the research, or the reason why it is being carried out. It should include an explanation of the design and methods used in the research.
The justification for the project should explain why a solution to the problem described in the research needs to be implemented. The rationale must be properly addressed so that the entire research project stands on solid ground.

In an investigation, virtually everything that is done must be justified. Each aspect of the study design influences what is learned from the study.
Critics may discount the findings if they believe there is something atypical about the people selected for the study, some bias in how they were selected, something unfair about the groups compared, or something wrong with how the questions were framed.
Therefore, you need to provide a reason for each aspect of the study. To see how a rationale makes a difference, imagine reading two different studies with similar designs and methods but different justifications, and ask which one is more persuasive; that one has the better justification.
How to Write a Research Rationale
1- State the claim
A good justification narrative must begin with a brief summary of what you want to claim, which will be the focus of the piece.
The statement should specify what changes you believe should be made, what budget is needed, what policies should be implemented, the problem at hand, and so on.
It should be a simple statement, for example: you want to conduct a study on the cultivation of peaches in this locality.
2- Establish reasons
Once the statement has been made, the reasoning should begin. For example, if you want to research peach cultivation in a village, you should explain why this topic is important.
In this case, it could be said that peaches are of great economic importance to this locality.
It is important to frame the argument with the audience in mind. In this case, one should not only say that peaches are important, but also explain how the study would help increase the community's GDP, create jobs, and so on.
3- Provide support
Arguments can be made to strengthen the investigation, but without support for those arguments the reader will not be convinced that they are true.
Support should be provided in the form of statistics, studies, and expert opinions.
For example, if you want to study peaches, you can include figures and studies on the impact of peaches on the local economy and employment.
As far as possible, serious studies should be found to support the argument. The more support is offered, the stronger the justification will be.
4- Discuss budget issues
The research budget should be an important part of the justification. Relevant budget information should be included, covering the resources needed to conduct the research and the impact it will have: the possible revenues that will be generated or the costs that will be saved.
In the case of the peach study, one could mention the budget necessary to carry out the research and the positive economic impacts the study could have on the locality.
Difference between good and bad justification narratives
All aspects of a good project justification must be based on logical reasoning.
To see how good reasoning makes the difference, imagine once more that two studies with similar designs and methods but different rationales are being read.
The most logical, impartial, and professional narrative will be the most appropriate, as can be observed in the following cases:
Research question
Example of a bad justification: I was curious.
Example of a good justification: a discrepancy was noticed in previous research, and we wanted to put it to the test.
Participant selection
Example of a bad justification: I know these teachers.
Example of a good justification: these teachers represent the population that other researchers have been studying.
Comparison groups
Example of a bad justification: we did not bother comparing them with other people because we knew they were honest people.
Example of a good justification: they were compared to another group that was similar to them in all respects, except in their knowledge of the subject of particular interest.
Information Collection
Example of a bad justification: it was easier to do it this way, and/or we had no time to do anything else.
Example of a good justification: the information we collected was directly relevant to the discrepancy we wanted to know more about.
Interpretation
Example of a bad justification: the patterns we observed make sense and match my personal experiences.
Example of a good justification: the patterns we observed were consistent with one version of this theory and not with the other; therefore, questions arise about the second version of this theory.
Example of a research justification
POPPY study on the HIV epidemic in the UK and Ireland
Several reports have suggested that age-related co-morbidity occurs earlier in HIV-infected subjects on effective antiretroviral therapy compared to HIV-negative subjects.
However, control populations within these studies are not always closely matched to HIV-infected populations and therefore such findings need to be interpreted with caution.
POPPY aims to recruit HIV-infected individuals of different age groups and a closely matched HIV-negative control population in order to determine the effects that HIV infection has on other medical conditions.
Across the United Kingdom, those of black or white African descent and those who acquired HIV through sexual intercourse accounted for 84% of older people receiving HIV treatment in 2009 (A. Brown, personal communication).
Of the patients who received treatment at clinics participating in the POPPY study in 2008-2009, 12,1620 fell into one of these groups, of which about 19% were >50 at their most recent visit.
The Difference Between Justification of the Study and Significance of the Study
Over the years, the justification and the significance of the study have often confused students, especially undergraduates. I have seen many undergraduate research works in which students wrote the justification of the study thinking it was the significance of the study. These mistakes are seen not only at the undergraduate level but also among project supervisors.
The justification of the study and the significance of the study are both very important in all research work. There is no research work that does not contain a significance of the study or a justification of the study. Their position might differ depending on the requirements or format demanded by the student's institution, but the most important thing is that the student understands when and why each of them is needed.
The justification of the study is mostly preferred by supervisors working with postgraduate students on an MBA thesis or dissertation, though some supervisors still ask their undergraduate students for a justification of the study in place of a significance of the study.
The Justification and Significance of the Study Nexus
What the justification of the study and the significance of the study have in common is that both are found in chapter one of any research project. The sections of chapter one of an undergraduate research project are numbered; the position of the justification or significance of the study is shown below:
Chapter one: Introduction
- 1.1 Background of the study
- 1.2 Statement of the problem
- 1.3 Aim and objectives of the study
- 1.4 Research questions
- 1.5 Statement of the hypothesis
- 1.6 Significance or justification of the study
- 1.7 Scope of the study
- 1.8 Limitation/delimitation of the study
- 1.9 Operational definition of terms
As you can see from this table of contents for chapter one of a complete project, the significance or justification of the study sits at number 1.6, though its position may change depending on the topic you are working on and on the department in which the project is carried out. The chapter one outline presented above is typical of project topics in education departments; those from other departments may differ, especially in history and international relations.
The Difference Between Justification And Significance Of The Study
The justification of the study is basically why a particular research work was carried out: what problem was identified that made the student want to carry out the research. Here you also explain why the methodology was adopted and why the experiment was conducted, whether for practical or scholarly purposes.
The significance of the study is about what was found during the research work. The findings could be the nature of the relationship between one variable and another. Such results are obtained using statistical tools like chi-square, correlation, regression, ANOVA, or MANOVA, and are then backed up with empirical findings.
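To make the statistical side of this concrete, here is a minimal sketch in Python of how a finding about the relationship between two variables might be tested with a chi-square test. The scenario, variable names, and numbers are hypothetical illustrations, not results from any study mentioned above:

```python
# Hypothetical example: is treatment adherence associated with
# metabolic control in survey data? (illustrative numbers only)
import numpy as np
from scipy.stats import chi2_contingency

# Rows: adherent vs. non-adherent; columns: good vs. poor metabolic control.
observed = np.array([[48, 12],
                     [23, 37]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value suggests the two variables are related; the
# significance-of-the-study section would then interpret this finding
# and back it up with empirical literature.
```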
Rationale of the Study in Research Examples
What is the Rationale of the Study?
The rationale of the study explains the reason why the study was conducted (in an article or thesis) or why the study should be conducted (in a proposal). This means the study rationale should explain to the reader or examiner why the study is/was necessary. It is also sometimes called the “purpose” or “justification” of a study. While this is not difficult to grasp in itself, you might wonder how the rationale of the study is different from your research question or from the statement of the problem of your study, and how it fits into the rest of your thesis or research paper.
The rationale of the study links the background of the study to your specific research question and justifies the need for the latter on the basis of the former. In brief, you first provide and discuss existing data on the topic, and then you tell the reader, based on the background evidence you just presented, where you identified gaps or issues and why you think it is important to address those. The problem statement, lastly, is the formulation of the specific research question you choose to investigate, following logically from your rationale, and the approach you are planning to use to do that.
How to Write a Rationale for a Research Paper
The basis for writing a research rationale is preliminary data or a clear description of an observation. If you are doing basic/theoretical research, then a literature review will help you identify gaps in current knowledge. In applied/practical research, you base your rationale on an existing issue with a certain process (e.g., vaccine proof registration) or practice (e.g., patient treatment) that is well documented and needs to be addressed. By presenting the reader with earlier evidence or observations, you can (and have to) convince them that you are not just repeating what other people have already done or said and that your ideas are not coming out of thin air.
Once you have explained where you are coming from, you should justify the need for doing additional research–this is essentially the rationale of your study. Finally, when you have convinced the reader of the purpose of your work, you can end your introduction section with the statement of the problem of your research that contains clear aims and objectives and also briefly describes (and justifies) your methodological approach.
When is the Rationale for Research Written?
The author can present the study rationale both before and after the research is conducted.
- Before conducting research: The study rationale is a central component of the research proposal. It represents the plan of your work, constructed before the study is actually executed.
- Once research has been conducted: After the study is completed, the rationale is presented in a research article or PhD dissertation to explain why you focused on this specific research question. When writing the study rationale for this purpose, the author should link the rationale of the research to the aims and outcomes of the study.
What to Include in the Study Rationale
Although every study rationale is different and discusses different specific elements of a study’s method or approach, there are some elements that should be included to write a good rationale. Make sure to touch on the following:
- A summary of conclusions from your review of the relevant literature
- What is currently unknown (gaps in knowledge)
- Inconclusive or contested results from previous studies on the same or similar topic
- The necessity to improve or build on previous research, such as to improve methodology or utilize newer techniques and/or technologies
How Do You Justify the Need for a Research Study?
There are different types of limitations that you can use to justify the need for your study. In applied/practical research, the justification for investigating something is always that an existing process/practice has a problem or is not satisfactory. Let’s say, for example, that people in a certain country/city/community commonly complain about hospital care on weekends (not enough staff, not enough attention, no decisions being made), but you looked into it and realized that nobody ever investigated whether these perceived problems are actually based on objective shortages/non-availabilities of care or whether the lower numbers of patients who are treated during weekends are commensurate with the provided services.
In this case, “lack of data” is your justification for digging deeper into the problem. Or, if it is obvious that there is a shortage of staff and provided services on weekends, you could decide to investigate which of the usual procedures are skipped during weekends as a result and what the negative consequences are.
In basic/theoretical research, lack of knowledge is of course a common and accepted justification for additional research—but make sure that it is not your only motivation. “Nobody has ever done this” is only a convincing reason for a study if you explain to the reader why you think we should know more about this specific phenomenon. If there is earlier research but you think it has limitations, then those can usually be classified into “methodological”, “contextual”, and “conceptual” limitations. To identify such limitations, you can ask specific questions and let those questions guide you when you explain to the reader why your study was necessary:
Methodological limitations
- Did earlier studies try but failed to measure/identify a specific phenomenon?
- Was earlier research based on incorrect conceptualizations of variables?
- Were earlier studies based on questionable operationalizations of key concepts?
- Did earlier studies use questionable or inappropriate research designs?
Contextual limitations
- Have recent changes in the studied problem made previous studies irrelevant?
- Are you studying a new/particular context that previous findings do not apply to?
Conceptual limitations
- Do previous findings only make sense within a specific framework or ideology?
Study Rationale Examples
Let’s look at an example from one of our earlier articles on the statement of the problem to clarify how your rationale fits into your introduction section. This is a very short introduction for a practical research study on the challenges of online learning. Your introduction might be much longer (especially the context/background section), and this example does not contain any sources (which you will have to provide for all claims you make and all earlier studies you cite)—but please pay attention to how the background presentation, rationale, and problem statement blend into each other in a logical way so that the reader can follow and has no reason to question your motivation or the foundation of your research.
Background presentation
Since the beginning of the Covid pandemic, most educational institutions around the world have transitioned to a fully online study model, at least during peak times of infections and social distancing measures. This transition has not been easy and even two years into the pandemic, problems with online teaching and studying persist (reference needed) .
While the increasing gap between those with access to technology and equipment and those without access has been determined to be one of the main challenges (reference needed) , others claim that online learning offers more opportunities for many students by breaking down barriers of location and distance (reference needed) .

Rationale of the study
Since teachers and students cannot wait for circumstances to go back to normal, the measures that schools and universities have implemented during the last two years, their advantages and disadvantages, and the impact of those measures on students’ progress, satisfaction, and well-being need to be understood so that improvements can be made and demographics that have been left behind can receive the support they need as soon as possible.
Statement of the problem
To identify what changes in the learning environment were considered the most challenging and how those changes relate to a variety of student outcome measures, we conducted surveys and interviews among teachers and students at ten institutions of higher education in four different major cities, two in the US (New York and Chicago), one in South Korea (Seoul), and one in the UK (London). Responses were analyzed with a focus on different student demographics and how they might have been affected differently by the current situation.
How long is a study rationale?
In a research article bound for journal publication, your rationale should not be longer than a few sentences (no longer than one brief paragraph). A dissertation or thesis usually allows for a longer description; depending on the length and nature of your document, this could be up to a couple of paragraphs in length. A completely novel or unconventional approach might warrant a longer and more detailed justification than an approach that slightly deviates from well-established methods and approaches.

Rationale for the Study
It is important to be able to explain the importance of the research you are conducting by providing valid arguments. The rationale for the study, also referred to as the justification for the study, is the reason why you conducted your study in the first place. This part of your paper needs to explain the uniqueness and importance of your research. The rationale needs to be specific and, ideally, should relate to the following points:
1. The research needs to contribute to the elimination of a gap in the literature. Eliminating a gap in the existing literature is one of the compulsory requirements for your study. In other words, you should not ‘re-invent the wheel’: your research aims and objectives need to focus on new topics. For example, you can choose to conduct an empirical study to assess the implications of the COVID-19 pandemic on the number of tourist visits to your city. This might be a previously unaddressed topic, taking into account that the COVID-19 pandemic is a relatively recent phenomenon.
Alternatively, if you cannot find a new topic to research, you can attempt to offer fresh perspectives on existing management, business, or economic issues. For example, while thousands of studies have previously examined various aspects of leadership, the topic is far from exhausted as a research area. Specifically, new studies can analyze the impact of new communication media such as TikTok and other social networking sites on leadership practices.
You can also discuss the shortcomings of previous works devoted to your research area. Shortcomings in previous studies can be divided into three groups:
a) Methodological limitations. The methodology employed in a previous study may be flawed in terms of research design, research approach, or sampling.
b) Contextual limitations. Previous works may no longer be relevant to the present because external factors have changed.
c) Conceptual limitations. Previous studies may be unjustifiably bound to a particular model or ideology.
While discussing the shortcomings of previous studies, you should explain how you are going to correct them. This principle applies to almost all areas of business studies; that is, gaps or shortcomings in the literature can be found in relation to almost all areas of business and economics.
2. The research can be conducted to solve a specific problem. It helps if you can explain why you are the right person and in the right position to solve the problem. You have to explain the essence of the problem in detail and highlight the practical benefits associated with its solution. Suppose your dissertation topic is “a study into the advantages and disadvantages of various entry strategies into the Chinese market”. In this case, you can say that the practical implication of your research is to help businesses aiming to enter the Chinese market make more informed decisions.
Alternatively, if your research is devoted to analyzing the impact of CSR programs and initiatives on brand image, the practical contribution of your study would be to improve the effectiveness of businesses’ CSR programs.
Additional examples of studies that can assist to address specific practical problems may include the following:
- A study into the reasons of high employee turnover at Hanson Brick
- A critical analysis of employee motivation problems at Esporta, Finchley Road, London
- A research into effective succession planning at Microsoft
- A study into major differences between private and public primary education in the USA and implications of these differences on the quality of education
However, it is important to note that it is not obligatory for a dissertation to be associated with the solution of a specific problem. Dissertations can be purely theory-based as well. Examples of such studies include the following:
- Born or bred: revisiting The Great Man theory of leadership in the 21st century
- A critical analysis of the relevance of McClelland’s Achievement theory to the US information technology industry
- Neoliberalism as a major reason behind the emergence of the global financial and economic crisis of 2007-2009
- Analysis of Lewin’s Model of Change and its relevance to pharmaceutical sector of France
3. Your study has to contribute to the professional development of the researcher. That is you. You have to explain in detail how your research contributes to the achievement of your long-term career aspirations.
For example, suppose you have selected the research topic “A critical analysis of the relevance of McClelland’s Achievement theory in the US information technology industry”. You may state that you associate your career aspirations with becoming an IT executive in the US, and accordingly, in-depth knowledge of employee motivation in this industry will improve your chances of success in your chosen career path.
Therefore, you are in a better position if you have already identified your career objectives, so that during the research process you can gain detailed knowledge about various aspects of your chosen industry.

John Dudovskiy

Sample Size Justification
Daniël Lakens; Sample Size Justification. Collabra: Psychology 5 January 2022; 8 (1): 33267. doi: https://doi.org/10.1525/collabra.33267
An important step when designing an empirical study is to justify the sample size that will be collected. The key aim of a sample size justification for such studies is to explain how the collected data is expected to provide valuable information given the inferential goals of the researcher. In this overview article six approaches are discussed to justify the sample size in a quantitative empirical study: 1) collecting data from (almost) the entire population, 2) choosing a sample size based on resource constraints, 3) performing an a-priori power analysis, 4) planning for a desired accuracy, 5) using heuristics, or 6) explicitly acknowledging the absence of a justification. An important question to consider when justifying sample sizes is which effect sizes are deemed interesting, and the extent to which the data that is collected informs inferences about these effect sizes. Depending on the sample size justification chosen, researchers could consider 1) what the smallest effect size of interest is, 2) which minimal effect size will be statistically significant, 3) which effect sizes they expect (and what they base these expectations on), 4) which effect sizes would be rejected based on a confidence interval around the effect size, 5) which ranges of effects a study has sufficient power to detect based on a sensitivity power analysis, and 6) which effect sizes are expected in a specific research area. Researchers can use the guidelines presented in this article, for example by using the interactive form in the accompanying online Shiny app, to improve their sample size justification, and hopefully, align the informational value of a study with their inferential goals.
Scientists perform empirical studies to collect data that helps to answer a research question. The more data that is collected, the more informative the study will be with respect to its inferential goals. A sample size justification should consider how informative the data will be given an inferential goal, such as estimating an effect size, or testing a hypothesis. Even though a sample size justification is sometimes requested in manuscript submission guidelines, when submitting a grant to a funder, or submitting a proposal to an ethical review board, the number of observations is often simply stated, but not justified. This makes it difficult to evaluate how informative a study will be. To prevent such concerns from emerging when it is too late (e.g., after a non-significant hypothesis test has been observed), researchers should carefully justify their sample size before data is collected.
Researchers often find it difficult to justify their sample size (i.e., a number of participants, observations, or any combination thereof). In this review article six possible approaches are discussed that can be used to justify the sample size in a quantitative study (see Table 1). This is not an exhaustive overview, but it includes the most common and applicable approaches for single studies. The first justification is that data from (almost) the entire population has been collected. The second justification centers on resource constraints, which are almost always present, but rarely explicitly evaluated. The third and fourth justifications are based on a desired statistical power or a desired accuracy. The fifth justification relies on heuristics, and finally, researchers can choose a sample size without any justification. Each of these justifications can be stronger or weaker depending on which conclusions researchers want to draw from the data they plan to collect.
All of these approaches to the justification of sample sizes, even the ‘no justification’ approach, give others insight into the reasons that led to the decision for a sample size in a study. It should not be surprising that the ‘heuristics’ and ‘no justification’ approaches are often unlikely to impress peers. However, it is important to note that the value of the information that is collected depends on the extent to which the final sample size allows a researcher to achieve their inferential goals, and not on the sample size justification that is chosen.
The extent to which these approaches make other researchers judge the data that is collected as informative depends on the details of the question a researcher aimed to answer and the parameters they chose when determining the sample size for their study. For example, a badly performed a-priori power analysis can quickly lead to a study with very low informational value. These six justifications are not mutually exclusive, and multiple approaches can be considered when designing a study.
The informativeness of the data that is collected depends on the inferential goals a researcher has, or in some cases, the inferential goals scientific peers will have. A shared feature of the different inferential goals considered in this review article is the question of which effect sizes a researcher considers meaningful to distinguish. This implies that researchers need to evaluate which effect sizes they consider interesting. These evaluations rely on a combination of statistical properties and domain knowledge. In Table 2 six possibly useful considerations are provided. This is not intended to be an exhaustive overview, but it presents common and useful approaches that can be applied in practice. Not all evaluations are equally relevant for all types of sample size justifications. The online Shiny app accompanying this manuscript provides an interactive form that guides researchers through the considerations for a sample size justification. These considerations often rely on the same information (e.g., effect sizes, the number of observations, the standard deviation, etc.) so these six considerations should be seen as a set of complementary approaches that can be used to evaluate which effect sizes are of interest.
To start, researchers should consider what their smallest effect size of interest is. Second, although only relevant when performing a hypothesis test, researchers should consider which effect sizes could be statistically significant given a choice of an alpha level and sample size. Third, it is important to consider the (range of) effect sizes that are expected. This requires a careful consideration of the source of this expectation and the presence of possible biases in these expectations. Fourth, it is useful to consider the width of the confidence interval around possible values of the effect size in the population, and whether we can expect this confidence interval to reject effects we considered a-priori plausible. Fifth, it is worth evaluating the power of the test across a wide range of possible effect sizes in a sensitivity power analysis. Sixth, a researcher can consider the effect size distribution of related studies in the literature.
Since all scientists are faced with resource limitations, they need to balance the cost of collecting each additional datapoint against the increase in information that datapoint provides. This is referred to as the value of information (Eckermann et al., 2010). Calculating the value of information is notoriously difficult (Detsky, 1990). Researchers need to specify the cost of collecting data, and weigh the costs of data collection against the increase in utility that having access to the data provides. From a value of information perspective not every data point that can be collected is equally valuable (J. Halpern et al., 2001; Wilson, 2015). Whenever additional observations do not change inferences in a meaningful way, the costs of data collection can outweigh the benefits.
The value of additional information will in most cases be a non-monotonic function, especially when it depends on multiple inferential goals. A researcher might be interested in comparing an effect against a previously observed large effect in the literature, a theoretically predicted medium effect, and the smallest effect that would be practically relevant. In such a situation the expected value of sampling information will lead to different optimal sample sizes for each inferential goal. It could be valuable to collect informative data about a large effect, with additional data having less (or even a negative) marginal utility, up to a point where the data becomes increasingly informative about a medium effect size, with the value of sampling additional information decreasing once more until the study becomes increasingly informative about the presence or absence of a smallest effect of interest.
Because of the difficulty of quantifying the value of information, scientists typically use less formal approaches to justify the amount of data they set out to collect in a study. Even though the cost-benefit analysis is not always made explicit in reported sample size justifications, the value of information perspective is almost always implicitly the underlying framework that sample size justifications are based on. Throughout the subsequent discussion of sample size justifications, the importance of considering the value of information given inferential goals will repeatedly be highlighted.
Measuring (Almost) the Entire Population
In some instances it might be possible to collect data from (almost) the entire population under investigation. For example, researchers might use census data, collect data from all employees at a firm, or study a small population of top athletes. Whenever it is possible to measure the entire population, the sample size justification becomes straightforward: the researcher used all the data that is available.
Resource Constraints
A common reason for the number of observations in a study is that resource constraints limit the amount of data that can be collected at a reasonable cost (Lenth, 2001). In practice, sample sizes are always limited by the resources that are available. Researchers practically always have resource limitations, and therefore even when resource constraints are not the primary justification for the sample size in a study, it is always a secondary justification.
Despite the omnipresence of resource limitations, the topic often receives little attention in texts on experimental design (for an example of an exception, see Bulus and Dong (2021)). This might make it feel like acknowledging resource constraints is not appropriate, but the opposite is true: Because resource limitations always play a role, a responsible scientist carefully evaluates resource constraints when designing a study. Resource constraint justifications are based on a trade-off between the costs of data collection, and the value of having access to the information the data provides. Even if researchers do not explicitly quantify this trade-off, it is revealed in their actions. For example, researchers rarely spend all the resources they have on a single study. Given resource constraints, researchers are confronted with an optimization problem of how to spend resources across multiple research questions.
Time and money are two resource limitations all scientists face. A PhD student has a limited time to complete a PhD thesis, and is typically expected to complete multiple research lines in this time. In addition to time limitations, researchers have limited financial resources that often directly influence how much data can be collected. A third limitation in some research lines is that there might simply be a very small number of individuals from whom data can be collected, such as when studying patients with a rare disease. A resource constraint justification puts limited resources at the center of the justification for the sample size that will be collected, and starts with the resources a scientist has available. These resources are translated into the number of observations (N) that a researcher expects to be able to collect with a given amount of money in a given time. The challenge is to evaluate whether collecting N observations is worthwhile. How do we decide if a study will be informative, and when should we conclude that data collection is not worthwhile?
When evaluating whether resource constraints make data collection uninformative, researchers need to explicitly consider which inferential goals they have when collecting data (Parker & Berman, 2003). Having data always provides more knowledge about the research question than not having data, so in an absolute sense, all data that is collected has value. However, it is possible that the benefits of collecting the data are outweighed by the costs of data collection.
It is most straightforward to evaluate whether data collection has value when we know for certain that someone will make a decision, with or without data. In such situations any additional data will reduce the error rates of a well-calibrated decision process, even if only ever so slightly. For example, without data we will not perform better than a coin flip if we guess which of two conditions has a higher true mean score on a measure. With some data, we can perform better than a coin flip by picking the condition that has the highest mean. With a small amount of data we would still very likely make a mistake, but the error rate is smaller than without any data. In these cases, the value of information might be positive, as long as the reduction in error rates is more beneficial than the cost of data collection.
Another way in which a small dataset can be valuable is if its existence eventually makes it possible to perform a meta-analysis (Maxwell & Kelley, 2011). This argument in favor of collecting a small dataset requires 1) that researchers share the data in a way that a future meta-analyst can find it, and 2) that there is a decent probability that someone will perform a high-quality meta-analysis that will include this data in the future (S. D. Halpern et al., 2002). The uncertainty about whether there will ever be such a meta-analysis should be weighed against the costs of data collection.
One way to increase the probability of a future meta-analysis is if researchers commit to performing this meta-analysis themselves, by combining several studies they have performed into a small-scale meta-analysis (Cumming, 2014). For example, a researcher might plan to repeat a study for the next 12 years in a class they teach, with the expectation that after 12 years a meta-analysis of 12 studies would be sufficient to draw informative inferences (but see ter Schure and Grünwald (2019)). If it is not plausible that a researcher will collect all the required data by themselves, they can attempt to set up a collaboration where fellow researchers in their field commit to collecting similar data with identical measures. If it is not likely that sufficient data will emerge over time to reach the inferential goals, there might be no value in collecting the data.
Even if a researcher believes it is worth collecting data because a future meta-analysis will be performed, they will most likely perform a statistical test on the data. To make sure their expectations about the results of such a test are well-calibrated, it is important to consider which effect sizes are of interest, and to perform a sensitivity power analysis to evaluate the probability of a Type II error for effects of interest. From the six ways to evaluate which effect sizes are interesting that will be discussed in the second part of this review, it is useful to consider the smallest effect size that can be statistically significant, the expected width of the confidence interval around the effect size, and effects that can be expected in a specific research area, and to evaluate the power for these effect sizes in a sensitivity power analysis. If a decision or claim is made, a compromise power analysis is worthwhile to consider when deciding upon the error rates while planning the study. When reporting a resource constraints sample size justification it is recommended to address the five considerations in Table 3. Addressing these points explicitly facilitates evaluating if the data is worthwhile to collect. To make it easier to address all relevant points explicitly, an interactive form to implement the recommendations in this manuscript can be found at https://shiny.ieis.tue.nl/sample_size_justification/.
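As an illustration, the sensitivity power analysis mentioned here can be sketched in a few lines of base R. The n of 50 observations per group below is a hypothetical resource-constrained sample size, not a value from the text:

# Sensitivity power analysis: power of a two-sided independent t test
# at a fixed n = 50 per group (hypothetical) across a range of effect sizes
d <- seq(0.1, 1.0, by = 0.1)
power <- sapply(d, function(es) {
  power.t.test(n = 50, delta = es, sd = 1, sig.level = 0.05)$power
})
round(data.frame(d, power), 2)  # power is low for small effects, high for large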
A-priori Power Analysis
When designing a study where the goal is to test whether a statistically significant effect is present, researchers often want to make sure their sample size is large enough to prevent erroneous conclusions for a range of effect sizes they care about. In this approach to justifying a sample size, the value of information is to collect observations up to the point that the probability of an erroneous inference is, in the long run, not larger than a desired value. If a researcher performs a hypothesis test, there are four possible outcomes:
A false positive (or Type I error), determined by the α level. A test yields a significant result, even though the null hypothesis is true.
A false negative (or Type II error), determined by β , or 1 - power. A test yields a non-significant result, even though the alternative hypothesis is true.
A true negative, determined by 1- α . A test yields a non-significant result when the null hypothesis is true.
A true positive, determined by 1- β . A test yields a significant result when the alternative hypothesis is true.
Given a specified effect size, alpha level, and power, an a-priori power analysis can be used to calculate the number of observations required to achieve the desired error rates. Figure 1 illustrates how statistical power increases as the number of observations (per group) increases in an independent t test with a two-sided alpha level of 0.05. If we are interested in detecting an effect of d = 0.5, a sample size of 90 per condition would give us more than 90% power. A power analysis can be computed to determine the number of participants, or the number of items (Westfall et al., 2014), and can also be performed for single case studies (Ferron & Onghena, 1996; McIntosh & Rittmo, 2020).
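As a rough check, this example can be reproduced with base R's power.t.test function (using sd = 1 so that delta is on the standardized scale):

# Power for d = 0.5 with 90 observations per group, two-sided alpha = .05
power.t.test(n = 90, delta = 0.5, sd = 1, sig.level = 0.05)$power  # ~0.92
# Solving for n at exactly 90% power instead returns ~85, i.e., 86 per group
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.90)$n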

Although it is common to set the Type I error rate to 5% and aim for 80% power, error rates should be justified (Lakens, Adolfi, et al., 2018). As explained in the section on compromise power analysis, the default recommendation to aim for 80% power lacks a solid justification. In general, the lower the error rates (and thus the higher the power), the more informative a study will be, but the more resources are required. Researchers should carefully weigh the costs of increasing the sample size against the benefits of lower error rates, which would probably make studies designed to achieve a power of 90% or 95% more common for articles reporting a single study. An additional consideration is whether the researcher plans to publish an article consisting of a set of replication and extension studies, in which case the probability of observing multiple Type I errors will be very low, but the probability of observing mixed results even when there is a true effect increases (Lakens & Etz, 2017), which would also be a reason to aim for studies with low Type II error rates, perhaps even by slightly increasing the alpha level for each individual study.
Figure 2 visualizes two distributions. The left distribution (dashed line) is centered at 0. This is a model for the null hypothesis. If the null hypothesis is true a statistically significant result will be observed if the effect size is extreme enough (in a two-sided test either in the positive or negative direction), but any significant result would be a Type I error (the dark grey areas under the curve). If there is no true effect, formally statistical power for a null hypothesis significance test is undefined. Any significant effects observed if the null hypothesis is true are Type I errors, or false positives, which occur at the chosen alpha level. The right distribution (solid line) is centered on an effect of d = 0.5. This is the specified model for the alternative hypothesis in this study, illustrating the expectation of an effect of d = 0.5 if the alternative hypothesis is true. Even though there is a true effect, studies will not always find a statistically significant result. This happens when, due to random variation, the observed effect size is too close to 0 to be statistically significant. Such results are false negatives (the light grey area under the curve on the right). To increase power, we can collect a larger sample size. As the sample size increases, the distributions become more narrow, reducing the probability of a Type II error.

It is important to highlight that the goal of an a-priori power analysis is not to achieve sufficient power for the true effect size. The true effect size is unknown. The goal of an a-priori power analysis is to achieve sufficient power, given a specific assumption of the effect size a researcher wants to detect. Just like a Type I error rate is the maximum probability of making a Type I error conditional on the assumption that the null hypothesis is true, an a-priori power analysis is computed under the assumption of a specific effect size. It is unknown if this assumption is correct. All a researcher can do is to make sure their assumptions are well justified. Statistical inferences based on a test where the Type II error rate is controlled are conditional on the assumption of a specific effect size. They allow the inference that, assuming the true effect size is at least as large as that used in the a-priori power analysis, the maximum Type II error rate in a study is not larger than a desired value.
This point is perhaps best illustrated if we consider a study where an a-priori power analysis is performed both for a test of the presence of an effect and for a test of the absence of an effect. When designing a study, it is essential to consider the possibility that there is no effect (e.g., a mean difference of zero). An a-priori power analysis can be performed both for a null hypothesis significance test and for a test of the absence of a meaningful effect, such as an equivalence test that can statistically provide support for the null hypothesis by rejecting the presence of effects that are large enough to matter (Lakens, 2017; Meyners, 2012; Rogers et al., 1993). When multiple primary tests will be performed based on the same sample, each analysis requires a dedicated sample size justification. If possible, a sample size is collected that guarantees that all tests are informative, which means that the collected sample size is based on the largest sample size returned by any of the a-priori power analyses.
For example, if the goal of a study is to detect or reject an effect size of d = 0.4 with 90% power, and the alpha level is set to 0.05 for a two-sided independent t test, a researcher would need to collect 133 participants in each condition for an informative null hypothesis test, and 136 participants in each condition for an informative equivalence test. Therefore, the researcher should aim to collect 272 participants in total for an informative result for both tests that are planned. This does not guarantee a study has sufficient power for the true effect size (which can never be known), but it guarantees the study has sufficient power given an assumption of the effect a researcher is interested in detecting or rejecting. Therefore, an a-priori power analysis is useful, as long as a researcher can justify the effect sizes they are interested in.
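The first of these two numbers can be verified in base R; the equivalence-test sample size additionally depends on the equivalence bounds and requires dedicated software, such as the TOSTER package:

# n per group to detect d = 0.4 with 90% power, two-sided alpha = .05
power.t.test(delta = 0.4, sd = 1, sig.level = 0.05, power = 0.90)$n
# ~132.3, which rounds up to the 133 participants per condition cited above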
If researchers correct the alpha level when testing multiple hypotheses, the a-priori power analysis should be based on this corrected alpha level. For example, if four tests are performed, an overall Type I error rate of 5% is desired, and a Bonferroni correction is used, the a-priori power analysis should be based on a corrected alpha level of .0125.
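A minimal sketch of such a corrected power analysis, reusing the assumed d = 0.5 from the earlier example:

# A-priori power analysis at a Bonferroni-corrected alpha of .05 / 4 = .0125
power.t.test(delta = 0.5, sd = 1, sig.level = 0.0125, power = 0.90)$n
# the required n per group is noticeably larger than at alpha = .05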
An a-priori power analysis can be performed analytically, or by performing computer simulations. Analytic solutions are faster but less flexible. A common challenge researchers face when attempting to perform power analyses for more complex or uncommon tests is that available software does not offer analytic solutions. In these cases simulations can provide a flexible solution to perform power analyses for any test (Morris et al., 2019) . The following code is an example of a power analysis in R based on 10000 simulations for a one-sample t test against zero for a sample size of 20, assuming a true effect of d = 0.5. All simulations consist of first randomly generating data based on assumptions of the data generating mechanism (e.g., a normal distribution with a mean of 0.5 and a standard deviation of 1), followed by a test performed on the data. By computing the percentage of significant results, power can be computed for any design.
p <- numeric(10000) # to store p-values
for (i in 1:10000) { # simulate 10k tests
  x <- rnorm(n = 20, mean = 0.5, sd = 1) # generate data under the alternative
  p[i] <- t.test(x)$p.value # store p-value
}
sum(p < 0.05) / 10000 # compute power
There is a wide range of tools available to perform power analyses. Whichever tool a researcher decides to use, it will take time to learn how to use the software correctly to perform a meaningful a-priori power analysis. Resources to educate psychologists about power analysis consist of book-length treatments (Aberson, 2019; Cohen, 1988; Julious, 2004; Murphy et al., 2014), general introductions (Baguley, 2004; Brysbaert, 2019; Faul et al., 2007; Maxwell et al., 2008; Perugini et al., 2018), and an increasing number of applied tutorials for specific tests (Brysbaert & Stevens, 2018; DeBruine & Barr, 2019; P. Green & MacLeod, 2016; Kruschke, 2013; Lakens & Caldwell, 2021; Schoemann et al., 2017; Westfall et al., 2014). It is important to be trained in the basics of power analysis, and it can be extremely beneficial to learn how to perform simulation-based power analyses. At the same time, it is often recommended to enlist the help of an expert, especially when a researcher lacks experience with a power analysis for a specific test.
When reporting an a-priori power analysis, make sure that the power analysis is completely reproducible. If power analyses are performed in R it is possible to share the analysis script and information about the version of the package. In many software packages it is possible to export the power analysis that is performed as a PDF file. For example, in G*Power analyses can be exported under the ‘protocol of power analysis’ tab. If the software package provides no way to export the analysis, add a screenshot of the power analysis to the supplementary files.

The reproducible report needs to be accompanied by justifications for the choices that were made with respect to the values used in the power analysis. If the effect size used in the power analysis is based on previous research, the factors presented in Table 5 (if the effect size is based on a meta-analysis) or Table 6 (if the effect size is based on a single study) should be discussed. If an effect size estimate is based on the existing literature, provide a full citation, and preferably a direct quote from the article where the effect size estimate is reported. If the effect size is based on a smallest effect size of interest, this value should not just be stated, but justified (e.g., based on theoretical predictions or practical implications, see Lakens, Scheel, and Isager (2018)). For an overview of all aspects that should be reported when describing an a-priori power analysis, see Table 4.
Planning for Precision
Some researchers have suggested justifying sample sizes based on a desired level of precision of the estimate (Cumming & Calin-Jageman, 2016; Kruschke, 2018; Maxwell et al., 2008). The goal when justifying a sample size based on precision is to collect data to achieve a desired width of the confidence interval around a parameter estimate. The width of the confidence interval around the parameter estimate depends on the standard deviation and the number of observations. The only aspect a researcher needs to justify for a sample size justification based on accuracy is the desired width of the confidence interval with respect to their inferential goal, and their assumption about the population standard deviation of the measure.
If a researcher has determined the desired accuracy, and has a good estimate of the true standard deviation of the measure, it is straightforward to calculate the sample size needed for a desired level of accuracy. For example, when measuring the IQ of a group of individuals a researcher might desire to estimate the IQ score within an error range of 2 IQ points for 95% of the observed means, in the long run. The required sample size to achieve this desired level of accuracy (assuming normally distributed data) can be computed by:

$$N = \left(\frac{z \cdot sd}{error}\right)^2$$
where N is the number of observations, z is the critical value related to the desired confidence interval, sd is the standard deviation of IQ scores in the population, and error is the desired margin of error (the half-width of the confidence interval within which the mean should fall, with the desired error rate). In this example, (1.96 × 15 / 2)^2 = 216.1 observations. If a researcher desires 95% of the means to fall within a 2 IQ point range around the true population mean, 217 observations should be collected. If a desired accuracy for a non-zero mean difference is computed, accuracy is based on a non-central t-distribution. For these calculations an expected effect size estimate needs to be provided, but it has relatively little influence on the required sample size (Maxwell et al., 2008). It is also possible to incorporate uncertainty about the observed effect size in the sample size calculation, known as assurance (Kelley & Rausch, 2006). The MBESS package in R provides functions to compute sample sizes for a wide range of tests (Kelley, 2007).
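The worked IQ example can be checked in a few lines of base R:

# Required N for a 95% CI with a 2 IQ-point margin of error
z <- qnorm(0.975)            # 1.96, critical value for a 95% confidence interval
sd <- 15                     # assumed population standard deviation of IQ scores
error <- 2                   # desired margin of error in IQ points
(z * sd / error)^2           # 216.08...
ceiling((z * sd / error)^2)  # round up: collect 217 observations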
What is less straightforward is to justify how a desired level of accuracy is related to inferential goals. There is no literature that helps researchers to choose a desired width of the confidence interval. Morey (2020) convincingly argues that most practical use-cases of planning for precision involve an inferential goal of distinguishing an observed effect from other effect sizes (for a Bayesian perspective, see Kruschke (2018)). For example, a researcher might expect an effect size of r = 0.4 and would treat observed correlations that differ more than 0.2 (i.e., 0.2 < r < 0.6) differently, in that effects of r = 0.6 or larger are considered too large to be caused by the assumed underlying mechanism (Hilgard, 2021), while effects smaller than r = 0.2 are considered too small to support the theoretical prediction. If the goal is indeed to get an effect size estimate that is precise enough so that two effects can be differentiated with high probability, the inferential goal is actually a hypothesis test, which requires designing a study with sufficient power to reject effects (e.g., testing a range prediction of correlations between 0.2 and 0.6).
If researchers do not want to test a hypothesis, for example because they prefer an estimation approach over a testing approach, then in the absence of clear guidelines that help researchers to justify a desired level of precision, one solution might be to rely on a generally accepted norm of precision to aim for. This norm could be based on ideas about a certain resolution below which measurements in a research area no longer lead to noticeably different inferences. Just as researchers normatively use an alpha level of 0.05, they could plan studies to achieve a desired confidence interval width around the observed effect that is determined by a norm. Future work is needed to help researchers choose a confidence interval width when planning for accuracy.
Heuristics
When a researcher uses a heuristic, they are not able to justify their sample size themselves, but they trust in a sample size recommended by some authority. When I started as a PhD student in 2005 it was common to collect 15 participants in each between-subject condition. When asked why this was a common practice, no one was really sure, but people trusted there was a justification somewhere in the literature. Now, I realize there was no justification for the heuristics we used. As Berkeley (1735) already observed: “Men learn the elements of science from others: And every learner hath a deference more or less to authority, especially the young learners, few of that kind caring to dwell long upon principles, but inclining rather to take them upon trust: And things early admitted by repetition become familiar: And this familiarity at length passeth for evidence.”
Some papers provide researchers with simple rules of thumb about the sample size that should be collected. Such papers clearly fill a need, and are cited a lot, even when the advice in these articles is flawed. For example, Wilson VanVoorhis and Morgan (2007) translate an absolute minimum of 50+8 observations for regression analyses suggested by a rule of thumb examined in S. B. Green (1991) into the recommendation to collect ~50 observations. Green actually concludes in his article that “In summary, no specific minimum number of subjects or minimum ratio of subjects-to-predictors was supported”. He does discuss how a general rule of thumb of N = 50 + 8 provided an accurate minimum number of observations for the ‘typical’ study in the social sciences because these have a ‘medium’ effect size, as Green claims by citing Cohen (1988). Cohen actually didn’t claim that the typical study in the social sciences has a ‘medium’ effect size, and instead said (1988, p. 13): “Many effects sought in personality, social, and clinical-psychological research are likely to be small effects as here defined”. We see how a string of mis-citations eventually leads to a misleading rule of thumb.
Rules of thumb seem to primarily emerge due to mis-citations and/or overly simplistic recommendations. Simonsohn, Nelson, and Simmons (2011) recommended that “Authors must collect at least 20 observations per cell”. A later recommendation by the same authors presented at a conference suggested using n > 50, unless you study large effects (Simmons et al., 2013). Regrettably, this advice is now often mis-cited as a justification to collect no more than 50 observations per condition without considering the expected effect size. If authors justify a specific sample size (e.g., n = 50) based on a general recommendation in another paper, either they are mis-citing the paper, or the paper they are citing is flawed.
Another common heuristic is to collect the same number of observations as were collected in a previous study. This strategy is not recommended in scientific disciplines with widespread publication bias, and/or where novel and surprising findings from largely exploratory single studies are published. Using the same sample size as a previous study is only a valid approach if the sample size justification in the previous study also applies to the current study. Instead of stating that you intend to collect the same sample size as an earlier study, repeat the sample size justification, and update it in light of any new information (such as the effect size in the earlier study, see Table 6).
Peer reviewers and editors should carefully scrutinize rule-of-thumb sample size justifications, because they can make it seem like a study has high informational value for an inferential goal even when the study will yield uninformative results. Whenever one encounters a sample size justification based on a heuristic, ask yourself: ‘Why is this heuristic used?’ It is important to know what the logic behind a heuristic is to determine whether the heuristic is valid for a specific situation. In most cases, heuristics are based on weak logic, and not widely applicable. It might be possible that fields develop valid heuristics for sample size justifications. For example, it is possible that a research area reaches widespread agreement that effects smaller than d = 0.3 are too small to be of interest, and all studies in a field use sequential designs (see below) that have 90% power to detect a d = 0.3. Alternatively, it is possible that a field agrees that data should be collected with a desired level of accuracy, irrespective of the true effect size. In these cases, valid heuristics would exist based on generally agreed goals of data collection. For example, Simonsohn (2015) suggests designing replication studies with sample sizes 2.5 times as large as the original study, as this provides 80% power for an equivalence test against an equivalence bound set to the effect the original study had 33% power to detect, assuming the true effect size is 0. As original authors typically do not specify which effect size would falsify their hypothesis, the heuristic underlying this ‘small telescopes’ approach is a good starting point for a replication study with the inferential goal to reject the presence of an effect as large as was described in an earlier publication. It is the responsibility of researchers to gain the knowledge to distinguish valid heuristics from mindless heuristics, and to be able to evaluate whether a heuristic will yield an informative result given the inferential goal of the researchers in a specific study.
No Justification
It might sound like a contradictio in terminis, but it is useful to distinguish a final category where researchers explicitly state they do not have a justification for their sample size. Perhaps the resources were available to collect more data, but they were not used. A researcher could have performed a power analysis, or planned for precision, but they did not. In those cases, instead of pretending there was a justification for the sample size, honesty requires you to state there is no sample size justification. This is not necessarily bad. It is still possible to discuss the smallest effect size of interest, the minimal statistically detectable effect, the width of the confidence interval around the effect size, and to plot a sensitivity power analysis, in relation to the sample size that was collected. If a researcher truly had no specific inferential goals when collecting the data, such an evaluation can perhaps be performed based on reasonable inferential goals peers would have when they learn about the existence of the collected data.
Do not try to spin a story where it looks like a study was highly informative when it was not. Instead, transparently evaluate how informative the study was given effect sizes that were of interest, and make sure that the conclusions follow from the data. The lack of a sample size justification might not be problematic, but it might mean that a study was not informative for most effect sizes of interest, which makes it especially difficult to interpret non-significant effects, or estimates with large uncertainty.
Six Ways to Evaluate Which Effect Sizes Are Interesting
The inferential goal of data collection is often in some way related to the size of an effect. Therefore, to design an informative study, researchers will want to think about which effect sizes are interesting. First, it is useful to consider three effect sizes when determining the sample size. The first is the smallest effect size a researcher is interested in, the second is the smallest effect size that can be statistically significant (only in studies where a significance test will be performed), and the third is the effect size that is expected. Beyond considering these three effect sizes, it can be useful to evaluate ranges of effect sizes. This can be done by computing the width of the expected confidence interval around an effect size of interest (for example, an effect size of zero), and examining which effects could be rejected. Similarly, it can be useful to plot a sensitivity curve and evaluate the range of effect sizes the design has decent power to detect, as well as to consider the range of effects for which the design has low power. Finally, there are situations where it is useful to consider the range of effects that is likely to be observed in a specific research area.
What is the Smallest Effect Size of Interest?
The strongest possible sample size justification is based on an explicit statement of the smallest effect size that is considered interesting. A smallest effect size of interest can be based on theoretical predictions or practical considerations. For a review of approaches that can be used to determine a smallest effect size of interest in randomized controlled trials, see Cook et al. (2014) and Keefe et al. (2013); for reviews of different methods to determine a smallest effect size of interest, see King (2011) and Copay, Subach, Glassman, Polly, and Schuler (2007); and for a discussion focused on psychological research, see Lakens, Scheel, et al. (2018).
It can be challenging to determine the smallest effect size of interest whenever theories are not very developed, or when the research question is far removed from practical applications, but it is still worth thinking about which effects would be too small to matter. A first step forward is to discuss which effect sizes are considered meaningful in a specific research line with your peers. Researchers will differ in the effect sizes they consider large enough to be worthwhile (Murphy et al., 2014). Just as not every scientist will find every research question interesting enough to study, not every scientist will consider the same effect sizes interesting enough to study, and different stakeholders will differ in which effect sizes are considered meaningful (Kelley & Preacher, 2012).
Even though it might be challenging, there are important benefits of being able to specify a smallest effect size of interest. The population effect size is always uncertain (indeed, estimating this is typically one of the goals of the study), and therefore whenever a study is powered for an expected effect size, there is considerable uncertainty about whether the statistical power is high enough to detect the true effect in the population. However, if the smallest effect size of interest can be specified and agreed upon after careful deliberation, it becomes possible to design a study that has sufficient power (given the inferential goal to detect or reject the smallest effect size of interest with a certain error rate). A smallest effect of interest may be subjective (one researcher might find effect sizes smaller than d = 0.3 meaningless, while another researcher might still be interested in effects larger than d = 0.1), and there might be uncertainty about the parameters required to specify the smallest effect size of interest (e.g., when performing a cost-benefit analysis), but after a smallest effect size of interest has been determined, a study can be designed with a known Type II error rate to detect or reject this value. For this reason an a-priori power analysis based on a smallest effect size of interest is generally preferred, whenever researchers are able to specify one (Aberson, 2019; Albers & Lakens, 2018; Brown, 1983; Cascio & Zedeck, 1983; Dienes, 2014; Lenth, 2001).
The Minimal Statistically Detectable Effect
The minimal statistically detectable effect, or the critical effect size, provides information about the smallest effect size that, if observed, would be statistically significant given a specified alpha level and sample size (Cook et al., 2014). For any critical t value (e.g., t = 1.96 for α = 0.05, for large sample sizes) we can compute a critical mean difference (Phillips et al., 2001), or a critical standardized effect size. For a two-sided independent t test the critical mean difference is:

$$M_{crit} = t_{crit}\sqrt{\frac{sd_1^2}{n_1} + \frac{sd_2^2}{n_2}}$$

and the critical standardized mean difference is:

$$d_{crit} = t_{crit}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$
In Figure 4 the distribution of Cohen’s d is plotted for 15 participants per group when the true effect size is either d = 0 or d = 0.5. This figure is similar to Figure 2 , with the addition that the critical d is indicated. We see that with such a small number of observations in each group only observed effects larger than d = 0.75 will be statistically significant. Whether such effect sizes are interesting, and can realistically be expected, should be carefully considered and justified.

G*Power provides the critical test statistic (such as the critical t value) when performing a power analysis. For example, Figure 5 shows that for a correlation based on a two-sided test, with α = 0.05, and N = 30, only effects larger than r = 0.361 or smaller than r = -0.361 can be statistically significant. This reveals that when the sample size is relatively small, the observed effect needs to be quite substantial to be statistically significant.
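Both critical effect sizes can be recomputed directly from the critical t value; a minimal sketch in base R:

# Critical Cohen's d for an independent t test, n = 15 per group, alpha = .05
n <- 15
t_crit <- qt(0.975, df = 2 * n - 2)
t_crit * sqrt(1 / n + 1 / n)           # ~0.75, the critical d in Figure 4
# Critical correlation for N = 30, two-sided alpha = .05
N <- 30
t_crit_r <- qt(0.975, df = N - 2)
t_crit_r / sqrt(t_crit_r^2 + (N - 2))  # ~0.361, matching the G*Power output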

It is important to realize that due to random variation each study has a probability to yield effects larger than the critical effect size, even if the true effect size is small (or even when the true effect size is 0, in which case each significant effect is a Type I error). Computing a minimal statistically detectable effect is useful for a study where no a-priori power analysis is performed, both for studies in the published literature that do not report a sample size justification (Lakens, Scheel, et al., 2018), and for researchers who rely on heuristics for their sample size justification.
It can be informative to ask yourself whether the critical effect size for a study design is within the range of effect sizes that can realistically be expected. If not, then whenever a significant effect is observed in a published study, either the effect size is surprisingly larger than expected, or more likely, it is an upwardly biased effect size estimate. In the latter case, given publication bias, published studies will lead to biased effect size estimates. If it is still possible to increase the sample size, for example by ignoring rules of thumb and instead performing an a-priori power analysis, then do so. If it is not possible to increase the sample size, for example due to resource constraints, then reflecting on the minimal statistically detectable effect should make it clear that an analysis of the data should not focus on p values, but on the effect size and the confidence interval (see Table 3).
It is also useful to compute the minimal statistically detectable effect if an ‘optimistic’ power analysis is performed. For example, if you believe a best case scenario for the true effect size is d = 0.57 and use this optimistic expectation in an a-priori power analysis, effects smaller than d = 0.4 will not be statistically significant when you collect 50 observations per group in a design with two independent groups. If your worst case scenario for the alternative hypothesis is a true effect size of d = 0.35, your design would not allow you to declare a significant effect if effect size estimates close to the worst case scenario are observed. Taking into account the minimal statistically detectable effect size should make you reflect on whether a hypothesis test will yield an informative answer, and whether your current approach to sample size justification (e.g., the use of rules of thumb, or letting resource constraints determine the sample size) leads to an informative study, or not.
What is the Expected Effect Size?
Although the true population effect size is always unknown, there are situations where researchers have a reasonable expectation of the effect size in a study, and want to use this expected effect size in an a-priori power analysis. Even if expectations for the observed effect size are largely a guess, it is always useful to explicitly consider which effect sizes are expected. A researcher can justify a sample size based on the effect size they expect, even if such a study would not be very informative with respect to the smallest effect size of interest. In such cases a study is informative for one inferential goal (testing whether the expected effect size is present or absent), but not highly informative for the second goal (testing whether the smallest effect size of interest is present or absent).
There are typically three sources for expectations about the population effect size: a meta-analysis, a previous study, or a theoretical model. It is tempting for researchers to be overly optimistic about the expected effect size in an a-priori power analysis, as higher effect size estimates yield lower sample sizes, but being too optimistic increases the probability of observing a false negative result. When reviewing a sample size justification based on an a-priori power analysis, it is important to critically evaluate the justification for the expected effect size used in power analyses.
Using an Estimate from a Meta-Analysis
In a perfect world effect size estimates from a meta-analysis would provide researchers with the most accurate information about which effect size they could expect. Due to widespread publication bias in science, effect size estimates from meta-analyses are regrettably not always accurate. They can be biased, sometimes substantially so. Furthermore, meta-analyses typically have considerable heterogeneity, which means that the meta-analytic effect size estimate differs for subsets of studies that make up the meta-analysis. So, although it might seem useful to use a meta-analytic effect size estimate of the effect you are studying in your power analysis, you need to take great care before doing so.
If a researcher wants to enter a meta-analytic effect size estimate in an a-priori power analysis, they need to consider three things (see Table 5). First, the studies included in the meta-analysis should be similar enough to the study they are performing that it is reasonable to expect a similar effect size. In essence, this requires evaluating the generalizability of the effect size estimate to the new study. It is important to carefully consider differences between the meta-analyzed studies and the planned study, with respect to the manipulation, the measure, the population, and any other relevant variables.
Second, researchers should check whether the effect sizes reported in the meta-analysis are homogeneous. If not, and there is considerable heterogeneity in the meta-analysis, it means not all included studies can be expected to have the same true effect size estimate. A meta-analytic estimate should be used based on the subset of studies that most closely represent the planned study. Note that heterogeneity remains a possibility (even direct replication studies can show heterogeneity when unmeasured variables moderate the effect size in each sample (Kenny & Judd, 2019; Olsson-Collentine et al., 2020)), so the main goal of selecting similar studies is to use existing data to increase the probability that your expectation is accurate, without guaranteeing it will be.
Third, the meta-analytic effect size estimate should not be biased. Check if the bias detection tests that are reported in the meta-analysis are state-of-the-art, or perform multiple bias detection tests yourself (Carter et al., 2019), and consider bias corrected effect size estimates (even though these estimates might still be biased, and do not necessarily reflect the true population effect size).
Using an Estimate from a Previous Study
If a meta-analysis is not available, researchers often rely on an effect size from a previous study in an a-priori power analysis. The first issue that requires careful attention is whether the two studies are sufficiently similar. Just as when using an effect size estimate from a meta-analysis, researchers should consider if there are differences between the studies in terms of the population, the design, the manipulations, the measures, or other factors that should lead one to expect a different effect size. For example, intra-individual reaction time variability increases with age, and therefore a study performed on older participants should expect a smaller standardized effect size than a study performed on younger participants. If an earlier study used a very strong manipulation, and you plan to use a more subtle manipulation, a smaller effect size should be expected. Finally, effect sizes do not generalize to studies with different designs. For example, the effect size for a comparison between two groups is most often not similar to the effect size for an interaction in a follow-up study where a second factor is added to the original design (Lakens & Caldwell, 2021).
Even if a study is sufficiently similar, statisticians have warned against using effect size estimates from small pilot studies in power analyses. Leon, Davis, and Kraemer (2011) write:
Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples.
The two main reasons researchers should be careful when using effect sizes from studies in the published literature in power analyses are that effect size estimates from studies can differ from the true population effect size due to random variation, and that publication bias inflates effect sizes. Figure 6 shows the distribution of ηp² for a study with three conditions with 25 participants in each condition when the null hypothesis is true and when there is a ‘medium’ true effect of ηp² = 0.0588 (Richardson, 2011). As in Figure 4 the critical effect size is indicated, which shows observed effects smaller than ηp² = 0.08 will not be significant with the given sample size. If the null hypothesis is true effects larger than ηp² = 0.08 will be a Type I error (the dark grey area), and when the alternative hypothesis is true effects smaller than ηp² = 0.08 will be a Type II error (light grey area). It is clear all significant effects are larger than the true effect size (ηp² = 0.0588), so power analyses based on a significant finding (e.g., because only significant results are published in the literature) will be based on an overestimate of the true effect size, introducing bias.
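The critical ηp² of 0.08 follows directly from the critical F value for this design; a short base R sketch:

# Critical partial eta squared for a one-way ANOVA, 3 groups of 25, alpha = .05
k <- 3; n <- 25
df1 <- k - 1                         # 2 degrees of freedom for the effect
df2 <- k * n - k                     # 72 degrees of freedom for the error
F_crit <- qf(0.95, df1, df2)         # ~3.12
F_crit * df1 / (F_crit * df1 + df2)  # ~0.08: smaller observed effects are not significant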

But even if we had access to all effect sizes (e.g., from pilot studies you have performed yourself), due to random variation the observed effect size will sometimes be quite small. Figure 6 shows it is quite likely to observe an effect of ηp² = 0.01 in a small pilot study, even when the true effect size is 0.0588. Entering an effect size estimate of ηp² = 0.01 in an a-priori power analysis would suggest a total sample size of 957 observations to achieve 80% power in a follow-up study. If researchers only follow up on pilot studies when they observe an effect size in the pilot study that, when entered into a power analysis, yields a sample size that is feasible to collect for the follow-up study, these effect size estimates will be upwardly biased, and power in the follow-up study will be systematically lower than desired (Albers & Lakens, 2018).
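Assuming the 957 figure comes from a standard one-way ANOVA power analysis, it can be reproduced with the pwr package after converting ηp² to Cohen's f:

# Total N implied by entering the pilot estimate of partial eta squared = .01
library(pwr)
f <- sqrt(0.01 / (1 - 0.01))  # convert partial eta squared to Cohen's f
res <- pwr.anova.test(k = 3, f = f, sig.level = 0.05, power = 0.80)
ceiling(res$n) * 3            # ~319 per condition, 957 observations in total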
In essence, the problem with using small studies to estimate the effect size that will be entered into an a-priori power analysis is that due to publication bias or follow-up bias the effect sizes researchers end up using for their power analysis do not come from a full F distribution, but from what is known as a truncated F distribution (Taylor & Muller, 1996). For example, imagine that there is extreme publication bias in the situation illustrated in Figure 6. The only studies that would be accessible to researchers would come from the part of the distribution where ηp² > 0.08, and the test result would be statistically significant. It is possible to compute an effect size estimate that, based on certain assumptions, corrects for bias. For example, imagine we observe a result in the literature for a One-Way ANOVA with 3 conditions, reported as F(2, 42) = 4.5, p = 0.017, ηp² = 0.176. If we take this effect size estimate at face value and enter it in an a-priori power analysis, the suggested sample size to achieve 80% power would be 17 observations in each condition.
However, if we assume bias is present, we can use the BUCSS R package (S. F. Anderson et al., 2017) to perform a power analysis that attempts to correct for bias. A power analysis that takes bias into account (under a specific model of publication bias, based on a truncated F distribution where only significant results are published) suggests collecting 73 participants in each condition. It is possible that the bias corrected estimate of the non-centrality parameter used to compute power is zero, in which case it is not possible to correct for bias using this method. As an alternative to formally modeling a correction for publication bias whenever researchers assume an effect size estimate is biased, researchers can simply use a more conservative effect size estimate, for example by computing power based on the lower limit of a 60% two-sided confidence interval around the effect size estimate, which Perugini, Gallucci, and Costantini (2014) refer to as safeguard power. Both these approaches lead to a more conservative power analysis, but not necessarily a more accurate power analysis. It is simply not possible to perform an accurate power analysis on the basis of an effect size estimate from a study that might be biased and/or had a small sample size (Teare et al., 2014). If it is not possible to specify a smallest effect size of interest, and there is great uncertainty about which effect size to expect, it might be more efficient to perform a study with a sequential design (discussed below).
To summarize, an effect size from a previous study can be used in an a-priori power analysis if three conditions are met (see Table 6). First, the previous study is sufficiently similar to the planned study. Second, there was a low risk of bias (e.g., the effect size estimate comes from a Registered Report, or from an analysis for which results would not have impacted the likelihood of publication). Third, the sample size is large enough to yield a relatively accurate effect size estimate, based on the width of a 95% CI around the observed effect size estimate. There is always uncertainty around the effect size estimate, and entering the upper and lower limit of the 95% CI around the effect size estimate might be informative about the consequences of the uncertainty in the effect size estimate for an a-priori power analysis.
Using an Estimate from a Theoretical Model
When your theoretical model is sufficiently specific such that you can build a computational model, and you have knowledge about key parameters in your model that are relevant for the data you plan to collect, it is possible to base the expected effect size on an estimate derived from the computational model. For example, if one had strong ideas about the weights for each feature that stimuli share and differ on, it would be possible to compute predicted similarity judgments for pairs of stimuli based on Tversky's contrast model (Tversky, 1977), and estimate the predicted effect size for differences between experimental conditions. Although computational models that make point predictions are relatively rare, whenever they are available, they provide a strong justification for the effect size a researcher expects.
Compute the Width of the Confidence Interval around the Effect Size
If a researcher can estimate the standard deviation of the observations that will be collected, it is possible to compute an a-priori estimate of the width of the 95% confidence interval around an effect size (Kelley, 2007). Confidence intervals represent a range around an estimate that is wide enough so that, in the long run, the true population parameter will fall inside the confidence interval 100(1 − α)% of the time. In any single study the true population effect either falls in the confidence interval or it doesn't, but in the long run one can act as if the confidence interval includes the true population effect size (while keeping the error rate in mind). Cumming (2013) calls the difference between the observed effect size and the upper bound of the 95% confidence interval (or the lower bound of the 95% confidence interval) the margin of error.
If we compute the 95% CI for an effect size of d = 0 based on the t statistic and sample size (Smithson, 2003), we see that with 15 observations in each condition of an independent t test the 95% CI ranges from d = -0.72 to d = 0.72.[5] The margin of error is half the width of the 95% CI, 0.72. A Bayesian estimator who uses an uninformative prior would compute a credible interval with the same (or a very similar) upper and lower bound (Albers et al., 2018; Kruschke, 2011), and might conclude that after collecting the data they would be left with a range of plausible values for the population effect that is too large to be informative. Regardless of the statistical philosophy you plan to rely on when analyzing the data, evaluating what we could conclude based on the width of this interval tells us that with 15 observations per group we will not learn a lot.
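This computation can be reproduced with the MBESS package (Kelley's package accompanying the cited 2007 paper; the call below is a sketch):

```r
# 95% CI around an observed d of 0 with 15 observations per group,
# based on the (noncentral) t distribution.
library(MBESS)
ci.smd(smd = 0, n.1 = 15, n.2 = 15, conf.level = 0.95)
# The limits are approximately -0.72 and 0.72, so the margin of error is 0.72.
```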
One useful way of interpreting the width of the confidence interval is based on the effects you would be able to reject if the true effect size is 0. In other words, if there is no effect, which effect sizes would you have been able to reject given the collected data, and which would you not? Effect sizes in the range of d = 0.7 are findings such as "People become aggressive when they are provoked", "People prefer their own group to other groups", and "Romantic partners resemble one another in physical attractiveness" (Richard et al., 2003). The width of the confidence interval tells you that you can only reject the presence of effects that are so large that, if they existed, you would probably already have noticed them. If it is true that most effects you study are realistically much smaller than d = 0.7, there is a good chance that we learn nothing we didn't already know by performing a study with n = 15. Even without data, in most research lines we would not consider certain large effects plausible (although the effect sizes that are plausible differ between fields, as discussed below). On the other hand, in large samples where researchers can, for example, reject the presence of effects larger than d = 0.2 if the null hypothesis were true, this analysis of the width of the confidence interval would suggest that peers in many research lines would likely consider the study to be informative.
We see that the margin of error is almost, but not exactly, the same as the minimal statistically detectable effect (d = 0.75). The small difference arises because the 95% confidence interval is calculated based on the t distribution. If the true effect size is not zero, the confidence interval is calculated based on the non-central t distribution, and the 95% CI is asymmetric. Figure 7 visualizes three t distributions, one symmetric around 0, and two asymmetric distributions with a noncentrality parameter (the normalized difference between the means) of 2 and 3. The asymmetry is most clearly visible in very small samples (the distributions in the plot have 5 degrees of freedom) but remains noticeable in larger samples when calculating confidence intervals and statistical power. For example, a true effect size of d = 0.5 observed with 15 observations per group would yield ds = 0.50, 95% CI [-0.23, 1.22]. If we compute the 95% CI around the critical effect size, we would get ds = 0.75, 95% CI [0.00, 1.48]. The 95% CI ranges from exactly 0.00 to 1.48, in line with the relation between a confidence interval and a p value, where the 95% CI excludes zero if the test is statistically significant. As noted before, the different approaches recommended here to evaluate how informative a study is are often based on the same information.
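The CI around the critical effect size can be verified with the same ci.smd call (a sketch; the critical d is derived from the critical t value):

```r
# 95% CI around the minimal statistically detectable effect for n = 15 per group.
library(MBESS)
d_crit <- qt(0.975, df = 28) * sqrt(2 / 15)        # approximately 0.75
ci.smd(smd = d_crit, n.1 = 15, n.2 = 15, conf.level = 0.95)
# The lower limit is exactly 0.00, mirroring the relation between CIs and p values.
```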

Plot a Sensitivity Power Analysis
A sensitivity power analysis fixes the sample size, desired power, and alpha level, and answers the question which effect size a study could detect with a desired power. A sensitivity power analysis is therefore performed when the sample size is already known. Sometimes data has already been collected to answer a different research question, or the data is retrieved from an existing database, and you want to perform a sensitivity power analysis for a new statistical analysis. Other times, you might not have carefully considered the sample size when you initially collected the data, and want to reflect on the statistical power of the study for (ranges of) effect sizes of interest when analyzing the results. Finally, it is possible that the sample size will be collected in the future, but you know that due to resource constraints the maximum sample size you can collect is limited, and you want to reflect on whether the study has sufficient power for effects that you consider plausible and interesting (such as the smallest effect size of interest, or the effect size that is expected).
Assume a researcher plans to perform a study where 30 observations will be collected in total, 15 in each between participant condition. Figure 8 shows how to perform a sensitivity power analysis in G*Power for a study where we have decided to use an alpha level of 5%, and desire 90% power. The sensitivity power analysis reveals the designed study has 90% power to detect effects of at least d = 1.23. Perhaps a researcher believes that a desired power of 90% is quite high, and is of the opinion that it would still be interesting to perform a study if the statistical power was lower. It can then be useful to plot a sensitivity curve across a range of smaller effect sizes.
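The same sensitivity analysis can be run outside G*Power; a sketch with the pwr package, solving for the effect size:

```r
# Sensitivity power analysis: n = 15 per group, alpha = 0.05, desired power 90%.
library(pwr)
pwr.t.test(n = 15, sig.level = 0.05, power = 0.90)
# Solves for d: approximately 1.23, matching the G*Power result.
```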

The two dimensions of interest in a sensitivity power analysis are the effect sizes, and the power to observe a significant effect assuming a specific effect size. These two dimensions can be plotted against each other to create a sensitivity curve. For example, a sensitivity curve can be plotted in G*Power by clicking the ‘X-Y plot for a range of values’ button, as illustrated in Figure 9 . Researchers can examine which power they would have for an a-priori plausible range of effect sizes, or they can examine which effect sizes would provide reasonable levels of power. In simulation-based approaches to power analysis, sensitivity curves can be created by performing the power analysis for a range of possible effect sizes. Even if 50% power is deemed acceptable (in which case deciding to act as if the null hypothesis is true after a non-significant result is a relatively noisy decision procedure), Figure 9 shows a study design where power is extremely low for a large range of effect sizes that are reasonable to expect in most fields. Thus, a sensitivity power analysis provides an additional approach to evaluate how informative the planned study is, and can inform researchers that a specific design is unlikely to yield a significant effect for a range of effects that one might realistically expect.
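A sensitivity curve like the one in Figure 9 can be drawn by computing power across a range of effect sizes; a sketch:

```r
# Sensitivity curve: statistical power across effect sizes for n = 15 per group.
library(pwr)
d_range <- seq(0.1, 1.5, by = 0.01)
power_curve <- sapply(d_range, function(d)
  pwr.t.test(n = 15, d = d, sig.level = 0.05)$power)
plot(d_range, power_curve, type = "l",
     xlab = "Effect size (Cohen's d)", ylab = "Statistical power")
abline(h = 0.5, lty = 2)  # even 50% power requires d of about 0.74
```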

If the number of observations per group had been larger, the evaluation might have been more positive. We might not have had any specific effect size in mind, but if we had collected 150 observations per group, a sensitivity analysis could have shown that power was sufficient for a range of effects we believe is most interesting to examine, and we would still have approximately 50% power for quite small effects. For a sensitivity analysis to be meaningful, the sensitivity curve should be compared against a smallest effect size of interest, or a range of effect sizes that are expected. A sensitivity power analysis has no clear cut-offs to examine (Bacchetti, 2010) . Instead, the idea is to make a holistic trade-off between different effect sizes one might observe or care about, and their associated statistical power.
The Distribution of Effect Sizes in a Research Area
In my personal experience the most commonly entered effect size estimate in an a-priori power analysis for an independent t test is Cohen’s benchmark for a ‘medium’ effect size, because of what is known as the default effect . When you open G*Power, a ‘medium’ effect is the default option for an a-priori power analysis. Cohen’s benchmarks for small, medium, and large effects should not be used in an a-priori power analysis (Cook et al., 2014; Correll et al., 2020) , and Cohen regretted having proposed these benchmarks (Funder & Ozer, 2019) . The large variety in research topics means that any ‘default’ or ‘heuristic’ that is used to compute statistical power is not just unlikely to correspond to your actual situation, but it is also likely to lead to a sample size that is substantially misaligned with the question you are trying to answer with the collected data.
Some researchers have wondered what a better default would be, if researchers have no other basis for deciding upon an effect size for an a-priori power analysis. Brysbaert (2019) recommends d = 0.4 as a default in psychology, the average effect size observed in replication projects and several meta-analyses. It is impossible to know whether this average effect size is realistic, but it is clear there is huge heterogeneity across fields and research questions. Any average effect size will often deviate substantially from the effect size that should be expected in a planned study. Some researchers have suggested changing Cohen's benchmarks based on the distribution of effect sizes in a specific field (Bosco et al., 2015; Funder & Ozer, 2019; Hill et al., 2008; Kraft, 2020; Lovakov & Agadullina, 2017). As always, when effect size estimates are based on the published literature, one needs to evaluate the possibility that the estimates are inflated due to publication bias. Due to the large variation in effect sizes within a specific research area, there is little use in choosing a large, medium, or small effect size benchmark based on the empirical distribution of effect sizes in a field to perform a power analysis.
Having some knowledge about the distribution of effect sizes in the literature can be useful when interpreting the confidence interval around an effect size. If in a specific research area almost no effects are larger than the value you could reject in an equivalence test (e.g., if the observed effect size is 0, the design would only reject effects larger than for example d = 0.7), then it is a-priori unlikely that collecting the data would tell you something you didn’t already know.
It is more difficult to defend the use of a specific effect size derived from an empirical distribution of effect sizes as a justification for the effect size used in an a-priori power analysis. One might argue that the use of an effect size benchmark based on the distribution of effects in the literature will outperform a wild guess, but this is not a strong enough argument to form the basis of a sample size justification. There is a point where researchers need to admit they are not ready to perform an a-priori power analysis due to a lack of clear expectations (Scheel et al., 2020) . Alternative sample size justifications, such as a justification of the sample size based on resource constraints, perhaps in combination with a sequential study design, might be more in line with the actual inferential goals of a study.
So far, the focus has been on justifying the sample size for quantitative studies. There are a number of related topics that can be useful when designing an informative study. First, in addition to a-priori or prospective power analysis and sensitivity power analysis, it is important to discuss compromise power analysis (which is useful) and post-hoc or retrospective power analysis (which is not useful; e.g., Zumbo and Hubley (1998), Lenth (2007)). When sample sizes are justified based on an a-priori power analysis, it can be very efficient to collect data in sequential designs, where data collection is continued or terminated based on interim analyses of the data. Furthermore, it is worthwhile to consider ways to increase the power of a test without increasing the sample size. An additional point of attention is to have a good understanding of your dependent variable, especially its standard deviation. Finally, sample size justification is just as important in qualitative studies, and although there has been much less work on sample size justification in this domain, some proposals exist that researchers can use to design an informative study. Each of these topics is discussed in turn.
Compromise Power Analysis
In a compromise power analysis the sample size and the effect size are fixed, and the error rates of the test are calculated based on a desired ratio between the Type I and Type II error rate. A compromise power analysis is useful both when a very large number of observations will be collected and when only a small number of observations can be collected.
In the first situation a researcher might be fortunate enough to be able to collect so many observations that the statistical power for a test is very high for all effect sizes that are deemed interesting. For example, imagine a researcher has access to 2000 employees who are all required to answer questions during a yearly evaluation in a company that is testing an intervention that should reduce subjectively reported stress levels. The researcher is quite confident that an effect smaller than d = 0.2 is not large enough to be subjectively noticeable for individuals (Jaeschke et al., 1989). With an alpha level of 0.05 the researcher would have a statistical power of 0.994, or a Type II error rate of 0.006. This means that for a smallest effect size of interest of d = 0.2 the researcher is 8.3 times more likely to make a Type I error than a Type II error.
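These numbers can be checked in R (a sketch using the pwr package; the 2000 employees are assumed to be split into two groups of 1000):

```r
# Power and error-rate ratio for 1000 observations per condition and d = 0.2.
library(pwr)
res <- pwr.t.test(n = 1000, d = 0.2, sig.level = 0.05)
res$power               # approximately 0.994, so beta is approximately 0.006
0.05 / (1 - res$power)  # Type I / Type II error ratio, approximately 8.3
```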
Although the original idea of designing studies that control Type I and Type II error rates was that researchers would need to justify their error rates (Neyman & Pearson, 1933), a common heuristic is to set the Type I error rate to 0.05 and the Type II error rate to 0.20, which treats a Type I error as four times as serious as a Type II error. The default use of 80% power (or a 20% Type II or β error) is based on a personal preference of Cohen (1988), who writes:
It is proposed here as a convention that, when the investigator has no other basis for setting the desired power value, the value .80 be used. This means that β is set at .20. This arbitrary but reasonable value is offered for several reasons (Cohen, 1965, pp. 98-99). The chief among them takes into consideration the implicit convention for α of .05. The β of .20 is chosen with the idea that the general relative seriousness of these two kinds of errors is of the order of .20/.05, i.e., that Type I errors are of the order of four times as serious as Type II errors. This .80 desired power convention is offered with the hope that it will be ignored whenever an investigator can find a basis in his substantive concerns in his specific research investigation to choose a value ad hoc.
We see that conventions are built on conventions: the norm to aim for 80% power is built on the norm to set the alpha level at 5%. What we should take away from Cohen is not that we should aim for 80% power, but that we should justify our error rates based on the relative seriousness of each error. This is where compromise power analysis comes in. If you share Cohen's belief that a Type I error is 4 times as serious as a Type II error, then, building on our earlier example of 2000 employees, it makes sense to adjust the Type I error rate when the Type II error rate is low for all effect sizes of interest (Cascio & Zedeck, 1983). Indeed, Erdfelder, Faul, and Buchner (1996) created the G*Power software in part to give researchers a tool to perform compromise power analyses.
Figure 10 illustrates how a compromise power analysis is performed in G*Power when a Type I error is deemed to be equally costly as a Type II error, which for a study with 1000 observations per condition would lead to a Type I error and a Type II error of 0.0179. As Faul, Erdfelder, Lang, and Buchner (2007) write:
Of course, compromise power analyses can easily result in unconventional significance levels greater than α = .05 (in the case of small samples or effect sizes) or less than α = .001 (in the case of large samples or effect sizes). However, we believe that the benefit of balanced Type I and Type II error risks often offsets the costs of violating significance level conventions.

This brings us to the second situation where a compromise power analysis can be useful, which is when we know the statistical power in our study is low. Although it is highly undesirable to make decisions when error rates are high, if one finds oneself in a situation where a decision must be made based on little information, Winer (1962) writes:
The frequent use of the .05 and .01 levels of significance is a matter of convention having little scientific or logical basis. When the power of tests is likely to be low under these levels of significance, and when Type I and Type II errors are of approximately equal importance, the .30 and .20 levels of significance may be more appropriate than the .05 and .01 levels.
For example, if we plan to perform a two-sided t test, can feasibly collect at most 50 observations in each independent group, and expect a population effect size of 0.5, we would have 70% power if we set our alpha level to 0.05. We can choose to weigh both types of error equally, and set the alpha level to 0.149, to end up with a statistical power for an effect of d = 0.5 of 0.851 (given a 0.149 Type II error rate). The choice of α and β in a compromise power analysis can be extended to take prior probabilities of the null and alternative hypothesis into account (Maier & Lakens, 2022; Miller & Ulrich, 2019; Murphy et al., 2014) .
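A compromise power analysis is easy to sketch in R: find the alpha level at which the ratio of Type II to Type I errors equals the desired weight q. This reimplements the idea behind G*Power's compromise analysis under simple assumptions, not G*Power's exact routine.

```r
# Compromise power analysis for an independent t test: solve for the alpha
# level at which beta / alpha equals q (q = 1 weighs both errors equally).
library(pwr)

compromise_alpha <- function(n, d, q = 1) {
  f <- function(alpha) {
    beta <- 1 - pwr.t.test(n = n, d = d, sig.level = alpha)$power
    beta - q * alpha                 # zero when beta / alpha == q
  }
  uniroot(f, interval = c(1e-6, 0.5))$root
}

compromise_alpha(n = 1000, d = 0.2)  # approximately 0.018, as in Figure 10
compromise_alpha(n = 50,   d = 0.5)  # approximately 0.149, with power 0.851
```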
A compromise power analysis requires a researcher to specify the sample size. This sample size itself requires a justification, so a compromise power analysis will typically be performed together with a resource constraint justification for a sample size. It is especially important to perform a compromise power analysis if your resource constraint justification is strongly based on the need to make a decision, in which case a researcher should think carefully about the Type I and Type II error rates stakeholders are willing to accept. However, a compromise power analysis also makes sense if the sample size is very large, but a researcher did not have the freedom to set the sample size. This might happen if, for example, data collection is part of a larger international study and the sample size is based on other research questions. In designs where the Type II error rate is very small (and power is very high) some statisticians have recommended lowering the alpha level to prevent Lindley's paradox, a situation where a significant effect (p < α) is evidence for the null hypothesis (Good, 1992; Jeffreys, 1939). Lowering the alpha level as a function of the statistical power of the test can prevent this paradox, providing another argument for a compromise power analysis when sample sizes are large (Maier & Lakens, 2022). Finally, a compromise power analysis needs a justification for the effect size, based either on a smallest effect size of interest or on an effect size that is expected. Table 7 lists three aspects that should be discussed alongside a reported compromise power analysis.
What to do if Your Editor Asks for Post-hoc Power?
Post-hoc, retrospective, or observed power describes the statistical power of a test computed under the assumption that the effect size estimated from the collected data is the true effect size (Lenth, 2007; Zumbo & Hubley, 1998). A post-hoc power analysis is therefore not performed before looking at the data, is not based on effect sizes that are deemed interesting, as in an a-priori power analysis, and is unlike a sensitivity power analysis, where a range of interesting effect sizes is evaluated. Because a post-hoc or retrospective power analysis is based on the effect size observed in the data that has been collected, it adds no information beyond the reported p value; it merely presents the same information in a different way. Despite this fact, editors and reviewers often ask authors to perform a post-hoc power analysis to interpret non-significant results. This is not a sensible request, and whenever it is made, you should not comply with it. Instead, you should perform a sensitivity power analysis, and discuss the power for the smallest effect size of interest and a realistic range of expected effect sizes.
Post-hoc power is directly related to the p value of the statistical test (Hoenig & Heisey, 2001) . For a z test where the p value is exactly 0.05, post-hoc power is always 50%. The reason for this relationship is that when a p value is observed that equals the alpha level of the test (e.g., 0.05), the observed z score of the test is exactly equal to the critical value of the test (e.g., z = 1.96 in a two-sided test with a 5% alpha level). Whenever the alternative hypothesis is centered on the critical value half the values we expect to observe if this alternative hypothesis is true fall below the critical value, and half fall above the critical value. Therefore, a test where we observed a p value identical to the alpha level will have exactly 50% power in a post-hoc power analysis, as the analysis assumes the observed effect size is true.
For other statistical tests, where the alternative distribution is not symmetric (such as for the t test, where the alternative hypothesis follows a non-central t distribution, see Figure 7 ), a p = 0.05 does not directly translate to an observed power of 50%, but by plotting post-hoc power against the observed p value we see that the two statistics are always directly related. As Figure 11 shows, if the p value is non-significant (i.e., larger than 0.05) the observed power will be less than approximately 50% in a t test. Lenth (2007) explains how observed power is also completely determined by the observed p value for F tests, although the statement that a non-significant p value implies a power less than 50% no longer holds.
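For the simple case of a two-sided z test the relation can be shown in a few lines of R (a sketch; the function name is mine):

```r
# Post-hoc ('observed') power for a two-sided z test, computed by treating the
# observed z score (implied by the p value) as the true noncentrality parameter.
observed_power_z <- function(p, alpha = 0.05) {
  z_obs  <- qnorm(1 - p / 2)
  z_crit <- qnorm(1 - alpha / 2)
  pnorm(z_obs - z_crit) + pnorm(-z_obs - z_crit)
}
observed_power_z(0.05)  # exactly 0.50 when the p value equals the alpha level
observed_power_z(0.20)  # approximately 0.25: larger p values imply lower power
```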

When editors or reviewers ask researchers to report post-hoc power analyses they would like to be able to distinguish between true negatives (concluding there is no effect, when there is no effect) and false negatives (a Type II error, concluding there is no effect, when there actually is an effect). Since reporting post-hoc power is just a different way of reporting the p value, reporting the post-hoc power will not provide an answer to the question editors are asking (Hoenig & Heisey, 2001; Lenth, 2007; Schulz & Grimes, 2005; Yuan & Maxwell, 2005) . To be able to draw conclusions about the absence of a meaningful effect, one should perform an equivalence test, and design a study with high power to reject the smallest effect size of interest (Lakens, Scheel, et al., 2018) . Alternatively, if no smallest effect size of interest was specified when designing the study, researchers can report a sensitivity power analysis.
Sequential Analyses
Whenever the sample size is justified based on an a-priori power analysis it can be very efficient to collect data in a sequential design. Sequential designs control error rates across multiple looks at the data (e.g., after 50, 100, and 150 observations have been collected) and can reduce the average expected sample size that is collected compared to a fixed design where data is only analyzed after the maximum sample size is collected (Proschan et al., 2006; Wassmer & Brannath, 2016) . Sequential designs have a long history (Dodge & Romig, 1929) , and exist in many variations, such as the Sequential Probability Ratio Test (Wald, 1945) , combining independent statistical tests (Westberg, 1985) , group sequential designs (Jennison & Turnbull, 2000) , sequential Bayes factors (Schönbrodt et al., 2017) , and safe testing (Grünwald et al., 2019) . Of these approaches, the Sequential Probability Ratio Test is most efficient if data can be analyzed after every observation (Schnuerch & Erdfelder, 2020) . Group sequential designs, where data is collected in batches, provide more flexibility in data collection, error control, and corrections for effect size estimates (Wassmer & Brannath, 2016) . Safe tests provide optimal flexibility if there are dependencies between observations (ter Schure & Grünwald, 2019) .
Sequential designs are especially useful when there is considerable uncertainty about the effect size, or when it is plausible that the true effect size is larger than the smallest effect size of interest the study is designed to detect (Lakens, 2014). In such situations data collection can terminate early if the effect size is larger than the smallest effect size of interest, but can continue to the maximum sample size if needed. Sequential designs can prevent waste when testing hypotheses, both by stopping early when the null hypothesis can be rejected and by stopping early when the presence of a smallest effect size of interest can be rejected (i.e., stopping for futility). Group sequential designs are currently the most widely used approach to sequential analyses, and can be planned and analyzed using rpact (Wassmer & Pahlke, 2019) or gsDesign (K. M. Anderson, 2014).[6]
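As an illustration, a group sequential design with three looks could be set up in rpact along the following lines (a sketch; the design choices, such as O'Brien-Fleming-type alpha spending and d = 0.5, are my assumptions):

```r
# Group sequential design with three equally spaced looks at the data.
library(rpact)

design <- getDesignGroupSequential(kMax = 3, alpha = 0.05, beta = 0.10,
                                   sided = 2, typeOfDesign = "asOF")

# Sample sizes per look for a two-group comparison of means (d = 0.5 when sd = 1).
getSampleSizeMeans(design, groups = 2, alternative = 0.5, stDev = 1)
```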
Increasing Power Without Increasing the Sample Size
The most straightforward approach to increasing the informational value of studies is to increase the sample size. Because resources are often limited, it is also worthwhile to explore ways to increase the power of a test without increasing the sample size. The first option is to use directional tests where relevant. Researchers often make directional predictions, such as 'we predict X is larger than Y'. The statistical test that logically follows from this prediction is a directional (or one-sided) t test. A directional test moves the Type I error rate to one tail of the distribution, which lowers the critical value, and therefore requires fewer observations to achieve the same statistical power.
Although there is some discussion about when directional tests are appropriate, they are perfectly defensible from a Neyman-Pearson perspective on hypothesis testing (Cho & Abe, 2013), which makes a (preregistered) directional test a straightforward way to increase both the power of a test and the riskiness of the prediction. However, there might be situations where you do not want to ask a directional question. Sometimes, especially in research with applied consequences, it might be important to examine whether a null effect can be rejected, even if the effect is in the opposite direction from the one predicted. For example, when you are evaluating a recently introduced educational intervention, and you predict the intervention will increase the performance of students, you might want to explore the possibility that students perform worse, to be able to recommend abandoning the new intervention. In such cases it is also possible to distribute the error rate in a 'lop-sided' manner, for example assigning a stricter error rate to effects in the negative than in the positive direction (Rice & Gaines, 1994).
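The sample size savings of a directional test are easy to quantify; a sketch for d = 0.5 and 80% power:

```r
# Required sample size per group for a two-sided versus a one-sided t test.
library(pwr)
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80,
           alternative = "two.sided")  # approximately 64 per group
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80,
           alternative = "greater")    # approximately 51 per group
```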
Another approach to increasing power without increasing the sample size is to increase the alpha level of the test, as explained in the section on compromise power analysis. Obviously, this comes at the cost of an increased probability of making a Type I error. The risk of making either type of error should be carefully weighed, which typically requires taking into account the prior probability that the null hypothesis is true (Cascio & Zedeck, 1983; Miller & Ulrich, 2019; Mudge et al., 2012; Murphy et al., 2014). If you have to make a decision, or want to make a claim, and the data you can feasibly collect are limited, increasing the alpha level is justified, either based on a compromise power analysis or based on a cost-benefit analysis (Baguley, 2004; Field et al., 2004).
Another widely recommended approach to increasing the power of a study is to use a within-participants design where possible. In almost all cases where a researcher is interested in detecting a difference between groups, a within-participants design will require collecting fewer participants than a between-participants design. The reason for the decrease in the sample size is explained by the equation below from Maxwell, Delaney, and Kelley (2017). The number of participants needed in a two-group within-participants design (NW) relative to the number of participants needed in a two-group between-participants design (NB), assuming normal distributions, is:

NW = NB × (1 − ρ) / 2
The required number of participants is divided by two because in a within-participants design with two conditions every participant provides two data points. The extent to which this reduces the sample size compared to a between-participants design also depends on the correlation between the dependent variables (e.g., the correlation between the measure collected in a control task and an experimental task), as indicated by the (1 − ρ) part of the equation. If the correlation is 0, a within-participants design simply needs half as many participants as a between-participants design (e.g., 64 instead of 128 participants). The higher the correlation, the larger the relative benefit of within-participants designs; for negative correlations the benefit shrinks, until it disappears entirely at ρ = −1. Especially when dependent variables in within-participants designs are positively correlated, within-participants designs will greatly increase the power you can achieve given the sample size you have available. Use within-participants designs when possible, but weigh the benefits of higher power against the downsides of order effects or carryover effects that might be problematic in a within-participants design (Maxwell et al., 2017).[7] For designs with multiple factors with multiple levels it can be difficult to specify the full correlation matrix that specifies the expected population correlation for each pair of measurements (Lakens & Caldwell, 2021). In these cases sequential analyses might provide a solution.
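The equation is simple enough to explore directly; a sketch:

```r
# Participants needed in a two-group within-participants design (N_W), given
# the total needed in a between-participants design (N_B) and the correlation
# rho between the two measurements: N_W = N_B * (1 - rho) / 2.
n_within <- function(n_between, rho) n_between * (1 - rho) / 2

n_within(128, rho = 0.0)  # 64: half as many participants
n_within(128, rho = 0.5)  # 32: positive correlations help even more
n_within(128, rho = -1.0) # 128: the relative benefit disappears
```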
In general, the smaller the variation, the larger the standardized effect size (because we are dividing the raw effect by a smaller standard deviation) and thus the higher the power given the same number of observations. Some additional recommendations are provided in the literature (Allison et al., 1997; Bausell & Li, 2002; Hallahan & Rosenthal, 1996) , such as:
Use better ways to screen participants for studies where participants need to be screened before participation.
Assign participants unequally to conditions (if data in the control condition is much cheaper to collect than data in the experimental condition, for example).
Use reliable measures that have low error variance (Williams et al., 1995) .
Make smart use of preregistered covariates (Meyvis & Van Osselaer, 2018).
It is important to consider whether these ways of reducing the variation in the data come at too large a cost to external validity. For example, in an intention-to-treat analysis in randomized controlled trials, participants who do not comply with the protocol are retained in the analysis, so that the effect size from the study accurately represents the effect of implementing the intervention in the population, and not the effect of the intervention only on those people who perfectly follow the protocol (Gupta, 2011). Similar trade-offs between reducing the variance and external validity exist in other research areas.
Know Your Measure
Although it is convenient to talk about standardized effect sizes, it is generally preferable if researchers can interpret effects in the raw (unstandardized) scores, and have knowledge about the standard deviation of their measures (Baguley, 2009; Lenth, 2001) . To make it possible for a research community to have realistic expectations about the standard deviation of measures they collect, it is beneficial if researchers within a research area use the same validated measures. This provides a reliable knowledge base that makes it easier to plan for a desired accuracy, and to use a smallest effect size of interest on the unstandardized scale in an a-priori power analysis.
In addition to knowledge about the standard deviation, it is important to have knowledge about the correlations between dependent variables (for example because Cohen's dz for a dependent t test relies on the correlation between the two measurements). The more complex the model, the more aspects of the data-generating process need to be known to make predictions. For example, in hierarchical models researchers need knowledge about variance components to be able to perform a power analysis (DeBruine & Barr, 2019; Westfall et al., 2014). Finally, it is important to know the reliability of your measure (Parsons et al., 2019), especially when relying on an effect size from a published study that used a measure with different reliability, or when the same measure is used in different populations, in which case it is possible that measurement reliability differs between populations. With the increasing availability of open data, it will hopefully become easier to estimate these parameters using data from earlier studies.
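The dependence of dz on the correlation is easy to demonstrate (a sketch with illustrative numbers):

```r
# Cohen's d_z depends on the standard deviation of the difference scores,
# which in turn depends on the correlation r between the two measurements.
sd_diff <- function(sd1, sd2, r) sqrt(sd1^2 + sd2^2 - 2 * r * sd1 * sd2)

mdiff <- 0.5                     # illustrative raw mean difference
mdiff / sd_diff(1, 1, r = 0.7)   # d_z is about 0.65 for highly correlated measures
mdiff / sd_diff(1, 1, r = 0.0)   # d_z is about 0.35 for uncorrelated measures
```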
If we calculate a standard deviation from a sample, this value is an estimate of the true value in the population. In small samples our estimate can be quite far off, but due to the law of large numbers, as the sample size increases, we will estimate the standard deviation more accurately. Since the sample standard deviation is an estimate with uncertainty, we can calculate a confidence interval around it (Smithson, 2003), and design pilot studies that will yield a sufficiently reliable estimate of the standard deviation. The confidence interval for the variance σ² is given by the formula below, and the confidence interval for the standard deviation is the square root of these limits:

(N − 1)s² / χ²(1 − α/2, N − 1) ≤ σ² ≤ (N − 1)s² / χ²(α/2, N − 1)
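In R this computation is a couple of lines (a sketch; the function name is mine):

```r
# Confidence interval for the population standard deviation, based on the
# chi-squared distribution of the sample variance.
ci_sd <- function(s, N, conf.level = 0.95) {
  a <- 1 - conf.level
  lower_var <- (N - 1) * s^2 / qchisq(1 - a / 2, df = N - 1)
  upper_var <- (N - 1) * s^2 / qchisq(a / 2, df = N - 1)
  sqrt(c(lower = lower_var, upper = upper_var))
}

ci_sd(s = 1, N = 20)   # roughly 0.76 to 1.46: small pilots estimate sd poorly
ci_sd(s = 1, N = 200)  # roughly 0.91 to 1.10
```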
Whenever there is uncertainty about parameters, researchers can use sequential designs to perform an internal pilot study (Wittes & Brittain, 1990) . The idea behind an internal pilot study is that researchers specify a tentative sample size for the study, perform an interim analysis, use the data from the internal pilot study to update parameters such as the variance of the measure, and finally update the final sample size that will be collected. As long as interim looks at the data are blinded (e.g., information about the conditions is not taken into account) the sample size can be adjusted based on an updated estimate of the variance without any practical consequences for the Type I error rate (Friede & Kieser, 2006; Proschan, 2005) . Therefore, if researchers are interested in designing an informative study where the Type I and Type II error rates are controlled, but they lack information about the standard deviation, an internal pilot study might be an attractive approach to consider (Chang, 2016) .
Conventions as Meta-Heuristics
Even when a researcher might not use a heuristic to directly determine the sample size in a study, there is an indirect way in which heuristics play a role in sample size justifications. Sample size justifications based on inferential goals such as a power analysis, accuracy, or a decision all require researchers to choose values for a desired Type I and Type II error rate, a desired accuracy, or a smallest effect size of interest. Although it is sometimes possible to justify these values as described above (e.g., based on a cost-benefit analysis), a solid justification of these values might require dedicated research lines. Performing such research lines will not always be possible, and these studies might themselves not be worth the costs (e.g., it might require less resources to perform a study with an alpha level that most peers would consider conservatively low, than to collect all the data that would be required to determine the alpha level based on a cost-benefit analysis). In these situations, researchers might use values based on a convention.
When it comes to a desired width of a confidence interval, a desired power, or any other input values required to perform a sample size computation, it is important to transparently report the use of a heuristic or convention (for example by using the accompanying online Shiny app). A convention such as the use of a 5% Type I error rate and 80% power practically functions as a lower threshold of the minimum informational value peers are expected to accept without any justification (whereas with a justification, higher error rates can also be deemed acceptable by peers). It is important to realize that none of these values are set in stone. Journals are free to specify that they desire a higher informational value in their author guidelines (e.g., Nature Human Behaviour requires registered reports to be designed to achieve 95% statistical power, and my own department has required staff to submit ERB proposals where, whenever possible, the study was designed to achieve 90% power). Researchers who choose to design studies with a higher informational value than a conventional minimum should receive credit for doing so.
In the past some fields have changed conventions, such as the 5 sigma threshold now used in physics to declare a discovery instead of a 5% Type I error rate. In other fields such attempts have been unsuccessful (e.g., Johnson (2013) ). Improved conventions should be context dependent, and it seems sensible to establish them through consensus meetings (Mullan & Jacoby, 1985) . Consensus meetings are common in medical research, and have been used to decide upon a smallest effect size of interest (for an example, see Fried, Boers, and Baker (1993) ). In many research areas current conventions can be improved. For example, it seems peculiar to have a default alpha level of 5% both for single studies and for meta-analyses, and one could imagine a future where the default alpha level in meta-analyses is much lower than 5%. Hopefully, making the lack of an adequate justification for certain input values in specific situations more transparent will motivate fields to start a discussion about how to improve current conventions. The online Shiny app links to good examples of justifications where possible, and will continue to be updated as better justifications are developed in the future.
Sample Size Justification in Qualitative Research
A value of information perspective to sample size justification also applies to qualitative research. A sample size justification in qualitative research should be based on the consideration that the cost of collecting data from additional participants does not yield new information that is valuable enough given the inferential goals. One widely used application of this idea is known as saturation and is indicated by the observation that new data replicates earlier observations, without adding new information (Morse, 1995) . For example, let’s imagine we ask people why they have a pet. Interviews might reveal reasons that are grouped into categories, but after interviewing 20 people, no new categories emerge, at which point saturation has been reached. Alternative philosophies to qualitative research exist, and not all value planning for saturation. Regrettably, principled approaches to justify sample sizes have not been developed for these alternative philosophies (Marshall et al., 2013) .
When sampling, the goal is often not to pick a representative sample, but a sample that contains a sufficiently diverse range of subjects such that saturation is reached efficiently. Fugard and Potts (2015) show how to move toward a more informed justification for the sample size in qualitative research based on 1) the number of codes that exist in the population (e.g., the number of reasons people have pets), 2) the probability that a code can be observed in a single information source (e.g., the probability that someone you interview will mention each possible reason for having a pet), and 3) the number of times you want to observe each code. They provide a formula, implemented in R, based on binomial probabilities to compute the required sample size that reaches a desired probability of observing the codes.
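A minimal sketch of this binomial logic (my own implementation, not Fugard and Potts' exact formula):

```r
# Smallest number of information sources n such that a code with prevalence p
# is observed at least k times with probability of at least 'assurance'.
n_for_code <- function(p, k, assurance = 0.95, n_max = 1000) {
  for (n in k:n_max) {
    if (1 - pbinom(k - 1, size = n, prob = p) >= assurance) return(n)
  }
  NA  # not reachable within n_max sources
}

# Observing a code held by 30% of the population at least twice, with 95% assurance:
n_for_code(p = 0.3, k = 2)  # 14 interviews
```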
A more advanced approach is used by van Rijnsoever (2017), who also explores the importance of different sampling strategies. In general, purposefully sampling information from sources you expect will yield novel information is much more efficient than random sampling, but this also requires a good overview of the expected codes and the sub-populations in which each code can be observed. Sometimes it is possible to identify information sources that, when interviewed, would yield at least one new code (e.g., based on informal communication before an interview). A good sample size justification in qualitative research is based on 1) an identification of the populations, including any sub-populations, 2) an estimate of the number of codes in the (sub-)population, 3) the probability that a code is encountered in an information source, and 4) the sampling strategy that is used.
Providing a coherent sample size justification is an essential step in designing an informative study. There are multiple approaches to justifying the sample size in a study, depending on the goal of the data collection, the resources that are available, and the statistical approach that is used to analyze the data. An overarching principle in all these approaches is that researchers consider the value of the information they collect in relation to their inferential goals.
The process of justifying a sample size when designing a study should sometimes lead to the conclusion that it is not worthwhile to collect the data, because the study does not have sufficient informational value to justify the costs. There will be cases where it is unlikely there will ever be enough data to perform a meta-analysis (for example because of a lack of general interest in the topic), the information will not be used to make a decision or claim, and the statistical tests do not allow you to test a hypothesis with reasonable error rates or to estimate an effect size with sufficient accuracy. If there is no good justification to collect the maximum number of observations that one can feasibly collect, performing the study anyway is a waste of time and/or money (Brown, 1983; Button et al., 2013; S. D. Halpern et al., 2002) .
The awareness that sample sizes in past studies were often too small to meet any realistic inferential goals is growing among psychologists (Button et al., 2013; Fraley & Vazire, 2014; Lindsay, 2015; Sedlmeier & Gigerenzer, 1989) . As an increasing number of journals start to require sample size justifications, some researchers will realize they need to collect larger samples than they were used to. This means researchers will need to request more money for participant payment in grant proposals, or that researchers will need to increasingly collaborate (Moshontz et al., 2018) . If you believe your research question is important enough to be answered, but you are not able to answer the question with your current resources, one approach to consider is to organize a research collaboration with peers, and pursue an answer to this question collectively.
A sample size justification should not be seen as a hurdle that researchers need to pass before they can submit a grant, ethical review board proposal, or manuscript for publication. When a sample size is simply stated, instead of carefully justified, it can be difficult to evaluate whether the value of the information a researcher aims to collect outweighs the costs of data collection. Being able to report a solid sample size justification means a researcher knows what they want to learn from a study, and makes it possible to design a study that can provide an informative answer to a scientific question.
This work was funded by VIDI Grant 452-17-013 from the Netherlands Organisation for Scientific Research. I would like to thank Shilaan Alzahawi, José Biurrun, Aaron Caldwell, Gordon Feld, Yoav Kessler, Robin Kok, Maximilian Maier, Matan Mazor, Toni Saari, Andy Siddall, and Jesper Wulff for feedback on an earlier draft. A computationally reproducible version of this manuscript is available at https://github.com/Lakens/sample_size_justification. An interactive online form to complete a sample size justification implementing the recommendations in this manuscript can be found at https://shiny.ieis.tue.nl/sample_size_justification/.
I have no competing interests to declare.
Footnotes

[1] The topic of power analysis for meta-analyses is outside the scope of this manuscript, but see Hedges and Pigott (2001) and Valentine, Pigott, and Rothstein (2010).
[2] It is possible to argue we are still making an inference, even when the entire population is observed, because we have observed a metaphorical population from one of many possible worlds; see Spiegelhalter (2019).
[3] Power analyses can be performed based on standardized effect sizes or effect sizes expressed on the original scale. It is important to know the standard deviation of the effect (see the 'Know Your Measure' section), but I find it slightly more convenient to talk about standardized effects in the context of sample size justifications.
[4] These figures can be reproduced and adapted in an online Shiny app: http://shiny.ieis.tue.nl/d_p_power/.
[5] Confidence intervals around effect sizes can be computed using the MOTE Shiny app: https://www.aggieerin.com/shiny-server/.
[6] Shiny apps are available for both rpact (https://rpact.shinyapps.io/public/) and gsDesign (https://gsdesign.shinyapps.io/prod/).
[7] You can compare within- and between-participants designs in this Shiny app: http://shiny.ieis.tue.nl/within_between.
- Relational Ethics
- Sensitive Topics
- Audit Trail
- Confirmability
- Credibility
- Dependability
- Inter- and Intracoder Reliability
- Observer Bias
- Subjectivity
- Transferability
- Translatability
- Transparency
- Trustworthiness
- Verification
- Discursive Psychology
- Chaos and Complexity Theories
- Constructivism
- Critical Humanism
- Critical Pragmatism
- Critical Race Theory
- Critical Realism
- Critical Theory
- Deconstruction
- Epistemology
- Essentialism
- Existentialism
- Feminist Epistemology
- Grand Narrative
- Grand Theory
- Nonessentialism
- Objectivism
- Postcolonialism
- Postmodernism
- Postpositivism
- Postrepresentation
- Poststructuralism
- Queer Theory
- Reality and Multiple Realities
- Representation
- Social Constructionism
- Structuralism
- Subjectivism
- Symbolic Interactionism
Qualitative Study
Steven Tenny; Janelle M. Brannan; Grace D. Brannan. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-. Last update: September 18, 2022.
- Introduction
Qualitative research is a type of research that explores and provides deeper insights into real-world problems. [1] Instead of collecting numerical data points or intervening and introducing treatments as in quantitative research, qualitative research helps generate hypotheses and further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It can be structured as a stand-alone study relying purely on qualitative data, or it can be part of mixed-methods research that combines qualitative and quantitative data. This review introduces the reader to some basic concepts, definitions, terminology, and applications of qualitative research.
Qualitative research, at its core, asks open-ended questions whose answers are not easily put into numbers, such as 'how' and 'why'. [2] Because of the open-ended nature of the research questions, qualitative research design is often not linear in the way quantitative design is. [2] One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. [3] Phenomena such as experiences, attitudes, and behaviors are hard to capture accurately in numbers, whereas a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a certain time or during an event of interest. Quantifying qualitative data is certainly possible, but qualitative analysis ultimately looks for themes and patterns that resist quantification, and it is important to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.
Qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches and the philosophical paradigms associated with them necessarily compete. In practice, they are neither opposites nor incompatible. [4] For instance, qualitative research can expand and deepen understanding of data or results obtained from quantitative analysis. Say a quantitative analysis has determined that there is a correlation between length of stay and level of patient satisfaction; the numbers alone cannot explain why this correlation exists, whereas qualitative follow-up with patients can. This dual-focus scenario shows one way in which qualitative and quantitative research can be integrated.
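To make that handoff concrete, here is a small, purely illustrative Python sketch (not from the article): a quantitative correlation flags a pattern, and qualitative follow-up questions are drafted to explain it. The data values and follow-up questions are invented, and `statistics.correlation` requires Python 3.10 or later.

```python
# Toy illustration of the mixed-methods handoff: a quantitative correlation
# flags a pattern; qualitative follow-up is drafted to explain it.
from statistics import correlation  # Python 3.10+

length_of_stay_days = [2, 3, 5, 7, 10, 14]
satisfaction_score  = [9, 8, 7, 6, 5, 3]   # hypothetical 1-10 ratings

r = correlation(length_of_stay_days, satisfaction_score)
print(f"Pearson r = {r:.2f}")  # strongly negative in this toy data

# The number alone cannot say *why*; qualitative follow-up can.
if abs(r) > 0.5:
    follow_up = [
        "How did you feel as your stay went on?",
        "What changed about your care over time?",
    ]
    print("Suggested interview questions:", follow_up)
```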
Examples of Qualitative Research Approaches
Ethnography
Ethnography as a research design has its origins in social and cultural anthropology and involves the researcher being directly immersed in the participants' environment. [2] Through this immersion, the ethnographer can use a variety of data collection techniques with the aim of producing a comprehensive account of the social phenomena that occurred during the research period. [2] That is to say, the researcher's aim with ethnography is to immerse themselves in the research population and come out of it with accounts of actions, behaviors, and events through the eyes of someone involved in that population. This direct involvement with the target population is one benefit of ethnographic research, because it makes it possible to capture data that would otherwise be very difficult to extract and record.
Grounded Theory
Grounded Theory is the “generation of a theoretical model through the experience of observing a study population and developing a comparative analysis of their speech and behavior.” [5] As opposed to quantitative research, which is deductive and tests or verifies an existing theory, grounded theory research is inductive and therefore lends itself to studying social interactions or experiences. [3] [2] In essence, the goal of Grounded Theory is to explain, for example, how and why an event occurs or how and why people might behave a certain way. By observing the population, a researcher using the Grounded Theory approach can develop a theory that explains the phenomena of interest.
Phenomenology
Phenomenology is defined as the “study of the meaning of phenomena or the study of the particular”. [5] At first glance, it might seem that Grounded Theory and Phenomenology are quite similar, but upon careful examination, the differences can be seen. At its core, phenomenology looks to investigate experiences from the perspective of the individual. [2] Phenomenology is essentially looking into the ‘lived experiences’ of the participants and aims to examine how and why participants behaved a certain way, from their perspective . Herein lies one of the main differences between Grounded Theory and Phenomenology. Grounded Theory aims to develop a theory for social phenomena through an examination of various data sources whereas Phenomenology focuses on describing and explaining an event or phenomena from the perspective of those who have experienced it.
Narrative Research
One of qualitative research’s strengths lies in its ability to tell a story, often from the perspective of those directly involved in it. Reporting on qualitative research involves including details and descriptions of the setting involved and quotes from participants. This detail is called ‘thick’ or ‘rich’ description and is a strength of qualitative research. Narrative research is rife with the possibilities of ‘thick’ description as this approach weaves together a sequence of events, usually from just one or two individuals, in the hopes of creating a cohesive story, or narrative. [2] While it might seem like a waste of time to focus on such a specific, individual level, understanding one or two people’s narratives for an event or phenomenon can help to inform researchers about the influences that helped shape that narrative. The tension or conflict of differing narratives can be “opportunities for innovation”. [2]
Research Paradigm
Research paradigms are the assumptions, norms, and standards that underpin different approaches to research; essentially, they are the ‘worldview’ that informs research. [4] It is valuable for researchers, both qualitative and quantitative, to understand what paradigm they are working within, because understanding its theoretical basis allows them to recognize the strengths and weaknesses of the approach being used and adjust accordingly. Different paradigms have different ontologies and epistemologies. Ontology is defined as the “assumptions about the nature of reality,” whereas epistemology is defined as the “assumptions about the nature of knowledge” that inform the work researchers do. [2] Understanding the ontological and epistemological foundations of the paradigm one works within allows a full understanding of the approach being used and the assumptions that underpin it as a whole. Further, it is crucial that researchers understand their own ontological and epistemological assumptions about the world in general, because those assumptions will necessarily shape how they interact with research. A discussion of research paradigms is not complete without describing positivist, postpositivist, and constructivist philosophies.
Positivist vs Postpositivist
To further understand qualitative research, we need to discuss positivist and postpositivist frameworks. Positivism is the philosophy that the scientific method can and should be applied to the social as well as the natural sciences. [4] Essentially, positivist thinking insists that the social sciences should use natural science methods in their research, a position that stems from the positivist ontology that there is an objective reality existing fully independently of our individual perceptions of the world. Quantitative research is rooted in positivist philosophy, which can be seen in the value it places on concepts such as causality, generalizability, and replicability.
Conversely, postpositivists argue that social reality can never be one hundred percent explained, only approximated. [4] Indeed, qualitative researchers have long insisted that there are “fundamental limits to the extent to which the methods and procedures of the natural sciences could be applied to the social world,” and postpositivist philosophy is therefore often associated with qualitative research. [4] An example of the contrast is that positivist philosophies value hypothesis testing, whereas postpositivist philosophies value the ability to formulate a substantive theory.
Constructivist
Constructivism is a subcategory of postpositivism. Most researchers invested in postpositivist research are constructivists as well, meaning they think there is no objective external reality; rather, reality is constructed. Constructivism is a theoretical lens that emphasizes the dynamic nature of our world. “Constructivism contends that individuals’ views are directly influenced by their experiences, and it is these individual experiences and views that shape their perspective of reality.” [6] Essentially, constructivist thought holds that ‘reality’ is not a fixed certainty; experiences, interactions, and backgrounds give people unique views of the world. Constructivism contends, unlike positivist views, that there is not necessarily an ‘objective’ reality we all experience. This is the ‘relativist’ ontological view that reality and the world we live in are dynamic and socially constructed. Therefore, qualitative scientific knowledge can be inductive as well as deductive. [4]
So why is it important to understand the differences in assumptions that different philosophies and approaches to research have? Fundamentally, the assumptions underpinning the research tools a researcher selects provide an overall base for the assumptions the rest of the research will have and can even change the role of the researcher themselves. [2] For example, is the researcher an ‘objective’ observer such as in positivist quantitative work? Or is the researcher an active participant in the research itself, as in postpositivist qualitative work? Understanding the philosophical base of the research undertaken allows researchers to fully understand the implications of their work and their role within the research, as well as reflect on their own positionality and bias as it pertains to the research they are conducting.
Data Sampling
The better the sample represents the intended study population, the more likely the researcher is to capture the varying factors at play. The following are examples of participant sampling and selection (a brief illustrative sketch in code follows the list): [7]
- Purposive sampling: selection based on the researcher's rationale about which participants will be most informative.
- Criterion sampling: selection based on pre-identified factors.
- Convenience sampling: selection based on availability.
- Snowball sampling: selection by referral from other participants or people who know potential participants.
- Extreme case sampling: targeted selection of rare cases.
- Typical case sampling: selection based on regular or average participants.
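As a rough illustration of how some of these selection rules differ mechanically, the following Python sketch implements three of them over a hypothetical participant pool. The record structure, field names, and helper functions are all invented for the example; real sampling decisions involve researcher judgment that code cannot capture.

```python
# Purely illustrative sketch: toy implementations of three sampling
# strategies from the list above, over a hypothetical participant pool.

# Hypothetical participant records: id, smoker status, and who they refer.
pool = [
    {"id": "p1", "smoker": True,  "referrals": ["p2", "p3"]},
    {"id": "p2", "smoker": False, "referrals": ["p4"]},
    {"id": "p3", "smoker": True,  "referrals": []},
    {"id": "p4", "smoker": True,  "referrals": ["p1"]},
]

def criterion_sample(pool, predicate):
    """Criterion sampling: keep everyone meeting a pre-identified factor."""
    return [p for p in pool if predicate(p)]

def convenience_sample(pool, n):
    """Convenience sampling: whoever is most available (here: list order)."""
    return pool[:n]

def snowball_sample(pool, seed_id, max_n):
    """Snowball sampling: start from a seed participant, follow referrals."""
    by_id = {p["id"]: p for p in pool}
    selected, frontier = [], [seed_id]
    while frontier and len(selected) < max_n:
        person = by_id.get(frontier.pop(0))
        if person is not None and person not in selected:
            selected.append(person)
            frontier.extend(person["referrals"])
    return selected

print([p["id"] for p in criterion_sample(pool, lambda p: p["smoker"])])  # p1, p3, p4
print([p["id"] for p in convenience_sample(pool, 2)])                    # p1, p2
print([p["id"] for p in snowball_sample(pool, "p1", max_n=3)])           # p1, p2, p3
```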
Data Collection and Analysis
Qualitative research uses several techniques, including interviews, focus groups, and observation. [1] [2] [3] Interviews may be unstructured, with open-ended questions on a topic and the interviewer adapting to the responses, or structured, with a predetermined set of questions that every participant is asked. Interviewing is usually one on one and is appropriate for sensitive topics or topics needing in-depth exploration. Focus groups are often held with 8-12 target participants and are used when group dynamics and collective views on a topic are desired. Researchers can be participant-observers, sharing the experiences of the subjects, or non-participant (detached) observers.
While quantitative research design prescribes a controlled environment for data collection, qualitative data collection may take place in a central location or in the participants' environment, depending on the study goals and design. Qualitative research can generate a large amount of data. Data are transcribed and may then be coded manually or with Computer-Assisted Qualitative Data Analysis Software (CAQDAS) such as ATLAS.ti or NVivo. [8] [9] [10]
After the coding process, qualitative results can take various formats: a synthesis and interpretation presented with excerpts from the data, [11] or themes, a theory, or a model developed from the data.
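The core of the coding step can be sketched in a few lines. The following Python example is a minimal, hypothetical illustration of tagging transcript excerpts with analyst-assigned codes and tallying them into a candidate theme; the excerpts and code labels are invented. Dedicated CAQDAS packages such as ATLAS.ti and NVivo support this workflow interactively and at scale.

```python
# Minimal, hypothetical illustration of qualitative coding: excerpts are
# tagged with analyst-assigned codes, then tallied into a candidate theme.
from collections import Counter

# (participant, transcript excerpt, codes assigned by the analyst)
coded_excerpts = [
    ("P01", "My friends all smoked after school.",           ["peer pressure"]),
    ("P02", "I was worried about what it does to my lungs.", ["health concerns"]),
    ("P03", "Everyone at the park offered me a cigarette.",  ["peer pressure", "setting"]),
]

code_counts = Counter(code for _, _, codes in coded_excerpts for code in codes)

# Grouping codes into a broader theme is an analyst judgment, not automatic.
themes = {"social influence": ["peer pressure", "setting"]}
theme_counts = {t: sum(code_counts[c] for c in cs) for t, cs in themes.items()}

print(code_counts.most_common())  # [('peer pressure', 2), ('health concerns', 1), ('setting', 1)]
print(theme_counts)               # {'social influence': 3}
```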
Dissemination
To standardize and facilitate the dissemination of qualitative research outcomes, the healthcare team can use two reporting standards. The Consolidated Criteria for Reporting Qualitative Research or COREQ is a 32-item checklist for interviews and focus groups. [12] The Standards for Reporting Qualitative Research (SRQR) is a checklist covering a wider range of qualitative research. [13]
Examples of Application
Many times, a research question will be explored first with qualitative research. The qualitative work helps generate a research hypothesis that can be tested with quantitative methods. After the data are collected and analyzed with quantitative methods, a set of qualitative methods can be used to dive deeper into the data for a better understanding of what the numbers truly mean and their implications. The qualitative methods can then help clarify the quantitative data and refine the hypothesis for future research. Furthermore, with qualitative research, researchers can explore subjects that are poorly studied with quantitative methods, including opinions, individuals' actions, and social science questions.
A good qualitative study design starts with a clearly defined goal or objective. The target population needs to be specified, and the method for obtaining information from the study population must be carefully detailed to ensure no part of the target population is omitted. A proper collection method should be selected that obtains the desired information without overly limiting the collected data, because the information sought is often not well compartmentalized or easily obtained. Finally, the design should ensure adequate methods for analyzing the data. An example may help clarify these various aspects of qualitative research.
A researcher wants to decrease the number of teenagers who smoke in their community. The researcher could begin by asking current teen smokers why they started smoking through structured or unstructured interviews (qualitative research). The researcher can also get together a group of current teenage smokers and conduct a focus group to help brainstorm factors that may have prevented them from starting to smoke (qualitative research).
In this example, the researcher has used qualitative research methods (interviews and focus groups) to generate a list of ideas about both why teens start to smoke and factors that may prevent them from starting. Next, the researcher compiles this data. Hypothetically, the list includes peer pressure, health issues, cost, being considered “cool,” and rebellious behavior as factors that might increase or decrease the likelihood of teens starting to smoke.
The researcher creates a survey asking teen participants to rank how important each of the above factors is in either starting smoking (for current smokers) or not smoking (for current non-smokers). This survey provides specific numbers (ranked importance of each factor) and is thus a quantitative research tool.
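As an illustration of this quantitative step, the sketch below aggregates hypothetical ranked-importance responses into a mean rank per factor. The response values are invented, and mean rank is just one common way to summarize such data.

```python
# Illustrative sketch: aggregating ranked-importance survey responses
# into a mean rank per factor (1 = most important). Responses are invented.
factors = ["peer pressure", "health issues", "cost", "being cool", "rebellion"]

# Each response maps a factor to the rank a teen assigned it.
responses = [
    {"peer pressure": 1, "health issues": 2, "cost": 4, "being cool": 3, "rebellion": 5},
    {"peer pressure": 2, "health issues": 1, "cost": 5, "being cool": 3, "rebellion": 4},
    {"peer pressure": 1, "health issues": 3, "cost": 4, "being cool": 2, "rebellion": 5},
]

mean_rank = {f: sum(r[f] for r in responses) / len(responses) for f in factors}

# Lower mean rank = more important; focus follow-up work on the top factors.
for factor, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{factor}: {rank:.2f}")
```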
The researcher can use the results of the survey to focus efforts on the one or two highest-ranked factors. Let us say the researcher found that health was the major factor keeping teens from starting to smoke, and peer pressure was the major factor contributing to teens starting to smoke. The researcher can go back to qualitative research methods to dive deeper into each of these for more information. Since the goal is to keep teens from starting to smoke, the researcher focuses on the peer pressure aspect.
The researcher can conduct interviews and/or focus groups (qualitative research) about what types and forms of peer pressure are commonly encountered, where the peer pressure comes from, and where smoking first starts. The researcher hypothetically finds that peer pressure often occurs after school at the local teen hangouts, mostly the local park. The researcher also hypothetically finds that peer pressure comes from older, current smokers who provide the cigarettes.
The researcher could further explore through direct observation at the local teen hangouts (qualitative research), taking notes regarding who is smoking, who is not, and what observable factors are at play in the peer pressure to smoke. The researcher visits a local park where many teenagers hang out and sees that a shady, overgrown area of the park is where the smokers tend to congregate. The researcher notes that the smoking teenagers buy their cigarettes from a convenience store adjacent to the park, where the clerk does not check identification before selling cigarettes. These observations fall under qualitative research.
If the researcher returns to the park and counts how many individuals smoke in each region of the park, this numerical data would be quantitative research. Based on the efforts thus far, the researcher concludes that local teen smoking, and the number of teenagers who start to smoke, may decrease if the park has fewer overgrown areas and the local convenience store stops selling cigarettes to underage individuals.
The researcher could work with the parks department to make the shady areas less conducive to smoking, or identify how to limit the convenience store's sales of cigarettes to underage individuals. The researcher would then cycle back to qualitative methods, asking the at-risk population about their perceptions of the changes and what factors are still at play, as well as to quantitative measures such as teen smoking rates in the community and the incidence of new teen smokers. [14] [15]
Qualitative research functions as a standalone research design or in combination with quantitative research to enhance our understanding of the world. Qualitative research uses techniques including structured and unstructured interviews, focus groups, and participant observation to not only help generate hypotheses which can be more rigorously tested with quantitative research but also to help researchers delve deeper into the quantitative research numbers, understand what they mean, and understand what the implications are. Qualitative research provides researchers with a way to understand what is going on, especially when things are not easily categorized. [16]
- Issues of Concern
As discussed in the sections above, quantitative and qualitative work differ in many ways, including the criteria for evaluating them. There are four well-established criteria for evaluating quantitative data: internal validity, external validity, reliability, and objectivity. The correlating concepts in qualitative research are credibility, transferability, dependability, and confirmability. [4] [11] The corresponding pairs are listed below, with the quantitative concept on the left and the qualitative concept on the right:
- Internal validity--- Credibility
- External validity---Transferability
- Reliability---Dependability
- Objectivity---Confirmability
In conducting qualitative research, ensuring these concepts are satisfied and well thought out can keep potential issues from arising. For example, just as a researcher will ensure that their quantitative study is internally valid, qualitative researchers should ensure that their work has credibility.
Indicators such as triangulation and peer examination can help evaluate the credibility of qualitative work.
- Triangulation: Triangulation involves using multiple methods of data collection to increase the likelihood of getting a reliable and accurate result. For example, in a hypothetical study of a stage magic act, the result would be more reliable if the researcher interviewed not only audience members but also the magician, the back-stage hand, and the person who "vanished." In qualitative research, triangulation can include using telephone surveys, in-person surveys, focus groups, and interviews, as well as surveying an adequate cross-section of the target demographic.
- Peer examination: Results can be reviewed by a peer to ensure the data is consistent with the findings.
‘Thick’ or ‘rich’ description can be used to evaluate the transferability of qualitative research, whereas an indicator such as an audit trail helps in evaluating dependability and confirmability.
- Thick or rich description: a detailed and thorough account of the setting and context, together with quotes from participants. [5] Thick description includes a detailed explanation of how the study was carried out, detailed enough to allow readers to draw their own conclusions and interpret the data themselves, which aids transferability and replicability.
- Audit trail: An audit trail provides a documented set of steps describing how the participants were selected and how the data were collected. The original records of information should also be kept (e.g., surveys, notes, recordings). A minimal logging sketch follows this list.
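As mentioned in the audit trail item above, one minimal way to keep such a documented set of steps is an append-only log. The Python sketch below is a hypothetical illustration; the file name, actions, and detail strings are invented, and a real audit trail would also cover analysis decisions and memos.

```python
# Sketch of a simple append-only audit trail: each methodological step is
# recorded with a timestamp so the path from recruitment to analysis can
# be reconstructed later. All names and descriptions are hypothetical.
import json
from datetime import datetime, timezone

def log_step(path, action, detail):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # one JSON record per line

log_step("audit_trail.jsonl", "recruitment",
         "Posted flyers at community center; 12 volunteers responded.")
log_step("audit_trail.jsonl", "sampling",
         "Purposive selection of 8 participants meeting inclusion criteria.")
log_step("audit_trail.jsonl", "data_collection",
         "Semi-structured interviews, audio-recorded with consent.")
```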
One issue of concern that qualitative researchers should take into consideration is observation bias. Here are a few examples:
- Hawthorne effect: The Hawthorne effect is the change in participant behavior when participants know they are being observed. If a researcher wanted to identify factors that contribute to employee theft and told the employees they would be watched to see what factors affect theft, one would expect employee behavior to change under observation.
- Observer-expectancy effect: Some participants change their behavior or responses to satisfy what they perceive to be the researcher's desired result. This often happens unconsciously on the participant's part, so it is important to eliminate or limit the transmission of the researcher's views.
- Artificial scenario effect: Some qualitative research occurs in artificial scenarios and/or with preset goals. In such situations, the information may not be accurate because of the artificial nature of the scenario. The preset goals may limit the qualitative information obtained.
- Clinical Significance
Qualitative research by itself or combined with quantitative research helps healthcare providers understand patients and the impact and challenges of the care they deliver. Qualitative research provides an opportunity to generate and refine hypotheses and delve deeper into the data generated by quantitative research. Qualitative research does not exist as an island apart from quantitative research, but as an integral part of research methods to be used for the understanding of the world around us. [17]
- Enhancing Healthcare Team Outcomes
Qualitative research is important for all members of the healthcare team, as all are affected by it. Qualitative research may help develop a theory or a model for health research that can be further explored quantitatively. Much of qualitative data acquisition is completed by numerous team members, including social workers, scientists, nurses, and others. Within each area of the medical field there is copious ongoing qualitative research, including physician-patient interactions, nursing-patient interactions, patient-environment interactions, healthcare team function, and patient information delivery.
This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.
Citation: Tenny S, Brannan JM, Brannan GD. Qualitative Study. [Updated 2022 Sep 18]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-.
The justification of a research study is also known as the rationale. Writing the justification or rationale comes from an in-depth search and analysis of the existing literature around the topic. A comprehensive literature search typically reveals gaps in previous studies that you may then wish to explore through your research.
research designs and justify your choice. Some research studies use mixed designs, so more than one design can be chosen. A mixed-methods approach, that is, using both a quantitative and a qualitative design, may also be chosen. A mixed-methods approach requires that you conduct a full quantitative and a full qualitative study.
The justification statement should convey the relevance of the over-arching topic in which the study is grounded. The recommended length is 2 to 3 paragraphs. Introduce the over-arching topic, then explain the research focus and its relationship to the discipline or field of study that supports the need to conduct the proposed study.
The justification of the investigation refers to the basis of the investigation or the reason why the investigation is being carried out. The justification should include an explanation for the design used and the methods used in the research.
- Justify how the study can improve policy or decision-making.
- Testing existing untested theory.
- Creating new theory, protocol or model.
- Justification based on personal or work experiences....
The justification of the study is basically why a particular research work was carried out: what problem was identified that made the student want to carry out such research work. Here you will also capture why the methodology was adopted and why the experiment was conducted, whether for practical or scholarly purposes.
This means social scientists can carry out various systematic investigations (research) on any social phenomenon that requires understanding and attention. Social phenomena of interest can be...
The rationale of the study explains the reason why the study was conducted (in an article or thesis) or why the study should be conducted (in a proposal). This means the study rationale should explain to the reader or examiner why the study is/was necessary. It is also sometimes called the "purpose" or "justification" of a study.
Rationale for the study, also referred to as justification for the study, is the reason why you conducted your study in the first place. This part of your paper needs to explain the uniqueness and importance of your research. The rationale needs to be specific and, ideally, it should relate to the following points: 1. The research needs ...
In this overview article six approaches are discussed to justify the sample size in a quantitative empirical study: 1) collecting data from (almost) the entire population, 2) choosing a sample size based on resource constraints, 3) performing an a-priori power analysis, 4) planning for a desired accuracy, 5) using heuristics, or 6) explicitly …
Justification for Research What makes a good research question is often in the eye of the beholder, but there are several general best-practices criteria that can be used to assess the justification for research. Is the question scientifically well-posed, i.e. is it stated in a hypothetical form that leads to a research design and analysis with ...
3.7. Justification of choosing a deductive research approach: A deductive approach involves developing a hypothesis (or hypotheses) on the basis of existing theory and then designing a research strategy to test that hypothesis. Deductive reasoning proceeds from the general to the particular.
Step 1: Explain your methodological approach. Step 2: Describe your data collection methods. Step 3: Describe your analysis method. Step 4: Evaluate and justify the methodological choices you made. Tips for writing a strong methodology chapter. Frequently asked questions about methodology.
Qualitative research projects are informed by a wide range of methodologies and theoretical frameworks. The SAGE Encyclopedia of Qualitative Research Methods presents current and complete information as well as ready-to-use techniques, facts, and examples from the field of qualitative research in a very accessible style. In taking an ...
A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.
The Justification of a Business Plan for Rebecca's Hotel. Introduction: The significance of business plans in all types and sizes of business organizations can never be underestimated. ... the study's research locale. The operations of 240 small family businesses in Austria's tourism destination industry were part of the study in 2003. Research ...