This question is sometimes all that editors will say to justify rejecting a manuscript based on a study that, in their opinion, had too small a sample. It is understandable that editors like large samples. After all, for a given effect size, the larger the sample, the smaller the p value. This in turn will get you below the ‘magic’ criterion of .05. Clear that hurdle and you are past the first editorial gate: you have “statistically significant” results. After all, no self-respecting editor would publish non-significant findings, no matter the quality of the methodology or the statistical power of the analyses in question.
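To make the sample-size point concrete, here is a minimal sketch (not taken from any particular manuscript) showing how the same modest standardized effect produces ever-smaller p values as the sample grows. It assumes a simple one-sample z-test with known unit variance, a deliberate simplification for illustration; the function name `z_test_p` is my own.

```python
import math

def z_test_p(effect_size, n):
    """Two-sided p value for a one-sample z-test of a standardized
    effect `effect_size` in a sample of size n, assuming known unit
    variance (a simplification for illustration only)."""
    z = effect_size * math.sqrt(n)
    # two-sided p from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The identical modest effect (d = 0.3) at increasing sample sizes:
for n in (20, 50, 100, 200):
    print(n, round(z_test_p(0.3, n), 4))
```

With d = 0.3, the result is non-significant at n = 20 but comfortably below .05 by n = 100, even though the underlying effect has not changed at all. That is precisely why a large sample is such a reliable route past the .05 gatekeeper.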
Am I starting to sound like a bitter and twisted researcher who only conducts studies with inadequate sample sizes? No, what you are reading are the words of a researcher and academic who has published articles based on small and large samples alike, ranging from about 40 to about 10,000 participants. I am also an active reviewer and have reviewed for about 20 different journals in the last five years. But back to the issue at hand.
Editors and reviewers have to realise a few things. First, increasing one’s sample is not always an option: gathering data can be very time-consuming and costly. Second, study quality is not a function of sample size. Third, if the population being examined is small, a sample from said population will inevitably be small. Fourth, unpublished studies are probably less likely to be included in relevant meta-analyses, and scientific knowledge is thereby undermined. Fifth, GS4 are not always limited by small statistical power.
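The fifth point deserves a numerical illustration. A small study of a large effect can have more power than a much larger study of a small effect. The sketch below uses a standard approximation to two-sided z-test power (it ignores the negligible far tail); the function names are mine, chosen for this example only.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(effect_size, n, z_crit=1.959964):
    """Approximate power of a two-sided z-test at alpha = .05,
    ignoring the far rejection tail (a standard simplification)."""
    return phi(effect_size * math.sqrt(n) - z_crit)

# A large effect in a small sample vs a small effect in a large one:
print(round(approx_power(1.0, 15), 2))   # n = 15,  d = 1.0
print(round(approx_power(0.2, 150), 2))  # n = 150, d = 0.2
```

Under these assumptions, the 15-person study of a large effect is well above the conventional .80 power threshold, while the 150-person study of a small effect falls short of it. Sample size alone tells a reviewer nothing about power.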
In summary, if GS4 are not published, then knowledge is not disseminated, leading to poorer theories and worse practical applications of said research.