With this app, you will get personalized advice on how to improve your study, based on your data and methodological approach. Answer all the questions below to learn how to control for possible flaws in your research, e.g. p-hacking. Don't worry if your answers are not the best ones; you will get guidance on the appropriate way of proceeding!





We suggest using Univariate Regression.

- You assume that there is a linear relationship between your dependent and independent variable(s). You can best check this by making a scatter plot.

- You assume that your data come from a normal distribution. You can test normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- In the case of multiple independent variables you assume absence of multicollinearity. This means that the independent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.
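The two formal checks above (Shapiro-Wilk and Levene) take only a few lines to run. A minimal sketch in Python with scipy, using made-up example data (all variable names and parameters are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.0, size=40)  # hypothetical scores, group A
group_b = rng.normal(loc=5.0, scale=1.0, size=40)  # hypothetical scores, group B

# Shapiro-Wilk on each group: p > 0.05 means no evidence against normality
_, p_shapiro_a = stats.shapiro(group_a)
_, p_shapiro_b = stats.shapiro(group_b)

# Levene's test across groups: p > 0.05 means no evidence against equal variances
_, p_levene = stats.levene(group_a, group_b)

print(f"Shapiro-Wilk: {p_shapiro_a:.3f}, {p_shapiro_b:.3f}; Levene: {p_levene:.3f}")
```

Remember that a non-significant result does not prove the assumption holds; it only shows no evidence against it at this sample size.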

We suggest using Multiple Regression.

- You assume that there is a linear relationship between your dependent and independent variable(s). You can best check this by making a scatter plot.

- You assume that your data come from a normal distribution. You can test normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- In the case of multiple independent variables you assume absence of multicollinearity. This means that the independent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.
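The VIF rule of thumb mentioned above can be checked without any specialist routine: regress each predictor on the remaining predictors and compute VIF = 1 / (1 − R²). A self-contained numpy sketch with simulated predictors, where x3 is deliberately almost a copy of x1 so that its VIF blows up (all names are illustrative):

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of predictor matrix X (n x p)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # regress column j on the rest
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.05 * rng.normal(size=200)   # nearly a copy of x1 -> collinear
X = np.column_stack([x1, x2, x3])
print(vif(X))   # VIF for x1 and x3 far exceeds 10; x2 stays near 1
```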


We suggest using Multivariate Regression.

- You assume that there is a linear relationship between your dependent and independent variable(s). You can best check this by making a scatter plot.

- You assume that your data come from a normal distribution. You can test normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- In the case of multiple dependent variables you assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

We suggest using Multivariate Multiple Regression.

- You assume that there is a linear relationship between your dependent and independent variable(s). You can best check this by making a scatter plot.

- You assume that your data come from a normal distribution. You can test normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- In the case of multiple dependent variables you assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.



We suggest using Logistic Regression.

- You assume a linear relationship between your continuous independent variable(s) and the logit (log-odds) of the dependent variable, not the dependent variable itself. You can check this by plotting each predictor against the empirical log-odds.

- You assume that the dependent variable is binary (or categorical, for multinomial models) and that the observations are independent of each other. Unlike linear regression, logistic regression does not require normally distributed residuals or homogeneity of variance.

- In the case of multiple independent variables you assume absence of multicollinearity. This means that the independent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume a sufficiently large sample size, since maximum likelihood estimates become unstable when there are few observations per predictor.
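For logistic regression, the linearity assumption is usually assessed on the log-odds scale. A minimal sketch with simulated data that bins a continuous predictor and computes the empirical logit per bin; if the assumption holds, the bin logits fall roughly on a straight line (all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=5000)          # hypothetical continuous predictor
p_true = 1.0 / (1.0 + np.exp(-x))          # true model: logit(p) = x
y = rng.binomial(1, p_true)                # simulated binary outcome

# Bin the predictor and compute the (smoothed) empirical log-odds per bin
bins = np.linspace(-3, 3, 11)
centers = (bins[:-1] + bins[1:]) / 2
logits = []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (x >= lo) & (x < hi)
    rate = (y[mask].sum() + 0.5) / (mask.sum() + 1.0)   # smoothed proportion
    logits.append(np.log(rate / (1 - rate)))
logits = np.array(logits)

# Under linearity-in-the-logit, bin centers and empirical logits line up
corr = np.corrcoef(centers, logits)[0, 1]
print(f"correlation between bin centers and empirical logits: {corr:.2f}")
```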

We suggest using Multiple Logistic Regression.

- You assume a linear relationship between your continuous independent variable(s) and the logit (log-odds) of the dependent variable, not the dependent variable itself. You can check this by plotting each predictor against the empirical log-odds.

- You assume that the dependent variable is binary (or categorical, for multinomial models) and that the observations are independent of each other. Unlike linear regression, logistic regression does not require normally distributed residuals or homogeneity of variance.

- In the case of multiple independent variables you assume absence of multicollinearity. This means that the independent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume a sufficiently large sample size, since maximum likelihood estimates become unstable when there are few observations per predictor.


We suggest using Multivariate Logistic Regression.

- You assume a linear relationship between your continuous independent variable(s) and the logit (log-odds) of the dependent variable, not the dependent variable itself. You can check this by plotting each predictor against the empirical log-odds.

- You assume that the dependent variable is binary (or categorical, for multinomial models) and that the observations are independent of each other. Unlike linear regression, logistic regression does not require normally distributed residuals or homogeneity of variance.

- In the case of multiple independent variables you assume absence of multicollinearity. This means that the independent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume a sufficiently large sample size, since maximum likelihood estimates become unstable when there are few observations per predictor.

We suggest using Multivariate Multiple Logistic Regression.

- You assume a linear relationship between your continuous independent variable(s) and the logit (log-odds) of the dependent variable, not the dependent variable itself. You can check this by plotting each predictor against the empirical log-odds.

- You assume that the dependent variable is binary (or categorical, for multinomial models) and that the observations are independent of each other. Unlike linear regression, logistic regression does not require normally distributed residuals or homogeneity of variance.

- In the case of multiple independent variables you assume absence of multicollinearity. This means that the independent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

- You assume a sufficiently large sample size, since maximum likelihood estimates become unstable when there are few observations per predictor.




We suggest using ANCOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.

- You assume independence of the independent variable and the covariate. You can test this with a t-test (for 2 conditions) or an ANOVA (> 2 conditions), using the covariate as the dependent variable. The result should not be significant at an alpha level of 0.05.
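The last check above translates directly into code: run a one-way ANOVA with the covariate as the dependent variable and the experimental conditions as groups. A minimal scipy sketch with simulated data for three hypothetical conditions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical covariate (e.g. a pretest score) measured in three conditions
cov_c1 = rng.normal(50, 10, size=30)
cov_c2 = rng.normal(50, 10, size=30)
cov_c3 = rng.normal(50, 10, size=30)

# One-way ANOVA with the covariate as DV: a non-significant result (p > .05)
# is consistent with independence of the IV and the covariate
f_stat, p_value = stats.f_oneway(cov_c1, cov_c2, cov_c3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

For two conditions, `stats.ttest_ind(cov_c1, cov_c2)` plays the same role.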

We suggest using a t-test.

- You assume that the data of your dependent variable come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.
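A minimal sketch of this workflow, a normality check followed by the t-test itself, using scipy and simulated scores (group names and parameters are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
control = rng.normal(100, 15, size=50)      # hypothetical control-group scores
treatment = rng.normal(110, 15, size=50)    # hypothetical treatment-group scores

# Check normality of each group before comparing means
for name, g in [("control", control), ("treatment", treatment)]:
    _, p_sw = stats.shapiro(g)
    print(f"Shapiro-Wilk ({name}): p = {p_sw:.3f}")

# Independent-samples t-test on the two groups
t_stat, p_val = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```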


We suggest using ANCOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.

- You assume independence of the independent variable and the covariate. You can test this with a t-test (for 2 conditions) or an ANOVA (> 2 conditions), using the covariate as the dependent variable. The result should not be significant at an alpha level of 0.05.

We suggest using X-way ANOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.



We suggest using MANCOVA.

- You assume that your data come from a multivariate normal distribution. First, you can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05. For testing multivariate normality we advise the MVN package in R.

- You assume multivariate homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05. Then you can test for multivariate homogeneity of variance by using Box's M Test. The result should not be significant at an alpha level of 0.001.

- You assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.
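The MVN package in R is the recommended tool for the multivariate normality check. As an illustration of what such a test computes, here is a self-contained numpy/scipy sketch of Mardia's skewness and kurtosis tests, one common approach to assessing multivariate normality (the data are simulated, and all names are illustrative):

```python
import numpy as np
from scipy import stats

def mardia(X):
    """Mardia's multivariate skewness and kurtosis tests; returns two p-values."""
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S = centered.T @ centered / n                  # ML covariance estimate
    A = centered @ np.linalg.inv(S) @ centered.T   # Mahalanobis cross-products
    b1 = (A ** 3).sum() / n**2                     # multivariate skewness
    b2 = (np.diag(A) ** 2).mean()                  # multivariate kurtosis
    df = p * (p + 1) * (p + 2) / 6
    p_skew = stats.chi2.sf(n * b1 / 6, df)         # chi-square test on skewness
    z_kurt = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    p_kurt = 2 * stats.norm.sf(abs(z_kurt))        # two-sided z test on kurtosis
    return p_skew, p_kurt

rng = np.random.default_rng(5)
X = rng.multivariate_normal([0, 0, 0], np.eye(3), size=300)  # simulated DVs
p_skew, p_kurt = mardia(X)
print(f"skewness p = {p_skew:.3f}, kurtosis p = {p_kurt:.3f}")
```

As with the univariate tests, both p-values should be non-significant for the assumption to be tenable.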

We suggest using MANCOVA.

- You assume that your data come from a multivariate normal distribution. First, you can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05. For testing multivariate normality we advise the MVN package in R.

- You assume multivariate homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05. Then you can test for multivariate homogeneity of variance by using Box's M Test. The result should not be significant at an alpha level of 0.001.

- You assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.


We suggest using MANCOVA.

- You assume that your data come from a multivariate normal distribution. First, you can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05. For testing multivariate normality we advise the MVN package in R.

- You assume multivariate homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05. Then you can test for multivariate homogeneity of variance by using Box's M Test. The result should not be significant at an alpha level of 0.001.

- You assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

We suggest using Multivariate MANOVA.

- You assume that your data come from a multivariate normal distribution. First, you can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05. For testing multivariate normality we advise the MVN package in R.

- You assume multivariate homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05. Then you can test for multivariate homogeneity of variance by using Box's M Test. The result should not be significant at an alpha level of 0.001.

- You assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.


We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.
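This expected-count rule is easy to verify in code, since a chi-square routine typically returns the expected frequencies alongside the test statistic. A sketch using scipy's chi2_contingency with a made-up 2x3 table:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x3 contingency table of observed counts
observed = np.array([[20, 15, 12],
                     [18, 25, 10]])

chi2, p, dof, expected = stats.chi2_contingency(observed)

# Rule of thumb from the text: at most 20% of expected counts below 5,
# and every expected count at least 1
share_below_5 = (expected < 5).mean()
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
print(f"share of cells with expected count < 5: {share_below_5:.0%}")
print(f"minimum expected count: {expected.min():.1f}")
```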

We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.


We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.

We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.


We suggest using ANCOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.

- You assume independence of the independent variable and the covariate. You can test this with a t-test (for 2 conditions) or an ANOVA (> 2 conditions), using the covariate as the dependent variable. The result should not be significant at an alpha level of 0.05.

We suggest using Univariate ANOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.


We suggest using ANCOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.

- You assume independence of the independent variable and the covariate. You can test this with a t-test (for 2 conditions) or an ANOVA (> 2 conditions), using the covariate as the dependent variable. The result should not be significant at an alpha level of 0.05.

We suggest using X-way ANOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.


We suggest using MANCOVA.

- You assume that your data come from a multivariate normal distribution. First, you can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05. For testing multivariate normality we advise the MVN package in R.

- You assume multivariate homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05. Then you can test for multivariate homogeneity of variance by using Box's M Test. The result should not be significant at an alpha level of 0.001.

- You assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

We suggest using Multivariate ANOVA.

- You assume that your data come from a normal distribution. You can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05.

- You assume homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05.

- You assume that the sample cases are independent of each other.


We suggest using MANCOVA.

- You assume that your data come from a multivariate normal distribution. First, you can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05. For testing multivariate normality we advise the MVN package in R.

- You assume multivariate homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05. Then you can test for multivariate homogeneity of variance by using Box's M Test. The result should not be significant at an alpha level of 0.001.

- You assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.

We suggest using X-way MANOVA.

- You assume that your data come from a multivariate normal distribution. First, you can test univariate normality by using the Shapiro-Wilk test. The result should not be significant at an alpha level of 0.05. For testing multivariate normality we advise the MVN package in R.

- You assume multivariate homogeneity of variance. This means that you assume that the variance between the groups is equal. You can test this first by using Levene's Test of Equality of Variances. The result should not be significant at an alpha level of 0.05. Then you can test for multivariate homogeneity of variance by using Box's M Test. The result should not be significant at an alpha level of 0.001.

- You assume absence of multicollinearity. This means that the dependent variables should not be highly correlated with each other. Tabachnick & Fidell (2012) suggest that no correlation should be above r = .90. You can also check this by computing the Variance Inflation Factor (VIF), which should not be larger than 10.


We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.

We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.


We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.

We suggest using the Chi-Square Test.

- Your sample size should be large enough; this is especially important for the chi-square test. The expected count may be less than 5 in no more than 20% of the cells of the contingency table, and every cell should have an expected count of at least 1.

The next step is to calculate the sample size implied by the power you want to achieve. Please check the boxes, if applicable.
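As an illustration of this step, the required sample size per group for a two-sample t-test can be approximated from the desired power with the normal-approximation formula n ≈ 2·((z(1−α/2) + z(1−β))·σ/δ)². A sketch with assumed design parameters (a 5-point difference, SD 10, the usual α = .05 and power = .80):

```python
import numpy as np
from scipy import stats

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample t-test (normal approximation)."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = stats.norm.ppf(power)            # quantile for the desired power
    return int(np.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2))

# Hypothetical design: detect a 5-point difference with SD 10 (effect size d = 0.5)
print(n_per_group(delta=5, sigma=10))   # -> 63 per group
```

The exact t-based calculation (as in the power app) gives a slightly larger n, since it accounts for the estimated variance.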

Made by Seongjin Bien, Julian Burger, Gaby Lunansky, Francesca Freuli, Irene Sánchez, Maria Vlachou

Power Analysis Shiny App made by Alexander Coppock