library(wrappedtools)
#> Loading required package: tidyverse
#> -- Attaching packages --------------------------------------- tidyverse 1.3.1 --
#> v ggplot2 3.3.3 v purrr 0.3.4
#> v tibble 3.1.1 v dplyr 1.0.5
#> v tidyr 1.1.3 v stringr 1.4.0
#> v readr 1.4.0 v forcats 0.5.1
#> -- Conflicts ------------------------------------------ tidyverse_conflicts() --
#> x dplyr::filter() masks stats::filter()
#> x dplyr::lag() masks stats::lag()
#> Package wrappedtools is still experimental, be warned that there might be dragons
The goal of ‘wrappedtools’ is to make my (and possibly your) life a bit easier by providing a set of convenience functions for many common tasks, such as computing mean and SD and pasting them with ±. Instead of
paste(round(mean(x), some_level), round(sd(x), some_level), sep = '±')
a simple meansd(x, roundDig = some_level) is enough.
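To make that concrete, here is a minimal sketch (output omitted; the choice of mtcars$wt and 2 digits is just for illustration):

# base R: round mean and SD separately, then paste them together
paste(round(mean(mtcars$wt), 2), round(sd(mtcars$wt), 2), sep = '±')
# the wrappedtools one-liner
meansd(mtcars$wt, roundDig = 2)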
You can install ‘wrappedtools’ from GitHub with:
devtools::install_github("abusjahn/wrappedtools")
This is a basic example that shows how to solve a common problem: describing and testing differences in some measures between two samples, while rounding descriptive statistics to a reasonable precision in the process:
# Standard functions to obtain median and quartiles:
median(mtcars$mpg)
#> [1] 19.2
quantile(mtcars$mpg, probs = c(.25, .75))
#> 25% 75%
#> 15.425 22.800
# wrappedtools adds rounding and pasting:
median_quart(mtcars$mpg)
#> [1] "19 (15/23)"
# on a higher level, this logic leads to
compare2numvars(data = mtcars, dep_vars = c('wt', 'mpg', 'disp'),
                indep_var = 'am',
                gaussian = FALSE,
                round_desc = 3)
#> # A tibble: 3 x 5
#> # Groups: Variable [3]
#> Variable desc_all `am 0` `am 1` p
#> <fct> <chr> <chr> <chr> <chr>
#> 1 wt 3.32 (2.53~ "Error in DESC(x = .$Value, ~ " \n unbenutztes Ar~ 0.001
#> 2 mpg 19.2 (15.3~ "Error in DESC(x = .$Value, ~ " \n unbenutztes Ar~ 0.002
#> 3 disp 196 (121/3~ "Error in DESC(x = .$Value, ~ " \n unbenutztes Ar~ 0.001
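As a sketch of the parametric counterpart (my assumption here is that gaussian = TRUE switches the descriptives to mean±SD and the test to a t-test; output not shown):

# assumption: gaussian = TRUE reports mean±SD and a t-test instead of
# median (quartiles) and a u-test
compare2numvars(data = mtcars, dep_vars = c('wt', 'mpg', 'disp'),
                indep_var = 'am',
                gaussian = TRUE)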
To explain the ‘wrapper’ part of the package name, here is another example, using ks.test as a test for normal distribution, where ksnormal simply wraps around the ks.test function:
somedata <- rnorm(100)
ks.test(x = somedata, 'pnorm', mean = mean(somedata), sd = sd(somedata))
#>
#> One-sample Kolmogorov-Smirnov test
#>
#> data: somedata
#> D = 0.039945, p-value = 0.9972
#> alternative hypothesis: two-sided
ksnormal(somedata)
#> [1] 0.9972476
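Since ksnormal returns just the p-value, it fits nicely into functional pipelines. Here is a small usage sketch of my own (not from the package documentation), testing several columns at once; output omitted:

# one Kolmogorov-Smirnov p-value per column, each tested against a fitted normal
purrr::map_dbl(mtcars[c("mpg", "wt", "disp")], ksnormal)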
This should give you the general idea; I’ll try to expand this intro over time…