Merged main into development branch. Resolved conflicts
Merge branch 'main' into development_dosimetry-metrics

# Conflicts:
#	DESCRIPTION
#	NAMESPACE
#	R/helper.R
#	README.Rmd
steffenhartmeyer committed Jan 10, 2024
2 parents 90f3ff2 + c9e30df commit e5aec2d
Showing 91 changed files with 2,257 additions and 3,174 deletions.
5 changes: 3 additions & 2 deletions DESCRIPTION
@@ -1,6 +1,6 @@
Package: LightLogR
Title: Work With Data from Wearable Light Loggers and Optical Radiation Dosimeters
Version: 0.2.1.9000
Version: 0.2.2.9000
Authors@R: c(
person("Johannes", "Zauner",
email = "[email protected]", role = c("aut", "cre"),
@@ -12,7 +12,8 @@ Authors@R: c(
email = "[email protected]", role = "aut",
comment = c(ORCID = "0000-0002-2813-2668")),
person("MeLiDos", role = "fnd"),
person("EURAMET", role = "fnd"),
person("EURAMET", role = "fnd", comment = "European Association of National Metrology Institutes. Website: www.euramet.org. Grant Number: 22NRM05 MeLiDos. Grant Statement: The project (22NRM05 MeLiDos) has received funding from the European Partnership on Metrology, co-financed from the European Union’s Horizon Europe Research and Innovation Programme and by the Participating States."),
person("European Union", role = "fnd", comment = "Co-funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or EURAMET. Neither the European Union nor the granting authority can be held responsible for them."),
person("TSCN-Lab", comment = c(URL = "www.tscnlab.org"), role = "cph"))
Description: LightLogR is a package under development as part of the MeLiDos
project aimed at developing a standard workflow for wearable light logger
24 changes: 14 additions & 10 deletions NAMESPACE
@@ -1,34 +1,38 @@
# Generated by roxygen2: do not edit by hand

export("%>%")
export(Brown.check)
export(Brown.rec)
export(Brown2reference)
export(Brown_check)
export(Brown_rec)
export(Datetime_breaks)
export(Datetime_limits)
export(aggregate_Datetime)
export(bright_dark_period)
export(create_Time.data)
export(count_difftime)
export(create_Timedata)
export(cut_Datetime)
export(data2reference)
export(dominant_epoch)
export(duration_above_threshold)
export(filter_Date)
export(filter_Datetime)
export(filter_Datetime_multiple)
export(filter_Time)
export(gap_finder)
export(gap_handler)
export(gapless_Datetimes)
export(gg_day)
export(gg_days)
export(gg_overview)
export(import.ActLumus)
export(import.Dataset)
export(import.LYS)
export(import.Statechanges)
export(interdaily_stability)
export(interval2state)
export(intradaily_variability)
export(join.datasets)
export(import)
export(import_Dataset)
export(import_Statechanges)
export(interval2state)
export(join_datasets)
export(sc2interval)
export(sleep.int2Brown)
export(sleep_int2Brown)
export(symlog_trans)
export(timing_above_threshold)
importFrom(magrittr,"%>%")
24 changes: 21 additions & 3 deletions NEWS.md
@@ -1,5 +1,23 @@
# LightLogR 0.2.2.9000

* Bugfix for `LiDo` import

* Added import support for new devices: `LiDo`, `DeLux`, and `Speccy`

* Removed minor inconsistencies in naming conventions. All imported columns now have syntactic names.

* Added an option to all `gap` functions to extend the gapless Datetime range to full days.

# LightLogR 0.2.1.9000

* Exports the previously internal function `count_difftime()`, which is the basis for `dominant_epoch()`. Whereas the latter returns only the most common epoch, `count_difftime()` returns a table with the counts of all epochs. This is useful in conjunction with `gap_finder()` to check the distribution of data intervals.
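
  A minimal sketch of that workflow (assuming, as for the other LightLogR verbs, that both functions take the possibly grouped dataset as their first argument):

  library(LightLogR)
  library(dplyr)

  #most common epoch per Id
  sample.data.environment %>% group_by(Id) %>% dominant_epoch()

  #counts of all epochs per Id - handy alongside gap_finder() when judging gaps
  sample.data.environment %>% group_by(Id) %>% count_difftime()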

* Added the `gg_days()` function to visualize multiple days of data in a single plot. Alongside come two helper functions, `Datetime_limits()` and `Datetime_breaks()`, to set the limits and breaks of the x-axis.
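
  For instance (a sketch; `gg_days()` is assumed to take the dataset as its first argument, with the two helpers feeding its x-axis settings):

  library(LightLogR)
  #multiple days of data in one plot
  sample.data.environment %>% gg_days()
  #Datetime_breaks() and Datetime_limits() are assumed to produce break/limit
  #values that can be handed to the plot's x-axis scale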

* Added the `filter_Datetime_multiple()` function to filter for multiple Datetime ranges depending on certain conditions, e.g. different filter cutoffs for different participants. It wraps around `filter_Datetime()` or `filter_Date()`.
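
  A hypothetical sketch (the structure of `arguments`, the `only_Id`/`start` names, and the datetimes are assumptions; each argument set is forwarded to `filter_Datetime()`):

  library(LightLogR)
  sample.data.environment %>%
    filter_Datetime_multiple(
      arguments = list(
        list(only_Id = quote(Id == "Participant"), start = "2023-08-29 12:00:00"),
        list(only_Id = quote(Id == "Environment"), start = "2023-08-30 12:00:00")
      )
    )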

* Reworked the internals of the light logger data import functions. They now use a more straightforward function-factory approach. For users, the only visible change is that device-specific functions now have the form `import$device()` instead of the old `import.device()`.
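
  For illustration (the file name and the `tz` argument are placeholders):

  library(LightLogR)
  #old form, no longer exported:
  #dataset <- import.ActLumus("participant_01.txt", tz = "Europe/Berlin")
  #new form:
  dataset <- import$ActLumus("participant_01.txt", tz = "Europe/Berlin")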

* Added the `symlog_trans()` function from a [post on stack overflow](https://stackoverflow.com/a/14674703). This function leads to a better visualization of light logger data, as a logarithmic transformation is necessary, but values of 0 are common. The function was integrated as a default for `gg_day()` and will likely be the basis of upcoming visualization functions.
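
  A sketch of manual use, assuming `symlog_trans()` returns a transformation object that ggplot2 scales accept (`gg_day()` applies it by default):

  library(LightLogR)
  library(ggplot2)
  sample.data.environment %>%
    ggplot(aes(x = Datetime, y = MEDI, color = Id)) +
    geom_line() +
    scale_y_continuous(trans = symlog_trans())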

* Added the `aggregate_Datetime()` function to aggregate data to a given time interval.
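
  For example (the name of the interval argument, `unit`, is an assumption):

  library(LightLogR)
  #collapse the raw epochs into 5-minute aggregates
  sample.data.environment %>% aggregate_Datetime(unit = "5 mins")
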
@@ -16,11 +34,11 @@

* To `filter_Datetime()` and `filter_Date()` added the option to filter for group-specific dates.

* Added the family of functions around `States` and `Reference` to import, process, and add states to light logger data, like sleep/wake times, wear times, or other data. This family includes `import.Statechanges()`, `sc2interval()`, `ìnterval2state()`, `data2reference()`, `sleep.int2Brown()`, `Brown.check()`, `Brown.rec()`, and `Brown2reference()`.
* Added the family of functions around `States` and `Reference` to import, process, and add states to light logger data, like sleep/wake times, wear times, or other data. This family includes `import_Statechanges()`, `sc2interval()`, `interval2state()`, `data2reference()`, `sleep_int2Brown()`, `Brown_check()`, `Brown_rec()`, and `Brown2reference()`.
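
  A sketch of how these functions chain together (`dataset` stands for an already imported light-logger dataset; the diary file name and import arguments are placeholders, and a real call to `import_Statechanges()` will typically need more file-specific arguments):

  library(LightLogR)
  #sleep/wake diary -> state intervals -> Brown et al. (2022) states
  states <- import_Statechanges("sleep_diary.csv", tz = "Europe/Berlin") %>%
    sc2interval() %>%
    sleep_int2Brown()

  #map the interval states onto the light-logger data, then add the
  #recommended-illuminance reference and its check column
  dataset <- dataset %>%
    interval2state(State.interval.dataset = states, State.colname = State.Brown) %>%
    Brown2reference()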

* Added the Article/Vignette "What´s in a Day" to demonstrate the LightLogR workflow.

* Added the convenience function `create_Time.data()` to create a Time-of-Day column in datasets.
* Added the convenience function `create_Timedata()` to create a Time-of-Day column in datasets.
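
  For example (`output.dataset = FALSE` is assumed to return only the new Time-of-Day column rather than the full dataset):

  library(LightLogR)
  sample.data.environment %>% create_Timedata()
  sample.data.environment %>% create_Timedata(output.dataset = FALSE)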

* Added the family of `filter_Datetime()`, `filter_Date()` and `filter_Time()` functions to easily filter datasets.

@@ -30,7 +48,7 @@

* Added automatic ID creation at import and streamlined the `import` functions.

* Added the function `join.datasets` to combine imported datasets with sensible constraints.
* Added the function `join_datasets` to combine imported datasets with sensible constraints.

# LightLogR 0.1.1.9000

40 changes: 20 additions & 20 deletions R/Brown.R
@@ -6,7 +6,7 @@
#' that checks if the illuminance of the dataset is within the recommended
#' illuminance levels, and a column that gives a label to the reference.
#'
#' On a lower level, the function uses [Brown.rec()] and [Brown.check()] to
#' On a lower level, the function uses [Brown_rec()] and [Brown_check()] to
#' create the required information.
#'
#' @param dataset A dataframe that contains a column with the Brown states
@@ -20,10 +20,10 @@
#' throw an error.
#' @param Reference.label The label that will be used for the reference. Expects
#' a `character` scalar.
#' @param overwrite.Reference If `TRUE` (defaults to `FALSE`), the function will
#' @param overwrite If `TRUE` (defaults to `FALSE`), the function will
#' overwrite the `Brown.rec.colname` column if it already exists.
#' @param ... Additional arguments that will be passed to [Brown.rec()] and
#' [Brown.check()]. This is only relevant to correct the names of the daytime
#' @param ... Additional arguments that will be passed to [Brown_rec()] and
#' [Brown_check()]. This is only relevant to correct the names of the daytime
#' states or the thresholds used within these states. See the documentation of
#' these functions for more information.
#'
@@ -46,7 +46,7 @@ Brown2reference <- function(dataset,
Brown.state.colname = State.Brown,
Brown.rec.colname = Reference,
Reference.label = "Brown et al. (2022)",
overwrite.Reference = FALSE,
overwrite = FALSE,
...) {


@@ -58,10 +58,10 @@ Brown2reference <- function(dataset,
Reference.label.column.str <- paste0(Brown.rec.colname.str, ".check")

#give an error or warning if the reference column is present
if(Brown.rec.colname.str %in% names(dataset) & !overwrite.Reference)
stop("A Reference column with the given (or default) name is already part of the dataset. Please remove the column, choose a different name, or set `overwrite.Reference = TRUE`")
if(Brown.rec.colname.str %in% names(dataset) & !overwrite)
stop("A Reference column with the given (or default) name is already part of the dataset. Please remove the column, choose a different name, or set `overwrite = TRUE`")
if(Brown.rec.colname.str %in% names(dataset))
warning("A Reference column with the given (or default) name is already part of the dataset. It is overwritten, because `overwrite.Reference = TRUE ` was set.")
warning("A Reference column with the given (or default) name is already part of the dataset. It is overwritten, because `overwrite = TRUE ` was set.")

stopifnot(
"dataset is not a dataframe" = is.data.frame(dataset),
@@ -71,8 +71,8 @@
Brown.state.colname.defused %in% names(dataset),
"MEDI.colname must be a numeric column" =
is.numeric(dataset[[MEDI.colname.defused]]),
"overwrite.Reference must be a logical" =
is.logical(overwrite.Reference)
"overwrite must be a logical" =
is.logical(overwrite)
)

#check whether the dataset has the right labels
@@ -83,15 +83,15 @@
dataset <- dataset %>%
dplyr::mutate(
{{ Brown.rec.colname }} :=
Brown.rec(state = {{ Brown.state.colname }},
Brown_rec(state = {{ Brown.state.colname }},
...)
)

#add a column with the checks
dataset <- dataset %>%
dplyr::mutate(
!!Reference.label.column.str :=
Brown.check(
Brown_check(
value = {{ MEDI.colname }},
state = {{ Brown.state.colname }},
...)
@@ -135,10 +135,10 @@ Brown2reference <- function(dataset,
#' @examples
#' states <- c("day", "evening", "night", "day")
#' values <- c(100, 10, 1, 300)
#' Brown.check(values, states)
#' Brown.check(values, states, Brown.day.th = 100)
#' Brown_check(values, states)
#' Brown_check(values, states, Brown.day.th = 100)
#'
Brown.check <- function(value,
Brown_check <- function(value,
state,
Brown.day = "day",
Brown.evening = "evening",
@@ -176,7 +176,7 @@ Brown.check <- function(value,
#' illuminance/MEDI levels by Brown et al. (2022) for a given state. The
#' function is vectorized.
#'
#' @inheritParams Brown.check
#' @inheritParams Brown_check
#' @param state The state from Brown et al. (2022). Needs to be a character
#' vector.
#'
@@ -190,10 +190,10 @@ Brown.check <- function(value,
#' @family Brown
#' @examples
#' states <- c("day", "evening", "night")
#' Brown.rec(states)
#' Brown.rec(states, Brown.day.th = 100)
#' Brown_rec(states)
#' Brown_rec(states, Brown.day.th = 100)
#'
Brown.rec <- function(state,
Brown_rec <- function(state,
Brown.day = "day",
Brown.evening = "evening",
Brown.night = "night",
@@ -215,4 +215,4 @@ Brown.rec <- function(state,
state == Brown.night ~ Brown.night.th,
.default = NA
)
}
}
2 changes: 1 addition & 1 deletion R/aaa.r
@@ -1,4 +1,4 @@
Time <- mEDI <- Time.data <- Datetime <- timestamp <- tz <- Day.data <- `DATE/TIME` <- n <- Datetime.rounded <- id <- sleep.colname.string <- file.name <- Interval <- original.datapoints.fleeting <- MEDI <- State.Brown <- Reference <- Reference.check <- Id <- Start.date.shift <- data <- Shift <- `MELANOPIC EDI` <- State <- group <- End <- Start <- Quant.x <- Quant.y <- is.implicit <- group.indices <- Id2 <- gap.id <- start <- end <- NULL
Time <- mEDI <- Time.data <- Datetime <- timestamp <- tz <- Day.data <- `DATE/TIME` <- n <- Datetime.rounded <- id <- sleep.colname.string <- file.name <- Interval <- original.datapoints.fleeting <- MEDI <- State.Brown <- Reference <- Reference.check <- Id <- Start.date.shift <- data <- Shift <- `MELANOPIC EDI` <- State <- group <- End <- Start <- Quant.x <- Quant.y <- is.implicit <- group.indices <- Id2 <- gap.id <- start <- end <- path <- auto.id <- n_max <- manual.id <- silent <- NULL

empty_function <- function() {
rsconnect::accountInfo()
4 changes: 2 additions & 2 deletions R/create_Time.data.R → R/create_Timedata.R
@@ -12,8 +12,8 @@
#' @export
#' @importFrom rlang :=
#' @examples
#' sample.data.environment %>% create_Time.data
create_Time.data <- function(dataset,
#' sample.data.environment %>% create_Timedata()
create_Timedata <- function(dataset,
Datetime.colname = Datetime,
Time.data = Time.data,
output.dataset = TRUE) {
9 changes: 7 additions & 2 deletions R/data.r
@@ -13,15 +13,20 @@
#' @format `sample.data.environment` A tibble with 69,120 rows and 3 columns:
#' \describe{
#' \item{Datetime}{POSIXct Datetime}
#' \item{MELANOPIC EDI}{melanopic EDI measurement data. Unit is lux.}
#' \item{Source}{A `character` vector indicating whether the data is from the `Participant` or from the `Environment`.}
#' \item{MEDI}{melanopic EDI measurement data. Unit is lux.}
#' \item{Id}{A `character` vector indicating whether the data is from the `Participant` or from the `Environment`.}
#' }
#' @source <https://www.tscnlab.org>
"sample.data.environment"


#' A vector of all supported devices for import functions
#'
#' These are all supported devices where there is a dedicated import function.
#' Import functions can be called either through [import_Dataset()] with the
#' respective `device = "device"` argument, or directly, e.g.,
#' `import$ActLumus()`.
#'
#' @format `supported.devices` A character vector, listing all supported devices
#' \describe{
#'   \item{supported.devices}{strings}
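
A sketch of the two documented call forms (the file name, `tz` value, and argument order are placeholders):

library(LightLogR)
supported.devices  #character vector of devices with a dedicated import function
#via the generic wrapper ...
data <- import_Dataset(device = "LYS", "participant_01.csv", tz = "Europe/Berlin")
#... or via the device-specific function directly
data <- import$LYS("participant_01.csv", tz = "Europe/Berlin")
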
44 changes: 20 additions & 24 deletions R/data2reference.R
@@ -23,12 +23,12 @@
#' Need to be the same in both sets. Default is `Datetime`.
#' @param Data.column Data column in the `Reference.data` that is then converted
#' to a reference. Default is `MEDI`.
#' @param ID.column Name of the `ID.column` in both the `dataset` and the
#' @param Id.column Name of the `Id.column` in both the `dataset` and the
#' `Reference.data`.
#' @param Reference.column Name of the reference column that will be added to
#' the `dataset`. Default is `Reference`. Cannot be the same as any other
#' column in the `dataset` and will throw an error if it is.
#' @param overwrite.Reference If `TRUE` (defaults to `FALSE`), the function will
#' @param overwrite If `TRUE` (defaults to `FALSE`), the function will
#' overwrite the `Reference.colname` column if it already exists.
#' @param filter.expression.reference Expression that is used to filter the
#' `Reference.data` before it is used as reference. Default is `NULL`. See
@@ -65,45 +65,41 @@
#'
#' gg_reference <- function(dataset) {
#' dataset %>%
#' ggplot(aes(x = Datetime, y = `MELANOPIC EDI`, color = Source)) +
#' geom_line(size = 1) +
#' ggplot(aes(x = Datetime, y = MEDI, color = Id)) +
#' geom_line(linewidth = 1) +
#' geom_line(aes(y = Reference), color = "black", size = 0.25, linetype = "dashed") +
#' theme_minimal() + facet_wrap(~ Source, scales = "free_y")
#' theme_minimal() + facet_wrap(~ Id, scales = "free_y")
#' }
#'
#' #in this example, each data point is its own reference
#' sample.data.environment %>% group_by(Source) %>%
#' data2reference(Data.column = `MELANOPIC EDI`, ID.column = Source) %>%
#' sample.data.environment %>%
#' data2reference() %>%
#' gg_reference()
#'
#' #in this example, the first day of each ID is the reference for the other days
#' #this requires grouping of the Data by Day, which is then specified in across.id
#' #also, shift.start needs to be set to TRUE, to shift the reference data to the
#' #start of the groupings
#' sample.data.environment %>% group_by(Source, Day = as_date(Datetime)) %>%
#' sample.data.environment %>% group_by(Id, Day = as_date(Datetime)) %>%
#' data2reference(
#' Data.column = `MELANOPIC EDI`,
#' ID.column = Source,
#' filter.expression.reference = as_date(Datetime) == min(as_date(Datetime)),
#' shift.start = TRUE,
#' across.id = "Day") %>%
#' gg_reference()
#'
#' #in this example, the Environment Data will be used as a reference
#' sample.data.environment %>% group_by(Source) %>%
#' sample.data.environment %>%
#' data2reference(
#' Data.column = `MELANOPIC EDI`,
#' ID.column = Source,
#' filter.expression.reference = Source == "Environment",
#' filter.expression.reference = Id == "Environment",
#' across.id = TRUE) %>%
#' gg_reference()
data2reference <- function(dataset,
Reference.data = dataset,
Datetime.column = Datetime,
Data.column = MEDI,
ID.column = Id,
Id.column = Id,
Reference.column = Reference,
overwrite.Reference = FALSE,
overwrite = FALSE,
filter.expression.reference = NULL,
across.id = FALSE,
shift.start = FALSE,
@@ -115,10 +111,10 @@ data2reference <- function(dataset,

Datetime.column.str <- colname.defused({{ Datetime.column }})
Data.column.str <- colname.defused({{ Data.column }})
ID.column.str <- colname.defused({{ ID.column }})
Id.column.str <- colname.defused({{ Id.column }})
Reference.column.str <- colname.defused({{ Reference.column }})

existing.names <- c(Datetime.column.str, Data.column.str, ID.column.str)
existing.names <- c(Datetime.column.str, Data.column.str, Id.column.str)

#give an error if dataset is not a data.frame
if(!is.data.frame(dataset)) stop("dataset is not a data.frame")
@@ -136,10 +132,10 @@ data2reference <- function(dataset,
if(!is.numeric(length.restriction.seconds)) stop("length.restriction.seconds is not a numeric")

#give an error or warning if the reference column is present
if(Reference.column.str %in% names(dataset) & !overwrite.Reference)
if(Reference.column.str %in% names(dataset) & !overwrite)
stop("A Reference column with the given (or default) name is already part of the dataset. Please remove the column or choose a different name")
if(Reference.column.str %in% names(dataset))
warning("A Reference column with the given (or default) name is already part of the dataset. It is overwritten, because `overwrite.Reference = TRUE ` was set.")
warning("A Reference column with the given (or default) name is already part of the dataset. It is overwritten, because `overwrite = TRUE ` was set.")


# Manipulation ----------------------------------------------------------
@@ -148,12 +144,12 @@ data2reference <- function(dataset,

#if the dataset has no grouping, group by the ID column
if(dplyr::n_groups(dataset) == 0) {
dataset <- dataset %>% dplyr::group_by({{ ID.column }})
dataset <- dataset %>% dplyr::group_by({{ Id.column }})
}

#if the Reference.data has no grouping, group by the ID column
if(dplyr::n_groups(Reference.data) == 0) {
Reference.data <- Reference.data %>% dplyr::group_by({{ ID.column }})
Reference.data <- Reference.data %>% dplyr::group_by({{ Id.column }})
}

#set arguments based on the across.id argument
@@ -311,8 +307,8 @@ data2reference <- function(dataset,
interval2state(
State.interval.dataset = Reference.data,
State.colname = {{ Reference.column }},
ID.colname.dataset = {{ ID.column }},
ID.colname.interval = {{ ID.column }})
Id.colname.dataset = {{ Id.column }},
Id.colname.interval = {{ Id.column }})

#if there is a reference label given, apply it to the dataset
dataset <-