
Commit

typo fixes and slight changes to docs for clarity
tktran11 committed Nov 28, 2023
1 parent 54b79e4 commit 18cc398
Showing 2 changed files with 25 additions and 25 deletions.
28 changes: 14 additions & 14 deletions 00_core.ipynb
@@ -514,7 +514,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, with log dates starting at the default value of 4 (4:00 AM), we see that two logs from very early morning on 12-09-2017 are counted as being logged on 12-08-2017 instead."
"In this example, with log dates starting at the default value of 4 (4:00 AM), we see that two logs from very early morning on 2017-12-09 are counted as being logged on 2017-12-08 instead."
]
},
{
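To make the example above concrete, here is a minimal sketch (illustrative only, not treets' actual find_date implementation) of how shifting timestamps back by h hours before taking the calendar date pushes early-morning logs onto the previous log date:

```python
import pandas as pd

# Two hypothetical early-morning logs on 2017-12-09, plus one evening log
logs = pd.Series(pd.to_datetime([
    "2017-12-08 19:10:00",
    "2017-12-09 01:30:00",
    "2017-12-09 02:45:00",
]))

h = 4  # default: a log 'day' starts at 4:00 AM

# Shift back by h hours, then take the calendar date of the shifted timestamp
log_date = (logs - pd.Timedelta(hours=h)).dt.date
print(log_date)  # all three logs land on 2017-12-08
```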
@@ -681,7 +681,7 @@
" h:int = 4,\n",
" date_col:int = 5) -> pd.Series:\n",
" \"\"\"\n",
" Extracts time from a datetime column and after shifting datetime by 'h' hours.\n",
" Extracts time from a datetime column after shifting datetime by 'h' hours.\n",
" A day starts 'h' hours early if 'h' is negative, or 'h' hours later if 'h' is\n",
" positive.\n",
" \n",
@@ -1245,7 +1245,7 @@
" dataframes are read as is.\n",
" h\n",
" Number of hours to shift the definition of 'date' by. h = 4 would indicate that a log date begins at\n",
" 4:00 AM and ends the following calendar day at 3:59:59. Float representations of time would therefore\n",
" 4:00 AM and ends the following calendar day at 3:59:59 AM. Float representations of time would therefore\n",
" go from 4.0 (inclusive) to 28.0 (exclusive) to represent 'date' membership for days shifted from their\n",
" original calendar date.\n",
" identifier\n",
@@ -1259,7 +1259,7 @@
" Returns\n",
" -------\n",
" food_data\n",
" Dataframe with additional date, flat time, and week from start columns.\n",
" Dataframe with additional date, float time, and week from start columns.\n",
" \"\"\"\n",
" food_data = file_loader(data_source)\n",
" # identifier column(s) should be 0 and 1, with 1 being the study specific identifier\n",
@@ -1415,7 +1415,7 @@
" time_col:int = 7) -> np.array:\n",
" \"\"\"\n",
" Calculates if each log is considered to be within a 'good logging day'. A log day is considered 'good' if there \n",
" are more than the minimum number of required logs, with a minimum specified hour separation between the first and last\n",
" are at least the minimum number of required logs, with a minimum specified hour separation between the first and last\n",
" log for that log date. It is recommended that you use find_date and find_float_time to generate necessary date and\n",
" time columns for this function.\n",
" \n",
@@ -2284,13 +2284,13 @@
" Parameters\n",
" ----------\n",
" data_source\n",
" String file or folder path. Single .json or .csv paths create a pd.DataFrame. \n",
" Folder paths with files matching the input pattern are read together into a single pd.DataFrame. Existing\n",
" String file or folder path. Single .json or .csv paths create a pd.DataFrame. Folder paths\n",
" with files matching the input pattern are read together into a single pd.DataFrame. Existing\n",
" dataframes are read as is. A column 'food_type' is required to be within the data.\n",
" \n",
" food_type\n",
" A single food type, or list of food types. Valid types are 'f': food, 'b': beverage,\n",
" 'w': water, and 'm': medication.\n",
" A single food type, or list of food types. Valid types are 'f': food, 'b': beverage, 'w': water,\n",
" and 'm': medication.\n",
" \n",
" Returns\n",
" -------\n",
@@ -2332,7 +2332,7 @@
"1. 'm': Medication\n",
"\n",
"\n",
"Flavored water beverages such as La Croix are counted as 'water' as not as 'beverage'."
"Flavored water beverages such as La Croix are counted as 'water' and not as 'beverage'."
]
},
{
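As a quick illustration of the food_type filter described above (a sketch with made-up data, not the body of get_types):

```python
import pandas as pd

def filter_food_types(df: pd.DataFrame, food_type) -> pd.DataFrame:
    """Keep only rows whose 'food_type' is among the requested type(s)."""
    if isinstance(food_type, str):      # accept a single type like 'w' as well as a list
        food_type = [food_type]
    return df[df["food_type"].isin(food_type)]

logs = pd.DataFrame({
    "desc": ["oatmeal", "la croix", "coffee", "aspirin"],
    "food_type": ["f", "w", "b", "m"],
})
print(filter_food_types(logs, ["f", "b"]))  # food and beverage rows only
```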
@@ -3327,7 +3327,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The second product of this function is three lists that outline which days are not compliant with one of the definitions above. The first list (index 0) consists of dates that are not 'good' logging days, the second contains days that are not 'good' window days. The final list consists of dates that are not adherent (neither 'good' window nor 'good' logging dates)"
"The second product of this function is three lists that outline which days are not compliant with one of the definitions above. The first list (index 0) consists of dates that are not 'good' logging days, the second contains days that are not 'good' window days. The final list consists of dates that are not adherent (neither 'good' window nor 'good' logging dates)."
]
},
{
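The relationship between the three lists can be pictured with simple set arithmetic on hypothetical dates (illustrative only): the non-adherent list is the overlap of the first two.

```python
import datetime as dt

all_dates    = {dt.date(2021, 3, d) for d in range(1, 8)}
good_logging = {dt.date(2021, 3, d) for d in (1, 2, 3, 5)}
good_window  = {dt.date(2021, 3, d) for d in (2, 3, 4, 5)}

bad_logging_days  = sorted(all_dates - good_logging)   # list index 0
bad_window_days   = sorted(all_dates - good_window)    # list index 1
non_adherent_days = sorted((all_dates - good_logging) & (all_dates - good_window))  # fails both

print(bad_logging_days, bad_window_days, non_adherent_days, sep="\n")
```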
@@ -3689,7 +3689,7 @@
" date_col:int = 6,\n",
" time_col:int = 7) -> pd.DataFrame:\n",
" \"\"\"\n",
" Reports the number of good 'logging' days for each user, in descending order based on number of 'good' logging days.\n",
" Reports the number of 'good' logging days for each user, in descending order based on number of 'good' logging days.\n",
" \n",
" Parameters\n",
" ----------\n",
@@ -4849,7 +4849,7 @@
" report_level:int = 2,\n",
" txt:bool = False) -> pd.DataFrame:\n",
" \"\"\"\n",
" Summarizes participant data for each experiment phase and eating window assignment, Summary includes number of days,\n",
" Summarizes participant data for each experiment phase and eating window assignment. Summary includes number of days,\n",
" total number of logs, number of food/beverage logs, number of medication logs, number of water logs,\n",
" eating window duration information, first and last caloric log information, and adherence.\n",
" \n",
@@ -5512,7 +5512,7 @@
" time_col:int = 7) -> matplotlib.figure.Figure:\n",
" \"\"\"\n",
" Represents mean and standard deviation of first caloric intake time for each participant\n",
" as a scatter plot, with the x-axis as participants and the y-axis as time.\n",
" as a scatter plot, with participants as the x-axis and time as the y-axis.\n",
" It is recommended that you use find_date and find_float_time to generate necessary date and\n",
" time columns for this function.\n",
" \n",
22 changes: 11 additions & 11 deletions treets/core.py
@@ -129,7 +129,7 @@ def find_float_time(data_source:str|pd.DataFrame,
h:int = 4,
date_col:int = 5) -> pd.Series:
"""
Extracts time from a datetime column and after shifting datetime by 'h' hours.
Extracts time from a datetime column after shifting datetime by 'h' hours.
A day starts 'h' hours early if 'h' is negative, or 'h' hours later if 'h' is
positive.
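A rough sketch of the float-time convention this docstring describes (assumed behavior, not the library's exact code): hours at or after h keep their usual value, while hours before h get 24 added so they stay with the previous log date, giving the [4.0, 28.0) range mentioned for load_food_data below.

```python
import pandas as pd

def float_time(ts: pd.Series, h: int = 4) -> pd.Series:
    """Hour of day as a float in [h, h + 24), e.g. [4.0, 28.0) when h = 4."""
    hours = ts.dt.hour + ts.dt.minute / 60 + ts.dt.second / 3600
    # Times earlier than 'h' belong to the previous log date, so push them past 24
    return hours.where(hours >= h, hours + 24)

ts = pd.Series(pd.to_datetime([
    "2017-12-08 04:00:00",   # ->  4.0
    "2017-12-08 20:30:00",   # -> 20.5
    "2017-12-09 01:30:00",   # -> 25.5
]))
print(float_time(ts))
```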
@@ -256,7 +256,7 @@ def load_food_data(data_source:str|pd.DataFrame,
dataframes are read as is.
h
Number of hours to shift the definition of 'date' by. h = 4 would indicate that a log date begins at
4:00 AM and ends the following calendar day at 3:59:59. Float representations of time would therefore
4:00 AM and ends the following calendar day at 3:59:59 AM. Float representations of time would therefore
go from 4.0 (inclusive) to 28.0 (exclusive) to represent 'date' membership for days shifted from their
original calendar date.
identifier
@@ -270,7 +270,7 @@ def load_food_data(data_source:str|pd.DataFrame,
Returns
-------
food_data
Dataframe with additional date, flat time, and week from start columns.
Dataframe with additional date, float time, and week from start columns.
"""
food_data = file_loader(data_source)
# identifier column(s) should be 0 and 1, with 1 being the study specific identifier
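One plausible reading of the 'week from start' column, shown purely for illustration (the library may instead count from a study start date): the number of whole weeks between each log's date and that participant's earliest logged date. Column names here ('PID', 'date', 'week_from_start') are assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "PID":  ["p01", "p01", "p01", "p02"],
    "date": pd.to_datetime(["2021-03-01", "2021-03-05", "2021-03-10", "2021-04-02"]),
})

# Whole weeks elapsed since each participant's first logged date, starting at week 1
start = df.groupby("PID")["date"].transform("min")
df["week_from_start"] = (df["date"] - start).dt.days // 7 + 1
print(df)
```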
@@ -324,7 +324,7 @@ def in_good_logging_day(data_source:str|pd.DataFrame,
time_col:int = 7) -> np.array:
"""
Calculates if each log is considered to be within a 'good logging day'. A log day is considered 'good' if there
are more than the minimum number of required logs, with a minimum specified hour separation between the first and last
are at least the minimum number of required logs, with a minimum specified hour separation between the first and last
log for that log date. It is recommended that you use find_date and find_float_time to generate necessary date and
time columns for this function.
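The 'good logging day' rule reads naturally as a grouped check, sketched below with assumed column names ('PID', 'date', 'float_time') and parameter names; the real function returns the per-log flags as an np.array.

```python
import pandas as pd

def good_logging_day_flags(df: pd.DataFrame,
                           min_log_num: int = 2,
                           min_separation: float = 4.0) -> pd.Series:
    """Per-log flag: True if that log's (participant, date) group qualifies."""
    grouped = df.groupby(["PID", "date"])["float_time"]
    enough_logs = grouped.transform("count") >= min_log_num
    spread_ok = grouped.transform("max") - grouped.transform("min") >= min_separation
    return enough_logs & spread_ok   # .to_numpy() would give an array of booleans
```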
@@ -996,13 +996,13 @@ def get_types(data_source:str|pd.DataFrame,
Parameters
----------
data_source
String file or folder path. Single .json or .csv paths create a pd.DataFrame.
Folder paths with files matching the input pattern are read together into a single pd.DataFrame. Existing
String file or folder path. Single .json or .csv paths create a pd.DataFrame. Folder paths
with files matching the input pattern are read together into a single pd.DataFrame. Existing
dataframes are read as is. A column 'food_type' is required to be within the data.
food_type
A single food type, or list of food types. Valid types are 'f': food, 'b': beverage,
'w': water, and 'm': medication.
A single food type, or list of food types. Valid types are 'f': food, 'b': beverage, 'w': water,
and 'm': medication.
Returns
-------
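The folder-path behavior described for data_source can be pictured as a glob-and-concatenate step, roughly as below (file_loader's actual signature and pattern handling may differ, and .json files would go through pd.read_json instead):

```python
import glob
import os
import pandas as pd

def load_folder(folder: str, pattern: str = "*.csv") -> pd.DataFrame:
    """Read every file in 'folder' matching 'pattern' into one DataFrame."""
    paths = sorted(glob.glob(os.path.join(folder, pattern)))
    frames = [pd.read_csv(p) for p in paths]
    return pd.concat(frames, ignore_index=True)
```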
@@ -1625,7 +1625,7 @@ def users_sorted_by_logging(data_source:str|pd.DataFrame,
date_col:int = 6,
time_col:int = 7) -> pd.DataFrame:
"""
Reports the number of good 'logging' days for each user, in descending order based on number of 'good' logging days.
Reports the number of 'good' logging days for each user, in descending order based on number of 'good' logging days.
Parameters
----------
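Conceptually, the ranking this function reports could be produced with a groupby along these lines (a sketch with assumed column names, including a precomputed per-log 'in_good_logging_day' flag):

```python
import pandas as pd

logs = pd.DataFrame({
    "PID":  ["p01", "p01", "p01", "p02", "p02"],
    "date": ["2021-03-01", "2021-03-01", "2021-03-02", "2021-03-01", "2021-03-02"],
    "in_good_logging_day": [True, True, False, True, True],
})

good_days = (
    logs.drop_duplicates(["PID", "date"])             # one row per participant-date
        .groupby("PID")["in_good_logging_day"].sum()  # count of 'good' dates
        .sort_values(ascending=False)                 # most 'good' days first
        .rename("good_logging_days")
        .reset_index()
)
print(good_days)
```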
@@ -1992,7 +1992,7 @@ def summarize_data_with_experiment_phases(food_data:pd.DataFrame,
report_level:int = 2,
txt:bool = False) -> pd.DataFrame:
"""
Summarizes participant data for each experiment phase and eating window assignment, Summary includes number of days,
Summarizes participant data for each experiment phase and eating window assignment. Summary includes number of days,
total number of logs, number of food/beverage logs, number of medication logs, number of water logs,
eating window duration information, first and last caloric log information, and adherence.
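The kind of per-phase roll-up this summary implies can be sketched with a named aggregation (columns, phases, and metrics below are illustrative and simplified; the real summary covers more, including eating windows and adherence):

```python
import pandas as pd

logs = pd.DataFrame({
    "PID":        ["p01"] * 6,
    "phase":      ["baseline"] * 3 + ["intervention"] * 3,
    "food_type":  ["f", "w", "b", "f", "m", "f"],
    "float_time": [8.5, 12.0, 20.25, 9.0, 13.5, 19.75],
})

summary = logs.groupby(["PID", "phase"]).agg(
    total_logs=("food_type", "size"),
    caloric_logs=("food_type", lambda s: s.isin(["f", "b"]).sum()),
    first_log=("float_time", "min"),   # simplified: over all logs, not caloric logs only
    last_log=("float_time", "max"),
)
print(summary)
```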
@@ -2228,7 +2228,7 @@ def first_cal_mean_with_error_bar(data_source:str|pd.DataFrame,
time_col:int = 7) -> matplotlib.figure.Figure:
"""
Represents mean and standard deviation of first caloric intake time for each participant
as a scatter plot, with the x-axis as participants and the y-axis as time.
as a scatter plot, with participants as the x-axis and time as the y-axis.
It is recommended that you use find_date and find_float_time to generate necessary date and
time columns for this function.
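matplotlib's errorbar gives a close approximation of the plot this docstring describes (a generic sketch with made-up data, not the library's styling):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical first-caloric-intake float times per participant
first_cal = pd.DataFrame({
    "PID": ["p01"] * 4 + ["p02"] * 4,
    "first_cal_time": [8.5, 9.0, 7.75, 8.25, 11.0, 10.5, 12.25, 11.75],
})

stats = first_cal.groupby("PID")["first_cal_time"].agg(["mean", "std"])

fig, ax = plt.subplots()
ax.errorbar(stats.index, stats["mean"], yerr=stats["std"], fmt="o", capsize=4)
ax.set_xlabel("Participant")                         # participants on the x-axis
ax.set_ylabel("First caloric intake (float hours)")  # time on the y-axis
fig.savefig("first_cal_mean.png")
```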
