Commit af7cf8b: "fix tests and docs"

mschwoer committed Nov 18, 2024 (1 parent: 06aa614)
Showing 5 changed files with 7 additions and 7 deletions.
docs/api_reference/dataset.rst (1 addition, 1 deletion)

```diff
@@ -5,7 +5,7 @@ DataSet
 
 DataSet
 ~~~~~~~~~~~
-.. automodule:: alphastats.DataSet
+.. automodule:: alphastats.dataset.dataset.DataSet
    :members:
    :undoc-members:
    :inherited-members:
```
docs/import_data.md (3 additions, 3 deletions)

````diff
@@ -12,7 +12,7 @@ maxquant_data = alphastats.MaxQuantLoader(
     file="testfiles/maxquant_proteinGroups.txt"
 )
 
-dataset = alphastats.DataSet(
+dataset = alphastats.dataset.dataset.DataSet(
     loader = maxquant_data,
     metadata_path_or_df="../testfiles/maxquant/metadata.xlsx",
     sample_column="sample"
@@ -115,7 +115,7 @@ To compare samples across various conditions in the downstream analysis, a metad
 
 ## Creating a DataSet
 
-The whole downstream analysis can be performed on the alphastats.DataSet. To create the DataSet you need to provide the loader object as well as the metadata.
+The whole downstream analysis can be performed on the alphastats.dataset.dataset.DataSet. To create the DataSet you need to provide the loader object as well as the metadata.
 
 ```python
 import alphastats
@@ -124,7 +124,7 @@ maxquant_data = alphastats.MaxQuantLoader(
     file="testfiles/maxquant_proteinGroups.txt"
 )
 
-dataset = alphastats.DataSet(
+dataset = alphastats.dataset.dataset.DataSet(
     loader = maxquant_data,
     metadata_path_or_df="../testfiles/maxquant/metadata.xlsx",
     sample_column="sample"
````
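A rename like `alphastats.DataSet` to `alphastats.dataset.dataset.DataSet` tends to leave stale dotted paths behind in docs and notebooks. A quick audit can be sketched with `grep`; the sample directory and file below are hypothetical stand-ins for a real docs tree. Because the new path inserts lowercase `dataset.dataset.` before the class name, a case-sensitive match on `alphastats\.DataSet` hits only the old spelling.

```shell
# Sketch: find leftover references to the old import path after a module move.
# /tmp/rename_audit and example.md are hypothetical stand-ins for a docs tree.
mkdir -p /tmp/rename_audit
echo 'dataset = alphastats.DataSet(' > /tmp/rename_audit/example.md
# \. keeps the dot literal; the match is case-sensitive, so the new
# alphastats.dataset.dataset.DataSet path is not reported.
grep -rn 'alphastats\.DataSet' /tmp/rename_audit
```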
nbs/liu_2019.ipynb (1 addition, 1 deletion)

```diff
@@ -59,7 +59,7 @@
 "metadata": {},
 "source": [
 "We are going to load the `proteinGroups.txt` and the corresponding metadatafile. You can find the data [here](https://github.com/MannLabs/alphapeptstats/tree/main/testfiles/maxquant) or on ProteomeXchange [PXD011839](http://proteomecentral.proteomexchange.org/cgi/GetDataset?ID=PXD011839).\n",
-"To load the proteomis data you need to create a loader object using `alphastats.MaxQuantLoader`. The whole downstream analysis will be performed on a `alphastats.DataSet`. To create the DataSet you need to provide the loader object as well as the metadata."
+"To load the proteomis data you need to create a loader object using `alphastats.MaxQuantLoader`. The whole downstream analysis will be performed on a `alphastats.dataset.dataset.DataSet`. To create the DataSet you need to provide the loader object as well as the metadata."
 ]
 },
 {
```
nbs/ramus_2016.ipynb (1 addition, 1 deletion)

```diff
@@ -87,7 +87,7 @@
 " filter_columns=[],\n",
 ")\n",
 "\n",
-"ds = alphastats.DataSet(\n",
+"ds = alphastats.dataset.dataset.DataSet(\n",
 " loader=loader, metadata_path_or_df=\"metadata.csv\", sample_column=\"sample\"\n",
 ")"
 ]
```
tests/test_DataSet.py (1 addition, 1 deletion)

```diff
@@ -491,7 +491,7 @@ def test_preprocess_subset(self):
         self.obj.preprocess(subset=True)
         self.assertEqual(self.obj.mat.shape[0], 48)
 
-    @patch("alphastats.DataSet.DataSet.tukey_test")
+    @patch("alphastats.dataset.dataset.DataSet.tukey_test")
     def test_anova_without_tukey(self, mock):
         # TODO: Check why 4 extra rows are generated here. This is not due to changes made to 0 and nan filtering.
         anova_results = self.obj.anova(column="disease", protein_ids="all", tukey=False)
```
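The test change is needed because `unittest.mock.patch` resolves its target from the dotted string at patch time, so when a class moves, every `@patch("...")` string must be rewritten to the class's new location. A minimal stdlib sketch of that mechanism, using `os.path` instead of alphastats:

```python
import os.path
from unittest.mock import patch

# patch() imports the object named by its dotted string, so when a class moves
# (here: alphastats.DataSet.DataSet -> alphastats.dataset.dataset.DataSet),
# any stale target string raises ModuleNotFoundError/AttributeError.
with patch("os.path.basename", return_value="mocked") as mock_basename:
    # Inside the context manager the lookup hits the mock, not the real function.
    assert os.path.basename("/tmp/file.txt") == "mocked"
    mock_basename.assert_called_once_with("/tmp/file.txt")

# Outside the context manager the original function is restored.
assert os.path.basename("/tmp/file.txt") == "file.txt"
```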
