Commit 72b3e66

pjbullisms authored and committed
Add explanation of pip package for src
1 parent b3e9dfa commit 72b3e66

File tree

2 files changed: +9 −14 lines changed

docs/docs/index.md — 8 additions & 14 deletions

````diff
@@ -102,6 +102,7 @@ cookiecutter https://github.com/drivendata/cookiecutter-data-science
 ├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
 │                         generated with `pip freeze > requirements.txt`
+├── setup.py           <- Make this project pip installable with `pip install -e`
 ├── src                <- Source code for use in this project.
 │   ├── __init__.py    <- Makes src a Python module
 │   │
@@ -140,25 +141,18 @@ Since notebooks are challenging objects for source control (e.g., diffs of the `
 
 1. Follow a naming convention that shows the owner and the order the analysis was done in. We use the format `<step>-<ghuser>-<description>.ipynb` (e.g., `0.3-bull-visualize-distributions.ipynb`).
 
-2. Refactor the good parts. Don't write code to do the same task in multiple notebooks. If it's a data preprocessing task, put it in the pipeline at `src/data/make_dataset.py` and load data from `data/interim`. If it's useful utility code, refactor it to `src` and import it into notebooks with a cell like the following. If updating the system path is icky to you, we'd recommend making a Python package (there is a [cookiecutter for that](https://github.com/audreyr/cookiecutter-pypackage) as well) and installing that as an editable package with `pip install -e`.
+2. Refactor the good parts. Don't write code to do the same task in multiple notebooks. If it's a data preprocessing task, put it in the pipeline at `src/data/make_dataset.py` and load data from `data/interim`. If it's useful utility code, refactor it to `src`.
+
+   Now by default we turn the project into a Python package (see the `setup.py` file). You can import your code and use it in notebooks with a cell like the following:
 
 ```
-# Load the "autoreload" extension
+# OPTIONAL: Load the "autoreload" extension so that code can change
 %load_ext autoreload
 
-# always reload modules marked with "%aimport"
-%autoreload 1
-
-import os
-import sys
-
-# add the 'src' directory as one where we can import modules
-src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
-sys.path.append(src_dir)
+# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
+%autoreload 2
 
-# import my method from the source code
-%aimport preprocess.build_features
-from preprocess.build_features import remove_invalid_data
+from src.data import make_dataset
 ```
 
 ### Analysis is a DAG
````
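The commit references a `setup.py` file but its contents are not part of this diff. As a rough sketch only, a minimal `setup.py` consistent with the description might look like the following (the `name`, `version`, and `description` values here are assumptions, not the project's actual file):

```python
# Hypothetical minimal setup.py -- the real file is not shown in this commit;
# all field values below are illustrative assumptions.
from setuptools import find_packages, setup

setup(
    name='src',
    packages=find_packages(),
    version='0.1.0',
    description='Source code for use in this project',
)
```

With a file like this at the project root, `pip install -e .` installs the project in editable mode, so `src` becomes importable from notebooks and changes to the code are picked up without reinstalling.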

{{ cookiecutter.repo_name }}/README.md — 1 addition & 0 deletions

```diff
@@ -31,6 +31,7 @@ Project Organization
 ├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
 │                         generated with `pip freeze > requirements.txt`
+├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
 ├── src                <- Source code for use in this project.
 │   ├── __init__.py    <- Makes src a Python module
 │   │
```
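To see why `from src.data import make_dataset` works in a notebook after the editable install, here is a self-contained simulation (the temporary project layout and the `main` function are illustrative; an editable install effectively puts the project root on `sys.path` for you, rather than via an explicit `append` as below):

```python
# Simulate the effect of `pip install -e .`: once the project root is on
# sys.path, `src` is importable as a package from anywhere.
import sys
import tempfile
from pathlib import Path

# Build a throwaway project with the cookiecutter-style src/data layout.
project = Path(tempfile.mkdtemp())
(project / "src" / "data").mkdir(parents=True)
(project / "src" / "__init__.py").write_text("")
(project / "src" / "data" / "__init__.py").write_text("")
(project / "src" / "data" / "make_dataset.py").write_text(
    "def main():\n    return 'building dataset'\n"
)

# An editable install arranges (roughly) this for you at interpreter startup.
sys.path.append(str(project))

from src.data import make_dataset

print(make_dataset.main())  # prints: building dataset
```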
