[![asciicast](https://asciinema.org/a/9bgl5qh17wlop4xyxu9n9wr02.png)](https://asciinema.org/a/9bgl5qh17wlop4xyxu9n9wr02)

### The resulting directory structure
------------

The directory structure of your new project looks like this:

```
├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
```

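If you ever need to recreate this skeleton by hand (for example, to retrofit an existing repository rather than generating a fresh project with cookiecutter), the folder layout above can be sketched with `pathlib`. This is an illustration only, not part of the template; `make_skeleton` and `SKELETON` are names invented here:

```python
from pathlib import Path

# Directories taken from the tree above: the four data tiers,
# the top-level folders, and the src package with its submodules.
SKELETON = [
    "data/external",
    "data/interim",
    "data/processed",
    "data/raw",
    "docs",
    "models",
    "notebooks",
    "references",
    "reports/figures",
    "src/data",
    "src/features",
    "src/models",
    "src/visualization",
]


def make_skeleton(root):
    """Create the folder skeleton under `root`. Safe to re-run (idempotent)."""
    root = Path(root)
    for rel in SKELETON:
        (root / rel).mkdir(parents=True, exist_ok=True)
    # src is importable as a Python module, per the tree above.
    (root / "src" / "__init__.py").touch()
    return root
```

Note this creates only the directories; files like the `Makefile`, `README.md`, and `tox.ini` come from the template itself.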
## Contributing

We welcome contributions! [See the docs for guidelines](https://drivendata.github.io/cookiecutter-data-science/#contributing).