---
license: apache-2.0
---
# Set up Cookiecutter-MLOps in Hugging Face
1. Create a Model repository in Hugging Face (e.g. myHFrepo).
2. Clone your Hugging Face repo to your local directory:

cd /path/to/parent_directory_of_project_folder
git clone git@hf.co:USERNAME/myHFrepo
For setting up an SSH connection to Hugging Face, see the Hugging Face documentation on SSH keys.
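If you have not configured SSH access to Hugging Face yet, a minimal sketch looks like this (the key type, file paths, and email comment are just the usual defaults; adjust to your setup):

```bash
# Generate an SSH key pair; the comment is only a label
ssh-keygen -t ed25519 -C "your_email@example.com"

# Print the public key, then add it to your Hugging Face account's SSH key settings
cat ~/.ssh/id_ed25519.pub

# Verify the connection; Hugging Face should greet you with your username
ssh -T git@hf.co
```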
3. Create your virtual environment (e.g. jointvenv):

cd myHFrepo
python -m venv jointvenv
source jointvenv/bin/activate
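As a quick sanity check (not part of the original steps), you can confirm that the interpreter now on your PATH comes from the new environment:

```bash
# Should print a path ending in myHFrepo/jointvenv/bin/python
which python
python --version
```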
4. Transfer DagsHub's Cookiecutter-MLOps template, which applies MLOps best practices, into your Hugging Face repo:

git clone https://dagshub.com/DagsHub/Cookiecutter-MLOps.git
4.1 Delete the git files cloned from the Cookiecutter-MLOps repo:

rm -r /path/to/myHFrepo/Cookiecutter-MLOps/.git
4.2 Resolve conflicts with .gitattributes and README.md:

cat /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes >> /path/to/myHFrepo/.gitattributes
rm /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes
git add .gitattributes
git commit -m "Paste .gitattributes info from DagsHub/Cookiecutter-MLOps"

cat /path/to/myHFrepo/Cookiecutter-MLOps/README.md >> /path/to/myHFrepo/README.md
rm /path/to/myHFrepo/Cookiecutter-MLOps/README.md
git add README.md
git commit -m "Paste README info from DagsHub/Cookiecutter-MLOps"
4.3 Move the remaining files from DagsHub/Cookiecutter-MLOps to your Hugging Face repo (.gitattributes and README.md were already merged in 4.2):

cd /path/to/myHFrepo/Cookiecutter-MLOps
mv * .[^.]* ..
cd /path/to/myHFrepo
rm -r /path/to/myHFrepo/Cookiecutter-MLOps
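The `mv * .[^.]* ..` pattern moves both regular and hidden files up one level. To confirm the move worked, you can list the repo root (a quick check, assuming the paths above were used verbatim):

```bash
# Makefile, src/, data/, etc. should now sit at the repo root,
# and the Cookiecutter-MLOps folder should be gone
ls -a /path/to/myHFrepo
```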
5. Add the venv folder to .gitignore and commit the remaining template content:

echo '' >> .gitignore
echo '# Virtual Environment' >> .gitignore
echo 'jointvenv/' >> .gitignore
git add .
git commit -m "Add remaining DagsHub/Cookiecutter-MLOps repo content"
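If the echo commands ran as intended, the tail of .gitignore should now contain the new entry (assuming the venv is named jointvenv):

```bash
tail -n 3 .gitignore
# Expected output:
#
# # Virtual Environment
# jointvenv/
```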
6. Run step 2 from DagsHub/Cookiecutter-MLOps:

make dirs
7. Run step 4 from DagsHub/Cookiecutter-MLOps:

make requirements
8. Keep a record of your own requirements:

mv requirements.txt requirementsCookiecutter-MLOps.txt
git add requirementsCookiecutter-MLOps.txt
git commit -m "External requirements from Cookiecutter-MLOps"

pip freeze > requirements.txt
git add requirements.txt
git commit -m "First report of venv requirements"
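To see what your frozen environment adds on top of the template's pinned packages, an optional comparison (not part of the original steps) can help:

```bash
# Lines prefixed with '>' appear only in your frozen environment,
# lines prefixed with '<' only in the template requirements
diff requirementsCookiecutter-MLOps.txt requirements.txt
```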
9. Push your changes to the remote Hugging Face repository:

git push origin main
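Hugging Face .gitattributes files route large binary formats through Git LFS, so if you plan to commit model weights or other large binaries, make sure Git LFS is set up. A hedged sketch (the `*.pkl` pattern is only an illustrative example):

```bash
# Make sure the LFS hooks are installed for this clone
git lfs install

# Track an additional binary pattern not already covered by .gitattributes
git lfs track "*.pkl"
git add .gitattributes
```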
10. Optional: Create a Model repository in your Hugging Face organization (e.g. mywslHFrepo), add it as a second remote, and merge the unrelated histories:

git remote add dcc git@hf.co:MYORG/mywslHFrepo
git pull dcc main --allow-unrelated-histories

Resolve conflicts in .gitattributes and README.md, then:

git add .
git commit -m "Merge Hugging Face individual and organization repos"
git push dcc main
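One way to resolve the conflicts, shown as a sketch that simply keeps the versions from your individual repo (in practice you may prefer to merge the two files by hand):

```bash
# During the merge, --ours refers to your local (individual repo) branch
git checkout --ours .gitattributes README.md
git add .gitattributes README.md
```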
# Cookiecutter-MLOps
A cookiecutter template employing MLOps best practices, so you can focus on building machine learning products while having MLOps best practices applied.
## Instructions
1. Clone the repo.
2. Run `make dirs` to create the missing parts of the directory structure described below.
3. Optional: Run `make virtualenv` to create a python virtual environment. Skip if using conda or some other env manager. Run `source env/bin/activate` to activate the virtualenv.
4. Run `make requirements` to install required python packages.
5. Put the raw data in `data/raw`.
6. To save the raw data to the DVC cache, run `dvc add data/raw`.
7. Edit the code files to your heart's desire.
8. Process your data, train and evaluate your model using `dvc repro` or `make reproduce`.
9. To run the pre-commit hooks, run `make pre-commit-install`.
10. For setting up data validation tests, run `make setup-data-validation`.
11. For running the data validation tests, run `make run-data-validation`.
12. When you're happy with the result, commit files (including .dvc files) to git (see the DVC sketch after this list).
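As referenced in the last item above, a minimal sketch of the DVC side of that workflow (`dvc push` only applies if a DVC remote has been configured, e.g. on DagsHub):

```bash
# Version the raw data and commit the pointer file that dvc add creates
dvc add data/raw
git add data/raw.dvc data/.gitignore
git commit -m "Track raw data with DVC"

# Reproduce the pipeline defined in dvc.yaml and commit the updated lock file
dvc repro
git add dvc.lock
git commit -m "Reproduce pipeline"

# Upload the cached data and models to the DVC remote, if one is set up
dvc push
```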
## Project Organization
    ├── LICENSE
    ├── Makefile                <- Makefile with commands like `make dirs` or `make clean`
    ├── README.md               <- The top-level README for developers using this project.
    ├── data
    │   ├── processed           <- The final, canonical data sets for modeling.
    │   └── raw                 <- The original, immutable data dump.
    │
    ├── models                  <- Trained and serialized models, model predictions, or model summaries
    │
    ├── notebooks               <- Jupyter notebooks. Naming convention is a number (for ordering),
    │                              the creator's initials, and a short `-` delimited description, e.g.
    │                              `1.0-jqp-initial-data-exploration`.
    ├── references              <- Data dictionaries, manuals, and all other explanatory materials.
    ├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
    │   ├── figures             <- Generated graphics and figures to be used in reporting
    │   ├── metrics.txt         <- Relevant metrics after evaluating the model.
    │   └── training_metrics.txt <- Relevant metrics from training the model.
    │
    ├── requirements.txt        <- The requirements file for reproducing the analysis environment, e.g.
    │                              generated with `pip freeze > requirements.txt`
    │
    ├── setup.py                <- makes project pip installable (pip install -e .) so src can be imported
    ├── src                     <- Source code for use in this project.
    │   ├── __init__.py         <- Makes src a Python module
    │   │
    │   ├── data                <- Scripts to download or generate data
    │   │   ├── great_expectations  <- Folder containing data integrity check files
    │   │   ├── make_dataset.py
    │   │   └── data_validation.py  <- Script to run data integrity checks
    │   │
    │   ├── models              <- Scripts to train models and then use trained models to make
    │   │   │                      predictions
    │   │   ├── predict_model.py
    │   │   └── train_model.py
    │   │
    │   └── visualization       <- Scripts to create exploratory and results oriented visualizations
    │       └── visualize.py
    │
    ├── .pre-commit-config.yaml <- pre-commit hooks file with selected hooks for the project.
    ├── dvc.lock                <- Records the exact state of the pipeline (dependencies, outputs) for reproducibility.
    └── dvc.yaml                <- Defines the ML pipeline stages, e.g. training a model on the processed data.
Project based on the cookiecutter data science project template. #cookiecutterdatascience
To create a project like this, just go to https://dagshub.com/repo/create and select the Cookiecutter DVC project template.
Made with 🐶 by DAGsHub.