Image classification project using logistic regression to distinguish between dogs, ducks, and foxes. Images were obtained from public APIs.


Animals Classifier

Machine Learning Project where I developed an image classifier to distinguish between dogs, ducks, and foxes using Logistic Regression. The images were obtained through public APIs specific to each category. I performed data cleaning and organization using the Pandas library, and used Scikit-learn for model building and evaluation. Matplotlib was also used to generate visualizations that supported the analysis of results.

📁 Dataset Generation

The script automatically creates an images/ folder with three subdirectories:

```
images/
├── dogs/
├── foxes/
└── ducks/
```
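The folder creation can be sketched as follows; `pathlib` is an assumption here, since the README does not show the actual script:

```python
from pathlib import Path

# Create images/ with one subdirectory per class
# (category names taken from the README above).
for category in ("dogs", "foxes", "ducks"):
    Path("images", category).mkdir(parents=True, exist_ok=True)
```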

It then downloads 1,000 images from public APIs specific to each category.

Each image is resized to 64x64 pixels and represented as a 3-dimensional array with the shape (64, 64, 3), where the third dimension corresponds to the RGB color channels. All images are stored in the feature array X, with their corresponding labels placed in y. The dataset is then split into 80% for training and 20% for testing.
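The preprocessing and split described above can be sketched as follows. The random arrays stand in for the real downloaded images, and everything beyond the names `X` and `y` is an illustrative assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the downloaded images: each sample is a
# 64x64 RGB image, i.e. an array of shape (64, 64, 3).
rng = np.random.default_rng(0)
n_images = 30
X = rng.integers(0, 256, size=(n_images, 64, 64, 3), dtype=np.uint8)
y = rng.integers(0, 3, size=n_images)  # assumed encoding: 0=dog, 1=fox, 2=duck

# Flatten each image into a single feature row before splitting.
X_flat = X.reshape(n_images, -1)  # shape (30, 12288)

# 80% training / 20% testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X_flat, y, test_size=0.2, random_state=42
)
```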

Logistic Regression is then applied to the data.
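A minimal sketch of the fitting step, assuming scikit-learn's `LogisticRegression`; the synthetic data, feature size, and `max_iter` setting are illustrative, not taken from the project:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Small stand-ins for flattened image rows (kept short for brevity).
rng = np.random.default_rng(1)
X_train = rng.random((30, 100))
y_train = rng.integers(0, 3, size=30)  # three classes: dogs, foxes, ducks

# max_iter raised so the solver converges on arbitrary data (assumption).
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

y_pred = model.predict(X_train[:5])
```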

Performance Measures

Accuracy

```python
from sklearn.metrics import accuracy_score

accuracy = accuracy_score(y_test, y_pred)
```

Accuracy on the test set: 0.91

This means the model correctly classified 91% of the images in the test set.

Classification Report

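The report can be generated with scikit-learn's `classification_report`; here is a small sketch with made-up labels in place of the project's real predictions:

```python
from sklearn.metrics import classification_report

# Hypothetical true and predicted labels for a handful of test images.
y_test = ["dog", "dog", "fox", "duck", "duck", "fox"]
y_pred = ["dog", "fox", "fox", "duck", "duck", "fox"]

# Prints per-class precision, recall, F1-score, and support.
report = classification_report(y_test, y_pred)
print(report)
```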

Precision

Precision measures the proportion of correctly predicted positive instances for each class, relative to all instances the model predicted as positive. For example, a precision of 93% for the 'dog' class means that, in 93% of the cases where the model predicted 'dog', the prediction was correct.

Recall

Recall measures the proportion of correctly predicted positive instances for each class, relative to all instances that actually belong to that class. For example, a recall of 90% for the 'dog' class means the model identified 90% of all the dog images in the test set.

F1-Score

The F1-score is the harmonic mean of precision and recall, providing a single metric that balances both. Unlike the arithmetic mean, the harmonic mean emphasizes lower values, meaning the F1-score will only be high if both precision and recall are high. It ranges from 0 to 1, where values closer to 1 indicate that the model has achieved a good balance between correctly identifying positive instances (recall) and minimizing false positives (precision).
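The harmonic-mean behaviour described above can be seen in a few lines (a standalone helper, not code from the project):

```python
def f1_from(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The harmonic mean emphasizes the lower value: high precision
# cannot compensate for low recall, and vice versa.
balanced = f1_from(0.9, 0.9)    # both high  -> F1 stays high (0.9)
imbalanced = f1_from(0.9, 0.1)  # recall low -> F1 drops sharply (0.18)
```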

Confusion Matrix

The confusion matrix is a table that shows how well a classification model performs. It compares the model’s predicted labels with the true labels, displaying:

True Positives (TP): correctly predicted positives

True Negatives (TN): correctly predicted negatives

False Positives (FP): incorrect positive predictions

False Negatives (FN): incorrect negative predictions
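For a multi-class problem like this one, the matrix has one row per true class and one column per predicted class. A sketch with made-up labels, assuming scikit-learn's `confusion_matrix`:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels; `labels=` fixes the row/column order.
labels = ["dog", "duck", "fox"]
y_test = ["dog", "dog", "duck", "fox", "fox", "duck"]
y_pred = ["dog", "fox", "duck", "fox", "dog", "duck"]

# Rows are true classes, columns are predicted classes; the diagonal
# holds the correct predictions for each class.
cm = confusion_matrix(y_test, y_pred, labels=labels)
print(cm)
```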
