David Paper
Trademarked names, logos, and images may appear in this book. Rather
than use a trademark symbol with every occurrence of a trademarked
name, logo, or image we use the names, logos, and images only in an
editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark. The use in this publication
of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of
opinion as to whether or not they are subject to proprietary rights.
While the advice and information in this book are believed to be true
and accurate at the date of publication, neither the authors nor the
editors nor the publisher can accept any legal responsibility for any
errors or omissions that may be made. The publisher makes no
warranty, express or implied, with respect to the material contained
herein.
1. Introduction to Scikit-Learn
David Paper
Scikit-Learn is a Python library that provides simple and efficient tools for implementing
supervised and unsupervised machine learning algorithms. The library is accessible to everyone
because it is open source and commercially usable. It is built on the NumPy, SciPy, and Matplotlib
libraries, which means it is reliable, robust, and well integrated with Python's scientific computing stack.
Scikit-Learn is focused on data modeling rather than data loading, cleansing, munging, or
manipulating. It is also very easy to use and relatively free of programming bugs.
Machine Learning
Machine learning is getting computers to program themselves. We use algorithms to make this
happen. An algorithm is a set of rules used to calculate or problem solve with a computer.
Machine learning advocates create, study, and apply algorithms to improve performance on
data-driven tasks. They use tools and technology to answer questions about data by training a
machine how to learn.
The goal is to build robust algorithms that can manipulate input data to predict an output
while continually updating outputs as new data becomes available. Any information or data sent
to a computer is considered input. Data produced by a computer is considered output.
In the machine learning community, input data is referred to as the feature set and output data
is referred to as the target. The feature set is also referred to as the feature space. Sample data is
typically referred to as training data. Once the algorithm is trained with sample data, it can make
predictions on new data. New data is typically referred to as test data.
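To make these terms concrete, here is a minimal sketch (our own illustration, not one of the book's listings) that splits a feature set and target into training and test data with Scikit-Learn's train_test_split:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# feature set X and target y
X, y = load_iris(return_X_y=True)
# hold out 25% of the samples as test data for making predictions
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(X_train.shape, X_test.shape)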
Machine learning is divided into two main areas: supervised and unsupervised learning. Since
machine learning typically focuses on prediction based on known properties learned from
training data, our focus is on supervised learning.
Supervised learning is when the data set contains both inputs (or the feature set) and desired
outputs (or targets). That is, we know the properties of the data. The goal is to make predictions.
This ability to supervise algorithm training is a big part of why machine learning has become so
popular.
To classify or regress new data, we must train on data with known outcomes. We classify data
by organizing it into relevant categories. We regress data by finding the relationship between
feature set data and target data.
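As a brief sketch of the distinction (the data sets and algorithms chosen here are ours, purely for illustration), a classifier assigns categories while a regressor predicts a number:
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier

# classification: organize samples into relevant categories
X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier().fit(X, y)
print('predicted category:', clf.predict(X[:1]))

# regression: find the relationship between feature set and numeric target
X, y = load_diabetes(return_X_y=True)
reg = LinearRegression().fit(X, y)
print('predicted number:', reg.predict(X[:1]))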
With unsupervised learning, the data set contains only inputs but no desired outputs (or
targets). The goal is to explore the data and find some structure or way to organize it. Although
not the focus of the book, we will explore a few unsupervised learning scenarios.
Anaconda
You can use any Python installation, but I recommend installing Python with Anaconda for several
reasons. First, it has over 15 million users. Second, Anaconda allows easy installation of the
desired version of Python. Third, it preinstalls many useful libraries for machine learning
including Scikit-Learn. Follow this link to see the Anaconda package lists for your operating
system and Python version: https://docs.anaconda.com/anaconda/packages/pkg-
docs/. Fourth, it includes several very popular editors including IDLE, Spyder, and Jupyter
Notebooks. Fifth, Anaconda is reliable and well-maintained and removes compatibility
bottlenecks.
You can easily download and install Anaconda with this link:
https://www.anaconda.com/download/. You can update with this link:
https://docs.anaconda.com/anaconda/install/update-version/. Just open
Anaconda and follow instructions. I recommend updating to the current version.
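For example, assuming a working installation, updating typically looks like this from a terminal (commands may vary by Anaconda version; consult the update link above):
conda update conda
conda update anaconda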
Scikit-Learn
Python’s Scikit-Learn is one of the most popular machine learning libraries. It is built on Python
libraries NumPy, SciPy, and Matplotlib. The library is well-documented, open source,
commercially usable, and a great vehicle to get started with machine learning. It is also very
reliable and well-maintained, and its vast collection of algorithms can be easily incorporated into
your projects. Scikit-Learn is focused on modeling data rather than loading, manipulating,
visualizing, and summarizing data. For such activities, other libraries such as NumPy, pandas,
Matplotlib, and seaborn are covered as encountered. The Scikit-Learn library is imported into a
Python script as sklearn.
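For example, a quick way to verify that the library is available (the version number you see will vary):
import sklearn
print(sklearn.__version__)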
Data Sets
A great way to understand machine learning applications is by working through Python data-
driven code examples. We use either Scikit-Learn, UCI Machine Learning, or seaborn data sets for
all examples. The Scikit-Learn data sets package embeds some small data sets for getting started,
plus helpers to fetch larger data sets that the machine learning community commonly uses to
benchmark algorithms on data from the world at large. The UCI Machine Learning Repository maintains 468
data sets to serve the machine learning community. Seaborn provides an API on top of Matplotlib
that offers simplicity when working with plot styles, color defaults, and high-level functions for
common statistical plot types that facilitate visualization. It also integrates nicely with Pandas
DataFrame functionality.
We chose the data sets for our examples because the machine learning community uses them
for learning, exploring, benchmarking, and validating, so we can compare our results to others
while learning how to apply machine learning algorithms.
Our data sets are categorized as either classification or regression data. Classification data
complexity ranges from simple to relatively complex. Simple classification data sets include
load_iris, load_wine, bank.csv, and load_digits. Complex classification data sets include
fetch_20newsgroups, MNIST, and fetch_lfw_people. Regression data sets include tips, redwine.csv,
whitewine.csv, and load_boston.
Characterize Data
Before working with algorithms, it is best to understand the data characterization. Each data set
was carefully chosen to help you gain experience with the most common aspects of machine
learning. We begin by describing the characteristics of each data set to better understand its
composition and purpose. Data sets are organized by classification and regression data.
Classification data is further organized by complexity. That is, we begin with simple
classification data sets that are not complex so that the reader can focus on the machine learning
content rather than on the data. We then move onto more complex data sets.
Iris Data
The first data set we characterize is load_iris, which consists of Iris flower data. Iris is a
multivariate data set consisting of 50 samples from each of three species of iris (Iris setosa, Iris
virginica, and Iris versicolor). Each sample contains four features, namely, length and width of
sepals and petals in centimeters. Iris is a typical test case for machine learning classification. It is
also one of the best known data sets in the data science literature, which means you can test your
results against many other verifiable examples.
The first code example shown in Listing 1-1 loads Iris data, displays its keys, shape of the
feature set and target, feature and target names, a slice from the DESCR key, and feature
importance (from most to least).
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier

if __name__ == "__main__":
    br = '\n'
    iris = datasets.load_iris()
    keys = iris.keys()
    print (keys, br)
    X = iris.data
    y = iris.target
    print ('features shape:', X.shape)
    print ('target shape:', y.shape, br)
    features = iris.feature_names
    targets = iris.target_names
    print ('feature set:')
    print (features, br)
    print ('targets:')
    print (targets, br)
    print (iris.DESCR[525:900], br)
    rnd_clf = RandomForestClassifier(random_state=0, n_estimators=100)
    rnd_clf.fit(X, y)
    rnd_name = rnd_clf.__class__.__name__
    feature_importances = rnd_clf.feature_importances_
    importance = sorted(zip(feature_importances, features), reverse=True)
    print ('most important features' + ' (' + rnd_name + '):')
    [print (row) for i, row in enumerate(importance)]
Listing 1-1 Characterize the Iris data set
Go ahead and execute the code from Listing 1-1. Remember that you can find the example
from the book’s example download. You don’t need to type the example by hand. It’s easier to
access the example download and copy/paste.
Your output from executing Listing 1-1 should resemble the following:
feature set:
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal
width (cm)']
targets:
['setosa' 'versicolor' 'virginica']
Tip RandomForestClassifier is a powerful machine learning algorithm that not only models
training data, but also returns feature importance.
Wine Data
The next data set we characterize is load_wine. The load_wine data set consists of 178 data
elements. Each element has thirteen features, and each element belongs to one of three target
classes. The data set is considered a classic in the machine learning community and offers an
easy multiclass classification problem.
The next code example shown in Listing 1-2 loads wine data and displays its keys, shape of the
feature set and target, feature and target names, a slice from the DESCR key, and feature
importance (from most to least).
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

if __name__ == "__main__":
    br = '\n'
    data = load_wine()
    keys = data.keys()
    print (keys, br)
    X, y = data.data, data.target
    print ('features:', X.shape)
    print ('targets', y.shape, br)
    print (X[0], br)
    features = data.feature_names
    targets = data.target_names
    print ('feature set:')
    print (features, br)
    print ('targets:')
    print (targets, br)
    rnd_clf = RandomForestClassifier(random_state=0, n_estimators=100)
    rnd_clf.fit(X, y)
    rnd_name = rnd_clf.__class__.__name__
    feature_importances = rnd_clf.feature_importances_
    importance = sorted(zip(feature_importances, features), reverse=True)
    n = 6
    print (n, 'most important features' + ' (' + rnd_name + '):')
    [print (row) for i, row in enumerate(importance) if i < n]
Listing 1-2 Characterize load_wine
After executing code from Listing 1-2, your output should resemble the following:
feature set:
['alcohol', 'malic_acid', 'ash', 'alcalinity_of_ash', 'magnesium',
'total_phenols', 'flavanoids', 'nonflavanoid_phenols',
'proanthocyanins', 'color_intensity', 'hue',
'od280/od315_of_diluted_wines', 'proline']
targets:
['class_0' 'class_1' 'class_2']
Tip To create (instantiate) a machine learning algorithm (model), just assign it to a variable
(e.g., model = algorithm()). To train based on the model, just fit it to the data (e.g., model.fit(X,
y)).
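Here is a minimal sketch of that instantiate-and-fit pattern (the algorithm choice is ours, for illustration):
from sklearn.datasets import load_wine
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
model = KNeighborsClassifier()  # instantiate (create) the model
model.fit(X, y)                 # train (fit) the model to the data
print(model.predict(X[:3]))     # predict (here, on already seen samples)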
The code begins by importing load_wine and RandomForestClassifier. The main block displays
keys, loads data into X and y, displays the first vector from feature set X, displays shapes, and
displays feature set and target information. The code concludes by training X with
RandomForestClassifier, so we can display the six most important features. Notice that we display
the first vector from feature set X to verify that all features are numeric.
Bank Data
The next code example shown in Listing 1-3 works with bank data. The bank.csv data set is
composed of direct marketing campaigns from a Portuguese banking institution. The target is
described by whether a client will subscribe (yes/no) to a term deposit (target label y). It consists
of 41188 data elements with 20 features for each element. A 10% random sample of 4119 data
elements is also available from this site for more computationally expensive algorithms such as
svm and KNeighborsClassifier.
import pandas as pd

if __name__ == "__main__":
    br = '\n'
    f = 'data/bank.csv'
    bank = pd.read_csv(f)
    features = list(bank)
    print (features, br)
    X = bank.drop(['y'], axis=1).values
    y = bank['y'].values
    print (X.shape, y.shape, br)
    print (bank[['job', 'education', 'age', 'housing',
                 'marital', 'duration']].head())
Listing 1-3 Characterize bank data
After executing code from Listing 1-3, your output should display the feature names, the shapes of X and y, and the first five records of a few selected features.
The code example begins by importing the pandas package. The main block loads bank data
from a CSV file into a Pandas DataFrame and displays the column names (or features). To retrieve
column names from pandas, all we need to do is make the DataFrame a list and assign the result
to a variable. Next, feature set X and target y are created. Finally, X and y shapes are displayed as
well as a few choice features.
Digits Data
The final data set we characterize in this subsection is load_digits. The load_digits data set consists of 1797 8
× 8 handwritten images. Each image is represented by 64 pixels (based on an 8 × 8 matrix), which
make up the feature set. Ten targets, represented by the digits zero through nine, are predicted.
Listing 1-4 contains the code that characterizes load_digits.
import numpy as np
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
if __name__ == "__main__":
    br = '\n'
    digits = load_digits()
    print (digits.keys(), br)
    print ('2D shape of digits data:', digits.images.shape, br)
    X = digits.data
    y = digits.target
    print ('X shape (8x8 flattened to 64 pixels):', end=' ')
    print (X.shape)
    print ('y shape:', end=' ')
    print (y.shape, br)
    i = 500
    print ('vector (flattened matrix) of "feature" image:')
    print (X[i], br)
    print ('matrix (transformed vector) of a "feature" image:')
    X_i = np.array(X[i]).reshape(8, 8)
    print (X_i, br)
    print ('target:', y[i], br)
    print ('original "digits" image matrix:')
    print (digits.images[i])
    plt.figure(1, figsize=(3, 3))
    plt.title('reshaped flattened vector')
    plt.imshow(X_i, cmap="gray", interpolation="gaussian")
    plt.figure(2, figsize=(3, 3))
    plt.title('original images dataset')
    plt.imshow(digits.images[i], cmap="gray", interpolation='gaussian')
    plt.show()
Listing 1-4 Characterize load_digits
After executing code from Listing 1-4, your output should resemble the following:
target: 8
The code begins by importing numpy, load_digits, and matplotlib packages. The main block
places load_digits into the digits variable and displays its keys: data, target, target_names, images,
and DESCR. It continues by displaying the two-dimensional (2D) shape of images contained in
images. Data in images are represented by 1797 8 × 8 matrices. Next, feature data (represented as
vectors) are placed in X and target data in y.
A feature vector is one that contains information about an object’s important characteristics.
Data in data are represented by 1797 64-pixel feature vectors. A simple feature representation of
an image is the raw intensity value of each pixel. So, an 8 × 8 image is represented by 64 pixels.
Machine learning algorithms process feature data as vectors, so each element in data must be a
one-dimensional (1D) vector representation of its 2D image matrix.
Tip Feature data must be composed of vectors to work with machine learning algorithms.
The code continues by displaying the feature vector of the 500th image. Next, the 500th feature
vector is transformed from its flattened 1D vector shape into a 2D image matrix and displayed
with the NumPy reshape function. The code continues by displaying the target value y of the
500th image. Next, the 500th image matrix is displayed by referencing images.
The reason we transformed the image from its 1D flattened vector state to the 2D image
matrix is that most data sets don't include an images object the way load_digits does. So, to visualize and
process data with machine learning algorithms, we must be able to manually flatten images and
transform flattened images back to their original 2D matrix shape.
The code concludes by visualizing the 500th image in two ways. First, we use the flattened
vector X_i. Second, we reference images. While machine learning algorithms require feature
vectors, function imshow requires 2D image matrices to visualize.
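Here is a minimal sketch of that flatten-and-reshape round trip (the array below is a stand-in, not actual image data):
import numpy as np

image = np.arange(64).reshape(8, 8)  # stand-in 8 x 8 image matrix
vector = image.ravel()               # flatten to a 64-pixel 1D feature vector
matrix = vector.reshape(8, 8)        # transform back to the 2D image matrix
print(vector.shape, matrix.shape)    # (64,) (8, 8)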
Newsgroup Data
The first complex data set we characterize is fetch_20newsgroups, which consists of approximately 18000
posts on 20 topics. Data is split into train-test subsets. The split is based on messages posted
before and after a specific date.
Listing 1-5 contains the code that characterizes fetch_20newsgroups.
from sklearn.datasets import fetch_20newsgroups

if __name__ == "__main__":
    br = '\n'
    train = fetch_20newsgroups(subset='train')
    test = fetch_20newsgroups(subset='test')
    print ('data:')
    print (train.target.shape, 'shape of train data')
    print (test.target.shape, 'shape of test data', br)
    targets = test.target_names
    print (targets, br)
    categories = ['rec.autos', 'rec.motorcycles', 'sci.space', 'sci.med']
    train = fetch_20newsgroups(subset='train', categories=categories)
    test = fetch_20newsgroups(subset='test', categories=categories)
    print ('data subset:')
    print (train.target.shape, 'shape of train data')
    print (test.target.shape, 'shape of test data', br)
    targets = train.target_names
    print (targets)
Listing 1-5 Characterize fetch_20newsgroups
After executing code from Listing 1-5, your output should resemble the following:
data:
(11314,) shape of train data
(7532,) shape of test data
data subset:
(2379,) shape of train data
(1584,) shape of test data
The code begins by importing fetch_20newsgroups. The main block begins by loading train
and test data and displaying their shapes. Training data consists of 11314 postings, while test
data consists of 7532 postings. The code continues by displaying target names and categories.
Next, train and test data are created from a subset of categories. The code concludes by displaying
shapes and target names of the subset.
MNIST Data
The next data set we characterize is MNIST. MNIST (Modified National Institute of Standards and
Technology) is a large database of handwritten digits commonly used for training and testing in
the machine learning community and other industrial image processing applications. MNIST
contains 70000 examples of handwritten digit images labeled from 0 to 9 of size 28 × 28. Each
target (or label) is stored as a digit value. The feature set is a matrix of 70000 28 × 28 images
automatically flattened to 784 pixels each. So, each of the 70000 data elements is a vector of
length 784. The target set is a vector of 70000 digit values.
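Listing 1-6 loads MNIST from local NumPy files under data/. One way such files could plausibly be created (an assumption on our part; the book's download may have prepared them differently) is with Scikit-Learn's fetch_openml helper:
import numpy as np
from sklearn.datasets import fetch_openml

# fetch the 70000 x 784 MNIST feature matrix and its string labels
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
np.save('data/X_mnist', mnist.data)
np.save('data/y_mnist', mnist.target.astype(float))
np.save('data/mnist_targets', np.unique(mnist.target.astype(float)))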
Listing 1-6 contains the code that characterizes MNIST.
import numpy as np
from random import randint
import matplotlib.pyplot as plt

def find_image(data, labels, d):
    # helper (reconstructed here so the listing runs standalone):
    # return the target and pixel vector of the first image labeled d
    for i, row in enumerate(labels):
        if row == d:
            return row, np.array(data[i])

if __name__ == "__main__":
    br = '\n'
    X = np.load('data/X_mnist.npy')
    y = np.load('data/y_mnist.npy')
    target = np.load('data/mnist_targets.npy')
    print ('labels (targets):')
    print (target, br)
    print ('feature set shape:')
    print (X.shape, br)
    print ('target set shape:')
    print (y.shape, br)
    indx = randint(0, y.shape[0]-1)
    target = y[indx]
    X_pixels = np.array(X[indx])
    print ('the feature image consists of', len(X_pixels), 'pixels')
    X_image = X_pixels.reshape(28, 28)
    plt.figure(1, figsize=(3, 3))
    title = 'image @ indx ' + str(indx) + ' is digit ' + str(int(target))
    plt.title(title)
    plt.imshow(X_image, cmap="gray")
    digit = 7
    target, X_pixels = find_image(X, y, digit)
    X_image = X_pixels.reshape(28, 28)
    plt.figure(2, figsize=(3, 3))
    title = 'find first ' + str(int(target)) + ' in dataset'
    plt.title(title)
    plt.imshow(X_image, cmap="gray")
    plt.show()
Listing 1-6 Characterize MNIST
After executing code from Listing 1-6, your output should resemble the following:
labels (targets):
[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
Listing 1-6 also displays Figures 1-3 and 1-4. Figure 1-3 is the reshaped image of digit 1 at
index 6969 (your index and digit will differ because the index is chosen at random). Figure 1-4 is
the first image of digit 7 in the data set.
Figure 1-3 Reshaped flattened vector of image at index 6969
Figure 1-4 First image of digit 7 in the data set
LFW Data
The final complex data set we characterize is fetch_lfw_people, which consists of labeled images
of faces. Listing 1-7 contains the code that characterizes fetch_lfw_people.
import numpy as np
import matplotlib.pyplot as plt

if __name__ == "__main__":
    br = '\n'
    X = np.load('data/X_faces.npy')
    y = np.load('data/y_faces.npy')
    targets = np.load('data/faces_targets.npy')
    print ('shape of feature and target data:')
    print (X.shape)
    print (y.shape, br)
    print ('target faces:')
    print (targets)
    X_i = np.array(X[0]).reshape(50, 37)
    image_name = targets[y[0]]
    fig, ax = plt.subplots()
    image = ax.imshow(X_i, cmap="bone")
    plt.title(image_name)
    plt.show()
Listing 1-7 Characterize fetch_lfw_people
After executing code from Listing 1-7, your output should resemble the following:
target faces:
['Ariel Sharon' 'Colin Powell' 'Donald Rumsfeld' 'George W Bush'
'Gerhard Schroeder' 'Hugo Chavez' 'Tony Blair']
Listing 1-7 also displays Figure 1-5. Figure 1-5 is the reshaped image of the first data element
in the data set.
Figure 1-5 Reshaped image of the first data element in the data set
The code begins by importing requisite packages. The main block loads data into X, y, and
targets from NumPy files. The code continues by printing shapes of X and y. X contains 1288
1850-pixel vectors and y contains 1288 target values. Target labels are then displayed. The code
concludes by reshaping the first feature vector to a 50 × 37 image and displaying it with function
imshow.
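The local NumPy files loaded above could plausibly be produced with Scikit-Learn's fetch_lfw_people helper (our assumption; these parameters yield the familiar 1288-sample, 50 × 37 variant of the data set):
import numpy as np
from sklearn.datasets import fetch_lfw_people

faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
print(faces.data.shape)  # (1288, 1850) with these parameters
np.save('data/X_faces', faces.data)
np.save('data/y_faces', faces.target)
np.save('data/faces_targets', faces.target_names)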
Regression Data
We now change gears away from classification and move into regression. Regression is a machine
learning technique for predicting a numerical value based on the independent variables (or
feature set) of a data set. That is, we are measuring the impact of the feature set on a numerical
output, as the brief sketch below shows. The first data set we characterize for regression is tips.
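As a minimal sketch of this idea (the data here is synthetic, purely for illustration):
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 2)                  # two independent variables (feature set)
y = 3 * X[:, 0] + 0.5 * X[:, 1] + 1.0 # numerical output

model = LinearRegression().fit(X, y)
print('impact of each feature:', model.coef_)  # recovers [3.0, 0.5]
print('predicted numeric value:', model.predict(X[:1]))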
Tips Data
The tips data set is integrated with the seaborn library. It consists of food server tips in
restaurants and related factors including tip, price of meal, and time of day. Specifically, features
include total_bill (price of meal), tip (gratuity), sex (male or female), smoker (yes or no), day
(Thursday, Friday, Saturday, or Sunday), time (day or night), and size of the party. Features are
coded as follows: total_bill (US dollars), tip (US dollars), sex (0=male, 1=female), smoker (0=no,
1=yes), day (3=Thur, 4=Fri, 5=Sat, 6=Sun). Tips data is represented by 244 elements with six
features predicting one target, the tip received from customers.
Listing 1-8 characterizes tips data.
import seaborn as sns

if __name__ == "__main__":
    br = '\n'
    sns.set(color_codes=True)
    tips = sns.load_dataset('tips')
    print (tips.head(), br)
    X = tips.drop(['tip'], axis=1).values
    y = tips['tip'].values
    print (X.shape, y.shape)
Listing 1-8 Characterize the tips data set
After executing code from Listing 1-8, your output should resemble the following:
(244, 6) (244,)
The code begins by loading tips as a Pandas DataFrame, displaying the first five records,
converting data to NumPy, and displaying the feature set and target shapes. Seaborn data is
automatically loaded as a Pandas DataFrame. We couldn't get feature importance here because
random forests expect numeric data, and it takes a great deal of data wrangling to get the data
set into that form. We transform categorical data to numeric in later chapters; a brief preview
follows.
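As a hedged preview of that wrangling (pd.get_dummies is one common approach; later chapters may use a different encoding), categorical features can be one-hot encoded so an ensemble can rank feature importance:
import pandas as pd
import seaborn as sns
from sklearn.ensemble import RandomForestRegressor

tips = sns.load_dataset('tips')
# one-hot encode the categorical columns so all features are numeric
data = pd.get_dummies(tips, columns=['sex', 'smoker', 'day', 'time'])
X = data.drop(['tip'], axis=1)
y = data['tip']
rfr = RandomForestRegressor(random_state=0, n_estimators=100)
rfr.fit(X, y)
importance = sorted(zip(rfr.feature_importances_, list(X)), reverse=True)
print(importance[:3])
The next two regression data sets, redwine.csv and whitewine.csv, are already numeric. Listing 1-9 characterizes redwine.csv.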
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

if __name__ == "__main__":
    br = '\n'
    f = 'data/redwine.csv'
    red_wine = pd.read_csv(f)
    X = red_wine.drop(['quality'], axis=1)
    y = red_wine['quality']
    print (X.shape)
    print (y.shape, br)
    features = list(X)
    rfr = RandomForestRegressor(random_state=0, n_estimators=100)
    rfr_name = rfr.__class__.__name__
    rfr.fit(X, y)
    feature_importances = rfr.feature_importances_
    importance = sorted(zip(feature_importances, features), reverse=True)
    n = 3
    print (n, 'most important features' + ' (' + rfr_name + '):')
    [print (row) for i, row in enumerate(importance) if i < n]
    for row in importance:
        print (row)
    print ()
    print (red_wine[['alcohol', 'sulphates', 'volatile acidity',
                     'total sulfur dioxide', 'quality']].head())
Listing 1-9 Characterize redwine
After executing code from Listing 1-9, your output should resemble the following:
(1599, 11)
(1599,)
The code example begins by loading pandas and RandomForestRegressor packages. The main
block loads redwine.csv into a Pandas DataFrame. It then displays feature and target shapes. The
code concludes by training pandas data with RandomForestRegressor, displaying the three most
important features, and displaying the first five records from the data set.
RandomForestRegressor is also an ensemble algorithm, but it is used when the target is numeric
or continuous.
Tip Always hard-code random_state (e.g., random_state=0) for algorithms that use this
parameter to stabilize results.
The white wine example follows the exact same logic, but output differs in terms of data set size
and feature importance.
Listing 1-10 characterizes whitewine.csv.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

if __name__ == "__main__":
    br = '\n'
    f = 'data/whitewine.csv'
    white_wine = pd.read_csv(f)
    X = white_wine.drop(['quality'], axis=1)
    y = white_wine['quality']
    print (X.shape)
    print (y.shape, br)
    features = list(X)
    rfr = RandomForestRegressor(random_state=0, n_estimators=100)
    rfr_name = rfr.__class__.__name__
    rfr.fit(X, y)
    feature_importances = rfr.feature_importances_
    importance = sorted(zip(feature_importances, features), reverse=True)
    n = 3
    print (n, 'most important features' + ' (' + rfr_name + '):')
    [print (row) for i, row in enumerate(importance) if i < n]
    print ()
    print (white_wine[['alcohol', 'sulphates', 'volatile acidity',
                       'total sulfur dioxide', 'quality']].head())
Listing 1-10 Characterize whitewine
After executing code from Listing 1-10, your output should resemble the following:
(4898, 11)
(4898,)