Artificial Intelligence in Breast Cancer
1. INTRODUCTION
AI is often characterised as "a system's capacity to properly understand external input, learn from such
data, and apply those learnings to fulfil specified objectives and tasks via flexible adaptation." The enormous growth in computing power, coupled with the spread of big data over the last 50 years, has pushed AI applications into new domains (1). AI may now be found in voice recognition, facial
identification, autonomous vehicles, and other emerging technologies, and the use of AI in medical
imaging has progressively become an important study area. Deep learning (DL) algorithms, in
particular, have made great progress in image identification tasks. Methods ranging from
convolutional neural networks to variational autoencoders have been discovered across a wide range
of medical image processing applications, promoting the fast growth of medical imaging (2). In the
realm of medical image analysis, AI has made significant contributions to early diagnosis, illness
evaluation, and treatment response evaluations for diseases such as pancreatic cancer (3), liver disease
(4), breast cancer (5), chest disease (6), and neurological tumors (7).
In 2018, around 2.1 million new instances of breast cancer were detected globally, accounting for
almost one-fourth of all cancer cases among women (8). Breast cancer is the most frequently diagnosed
cancer in most nations (154 of 185) and the leading cause of cancer mortality in more than 100
countries (9). Breast cancer has a significant influence on women's physical and emotional health,
endangering their lives and wellbeing. Breast cancer detection and treatment have become serious
public health issues across the globe. The precise diagnosis, particularly early identification and
treatment of breast cancer, has a significant influence on prognosis. Early-stage breast cancer has a clinical cure rate of more than 90 percent; in the intermediate stage this falls to 50-70 percent, and in the late stage the therapeutic effect is extremely limited. Mammography, ultrasound, and MRI are now crucial screening
and additional diagnostic tools for breast cancer, as well as significant methods of detection, staging,
effectiveness assessments, and follow-up exams (10).
Currently, radiologists view, analyse, and diagnose breast images manually. Under a heavy, long-term workload, radiologists are more prone to misreading images owing to fatigue, resulting in mistaken or missed diagnoses, which AI may help to prevent. Computer-aided diagnosis (CAD) has been applied to reduce such human error: in CAD systems, a suitable algorithm performs the processing and analysis of an image (11). The most recent innovation is deep learning (DL), particularly convolutional neural networks (CNNs), which have made tremendous progress in medical imaging (12). This article
provides a short history of AI before focusing on its applications in breast mammography, ultrasound,
and MRI image processing. This study also explores the potential for AI's use in medical imaging.
2. BRIEF OVERVIEW OF AI
AI refers to the capacity of computers to learn and solve problems by imitating human reasoning or human brain processes (13). It has been more than 60 years since John McCarthy proposed the notion of artificial intelligence in 1956, and AI technology has advanced at a breakneck pace over the last decade. As a branch of computer science, it aims to create a new type of intelligent machine that responds like a human brain; its field of application is broad and includes robotics, image recognition, speech recognition, natural language processing, data mining, pattern recognition, and expert systems, among
other things (14, 15). AI may be used in the medical profession for health management, clinical
decision support, medical imaging, illness screening and early disease prediction, medical
records/literature analysis, and hospital administration, among other things. AI can evaluate medical images and data for disease screening and prediction, as well as assist clinicians in diagnosis. In
2018, Al-antari MA et al. investigated a comprehensive integrated CAD system that can be utilised for
detection, segmentation, and classification of masses in mammography, and its accuracy was more
than 92 percent in all areas (16). Alejandro Rodriguez-Ruiz et al. collected 2654 exams and readings from 101 radiologists, used a trained AI system to score the likelihood of cancer on a scale of 1 to 10, and found that using an AI score of 2 as the triage threshold could reduce the workload by 17 percent, showing that AI-based automatic preselection can significantly reduce radiologists' workload (17).
One of the most essential techniques to develop AI is via machine learning (ML). There are two types
of machine learning: unsupervised and supervised. Unsupervised ML classifies radiomics features
without relying on any information supplied by or defined by a previously accessible collection of
imaging data of the same kind as the one under analysis. Supervised ML approaches are initially
trained using an accessible data archive, which implies that the algorithm parameters are modified
until the method delivers an ideal tradeoff between its ability to fit the training set and its
generalisation power when a new data example arrives. Sparsity-enhancing regularisation networks can
generate predictions in the realm of supervised ML while also identifying the extracted characteristics
that most influence such predictions (18). ML methods that take radiomic image features as input to predict disease outcomes on follow-up include linear regression, K-means, decision trees, random forests, principal component analysis (PCA), support vector machines (SVMs), and artificial neural networks (ANNs).
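To make the supervised workflow described above concrete, the following is a minimal sketch, assuming Python with scikit-learn and using synthetic feature vectors that merely stand in for radiomic features extracted from breast images; the feature dimensions, labels, and split are illustrative only.

```python
# Minimal sketch of supervised ML on radiomic-style feature vectors (scikit-learn assumed).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))      # 200 lesions x 30 radiomic features (synthetic)
y = rng.integers(0, 2, size=200)    # 0 = benign, 1 = malignant (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Supervised learning: parameters are fitted to the labelled training archive,
# then generalisation is checked on held-out examples.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```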
DL, one of the neural network-based AI approaches, is built from models that mimic the human brain and is now regarded as the most advanced technology for image classification. Neural networks first imitate nerve cells and then attempt to replicate the human brain through a simulation model known as the perceptron. A neural network is made up of successive layers: an input layer, hidden layers, and an output layer. In a CNN, the hidden layers comprise convolutional, pooling, and fully connected layers, while the input layer can handle multidimensional data. The feature map formed in a convolutional layer is first passed through a non-linear activation function before being transmitted to a pooling layer for downsampling. The output is then sent to the fully connected layers, which integrate the features into a final classification, and the output layer delivers the analysis results directly. A multilayer perceptron is built by stacking and organising layers of perceptrons in which all nodes are fully connected, allowing it to solve more complicated problems (19). CNNs can likewise be trained with supervised or unsupervised learning; supervised learning refers to a training regime in which both the observed training data and the accompanying ground-truth labels for those data (also known as "targets") are required to train the model. Unsupervised learning, on the other hand, uses training data with no diagnostic or normal/abnormal labelling. In image classification tasks, supervised learning is currently the preferred strategy (20).
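The layer structure just described can be sketched in a few lines. The snippet below is an illustrative toy model only, assuming PyTorch; the input size, channel counts, and class count are arbitrary choices, not a configuration from any cited study.

```python
# Toy CNN following the sequence: convolution -> non-linear activation -> pooling ->
# fully connected -> output.  PyTorch is assumed; dimensions are illustrative.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolutional layer -> feature maps
            nn.ReLU(),                                     # non-linear activation
            nn.MaxPool2d(2),                               # pooling layer (downsampling)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)        # output layer: class scores

model = TinyCNN()
dummy = torch.randn(1, 1, 224, 224)      # one single-channel 224x224 image
print(model(dummy).shape)                # torch.Size([1, 2])
```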
3. APPLICATIONS OF AI IN MAMMOGRAPHY
Mammography is one of the most extensively used modalities for screening for breast cancer (21, 22).
Mammography is a noninvasive detection technology with low discomfort, simple operation, excellent resolution, and good repeatability. The stored images are not restricted by patient age or body shape and can be compared before and after treatment. Mammography can detect breast lumps that physicians cannot palpate and helps reliably distinguish benign lesions from malignant tumours of the breast.
Mammograms are presently obtained using full-field digital mammography (DM) devices and are
available in both raw imaging data (for processing) and postprocessed data (for presentation) (23, 24).
In most studies, AI has been used to analyse mammography images, mostly for the detection and
categorization of breast mass and microcalcifications, breast mass segmentation, breast density
evaluation, breast cancer risk assessment, and image quality enhancement.
4. DETECTION AND CLASSIFICATION OF BREAST MASSES
Masses are one of the most prevalent signs of breast cancer among the several abnormalities observed
on mammograms. Because of variations in shape, size, and margins, masses are difficult to identify and diagnose, particularly in dense breasts. As a result, mass detection is a critical stage in
CAD. Some studies proposed a Crow search optimization-based intuitionistic fuzzy clustering
approach with neighbourhood attraction (CrSA-IFCM-NA), and it has been demonstrated that CrSA-
IFCM-NA effectively separated the masses from mammogram images and had good results in terms
of cluster validity indices, indicating clear segmentation of the regions (24). Others created a fully integrated CAD system that combined a regional DL approach, You-Only-Look-Once (YOLO), a new deep network model, the full resolution convolutional network (FrCN), and a deep CNN to detect, segment, and classify masses in mammograms; on the INbreast dataset, detection accuracy reached 98.96 percent, effectively assisting radiologists in making an accurate diagnosis (16, 25, 26).
5. DETECTION AND CLASSIFICATION OF MICROCALCIFICATIONS
Breast calcifications are small deposits of calcium salts in the breast tissue that appear on mammography as small white spots. Calcifications are classified into two types: microcalcifications and macrocalcifications. Macrocalcifications are large, coarse calcifications that are typically benign and
age-related. Microcalcifications, with diameters ranging from 0.1 mm to 1 mm and with or without
visible masses, may be early indicators of breast cancer (27). Several CAD methods are now being
developed to identify calcifications in mammography pictures. Cai H et al. created a CNN model for
the detection, analysis, and classification of microcalcifications in mammography images, confirming
that the function of the CNN model to extract images outperformed handcrafted features; they
achieved a classification precision of 89.32 percent and a sensitivity of 86.89 percent by using filtered
deep features that are fully utilised by the proposed CNN structure for traditional descriptors (28).
Zobia Suhail et al. created a novel method for classifying benign and malignant microcalcifications by
combining an improved Fisher linear discriminant analysis approach for the linear transformation of
segmented microcalcification data with an SVM variant to distinguish between the two classes; 288
ROIs (139 malignant and 149 benign) in the Digital Database for Screening Mammography (DDSM)
were classified with an average accuracy of 80 percent (29). Jian W et al. created a CAD system based on
the dual-tree complex wavelet transform (DT-CWT) to identify breast microcalcifications (30). Guo Y
et al. suggested a novel hybrid approach for detecting microcalcification in mammograms by
combining contourlet transform with nonlinking simplified pulse-coupled neural network (31). An
artificial neural network can identify, segment, and categorise masses and microcalcifications in
mammography, serving as a reference for radiologists and considerably enhancing their working efficiency and accuracy.
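The idea of pairing a linear discriminant transformation with an SVM, as in the microcalcification study above, can be illustrated with a brief sketch. It assumes scikit-learn and uses the standard LinearDiscriminantAnalysis class rather than the scalable Fisher variant used in that study; the random feature vectors are placeholders for segmented-ROI descriptors.

```python
# Sketch only: LDA projection followed by an SVM for benign/malignant classification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(288, 40))            # 288 ROIs with placeholder features
y = np.r_[np.ones(139), np.zeros(149)]    # 139 malignant, 149 benign (counts from the DDSM subset cited)

# LDA projects the features onto a discriminative axis; the SVM then separates the two classes.
clf = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```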
6. BREAST MASS SEGMENTATION
Accurate mass segmentation is closely tied to successful patient treatment. Some researchers employed fuzzy contours to automatically segment breast masses from mammograms and analysed ROIs retrieved from the mini-MIAS database; the average true positive rate was 91.12 percent and the accuracy was 88.08 percent (32). Because of low-contrast mammography images, irregular mass shapes, spiculated edges, and pixel-intensity variations, global segmentation of masses on mammograms is a complicated task. Some
researchers employed the mesh-free radial basis function collocation technique to evolve a level set
function for segmentation of the breast and suspicious mass areas. The suspicious regions were then
classified as abnormal or normal using an SVM classifier. The sensitivity and specificity for the
DDSM dataset were found to be 97.12 percent and 92.43 percent, respectively (33). Plane fitting and
dynamic programming have also been used to segment breast masses in mammography, considerably improving the accuracy of breast lesion segmentation (34). Correct segmentation of breast lesions underpins appropriate disease categorisation and diagnosis (35). The use of automated image segmentation methods demonstrates the utility and promise of deep learning in precision medical systems.
7. BREAST DENSITY ASSESSMENT
Breast density is a significant risk factor for breast cancer and is often assessed using two-dimensional
(2D) mammography. Women with greater breast density are two to six times more likely to develop breast
cancer than women with low breast density (36). Mammographic density has historically been
measured as the absolute or relative quantity (as a proportion of total breast size) occupied by dense
tissue, which appears as white "cotton-like" patches on mammographic images (37). Accurate and consistent breast density assessment is therefore highly desirable, giving physicians and patients better-informed support for clinical decision-making.
Many studies have indicated that AI technology may help with the measurement of mammographic breast density (BD). Mohamed AA et al. investigated a CNN model based on the Breast Imaging
Reporting and Data System (BI-RADS) for BD categorisation and classified the density of a large DM dataset (approximately 22,000 images) into "scattered density" and "heterogeneous density"; they found that increasing the number of training samples yielded a higher area under the receiver operating characteristic curve (AUC), reaching 0.94-0.98 (38). They also utilised a CNN model to demonstrate that
radiologists relied on the mediolateral oblique (MLO) view rather than the craniocaudal (CC) view to classify the BD category (39). Le Boulc'h M and colleagues assessed the agreement between
DenSeeMammo (an AI-based automatic BD assessment software approved by the Food and Drug
Administration) and visual assessment by a senior and a junior radiologist on DM and discovered that
the BD assessments of the senior radiologist and the AI model were essentially the same (weighted kappa = 0.79; 95 percent CI: 0.73-0.84) (40). Lehman CD et al. created and tested a DL model to
evaluate BD using 58,894 randomly chosen digital mammograms, implementing the model in PyTorch with a deep CNN, ResNet-18. They concluded that agreement between the density assessments of the DL model and those of the original interpreting radiologist was good (kappa = 0.67; 95 percent CI: 0.66, 0.68), and that in the four-way BI-RADS categorisation the interpreting radiologist accepted 9729 of 10,763 (90 percent; 95 percent CI: 90 percent, 91 percent) DL assessments (41). AI-
assisted BD evaluation may minimise variance among radiologists, better forecast the risk of breast
cancer, and serve as a foundation for future diagnosis and treatment.
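The general approach of adapting a ResNet-18 backbone to four-way BI-RADS density classification can be sketched as below. This is only a hedged illustration assuming PyTorch and torchvision; the weights, input handling, and training loop are assumptions for demonstration, not the published configuration of the Lehman et al. model.

```python
# Illustrative sketch: ResNet-18 repurposed for 4-class BI-RADS density (a-d).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                      # backbone; pretrained weights optional
model.fc = nn.Linear(model.fc.in_features, 4)  # replace the head with 4 density classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)           # mammograms replicated to 3 channels (assumption)
labels = torch.randint(0, 4, (8,))             # placeholder BI-RADS density labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```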
8. BREAST CANCER RISK ASSESSMENT
Breast cancer's high incidence and fatality rate endanger women's physical and emotional health. As Sun YS et al. concluded in 2017, there are many known risk factors for breast cancer, including ageing, family history, reproductive factors (early menarche, late menopause, late age at first pregnancy, and low parity), oestrogen (endogenous and exogenous estrogens), and lifestyle (excessive alcohol consumption, excessive dietary fat intake, and smoking) (42).
According to the relevant literature, AI research in breast cancer risk prediction is also highly
widespread. Nindrea RD et al. conducted a systematic review of published ML algorithms for breast
cancer risk prediction between January 2000 and May 2018, summarised and compared five ML
algorithms, including SVM, ANN, decision tree (DT), naive Bayes, and K-nearest neighbour (KNN),
and confirmed that the SVM algorithm was able to calculate breast cancer risk with greater accuracy
than other ML algorithms (43). In other studies, mammography results, risk factors, and clinical findings were analysed with an ANN, in conjunction with cytopathological diagnosis, to evaluate breast cancer risk, helping doctors estimate the probability of malignancy and improving the positive predictive value (PPV) of the decision to perform a biopsy (44). Yala A and his
colleagues also created a hybrid DL model that uses both full-field mammograms and conventional risk variables, and found that it was more accurate than the Tyrer-Cuzick model currently used in clinical practice (45). As a consequence, AI predicts breast cancer risk with
more accuracy than previous approaches, allowing clinicians to advise high-risk groups in the
implementation of suitable measures to lower the occurrence of breast cancer.
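A systematic comparison of the algorithms named above can be set up with standard cross-validation. The sketch below assumes scikit-learn; the tabular risk-factor matrix is synthetic and only illustrates the workflow, not any published accuracy figures.

```python
# Comparing SVM, ANN (MLP), decision tree, naive Bayes, and KNN by 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 12))       # placeholder risk factors (age, family history, parity, ...)
y = rng.integers(0, 2, size=500)     # 1 = developed breast cancer (synthetic outcome)

classifiers = {
    "SVM": SVC(),
    "ANN": MLPClassifier(max_iter=1000),
    "Decision tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```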
9. IMAGE QUALITY ASSESSMENT
Accurate disease diagnosis depends on good image quality. Clear images aid the identification and diagnosis of small lesions, and image quality has a substantial influence on the detection rate and accuracy of AI for evaluating breast diseases on mammography. Numerous computer techniques for improving image quality have therefore been devised. The multi-scale shearlet
transform may provide multi-resolution findings because it offers additional information on the data
phase, directionality, and shift invariance. This is useful for detecting cancer cells, especially those
with tiny outlines. Shenbagavalli P and colleagues used a shearlet transform image enhancement approach to improve mammography image quality and classified images from the DDSM database as benign or malignant with an accuracy of up to 93.45 percent (11). Teare P et al. used a novel false-colour enhancement method incorporating contrast-limited adaptive histogram equalisation (CLAHE) to optimise
the characteristics of mammography. Using dual deep CNNs at different scales for classification of
mammogram images and derivative patches combined with a random forest gating network, they
achieved a sensitivity of 0.91 and a specificity of 0.80 (46). Because image quality is the foundation
of an accurate diagnosis, thorough image quality assessment and improvement methods must be
implemented in order to successfully aid radiologists and ANN systems in further analysis and
diagnosis (Table 1).
10. APPLICATIONS OF AI IN BREAST ULTRASOUND
Ultrasound offers several benefits as a diagnostic technology with a high usage rate, such as easy
operation, no radiation, and real-time operation. As a result, ultrasound imaging has increasingly
become a widely used imaging technique for the detection and diagnosis of breast cancer. To reduce missed or misdiagnosed breast lesions caused by limited physician experience or subjective judgement, and to make ultrasound diagnosis quantitative and uniform, AI systems have been created to identify and diagnose breast lesions in ultrasound images (47). According to related research (48, 49),
AI systems are mostly employed in breast ultrasound imaging for the detection and segmentation of
ROIs, feature extraction, and classification of benign and malignant lesions.
11. IDENTIFICATION AND SEGMENTATION OF ROIs
To correctly characterise and identify breast lesions, the lesions must first be separated from the surrounding tissue. In current clinical work, segmentation of breast images is mostly performed manually by ultrasound physicians; this procedure not only relies on the doctors' expertise but also requires time and effort. Furthermore, because breast ultrasound images have poor contrast, unclear borders, and many shadows, AI-based automated segmentation approaches for lesions in breast ultrasound images have been proposed. Segmenting a breast ultrasound image entails detecting an ROI encompassing the lesion and delineating its outline. Hu Y et al. suggested an
automated tumour segmentation approach that merged a dilated fully convolutional network (DFCN) with a phase-based active contour (PBAC) model. After training, 170 breast ultrasound images were detected and segmented with a mean Dice similarity coefficient (DSC) of 88.97 percent, demonstrating that the suggested segmentation approach may partially substitute for manual segmentation in medical analysis (50). Kumar V. et al. proposed a multi-U-
net algorithm and segmented masses from 258 women's breast ultrasound images, achieving a mean
Dice coefficient of 0.82, a true positive fraction (TPF) of 0.84, and a false positive fraction (FPF) of
0.01, all of which are clearly better than the original U-net algorithm's results (51). To segment ultrasound images of breast tumours, Feng Y. et al. used a Hausdorff-based fuzzy c-means (FCM)
method with an adaptive area selection approach. The neighbourhood surrounding each pixel is
adaptively chosen for Hausdorff distance measurement based on mutual information between areas.
The findings demonstrated that the adaptive Hausdorff-based FCM algorithm outperformed the
Hausdorff-based and classic FCM methods (52). Automated detection and segmentation of lesions in breast ultrasound images saves ultrasound specialists significant time in identifying and diagnosing disease, and provides a foundation for the development of AI for automated diagnosis of breast disorders.
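The overlap metrics quoted in this section (Dice similarity coefficient, true positive fraction, false positive fraction) compare a predicted binary lesion mask with a manual delineation. The sketch below uses NumPy only, with synthetic masks as placeholders; note that exact definitions of TPF and FPF vary slightly between papers, so the formulas here are one common convention rather than those of any single cited study.

```python
# Overlap metrics between an automatic and a manual binary lesion mask.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    dice = 2 * tp / (pred.sum() + truth.sum())   # Dice similarity coefficient
    tpf = tp / truth.sum()                        # fraction of lesion pixels recovered
    fpf = fp / (~truth).sum()                     # fraction of background marked as lesion
    return dice, tpf, fpf

truth = np.zeros((128, 128), dtype=bool)
truth[40:90, 40:90] = True                        # "manual" lesion mask (placeholder)
pred = np.zeros_like(truth)
pred[45:95, 45:95] = True                         # "automatic" segmentation (placeholder)
print("DSC=%.3f TPF=%.3f FPF=%.3f" % segmentation_metrics(pred, truth))
```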
BI-RADS is the primary classification system used by physicians to categorise lesions
in breast ultrasound imaging. AI systems with benign and malignant categorization features have
progressively been created to assist clinicians with varying levels of knowledge to achieve a consistent
decision. Ciritsis A et al. used a deep convolutional neural network (dCNN) to classify an internal data set and an external test data set, categorising breast ultrasound images into BI-RADS 2-3 and BI-RADS 4-5. The findings indicated that the dCNN had a classification accuracy of 93.1 percent (external 95.3 percent), while radiologists had a classification accuracy of 91.6 ± 5.4 percent (external 94.1 ± 1.2 percent). This demonstrates that dCNNs may be used to mimic human decision making
(56). Becker AS et al. analysed 637 breast ultrasound images (84 malignant and 553 benign lesions) using DL software. The software was trained on a randomly selected subset of the images (n=445, 70 percent), and the remaining cases (n=192) were used to validate the resulting model
throughout the training process. The results were compared to three readers with varying levels of skill
(a radiologist, a resident, and a qualified medical student), and the findings revealed that the neural
network, which had only been trained on a few hundred instances, had equivalent accuracy to a
radiologist's reading. The neural network performed better than a medical student who was taught with
the identical training data set (57). This study suggests that AI-assisted categorization and diagnosis of
breast illnesses may greatly reduce physicians' diagnostic time and enhance the diagnostic accuracy of
novice clinicians (Table 2).
14. APPLICATION OF AI IN BREAST MRI
MRI is the most sensitive method for detecting breast cancer and is generally recommended as a
complement to mammography for high-risk patients (59). Through a variety of scanning sequences,
MRI can comprehensively examine the shape, size, extent, and blood supply of breast masses. However, because of limited specificity, high cost, lengthy examination time, and patient selectivity, it is not as widely utilised as mammography and ultrasound. The majority of research
on breast imaging and DL has focused on mammography, with little evidence available for breast
MRI (60). The detection, segmentation, characterization, and categorization of breast lesions are the
primary goals of DL research in breast MRI (61–64). Ignacio Alvarez Illan et al. used a CAD system
to identify and segment non-mass enhanced lesions on dynamic contrast-enhanced magnetic resonance
imaging (DCE-MRI) of the breast, and the improved CAD system lowered and controlled the false
positive rate, yielding good results (65). Herent P. et al. created a deep learning model to identify,
describe, and categorise lesions on breast MRI (mammary glands, benign lesions, invasive ductal
carcinoma, and other malignant lesions), and it performed well (60). Antropova N. et al. used
maximum intensity projection images to include the dynamic and volumetric components of DCE-
MRIs into breast lesion categorization using DL techniques. The findings showed that combining
volumetric and dynamic DCE-MRI components may greatly enhance CNN-based lesion
categorization (66). Jiang Y. et al. recruited 19 breast imaging radiologists (eight academic and eleven private-practice) to classify DCE-MRI lesions as benign or malignant, and compared the classification
results obtained using only conventionally available CAD evaluation software, including kinetic maps,
with those obtained using AI analytics via CAD software. The usage of AI systems was demonstrated
to increase radiologists' performance in distinguishing benign and malignant breast tumours on MRI
(67). Breast MRI is still required to assess people who are at high risk of developing breast cancer. The
CAD method may increase examination sensitivity, lower false positive rates, and eliminate needless
biopsies and psychological stress on patients (68) (Table 3).
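The maximum intensity projection referred to above is simply the brightest voxel along one axis of a volume. The sketch below uses NumPy only; the random array stands in for a subtraction (post- minus pre-contrast) DCE-MRI series, so the dimensions and orientation are assumptions for illustration.

```python
# Maximum intensity projection (MIP) of a DCE-MRI-like volume.
import numpy as np

volume = np.random.rand(96, 256, 256)   # slices x height x width (placeholder volume)
mip = volume.max(axis=0)                # brightest voxel along the slice direction
print(mip.shape)                        # (256, 256) image usable as 2D CNN input
```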
15. CONCLUSION
AI, especially deep learning, is becoming more commonly employed in medical imaging and performs
well in medical image processing tasks. AI can deliver objective and useful information to physicians while reducing their burden and the rates of missed diagnosis and misdiagnosis, thanks to its advantages of high computation speed, strong repeatability, and freedom from fatigue (72). At present, the CAD
system for breast cancer screening is being extensively researched. These methods can recognise and
segment breast lesions, extract characteristics, categorise them, estimate BD and the risk of breast
cancer, and assess treatment impact and prognosis in mammography, ultrasound, MRI, and other
imaging tests (39, 73–78). These technologies have significant benefits and possibilities for alleviating
doctor pressure, optimising resource allocation, and enhancing accuracy.
16. CHALLENGES AND PROSPECTS
AI is still in the "weak AI" stage. Despite fast advancements in the medical industry over the last
decade, it is still a long way from being fully integrated into doctors' practice and widespread use throughout the globe. CAD systems for breast cancer screening currently have numerous
drawbacks, including a lack of large-scale public datasets, reliance on ROI annotation, high image
quality requirements, geographical disparities, and overfitting and binary classification issues.
Furthermore, AI models are primarily trained for a single task and cannot perform several tasks at the same time, which is one of the hurdles DL faces in the development of breast imaging. At the same time, these challenges give fresh impetus to the development of breast imaging diagnostics and highlight the enormous future potential of intelligent medical imaging.
Aside from standard imaging approaches, CAD systems based on DL are quickly evolving in the domains of digital breast tomosynthesis (79–81), ultrasound elastography (82), and contrast-enhanced mammography, ultrasound, and MRI (83, 84). We think that artificial intelligence in breast imaging may be utilised not only to identify, categorise, and forecast breast disorders, but also to further classify particular breast diseases (e.g., breast fibroplasia) and predict lymph node metastasis
(85) and disease recurrence (86). It is expected that as AI technology advances, radiologists will
achieve higher accuracy, greater efficiency, and more accurate classification and determination of
adjuvant treatment for breast diseases, resulting in earlier detection, diagnosis, and treatment of breast
cancer for the vast majority of patients.
Funding
This paper was funded by Mrs Rita Biswas and Mr Arun Kumar Chakraborty. The authors greatly appreciate their efforts.
Author’s Biography
NAME: Mr Anirban Chakraborty
DESIGNATION: Scientist
REFERENCES
1. Bouletreau P, Makaremi M, Ibrahim B, Louvrier A, Sigaux N. Artificial Intelligence: Applications in Orthognathic Surgery. J Stomatol Oral Maxillofac Surg (2019) 120(4):347–54. doi: 10.1016/j.jormas.2019.06.001
18. Tagliafico AS, Piana M, Schenone D, Lai R, Massone AM, Houssami N. Overview of Radiomics
in Breast Cancer Diagnosis and Prognostication. Breast (2020) 49:74–80. doi:
10.1016/j.breast.2019.10.018
19. Yanagawa M, Niioka H, Hata A, Kikuchi N, Honda O, Kurakami H, et al. Application of Deep
Learning (3-Dimensional Convolutional Neural Network) for the Prediction of Pathological
Invasiveness in Lung Adenocarcinoma: A Preliminary Study. Med (Baltimore) (2019) 98(25):
e16119. doi: 10.1097/MD.0000000000016119
20. Le EPV, Wang Y, Huang Y, Hickman S, Gilbert FJ. Artificial Intelligence in Breast Imaging. Clin
Radiol (2019) 74(5):357–66. doi: 10.1016/j.crad.2019.02.006
21. Welch HG, Prorok PC, O’Malley AJ, Kramer BS. Breast-Cancer Tumor Size, Overdiagnosis, and
Mammography Screening Effectiveness. N Engl J Med (2016) 375(15):1438–47. doi:
10.1056/NEJMoa1600249
22. McDonald ES, Oustimov A, Weinstein SP, Synnestvedt MB, Schnall M, Conant EF. Effectiveness
of Digital Breast Tomosynthesis Compared With Digital Mammography: Outcomes Analysis
From 3 Years of Breast Cancer Screening. JAMA Oncol (2016) 2(6):737–43. doi:
10.1001/jamaoncol.2015.5536
23. Alsheh Ali M, Eriksson M, Czene K, Hall P, Humphreys K. Detection of Potential
Microcalcification Clusters Using Multivendor for-Presentation Digital Mammograms for Short-
Term Breast Cancer Risk Estimation. Med Phys (2019) 46(4):1938–46. doi: 10.1002/mp.13450
24. S P, NK V, S S. Breast Cancer Detection Using Crow Search Optimization-Based Intuitionistic
Fuzzy Clustering With Neighborhood Attraction. Asian Pac J Cancer Prev (2019) 20(1):157–65.
doi: 10.31557/APJCP.2019.20.1.157
25. Jiang Y, Inciardi MF, Edwards AV, Papaioannou J. Interpretation Time Using a Concurrent-Read
Computer-Aided Detection System for Automated Breast Ultrasound in Breast Cancer Screening
of Women With Dense Breast Tissue. AJR Am J Roentgenol (2018) 211(2):452–61. doi:
10.2214/AJR.18.19516
26. Fan M, Li Y, Zheng S, Peng W, Tang W, Li L. Computer-Aided Detection of Mass in Digital
Breast Tomosynthesis Using a Faster Region-Based Convolutional Neural Network. Methods
(2019) 166:103–11. doi: 10.1016/ j.ymeth.2019.02.010
27. Cruz-Bernal A, Flores-Barranco MM, Almanza-Ojeda DL, Ledesma S, Ibarra-Manzano MA.
Analysis of the Cluster Prominence Feature for Detecting Calcifications in Mammograms. J
Healthc Eng (2018) 2018:2849567. doi: 10.1155/2018/2849567
28. Cai H, Huang Q, Rong W, Song Y, Li J, Wang J, et al. Breast Microcalcification Diagnosis Using
Deep Convolutional Neural Network From Digital Mammograms. Comput Math Methods Med
(2019) 2019:2717454. doi:10.1155/2019/2717454
29. Suhail Z, Denton ERE, Zwiggelaar R. Classification of Micro-Calcification in Mammograms
Using Scalable Linear Fisher Discriminant Analysis. Med Biol Eng Comput (2018) 56(8):1475–
85. doi: 10.1007/s11517-017-1774-z
30. Jian W, Sun X, Luo S. Computer-Aided Diagnosis of Breast Microcalcifications Based on Dual-
Tree Complex Wavelet Transform. BioMed Eng Online (2012) 11:96. doi: 10.1186/1475-925X-
11-96
31. Guo Y, Dong M, Yang Z, Gao X, Wang K, Luo C, et al. A New Method of Detecting Micro-
Calcification Clusters in Mammograms Using Contourlet Transform and non-Linking Simplified
PCNN. Comput Methods Prog BioMed (2016) 130:31–45. doi: 10.1016/j.cmpb.2016.02.019
32. Hmida M, Hamrouni K, Solaiman B, Boussetta S. Mammographic Mass Segmentation Using
Fuzzy Contours. Comput Methods Prog BioMed (2018) 164:131–42. doi:
10.1016/j.cmpb.2018.07.005
33. Kashyap KL, Bajpai MK, Khanna P. Globally Supported Radial Basis Function-Based Collocation
Method for Evolution of Level Set in Mass Segmentation Using Mammograms. Comput Biol Med
(2017) 87:22–37. doi: 10.1016/ j.compbiomed.2017.05.015
34. Song E, Jiang L, Jin R, Zhang L, Yuan Y, Li Q. Breast Mass Segmentation in Mammography
Using Plane Fitting and Dynamic Programming. Acad Radiol (2009) 16(7):826–35. doi:
10.1016/j.acra.2008.11.014
35. Anderson NH, Hamilton PW, Bartels PH, Thompson D, Montironi R, Sloan JM. Computerized
Scene Segmentation for the Discrimination of Architectural Features in Ductal Proliferative
Lesions of the Breast. J Pathol (1997) 181(4):374–80. doi: 10.1002/(SICI)1096-
9896(199704)181:4<374:: AID-PATH795>3.0.CO;2-N
36. Fowler EEE, Smallwood AM, Khan NZ, Kilpatrick K, Sellers TA, Heine J. Technical Challenges
in Generalizing Calibration Techniques for Breast Density Measurements. Med Phys (2019)
46(2):679–88. doi: 10.1002/mp.13325
37. Hudson S, Vik Hjerkind K, Vinnicombe S, Allen S, Trewin C, Ursin G, et al. Adjusting for BMI in
Analyses of Volumetric Mammographic Density and Breast Cancer Risk. Breast Cancer Res
(2018) 20(1):156. doi: 10.1186/s13058-018-1078-8
38. Mohamed AA, Berg WA, Peng H, Luo Y, Jankowitz RC, Wu S. A Deep Learning Method for
Classifying Mammographic Breast Density Categories. Med Phys (2018) 45(1):314–21. doi:
10.1002/mp.12683
39. Mohamed AA, Luo Y, Peng H, Jankowitz RC, Wu S. Understanding Clinical Mammographic
Breast Density Assessment: A Deep Learning Perspective. J Digit Imaging (2018) 31(4):387–92.
doi: 10.1007/s10278-017-0022-2
40. Le Boulc’h M, Bekhouche A, Kermarrec E, Milon A, Abdel Wahab C, Zilberman S, et al.
Comparison of Breast Density Assessment Between Human Eye and Automated Software on
Digital and Synthetic Mammography: Impact on Breast Cancer Risk. Diagn Interv Imaging (2020)
101(12):811–9. doi: 10.1016/j.diii.2020.07.004
41. Lehman CD, Yala A, Schuster T, Dontchos B, Bahl M, Swanson K, et al. Mammographic Breast
Density Assessment Using Deep Learning: Clinical Implementation. Radiology (2019) 290(1):52–
8. doi: 10.1148/radiol.2018180694
42. Sun YS, Zhao Z, Yang ZN, Xu F, Lu HJ, Zhu ZY, et al. Risk Factors and Preventions of Breast
Cancer. Int J Biol Sci (2017) 13(11):1387–97. doi: 10.7150/ijbs.21635
43. Nindrea RD, Aryandono T, Lazuardi L, Dwiprahasto I. Diagnostic Accuracy of Different Machine
Learning Algorithms for Breast Cancer Risk Calculation: A Meta-Analysis. Asian Pac J Cancer
Prev (2018) 19(7):1747–52. doi: 10.22034/APJCP.2018.19.7.1747
44. Sepandi M, Taghdir M, Rezaianzadeh A, Rahimikazerooni S. Assessing Breast Cancer Risk With
an Artificial Neural Network. Asian Pac J Cancer Prev (2018) 19(4):1017–9. doi:
10.31557/APJCP.2018.19.12.3571
45. Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R. A Deep Learning Mammography-Based
Model for Improved Breast Cancer Risk Prediction. Radiology (2019) 292(1):60–6. doi:
10.1148/radiol.2019182716
46. Teare P, Fishman M, Benzaquen O, Toledano E, Elnekave E. Malignancy Detection on
Mammography Using Dual Deep Convolutional Neural Networks and Genetically Discovered
False Color Input Enhancement. J Digit Imaging (2017) 30(4):499–505. doi: 10.1007/s10278-017-
9993-2
47. Akkus Z, Cai J, Boonrod A, Zeinoddini A, Weston AD, Philbrick KA, et al. A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence-Powered Ultrasound for Improving
Clinical Workflow. J Am Coll Radiol (2019) 16(9 Pt B):1318–28. doi: 10.1016/j.jacr.2019.06.004
48. Wu GG, Zhou LQ, Xu JW, Wang JY, Wei Q, Deng YB, et al. Artificial Intelligence in Breast
Ultrasound. World J Radiol (2019) 11(2):19–26. doi: 10.4329/wjr.v11.i2.19
49. Park HJ, Kim SM, La Yun B, Jang M, Kim B, Jang JY, et al. A Computer-Aided Diagnosis System
Using Artificial Intelligence for the Diagnosis and Characterization of Breast Masses on
Ultrasound: Added Value for the Inexperienced Breast Radiologist. Med (Baltimore) (2019)
98(3):e14146. doi: 10.1097/MD.0000000000014146
50. Hu Y, Guo Y, Wang Y, Yu J, Li J, Zhou S, et al. Automatic Tumor Segmentation in Breast
Ultrasound Images Using a Dilated Fully Convolutional Network Combined With an Active
Contour Model. Med Phys (2019) 46(1):215–28. doi: 10.1002/mp.13268
51. Kumar V, Webb JM, Gregory A, Denis M, Meixner DD, Bayat M, et al. Automated and Real-Time
Segmentation of Suspicious Breast Masses Using Convolutional Neural Network. PloS One (2018)
13(5):e0195816. doi: 10.1371/journal.pone.0195816
52. Feng Y, Dong F, Xia X, Hu CH, Fan Q, Hu Y, et al. An Adaptive Fuzzy C-Means Method Utilizing
Neighboring Information for Breast Tumor Segmentation in Ultrasound Images. Med Phys (2017)
44(7):3752–60. doi: 10.1002/mp.12350
53. Hsu SM, Kuo WH, Kuo FC, Liao YY. Breast Tumor Classification Using Different Features of
Quantitative Ultrasound Parametric Images. Int J Comput Assist Radiol Surg (2019) 14(4):623–33.
doi: 10.1007/s11548-018-01908-8
54. Zhang Q, Xiao Y, Dai W, Suo J, Wang C, Shi J, et al. Deep Learning Based Classification of
Breast Tumors With Shear-Wave Elastography. Ultrasonics (2016) 72:150–7. doi:
10.1016/j.ultras.2016.08.004
55. Choi JH, Kang BJ, Baek JE, Lee HS, Kim SH. Application of Computer-Aided Diagnosis in Breast
Ultrasound Interpretation: Improvements in Diagnostic Performance According to Reader
Experience. Ultrasonography (2018) 37 (3):217–25. doi: 10.14366/usg.17046
56. Ciritsis A, Rossi C, Eberhard M, Marcon M, Becker AS, Boss A. Automatic Classification of
Ultrasound Breast Lesions Using a Deep Convolutional Neural Network Mimicking Human
Decision-Making. Eur Radiol (2019) 29 (10):5458–68. doi: 10.1007/s00330-019-06118-7
57. Becker AS, Mueller M, Stoffel E, Marcon M, Ghafoor S, Boss A. Classification of Breast Cancer
in Ultrasound Imaging Using a Generic Deep Learning Analysis Software: A Pilot Study. Br J
Radiol (2018) 91(1083):20170576. doi: 10.1259/bjr.20170576
58. Zheng X, Yao Z, Huang Y, Yu Y, Wang Y, Liu Y, et al. Deep learning radiomics can predict
axillary lymph node status in early-stage breast cancer. Nat Commun (2020) 11(1):1236. doi:
10.1038/s41467-020-15027-z
59. Sheth D, Giger ML. Artificial Intelligence in the Interpretation of Breast Cancer on MRI. J Magn
Reson Imaging (2020) 51(5):1310–24. doi: 10.1002/ jmri.26878
60. Herent P, Schmauch B, Jehanno P, Dehaene O, Saillard C, Balleyguier C, et al. Detection and
Characterization of MRI Breast Lesions Using Deep Learning. Diagn Interv Imaging (2019)
100(4):219–25. doi: 10.1016/j.diii.2019.02.008
61. Xu X, Fu L, Chen Y, Larsson R, Zhang D, Suo S, et al. Breast Region Segmentation Being
Convolutional Neural Network in Dynamic Contrast Enhanced MRI. Annu Int Conf IEEE Eng
Med Biol Soc (2018) 2018:750–3. doi: 10.1109/EMBC.2018.8512422
62. Truhn D, Schrading S, Haarburger C, Schneider H, Merhof D, Kuhl C. Radiomic Versus
Convolutional Neural Networks Analysis for Classification of Contrast-Enhancing Lesions at
Multiparametric Breast MRI. Radiology (2019) 290(2):290–7. doi: 10.1148/radiol.2018181352
63. Gallego-Ortiz C, Martel AL. A Graph-Based Lesion Characterization and Deep Embedding
Approach for Improved Computer-Aided Diagnosis of Nonmass Breast MRI Lesions. Med Image
Anal (2019) 51:116–24. doi:10.1016/j.media.2018.10.011
64. Vidic I, Egnell L, Jerome NP, Teruel JR, Sjobakk TE, Ostlie A, et al. Support Vector Machine for
Breast Cancer Classification Using Diffusion-Weighted MRI Histogram Features: Preliminary
Study. J Magn Reson Imaging (2018) 47 (5):1205–16. doi: 10.1002/jmri.25873
65. Illan IA, Ramirez J, Gorriz JM, Marino MA, Avendano D, Helbich T, et al. Automated Detection
and Segmentation of Nonmass-Enhancing Breast Tumors With Dynamic Contrast-Enhanced
Magnetic Resonance Imaging. Contrast Media Mol Imaging (2018) 2018:5308517. doi:
10.1155/2018/ 5308517
66. Antropova N, Abe H, Giger ML. Use of Clinical MRI Maximum Intensity Projections for
Improved Breast Lesion Classification With Deep Convolutional Neural Networks. J Med Imaging
(Bellingham) (2018) 5 (1):014503. doi: 10.1117/1.JMI.5.1.014503
67. Jiang Y, Edwards AV, Newstead GM. Artificial Intelligence Applied to Breast MRI for Improved
Diagnosis. Radiology (2021) 298(1):38–46. doi: 10.1148/ radiol.2020200292
68. Meyer-Base A, Morra L, Meyer-Base U, Pinker K. Current Status and Future Perspectives of
Artificial Intelligence in Magnetic Resonance Breast Imaging. Contrast Media Mol Imaging (2020)
2020:6805710. doi: 10.1155/2020/ 6805710
69. Hao W, Gong J, Wang S, Zhu H, Zhao B, Peng W. Application of MRI Radiomics-Based Machine
Learning Model to Improve Contralateral BIRADS 4 Lesion Assessment. Front Oncol (2020)
10:531476. doi: 10.3389/ fonc.2020.531476
70. Song SE, Seo BK, Cho KR, Woo OH, Son GS, Kim C, et al. Computer-aided detection (CAD)
system for breast MRI in assessment of local tumor extent, nodal status, and multifocality of
invasive breast cancers: preliminary study. Cancer Imaging (2015) 15(1):1. doi: 10.1186/s40644-
015-0036-2
71. Tahmassebi A, Wengert GJ, Helbich TH, Bago-Horvath Z, Alaei S, Bartsch R, et al. Impact of
Machine Learning With Multiparametric Magnetic Resonance Imaging of the Breast for Early
Prediction of Response to Neoadjuvant Chemotherapy and Survival Outcomes in Breast Cancer
Patients. Invest Radiol (2019) 54(2):110-7. doi: 10.1097/RLI.0000000000000518
72. Morgan MB, Mates JL. Applications of Artificial Intelligence in Breast Imaging. Radiol Clin
North Am (2021) 59(1):139–48. doi: 10.1016/j.rcl.2020.08.007
73. Hupse R, Samulski M, Lobbes MB, Mann RM, Mus R, den Heeten GJ, et al. Computer-Aided
Detection of Masses at Mammography: Interactive Decision Support Versus Prompts. Radiology
(2013) 266(1):123–9. doi: 10.1148/ radiol.12120218
74. Quellec G, Lamard M, Cozic M, Coatrieux G, Cazuguel G. Multiple-Instance Learning for
Anomaly Detection in Digital Mammography. IEEE Trans Med Imaging (2016) 35(7):1604–14.
doi: 10.1109/TMI.2016.2521442
75. Mendelson EB. Artificial Intelligence in Breast Imaging: Potentials and Limitations. AJR Am J
Roentgenol (2019) 212(2):293–9. doi: 10.2214/ AJR.18.20532
76. Qi X, Zhang L, Chen Y, Pi Y, Chen Y, Lv Q, et al. Automated Diagnosis of Breast
Ultrasonography Images Using Deep Neural Networks. Med Image Anal (2019) 52:185–98. doi:
10.1016/j.media.2018.12.006
77. Kooi T, Litjens G, van Ginneken B, Gubern-Merida A, Sanchez CI, Mann R, et al. Large Scale
Deep Learning for Computer Aided Detection of Mammographic Lesions. Med Image Anal (2017)
35:303–12. doi: 10.1016/ j.media.2016.07.007
78. Kim J, Kim HJ, Kim C, Kim WH. Artificial Intelligence in Breast Ultrasonography.
Ultrasonography (2021) 40(2):183–90. doi: 10.14366/ usg.20117
79. Sechopoulos I, Teuwen J, Mann R. Artificial Intelligence for Breast Cancer Detection in
Mammography and Digital Breast Tomosynthesis: State of the Art. Semin Cancer Biol (2020)
72:214-25. doi: 10.1016/j.semcancer.2020.06.002
80. Skaane P, Bandos AI, Niklason LT, Sebuodegard S, Osteras BH, Gullien R, et al. Digital
Mammography Versus Digital Mammography Plus Tomosynthesis in Breast Cancer Screening:
The Oslo Tomosynthesis Screening Trial. Radiology (2019) 291(1):23–30. doi:
10.1148/radiol.2019182394
81. Lotter W, Diab AR, Haslam B, Kim JG, Grisot G, Wu E, et al. Robust Breast Cancer Detection in
Mammography and Digital Breast Tomosynthesis Using an Annotation-Efficient Deep Learning
Approach. Nat Med (2021) 27(2):244– 9. doi: 10.1038/s41591-020-01174-9
82. Zhang Q, Song S, Xiao Y, Chen S, Shi J, Zheng H. Dual-Mode Artificially Intelligent Diagnosis of Breast Tumors in Shear-Wave Elastography and B-Mode Ultrasound Using Deep Polynomial
Networks. Med Eng Phys (2019) 64:1–6. doi: 10.1016/j.medengphy.2018.12.005
83. Adachi M, Fujioka T, Mori M, Kubota K, Kikuchi Y, Xiaotong W, et al. Detection and Diagnosis
of Breast Cancer Using Artificial Intelligence Based Assessment of Maximum Intensity Projection
Dynamic Contrast-Enhanced Magnetic Resonance Images. Diag (Basel) (2020) 10(5):330. doi:
10.3390/ diagnostics10050330
84. Dalmis MU, Gubern-Merida A, Vreemann S, Bult P, Karssemeijer N, Mann R, et al. Artificial
Intelligence-Based Classification of Breast Lesions Imaged With a Multiparametric Breast MRI
Protocol With Ultrafast DCE-MRI, T2, and DWI.