ORIGINAL ARTICLE
Ahead of print publication  

A preliminary study of sperm identification in microdissection testicular sperm extraction samples with deep convolutional neural networks


1 Department of Computer Science, Stanford University, Stanford, CA 94305, USA
2 Department of Urology, University of Utah Health, Salt Lake City, UT 84108, USA
3 Department of Obstetrics and Gynecology, Stanford Children’s Health, Stanford, CA 94305, USA
4 Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA

Date of Submission: 24-Feb-2020
Date of Acceptance: 03-Aug-2020
Date of Web Publication: 23-Oct-2020

Correspondence Address:
Daniel J Wu,
Department of Computer Science, Stanford University, Stanford, CA 94305,
USA
Odgerel Badamjav,
Department of Urology, University of Utah Health, Salt Lake City, UT 84108,
USA

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/aja.aja_66_20

PMID: 33106465

  Abstract 


Sperm identification and selection is an essential task when processing human testicular samples for in vitro fertilization. Locating and identifying sperm cell(s) in human testicular biopsy samples is labor intensive and time consuming. We developed a new computer-aided sperm analysis (CASA) system, which utilizes deep learning for near human-level performance on testicular sperm extraction (TESE), trained on a custom dataset. The system automates the identification of sperm in testicular biopsy samples. A dataset of 702 de-identified images from testicular biopsy samples of 30 patients was collected. Each image was normalized and passed through glare filters and diffraction correction. The data were split 80%, 10%, and 10% into training, validation, and test sets, respectively. Then, a deep object detection network, composed of a feature extraction network and object detection network, was trained on this dataset. The model was benchmarked against embryologists' performance on the detection task. Our deep learning CASA system achieved a mean average precision (mAP) of 0.741, with an average recall (AR) of 0.376 on our dataset. Our proposed method can work in real time; its speed is effectively limited only by the imaging speed of the microscope. Our results indicate that deep learning-based technologies can improve the efficiency of finding sperm in testicular biopsy samples.

Keywords: artificial intelligence; computer vision; male infertility; microdissection testicular sperm extraction; sperm



How to cite this URL:
Wu DJ, Badamjav O, Reddy VV, Eisenberg M, Behr B. A preliminary study of sperm identification in microdissection testicular sperm extraction samples with deep convolutional neural networks. Asian J Androl [Epub ahead of print] [cited 2020 Nov 25]. Available from: https://www.ajandrology.com/preprintarticle.asp?id=298919


  Introduction


Approximately 1% of all men and 10% of infertile men have azoospermia, the absence of sperm in the ejaculate.[1] Of all azoospermic men, over half have nonobstructive azoospermia (NOA) due to spermatogenic failure. NOA is defined as the absence of spermatozoa in semen because of minimal or no sperm production in the testis.[2] In men with NOA, microdissection testicular sperm extraction (micro-TESE), a meticulous microsurgical exploration of the testicular parenchyma to search for seminiferous tubules housing intact spermatogenesis, is routinely performed.[3],[4],[5]

As part of this procedure, most centers perform intra- and postoperative microscopic assessment of the extracted testicular tissue. This manual process is highly labor intensive and time consuming and shows significant variability;[6] however, it remains the gold standard for identification and collection of potential sperm recovered from NOA patients.[7]

There are no existing computer-aided sperm analysis (CASA) systems for analysis of testicular biopsies, the source of sperm for patients with azoospermia. Clinics treating patients with severe infertility primarily rely on manual image analysis for sperm identification in testicular tissue.[7] This requires careful processing in the laboratory in order to separate and identify sperm within the biopsied tissue. Not only is manual microscopic examination of testicular specimens time consuming and tedious, but it is also highly dependent on the skill of the personnel. Sperm can easily be overlooked due to several variables, including incomplete cell dissociation, elevated levels of other cell types, inexperience, exhaustion, and human error.[8] For patients who produce a small number of sperm to begin with, failing to identify even a couple of sperm could mean the difference between a successful and an unsuccessful testicular sperm extraction.

The field of computer vision technologies powered by deep learning methods offers exciting opportunities, especially when considering applications in low-cell number or single-cell analysis. With the increasing efficiency and robustness of deep learning methods for computer vision tasks,[9] we are optimistic that a CASA system utilizing deep neural networks can achieve robust performance on tasks associated with testicular biopsy analysis. However, careful sample preparation and processing is necessary for building robust CASA systems.

Deep neural networks have been used in a variety of domains to learn the underlying relationships between raw data and complex, higher-order tasks. Traditional methods often require researchers to specify an explicit methodology for solving a task – often a difficult, if not impossible, requirement. Deep neural networks, however, only require properly labelled datasets, and can learn how to solve the task from the data. Training deep neural networks involves defining a loss function – a heuristic measure of how well the network is accomplishing its task – and then using nonconvex optimization techniques to adjust the parameters of the network in order to minimize that loss function and thereby increase the performance of the neural network.

One task well suited for deep neural networks is object recognition – the parsing of an image, represented by the raw RGB values of each pixel, in order to determine the presence and location of objects in that image. Here, we translate basic research on deep learning for object recognition into an applied CASA tool for the automated identification of spermatozoa.


  Materials and Methods


Dataset

Deep learning methods require a large amount of training data. To the best of our knowledge, there are no existing large-scale image datasets of testicular biopsies. After institutional review board (IRB) approval at Stanford University, Stanford, CA, USA (Approval No. 41652), we collected a novel dataset of 702 de-identified images from testicular biopsy samples of 30 patients. Consent from all participants was received at the time of semen collection [Figure 1].
Figure 1: Sample testicular biopsy image with embryologist-labeled bounding boxes.



Sample processing

The samples we obtained for this study were limited to the residual fraction of the TESE samples prepared for cryopreservation following micro-TESE procedures. The images were collected from these samples immediately after the procedure(s). To obtain this fraction, we added 1.0 ml of Quinn's Advantage Medium with HEPES (SAGE, Trumbull, CT, USA) supplemented with 20% Human Serum Albumin (SAGE), analyzed the suspension under an inverted microscope (IX73; Olympus, Tokyo, Japan), and captured images of the sperm found in these samples. Hence, these TESE samples were more diluted and less complex than the original TESE samples.

Characteristics

The data are all composite images of testicular biopsies from 30 distinct patients. These images vary in sperm phenotype, cellular clutter, tissue superstructure, imaging modality, size, and resolution. The dataset comprises two types of images: dense testicular tissue, collected with an inverted microscope (IX73; Olympus) at 1680 × 1050 resolution, and diffuse tissue, collected at 640 × 400 resolution.

Data preprocessing

At the time of collection, each image was normalized, passed through glare filters and diffraction correction (Hamilton Thorne, HT video and image capture software version 3, Hamilton Thorne, Inc., Beverly, MA, USA), and had microscopy artifacts removed.

Images were then anonymized and aggregated. A single embryologist annotated each image with bounding boxes around each identified spermatozoon. The data were split 80%, 10%, and 10% into training, validation, and test sets, respectively, and kept separate.

The dataset, with corresponding annotations, was parsed into the same format as Microsoft's Common Objects in Context (COCO) image dataset, a standard convention for object detection datasets. This raw dataset was then augmented at training time by the methods described below.
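For illustration, a minimal COCO-style record for one annotated image looks like the following (the field names follow the COCO convention; the file name and coordinates are hypothetical, not taken from our dataset):

```python
# Minimal COCO-style annotation record (illustrative; file name and
# coordinates are hypothetical, not values from the actual dataset).
coco_dataset = {
    "images": [{"id": 1, "file_name": "biopsy_0001.png", "width": 1680, "height": 1050}],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,                    # the single "sperm" category
            "bbox": [412.0, 233.0, 38.0, 95.0],  # [x, y, width, height] in pixels
            "area": 38.0 * 95.0,
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "sperm"}],
}
```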

Data augmentation

In order to increase the effective size of our dataset, we applied various data augmentation techniques. Images were normalized (the RGB values of images were scaled to be between 0 and 1), randomly flipped horizontally, randomly cropped, padded, and jittered. Applying these augmentation techniques has been shown to increase the robustness of deep neural networks.[10] Images were then linearly resampled to match the input size of 640 × 640.
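A minimal sketch of this kind of augmentation is shown below (NumPy; the crop, pad, and jitter steps and the exact parameters used in training are omitted, and the function name is ours):

```python
import numpy as np

def augment(image, boxes, rng=np.random.default_rng()):
    """Illustrative augmentation: normalization plus a random horizontal flip.

    `image` is an H x W x 3 uint8 array; `boxes` is an (N, 4) array of
    [xmin, ymin, xmax, ymax] pixel coordinates. Random cropping, padding,
    and color jitter used in the actual pipeline are omitted for brevity.
    """
    h, w = image.shape[:2]
    image = image.astype(np.float32) / 255.0         # scale RGB values to [0, 1]
    if rng.random() < 0.5:                           # flip half the images
        image = image[:, ::-1, :]
        boxes = boxes.copy()
        boxes[:, [0, 2]] = w - boxes[:, [2, 0]]      # mirror the x-coordinates
    return image, boxes
```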

Hard example mining

Our labelers were instructed to draw bounding boxes around all the spermatozoa they identified in each image. This means that our dataset only contains bounding boxes of “positive” examples and does not contain “negative” examples, i.e., bounding boxes for the “background” class. However, the neural network must also learn to recognize which parts of the image do not contain sperm, i.e., correspond to the “background” class. As such, we dynamically generate “negative” bounding boxes in each image to train the model on.

In more technical terms, a bounding box is considered a negative example if it has less than a 0.5 intersection-over-union ratio (IOU, defined in our Metrics section, below) with any labeled spermatozoon in an image. These boxes are dynamically selected during training. As there are significantly more possible boxes containing “background” than sperm, we decided to limit the number of negative boxes to three per positive box. In each image with k spermatozoa, we select the 3k negative examples with the largest loss (i.e., the candidate boxes on which the model currently makes the greatest error) in that image. These negative examples are temporarily added to the dataset for that iteration of model training. This procedure is known as hard example mining and is a method by which we pick the most informative negative examples for training the network while preventing a class imbalance between negative and positive examples.
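The selection step can be sketched as follows (an illustrative NumPy function, not the exact training code; the array names and shapes are assumptions):

```python
import numpy as np

def mine_hard_negatives(candidate_boxes, losses, ious_with_gt, k_positives):
    """Select the 3k hardest negative boxes for one image.

    `candidate_boxes` is an (M, 4) array of proposals, `losses` their current
    classification losses, and `ious_with_gt` an (M,) array holding each
    proposal's best IOU with any labeled spermatozoon.
    """
    is_negative = ious_with_gt < 0.5                 # candidate "background" boxes
    neg_indices = np.flatnonzero(is_negative)
    # Keep the negatives the model currently gets most wrong, capped at 3:1.
    order = np.argsort(-losses[neg_indices])
    keep = neg_indices[order[: 3 * k_positives]]
    return candidate_boxes[keep]
```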

Model

Sperm identification is an object detection task – object detection models take in an image and annotate it with bounding boxes. Each bounding box has an associated label (spermatozoon or background) and a confidence rating.

This deep object detection network is composed of two parts: a feature extraction network, which takes in raw images and outputs feature maps, and an object detection network, which interprets those feature maps and outputs predictions for the location and presence of objects.

In this work, MobileNetV2[11] was used as a feature extraction network, and single-shot detector (SSD)[12] was used as an object detection network. While we present summaries of these network architectures below, we encourage readers to see the original papers for more information.

Medical images require specialized expertise to annotate, and, as a result, medical image datasets are often much smaller than general image datasets. Transfer learning offers an effective way to mitigate the challenges of small training sets. Related work done by the medical imaging group at the National Institutes of Health (NIH) explored transfer learning of CifarNet, AlexNet, VGGNet, Overfeat, and GoogLeNet for image classification on lymph node detection and interstitial lung disease diagnosis[13] and found that models pretrained on ImageNet, an extensive database of miscellaneous images,[14] outperformed models trained from scratch, and achieved expert-level performance. Thus, we pretrained our feature extractor, MobileNetV2, on ImageNet.
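As an illustration of this transfer learning setup (not the exact code used in this study), an ImageNet-pretrained MobileNetV2 backbone can be loaded in Keras as follows; the SSD detection head is then attached to its feature maps and trained on the biopsy images:

```python
import tensorflow as tf

# ImageNet-pretrained feature extractor; the SSD detection head is attached
# on top of the resulting feature maps and trained on the biopsy dataset.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False,    # drop the ImageNet classification head
    weights="imagenet",   # start from pretrained convolutional features
)
```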

Architectures

We use two architectures in our CASA system – MobileNetV2 and SSD.

The original MobileNet is a feature extraction network optimized for fast predictions on mobile devices.[11] MobileNetV2 is an updated version composed of a fully convolutional layer with thirty-two 3 × 3 filters, and then 19 residual bottleneck layers, taking advantage of depth-wise convolutions for speed, and linear bottlenecks for memory efficiency. This results in a lightweight, small, and fast network while maintaining an acceptable standard of accuracy. By selecting this network for our feature extraction network, we optimize for speed over accuracy.

The SSD is an object detection network which makes use of the convolutional layers of VGG16, appending additional convolutional layers and extracting feature maps at each layer for prediction. These feature maps come in a variety of sizes, which allows for predictions at a variety of scales. SSD uses the Multibox algorithm,[12] which generates 8732 default box proposals at predetermined, distributed locations and sizes. The model then classifies each proposal as either a spermatozoon or background and regresses the bounding box shape to match the identified object.
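The default box layout can be sketched as follows (a simplified, single-feature-map version with one scale and three aspect ratios; the full Multibox configuration over all SSD feature maps is what yields the 8732 boxes mentioned above):

```python
import numpy as np

def default_boxes(feature_map_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Generate SSD-style default boxes for one square feature map.

    Returns boxes as [center_x, center_y, width, height] in relative
    coordinates; a simplified sketch of the Multibox layout.
    """
    boxes = []
    for i in range(feature_map_size):
        for j in range(feature_map_size):
            cx = (j + 0.5) / feature_map_size   # box centers on a regular grid
            cy = (i + 0.5) / feature_map_size
            for ar in aspect_ratios:
                w = scale * np.sqrt(ar)         # wider boxes for larger ratios
                h = scale / np.sqrt(ar)
                boxes.append([cx, cy, w, h])
    return np.array(boxes)
```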

Loss

The loss function is a numerical measure of the performance of the model and is minimized through backpropagation. SSD uses a loss function composed of the sum of a regularization loss and a weighted average of a classification loss and a localization loss (Equation 1, [Supplementary Information]). The classification loss represents how accurate the network is at identifying a given object in an image as belonging to the correct class (e.g., sperm or background) and is given by a softmax loss between the predicted and actual classes for each bounding box (Equation 2, [Supplementary Information]). The localization loss represents how precisely the network determines the coordinates of the bounding box and is given by the sum of smoothed distances between the center XY coordinates, width, and height of each predicted bounding box and the corresponding coordinates of the actual bounding box (Equation 3, [Supplementary Information]). We also add a regularization loss LR, which is given by the square of the L2 norm of the weights in the network multiplied by a regularization strength hyperparameter (Equation 4, [Supplementary Information]). This regularization loss serves the pragmatic function of penalizing overly large values for model parameters and mitigates model overfitting.

Training and tuning

The model was trained on our dataset on Google Cloud in 8 h on an NVIDIA K80 GPU, using minibatch gradient descent. The RMSProp optimizer, which accumulates a moving average of the squared gradient across batches, was used to update our weights (Equation 5, [Supplementary Information]). This smooths training updates compared to standard stochastic gradient descent optimization while preventing slow training.
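In a Keras-style training loop this corresponds to an optimizer configured roughly as below (the learning rate and decay values shown are illustrative placeholders, not the tuned values used in this study):

```python
import tensorflow as tf

# RMSProp keeps a decaying moving average of squared gradients and divides
# each update by its square root (see Equation 5, Supplementary Information).
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=4e-3,   # placeholder value, not the tuned rate
    rho=0.9,              # decay rate of the moving average
    momentum=0.9,
)
```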

We then used model performance on the validation data to tune the hyperparameters of the model. The main goal of the model was to automate manual sperm identification. Thus, the model has to make rapid predictions on quickly changing microscopy images. Furthermore, for patients with severe male factor infertility where no sperm is found in the semen, it must err on the side of high sensitivity.

Nonmaximal suppression

Due to the design of SSD, the output of the model on any given image is 8732 predictions, the vast majority of which correspond to the “background” class. Because these boxes have significant overlap with one another, we applied nonmaximal suppression,[12] a technique which keeps only the most confident prediction of each set of overlapping boxes, to declutter the resulting image.
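A minimal greedy implementation of this suppression step is sketched below (NumPy; boxes are [xmin, ymin, xmax, ymax], and the 0.5 overlap threshold is an assumption):

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep only the most confident box in each overlapping cluster."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(-scores)                # most confident boxes first
    keep = []
    while order.size > 0:
        i = order[0]                           # most confident remaining box
        keep.append(i)
        rest = order[1:]
        # Intersection of box i with every remaining box.
        iw = np.maximum(0.0, np.minimum(x2[i], x2[rest]) - np.maximum(x1[i], x1[rest]))
        ih = np.maximum(0.0, np.minimum(y2[i], y2[rest]) - np.maximum(y1[i], y1[rest]))
        inter = iw * ih
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou < iou_threshold]      # drop boxes overlapping the winner
    return keep
```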

Metrics

Following the standard of Microsoft's Common Objects in Context (COCO) challenge, the following metrics were used to assess model performance:

The IOU, also known as the Jaccard Index [Figure 2], is a measure of how well a given bounding box aligns with another bounding box. It is equal to the area of overlap between the two boxes divided by the area of their union. The IOU was used to determine whether a given model prediction was correct; any predicted bounding box with an IOU greater than some set threshold, generally 0.5 (i.e., an overlap of at least 50% of the union), is considered a correct bounding box.
Figure 2: The visual calculation of the intersection-over-union ratio (IOU).

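For reference, the IOU of two axis-aligned boxes can be computed as follows (a self-contained sketch; boxes are [xmin, ymin, xmax, ymax]):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)          # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)                   # overlap / union
```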


Mean average precision (mAP) is a measure of the precision-recall trade-off of a model. We used the COCO mAP metric, which averages the precision over 101 evenly distributed recall values, from 0 to 1, to calculate the mAP.

Average recall (AR) is a measure of the sensitivity of a model at a given number of detections per image. In this paper, we used 100 detections per image, i.e., after nonmaximal suppression, only the 100 most confident detections out of all 8732 detections were assessed and all other detections were removed. We used the COCO AR metric, which is defined as the maximum recall given this fixed number of detections per image, averaged over categories and IOUs. In keeping with the COCO standard, 10 IOU thresholds were used (50%, 55%, 60%, 65%, …, 95%).[15]

The F1 score is an overall measure of the accuracy of our classifier and is defined as the harmonic mean of precision (mAP) and recall (AR).
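Under this definition, the headline scores reported below combine as follows (a simple check, assuming the F1 is computed directly from the reported mAP and AR):

```python
# Harmonic mean of the reported mAP and AR (values from the Results section).
m_ap, ar = 0.741, 0.376
f1 = 2 * m_ap * ar / (m_ap + ar)
print(round(f1, 3))   # ≈ 0.499
```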


  Results


Benchmarks

While our training dataset was labeled by a single embryologist, we wanted to compare the performance of our model against human error rates. Thus, three embryologists independently labeled, with bounding boxes, our test dataset of 110 images, containing 111 sperm, with the VIA Image Annotator.[16] These annotations were then parsed to form a ground-truth set of labels, where, if in a given image, at least two out of three embryologists drew bounding boxes with at least 0.5 IOU, then we took the ground truth to be a bounding box with the average coordinates of the embryologists' labels [Figure 3]. We then calculated the IOU between each embryologist's predictions and the corresponding ground-truth bounding boxes and from these calculated mAP and AR for human performance.
Figure 3: A (faux) visual example of the methodology behind how the ground-truth labels were built. Top: if at least two embryologists labeled the same area (blue), a ground-truth bounding box is created from the average of the embryologists' labels (red). Bottom: if only one embryologist labeled an area, it is not added to the ground-truth labels.

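The consensus construction can be sketched as follows (an illustrative function that reuses the iou() helper shown under Metrics; the greedy clustering is a simplification of whatever matching procedure was actually used):

```python
import numpy as np

def consensus_boxes(labels_by_annotator, iou_threshold=0.5):
    """Build 2-of-3 consensus ground truth for one image (illustrative sketch).

    `labels_by_annotator` is a list of three (N_i, 4) arrays of
    [xmin, ymin, xmax, ymax] boxes. Boxes are pooled, greedily clustered by
    IOU, and clusters supported by at least two annotators are replaced by
    the mean of their coordinates. Assumes the iou() helper defined above.
    """
    pooled = [(box, a) for a, boxes in enumerate(labels_by_annotator) for box in boxes]
    used = [False] * len(pooled)
    consensus = []
    for i, (box_i, ann_i) in enumerate(pooled):
        if used[i]:
            continue
        used[i] = True
        cluster, annotators = [box_i], {ann_i}
        for j in range(i + 1, len(pooled)):
            box_j, ann_j = pooled[j]
            if used[j] or ann_j in annotators:
                continue
            if iou(box_i, box_j) >= iou_threshold:
                used[j] = True
                cluster.append(box_j)
                annotators.add(ann_j)
        if len(annotators) >= 2:                        # at least 2 of 3 agree
            consensus.append(np.mean(cluster, axis=0))  # average the coordinates
    return consensus
```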


Model assessment

We calculated mAP and AR on our test set, with our tuned hyperparameters. We also report the F1 score, which is the harmonic mean of the precision and recall. The performance of our model is given in [Table 1].
Table 1: Comparison of performance of architecture on test set



Qualitative results and limitations

The sample output from our model is shown in [Figure 4]. The model has a difficult time identifying distorted sperm, particularly those with bent tails, deformities, or those occluded by microscopy artifacts. Detection rates and localization performance on sperm with normal and subnormal morphology, our primary use case, align with embryologist-level performance.
Figure 4: Example predictions from SSD MobileNetV2 on two spermatozoon-rich images. (a) Model detections on sample 1. (b) Embryologist labels on sample 1. (c) Model detections on sample 2. (d) Embryologist labels on sample 2. SSD: single-shot detector.



Furthermore, the determinations were made on random static images. The accuracy of the deep neural network is lower than that of the embryologists; however, the model has the potential to be significantly more time efficient. On our GPU hardware, the model makes predictions on all 702 images in the dataset in approximately 25 s, whereas it took the three embryologists a total of approximately 129 600 s (approximately 36 h) to manually identify and label all 702 images. In a clinical setting, without the need to label the images, it would still take embryologists several (approximately 2–3) hours to visually identify sperm in these images, several orders of magnitude longer than the model requires.


  Discussion


Object detection via deep convolutional neural networks is an active and robust field of research. Robust models for object detection have been trained for a variety of image modalities – magnetic resonance imaging (MRI), computed tomography (CT), X-ray, and mammography – for a variety of tasks – detection, diagnosis, and assistive labeling.[17]

Computer vision tasks appear in a variety of medical imaging domains. Since medical images often contain nuanced and high-level features in localizable parts of the image, and the classification (assessment, diagnosis, and monitoring) task is often abstract, deep convolutional neural networks are the method of choice.[18],[19],[20],[21]

As such, the confluence of deep convolutional neural networks and existing object detection architectures is well suited for sperm identification and morphological classification. Currently, TESE success rates vary between centers.[8] While patient demographics play a role, we know that the laboratory also plays a significant role in determining the success of a sperm extraction.[22] Our results indicate that deep learning-based technologies can improve the efficiency of finding sperm in testicular biopsy samples. Deep learning algorithms can help improve the efficiency and accuracy of sperm identification by reducing the tedium of the procedure. However, for deep learning-based cell identification technologies to be successful for micro-TESE, these systems must possess some specific characteristics:

  1. Since the number of sperm present in the sample is typically low, the system should err on the side of false positives
  2. The system should be able to account for variation in the samples in terms of the constituents of the sample
  3. The final clinical utilization of the system must be simple, cost-effective, and require low maintenance.


For this study, we trained a deep learning architecture on a novel dataset of testicular biopsy images and achieved a mAP at IOU 0.5 of 0.741, with an AR at 100 detections per image of 0.376. The comparatively low recall is likely because deformed sperm are not well represented in our dataset: they exist as a small subclass within the sperm class and thereby suffer from the effects of class imbalance. Given that MobileNetV2 is optimized for speed rather than accuracy, this is a robust result; for example, on the MS COCO dataset, a dataset used to benchmark object detection models, MobileNetV2 achieved a mAP of only 0.22.[23] Our model achieves subhuman accuracy but extremely high throughput; thus, we suspect it will be a useful “prefiltering” step to assist embryologists in quickly finding areas of interest in testicular biopsy samples.

We emphasize that our aim is to create a practical tool for real-time sperm identification. A properly functioning network might not be accurate enough to outperform an embryologist on the number of spermatozoa found in each microscopy image but must be fast and accurate enough to outperform an embryologist in the amount of time it takes to find spermatozoa in a sample. We expect that future work will focus on locating and identifying sperm in TESE samples in tandem with embryologists.

We note that MobileNetV2 is optimized for fast inference; on a Google Pixel 1 smartphone, the model processes an image every 75 ms,[11] and on dedicated hardware it can make much faster predictions – we speculate that the practical limitation will be only the imaging speed of the microscope. These deep learning architectures can be applied to the micro-TESE procedure without great difficulty by integrating with existing imaging software. Using automated technologies, the potential exists to minimize the need for skilled tissue processing personnel while at the same time increasing the efficiency and consistency of the process, leading to increased sperm recovery rates.

After initial testing of the model, it will be deployed in an academic in vitro fertilization center as a research tool for further testing. We intend to deploy the model as a real-time video classification pipeline, which automatically identifies sperm in testicular tissue. Our novel tool will improve the efficiency of searching for sperm in testicular biopsy samples, which is currently a labor-intensive process dependent on the skill of the embryologist.


  Author Contributions


DJW preprocessed the data, trained the model, analyzed results, and prepared the manuscript. OB collected and prepared the dataset, analyzed results, and prepared the manuscript for publication. VR assisted in labeling the dataset and read and reviewed the manuscript for publication. ME, who is the senior operating surgeon for all cases described in the paper, read and reviewed the manuscript. BB read and reviewed the manuscript. All authors read and approved the final manuscript.


  Competing Interests


ME declares potential personal conflicts of interest in the companies Sandstone Diagnostics, Dadi, and Underdog which may hold an intellectual property license related to the technology described in this paper.


  Acknowledgments


The authors would like to thank the subjects who took part and the embryologists at Stanford Reproductive Clinic who participated in the collection and annotation of images.

Supplementary Information is linked to the online version of the paper on the Asian Journal of Andrology website.


  Supplementary Information




Equation 1: SSD object detection loss, composed of the sum of a regularization loss L_R and a weighted average of a classification loss L_C and a localization loss L_loc.
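A plausible LaTeX rendering, following the standard SSD formulation[12] (the original equation image is not reproduced here, so the weighting constant α and the normalization by the number of matched boxes N are assumptions):

\[ L(x, c, l, g) = \frac{1}{N}\left( L_C(x, c) + \alpha\, L_{loc}(x, l, g) \right) + L_R \]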



Equation 2: SSD classification loss, a categorical cross-entropy loss. The first term iterates over the positive labeled bounding boxes and subtracts the log-softmax of the probability the model predicts for the corresponding class. The second term iterates over the generated negative bounding boxes and subtracts the log-softmax of the probability the model predicts for the “background” class.
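In standard SSD notation this classification loss reads as follows (a reconstruction consistent with the description above, not a copy of the original equation image):

\[ L_C(x, c) = -\sum_{i \in \mathrm{Pos}} x_{ij}^{p} \log\left(\hat{c}_i^{p}\right) - \sum_{i \in \mathrm{Neg}} \log\left(\hat{c}_i^{0}\right), \qquad \hat{c}_i^{p} = \frac{\exp\left(c_i^{p}\right)}{\sum_{q}\exp\left(c_i^{q}\right)} \]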



Equation 3: SSD localization loss. x represents a single image; l, g, and d represent the sets of predicted boxes, true boxes, and default boxes, respectively. The variable i iterates over the predicted and default bounding boxes in the image and j over the true boxes. cx, cy, w, and h denote the center-x, center-y, width, and height of the corresponding box. x_ij is an indicator variable which is equal to 1 if the IOU between the ground-truth box g_j and the default box d_i is >0.5.
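A reconstruction in the standard SSD form (the encoding of the ground-truth offsets relative to the default boxes follows Liu et al.[12] and is an assumption here):

\[ L_{loc}(x, l, g) = \sum_{i \in \mathrm{Pos}} \sum_{m \in \{cx,\, cy,\, w,\, h\}} x_{ij}\; \mathrm{smooth}_{L1}\!\left( l_i^{m} - \hat{g}_j^{m} \right) \]

with, for example, \(\hat{g}_j^{cx} = (g_j^{cx} - d_i^{cx})/d_i^{w}\) and \(\hat{g}_j^{w} = \log\!\left(g_j^{w}/d_i^{w}\right)\).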



Equation 4: SSD regularization loss. λ is a regularization strength hyperparameter, and each θ is a single parameter in the model M.
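Written out, this is the usual weight decay term (a reconstruction from the description above):

\[ L_R = \lambda \sum_{\theta \in M} \theta^{2} = \lambda \left\lVert \theta \right\rVert_2^{2} \]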



Equation 5: RMSProp update rule. l is the learning rate, g is a moving average of the squared gradient (its second moment), which decays at a rate γ, and x is a model parameter with batch gradient ∂L/∂x (Tieleman et al. 2012).
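A reconstruction of the standard RMSProp update consistent with this description (the small constant ε added for numerical stability is an assumption):

\[ g_t = \gamma\, g_{t-1} + (1 - \gamma)\left( \frac{\partial L}{\partial x} \right)^{2}, \qquad x_{t+1} = x_t - \frac{l}{\sqrt{g_t + \epsilon}}\, \frac{\partial L}{\partial x} \]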

 
  References

1. Gudeloglu A, Parekattil SJ. Update in the evaluation of the azoospermic male. Clinics 2013; 68: 27–34.
2. Devroey P, Liu J, Nagy Z, Goossens A, Tournaye H, et al. Pregnancies after testicular sperm extraction and intracytoplasmic sperm injection in non-obstructive azoospermia. Hum Reprod 1995; 10: 1457–60.
3. Schlegel PN. Testicular sperm extraction: microdissection improves sperm yield with minimal tissue excision. Hum Reprod 1999; 14: 131–5.
4. Corona G, Minhas S, Giwercman A, Bettocchi C, Dinkelman-Smit M, et al. Sperm recovery and ICSI outcomes in men with non-obstructive azoospermia: a systematic review and meta-analysis. Hum Reprod Update 2019; 25: 733–57.
5. Donoso P, Tournaye H, Devroey P. Which is the best sperm retrieval technique for non-obstructive azoospermia? A systematic review. Hum Reprod Update 2007; 13: 539–49.
6. Popal W, Nagy ZP. Laboratory processing and intracytoplasmic sperm injection using epididymal and testicular spermatozoa: what can be done to improve outcomes? Clinics 2013; 68: 125–30.
7. Auger J, Eustache F, Ducot B, Blandin T, Daudin M, et al. Intra- and inter-individual variability in human sperm concentration, motility and vitality assessment during a workshop involving ten laboratories. Hum Reprod 2000; 15: 2360–8.
8. Anderson RE, Hotaling JM. Inherent difficulties of meta-analysis for surgical techniques in male infertility: an argument for standardizing reporting and outcomes. Fertil Steril 2015; 104: 1127–8.
9. Agarwal S, Terrail JO, Jurie F. Recent advances in object detection in the age of deep convolutional neural networks. arXiv preprint 2018; arXiv:1809.03193.
10. Sladojevic S, Arsenovic M, Anderla A, Culibrk D, Stefanovic D. Deep neural networks-based recognition of plant diseases by leaf image classification. Comput Intell Neurosci 2016; 2016: 3289801.
11. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018. p. 4510–20.
12. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, et al. SSD: single shot multibox detector. In: European Conference on Computer Vision; 2016. p. 21–37.
13. Shin HC, Roth HR, Gao M, Lu L, Xu Z, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 2016; 35: 1285–98.
14. Deng J, Dong W, Socher R, Li LJ, Li K, et al. ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition; 2009. p. 248–55.
15. Lin TY, Maire M, Belongie S, Hays J, Perona P, et al. Microsoft COCO: common objects in context. In: European Conference on Computer Vision; 2014. p. 740–55.
16. Dutta A, Zisserman A. The VIA annotation software for images, audio and video. arXiv preprint 2019; arXiv:1904.10699.
17. Greenspan H, van Ginneken B, Summers RM. Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique. IEEE Trans Med Imaging 2016; 35: 1153–9.
18. Brosch T, Tang LY, Yoo Y, Li DK, Traboulsee A, et al. Deep 3D convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation. IEEE Trans Med Imaging 2016; 35: 1229–39.
19. Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging 2016; 35: 1207–16.
20. Pereira S, Pinto A, Alves V, Silva CA. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging 2016; 35: 1240–51.
21. Dou Q, Chen H, Yu L, Zhao L, Qin J, et al. Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans Med Imaging 2016; 35: 1182–95.
22. Go KJ. 'By the work, one knows the workman': the practice and profession of the embryologist and its translation to quality in the embryology laboratory. Reprod Biomed Online 2015; 31: 449–58.
23. Huang J, Rathod V, Sun C, Zhu M, Korattikara A, et al. Speed/accuracy trade-offs for modern convolutional object detectors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 7310–1.

