Research Article | Vol. 6, Issue 3 | Journal of Dermatology Research | Open Access |
Frederick H Silver1,2*, Tanmay Deshmukh2, Hari Nadiminti3
1Department of Pathology and Laboratory Medicine, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
2OptoVibronex, LLC., Bethlehem, PA 18104, USA
3Dermatology, Summit Health, Berkeley Heights NJ 07922, USA
*Correspondence author: Frederick H Silver, Department of Pathology and Laboratory Medicine, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854 and OptoVibronex, LLC., Bethlehem, PA 18104, USA; Email: [email protected]
Citation: Silver FH, et al. Comparison of Melanocytic Lesions and Melanomas: A Clinical Pilot Study Based on Optical Coherence Tomography Images and Artificial Intelligence (AI) Models. J Dermatol Res. 2025;6(3):1-10.
Copyright© 2025 by Silver FH, et al. All rights reserved. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
| Received 10 December, 2025 | Accepted 23 December, 2025 | Published 30 December, 2025 |
Abstract
Background: Approximately 5.4 million Basal Cell Carcinomas (BCC) and Squamous Cell Carcinomas (SCC) are diagnosed in the US each year, and the number continues to grow. Currently, the gold standard for skin cancer diagnosis is histopathology, which requires surgical excision of the tumor followed by pathological evaluation of a tissue biopsy. The Three-Dimensional (3D) nature of human tissue suggests that Two-Dimensional (2D) cross-sections may be insufficient in some cases to represent its complex structure due to sampling bias. There is a need for new techniques that can classify skin lesion types and margins noninvasively.
Methods: We use optical coherence tomography volume scan images and Artificial Intelligence (AI) to noninvasively create 3D images of basal cell and squamous cell carcinomas.
Results: 3D optical coherence tomography images can be broken down into a series of cross sections that can be classified as benign or cancerous using convolutional neural network models developed in this study. These models can identify cancerous regions as well as clear edges. Cancerous regions can also be verified based on visual review of the color-coded images and the loss of the green and blue subchannel pixel intensities.
Conclusion: 3D optical coherence tomography cross-sections of cancerous lesions can be collected noninvasively and, using AI, the skin lesions and their clear edges can be classified. These images may provide a means to speed up treatment and promote better screening of cancer patients, especially older patients who will likely develop more than one cancerous lesion as they age.
Keywords: Skin Cancer; Basal Cell Carcinoma; Squamous Cell Carcinoma; Melanoma; Optical Coherence Tomography; 3D Image; Fibrotic Tissue; Cancer Associated Fibroblasts; Thin Blood Vessels; Intermediate Filaments; AI
Introduction
Melanomas are pigmented skin cancers that are increasingly being diagnosed each year throughout the world. The American Cancer Society estimates that nearly 105,000 new cases of invasive melanoma of the skin and over 107,000 new cases of melanoma in-situ will be diagnosed in the United States in 2025 [1]. Most melanomas are asymmetrical brown spots or nodular bumps greater than 6 mm in diameter with irregular pigmentation. Lesions with distinctive colors such as white, red, slate gray, pink, black or blue are commonly seen [2]. A variety of dermoscopic features are associated with melanomas, including irregular pigment distribution, pigmented finger-like projections arranged asymmetrically, white streaks, blue veils and atypical linear or highly convoluted blood vessels [3]. There are many histologic variants of melanoma and pigmented lesions that appear similar when viewed with a dermoscope. No specific stain or special technique alone can be used to distinguish between a benign nevus and melanoma [4]. Increased Breslow's thickness and the depth of lesion penetration into the dermis remain the most important factors in gauging the severity of melanomas [4].
One study reported that the use of noninvasive devices provides high sensitivity when used to detect melanoma [5]. The FotoFinder Moleanalyzer Pro is reported to have a sensitivity and specificity similar to those of expert dermatologists [5]. Classification of dermoscopic skin lesion images based on Convolutional Neural Network (CNN) models is reported to work well in predicting melanoma [6]. In another study, the diagnostic performance of artificial intelligence for histologic melanoma recognition was compared to diagnoses made by expert pathologists; it was concluded that the diagnosis was confounded by disagreement among the pathologists as to the presence of melanoma [7]. In that study, only about half (47%) of the melanoma diagnoses were unanimously confirmed by all 18 pathologists, showing the difficulty of routine diagnosis of this cancer [7].
The diagnosis, interpretation and classification of melanocytic tumors have become more complicated because several other melanocytic lesions can be confused with melanoma. In addition, different names have been given to related melanocytic lesions described in the literature [8]. One of these lesions is the Dysplastic Nevus (DN), a nevus with architectural disorder and atypical melanocytes [8]. Despite its name, DN is not considered a true dysplastic precursor or premalignant lesion [8]. DN cytological atypia is characterized by enlargement of nuclei (with varying degrees of irregularity), chromatin clumping, hyperchromatism and variably prominent nucleoli [8]. A lack of standardized diagnostic terminology makes it challenging to gauge clinical behavior and guide treatment recommendations [9]. It is thought that most lesions currently reported as atypical melanocytic proliferations may be benign and that their significance lies in the potential pitfall of overdiagnosis of malignant melanoma [10]. The literature regarding atypical melanocytic proliferations highlights potential pitfalls such as diagnostic disagreement, institutional variations in nomenclature, uncertain biological potential, lack of histologic criteria and unclear management recommendations [11]. This absence of standardized terminology also makes it difficult to gauge clinical behavior and guide treatment [12].
Benign melanocytic nevi are proliferations of uniform melanocytes that appear on the skin after birth. They are initially located at the dermal-epidermal junction and over time tend to migrate into the dermis [13]. Identification of compound nevi by dermoscopy does not always correlate well with the histopathological diagnosis; most clinically diagnosed compound nevi are found to be intradermal on histopathology [13].
The purpose of this paper is to conduct a pilot study to evaluate the use of Optical Coherence Tomography (OCT) along with AI to classify the differences between melanomas and other melanocytic skin lesions. The results of this pilot study suggest that further studies are warranted to evaluate the use of OCT and AI to differentiate melanoma from benign melanocytic lesions.
Methodology
An OptoScope was used in this study to collect OCT images. It consists of a modified Lumedica OQ 2.0 OCT (Lumedica Inc., Durham, NC), described previously, operating at a wavelength of 840 nm and collecting 13,000 frames per second. OCT gray-scale images were used to classify both normal skin and melanocytic lesions. The images were collected on intact normal skin and biopsied pigmented lesions as described previously [14,15]. The pigmented lesions were studied immediately after a biopsy was taken and were then processed and stained with H&E for histopathology. All images were collected as part of an IRB-approved clinical study on skin cancers at Summit Health (Berkeley Heights, NJ) after obtaining informed consent from the patients. Clinical diagnoses were made by more than one board-certified dermatopathologist after H&E staining and review of the tissue sections as part of routine clinical skin excisional protocols. Measurements were made on OCT images of control skin (N=200), melanomas (N=106), two atypical melanocytic lesions, a dysplastic nevus and a typical benign nevus. The OptoScope includes a handpiece that collects reflected red light from the skin and is connected to an OCT system operated by an i5 computer. All OCT images were created by horizontally scanning the lesion cross-sections. The gray-scale scans were color-coded using ImageJ as reported previously [14,15]. The OCT gray-scale pixel images were separated into green, blue and red subchannel images using a lookup table [14,15]. By dividing the pixel intensity distribution at each point into low (green), medium (blue) and high (red) intensities, it is possible to examine differences in reflection from the different layers of skin and skin lesions. The clinical evaluation of the skin required only 2 minutes of analysis per lesion. Images collected on skin lesions were compared to images of normal skin using a Convolutional Neural Network (CNN); 80% of the data was used for model training and 20% for testing. The sensitivity, specificity and Area Under the Operating Curve (AUOC) were calculated for the CNN models developed for each cancer type studied and are listed in Table 1. The sensitivity and specificity of these models ranged between 92 and 100% as reported previously [14,15]. None of the samples evaluated in this study were used to train the CNN models.
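For illustration, the subchannel decomposition described above can be expressed in a few lines of image-processing code. The following Python sketch is a minimal, hypothetical version of that step: it assumes an 8-bit gray-scale OCT B-scan loaded as a NumPy array, and the intensity cut-offs defining the low (green), medium (blue) and high (red) ranges are placeholders, since the exact ImageJ lookup table used in the study is not reproduced here.

```python
import numpy as np

def split_subchannels(bscan, low_cut=85, high_cut=170):
    """Split an 8-bit gray-scale OCT B-scan into low (green), medium (blue)
    and high (red) intensity subchannel images.

    The cut-offs are illustrative only; the study used an ImageJ lookup
    table whose exact thresholds are not reproduced here.
    """
    green = np.where(bscan < low_cut, bscan, 0)                          # low reflectivity
    blue = np.where((bscan >= low_cut) & (bscan < high_cut), bscan, 0)   # medium reflectivity
    red = np.where(bscan >= high_cut, bscan, 0)                          # high reflectivity
    return green, blue, red

# Example with a synthetic 512 x 512 B-scan (real scans would be loaded from file).
bscan = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
green, blue, red = split_subchannels(bscan)
print(green.mean(), blue.mean(), red.mean())
```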
In this study, a transfer learning-based Convolutional Neural Network (CNN) using the ResNet18 architecture was implemented to classify OCT images of normal skin versus melanoma. The gray-scale images were converted into three-channel inputs to match the ResNet18 requirements, and data augmentation techniques such as random horizontal flipping, rotation and resizing were applied to enhance generalization, while normalization followed ImageNet standards. A 5-fold cross-validation strategy was used to ensure robust performance evaluation, and the final classification layer of ResNet18 was modified to output a single node for binary classification. The network was trained using the Adam optimizer with binary cross-entropy loss over multiple epochs, with the best-performing model saved based on validation accuracy. To provide interpretability, Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to highlight the discriminative regions used by the model when distinguishing skin lesions from normal skin. Finally, model performance was assessed through test accuracy, confusion matrices, sensitivity, specificity and ROC-AUOC analysis, offering both predictive power and clinical relevance. Each unknown lesion was compared to normal skin to calculate the probability that the sample was cancerous or similar to normal skin.
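A compact PyTorch sketch of the training pipeline described above is given below. It is a simplified, single-split illustration rather than the study's exact implementation: the folder layout, batch size, learning rate and number of epochs are assumptions, the 5-fold cross-validation and Grad-CAM steps are omitted for brevity and only the core elements are retained (gray-scale images replicated to three channels, standard augmentation, ImageNet normalization, a ResNet18 backbone with a single-node output layer and Adam optimization with binary cross-entropy loss).

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

# Augmentation and ImageNet normalization; gray-scale OCT images are
# replicated to three channels to match the ResNet18 input format.
train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one subfolder per class (normal, melanoma).
train_ds = datasets.ImageFolder("oct_images/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

# ResNet18 backbone pretrained on ImageNet; final layer replaced by a single output node.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()            # binary cross-entropy on raw logits
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):                        # number of epochs is illustrative
    model.train()
    for images, labels in train_dl:
        optimizer.zero_grad()
        logits = model(images).squeeze(1)      # shape (batch,)
        loss = criterion(logits, labels.float())
        loss.backward()
        optimizer.step()
```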
Results
Fig. 1 shows a typical color-coded OCT image of normal skin. Note the stratum corneum in yellow marked by the yellow arrow in A, the granulating layers in pink and red (see arrow) and the papillary and basal cell layers shown in the blue area. In the green subchannel image shown in B the stratum corneum is visible, while in the blue subchannel image the hyporeflective layer is indicated by a yellow arrow.

Figure 1: Color-coded OCT image of normal skin (A) and the green (B), blue (C) and red (D) subchannel images. The low (B), medium (C) and high (D) pixel intensities were determined from a look-up table. Note the stratum corneum in yellow marked by the yellow arrow in A, the granulating layers in pink (middle arrow) and red and the papillary and basal cell layers shown in blue (lowest arrow). In the green subchannel image shown in B the stratum corneum is visible, while in the blue subchannel image the hyporeflective layer is indicated by an arrow.
Fig. 2 shows a typical color-coded OCT image (A) as well as the green (B), blue (C) and red (D) subchannel images of a lesion diagnosed by a board-certified dermatopathologist as a melanoma. Note the rough surface of the melanoma due to the presence of lesions marked with an arrow in A. The green subchannel disappears at the location of the lesion (see B), which is marked by the arrow. The hyporeflective layer is absent, possibly due to mutation of the keratin of the intermediate filaments (see arrow in C).

Figure 2: Typical color-coded (A) and subchannel green (B), blue (C) and red (D) images of a melanoma diagnosed by board-certified dermatopathologists based on histopathology. Arrows indicate the location of the lesion in the different images.
Fig. 3 shows the results of a Convolutional Neural Network (CNN) model comparing gray-scale OCT images of melanomas to normal skin. The number of samples tested is shown in the boxes. As shown in Table 1, the sensitivity and specificity of the CNN model indicate that it is highly effective in distinguishing melanoma from normal skin.

Figure 3: Convolutional neural network results distinguishing melanoma (1) from normal skin (0). Note that both the sensitivity and the specificity of distinguishing melanoma from normal skin are 100%. Table 1 lists the results from the CNN model distinguishing melanoma from normal skin.
| Lesion Type | Sensitivity | Specificity | Area Under Operating Curve |
| Melanoma | 100% | 100% | 1.00 |
| BCC | 100% | 97.16% | 0.99 |
| SCC | 93.33% | 43.83% | 0.69 |
Table 1: Sensitivity, Specificity and Area Under the Operating Curve for CNN models differentiating normal skin from Melanoma, Basal Cell Carcinoma (BCC) or Squamous Cell Carcinoma (SCC).
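For reference, the metrics reported in Table 1 can be computed directly from per-image model outputs. The snippet below is a minimal sketch using scikit-learn with made-up labels and probabilities; it simply shows how sensitivity, specificity and the area under the ROC curve would be derived from test-set predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Example ground-truth labels (1 = melanoma, 0 = normal) and model probabilities.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_prob = np.array([0.95, 0.88, 0.91, 0.12, 0.07, 0.30, 0.76, 0.05])
y_pred = (y_prob >= 0.5).astype(int)           # 0.5 decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
auc = roc_auc_score(y_true, y_prob)            # area under the ROC curve

print(f"Sensitivity {sensitivity:.2%}, Specificity {specificity:.2%}, AUC {auc:.2f}")
```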
Fig. 4 shows typical color-coded and subchannel images of an atypical melanocytic lesion diagnosed by a board-certified dermatopathologist. The arrows point to areas where there are groups of cells, possibly melanocytes. Table 2 lists the probability that the lesion is a melanoma.

Figure 4: Typical color-coded and subchannel images of an atypical melanocytic lesion diagnosed by a board-certified dermatopathologist. The arrows point to areas where there are groups of cells, possibly melanocytes.
Fig. 5 shows color-coded and subchannel images of a dysplastic nevus. Note the large proliferation of cells at the lesion surface in A; in B, where the green cellular layer is quite thick, there is a correspondingly thick hyporeflective region in C.

Figure 5: Typical color-coded and subchannel images of a dysplastic melanocytic lesion diagnosed by a board-certified dermatopathologist. Note the cellular proliferation in B, demonstrated by the thick green band, and the large hyporeflective region in C (black region between the top layer of the epidermis and the papillary dermis) of a benign lesion. The probability of this being a melanoma is given in Table 2.
The cellular proliferation in B and the large hyporeflective region in C (black region) are representative of a benign lesion; the probability of this lesion being a melanoma is only 14.3%, as given in Table 2.
Fig. 6 shows a typical color-coded image of a nevus (A) and the green (B), blue (C) and red (D) subchannel images. Note that the rete ridges at the base of the epidermis stand out compared to normal skin (Fig. 1) (see arrows in B and C). Areas with enhanced melanin particles tend to scatter light deeper into the sample due to Mie scattering.

Figure 6: Typical color-coded image of a nevus (A) and the green (B), blue (C) and red (D) subchannel images. Note that the rete ridges stand out in the nevus compared to normal skin (Fig. 1). Areas with enhanced melanin particles tend to scatter light deeper into the sample due to Mie scattering, thereby preventing backscattering of the infrared light; this is why the green channel is very faint.
| Subject | Probability of Normal Skin | Probability of Melanoma |
| Normal Skin (N = 11) | 90.4% {2.6}* | 9.6% {2.6} |
| Melanoma (N = 8) | 7.72% {1.34} | 92.82% {1.34} |
| Dysplastic Nevus (N = 1) | 85.7% | 14.3% |
| Atypical Melanocytic Lesion (N = 2) | 100% | 0% |
| Nevus (N = 1) | 96% | 4.1% |
Table 2: Probability predictions for different lesions using the CNN model, with standard deviations shown in brackets.
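The group-level values in Table 2 are consistent with averaging the per-sample CNN probabilities within each diagnostic group; a minimal sketch of that aggregation, using made-up per-sample values, is shown below.

```python
import numpy as np

# Hypothetical per-sample melanoma probabilities for one diagnostic group.
melanoma_probs = np.array([0.93, 0.91, 0.94, 0.92, 0.95, 0.90, 0.93, 0.94])

mean_prob = melanoma_probs.mean()       # group-level probability of melanoma
sd_prob = melanoma_probs.std(ddof=1)    # standard deviation shown in brackets

print(f"{mean_prob:.1%} {{{sd_prob:.1%}}}")   # e.g. "92.8% {1.7%}"
```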
Fig. 7 shows a melanoma in which the left half appears similar to normal skin while the right half is classified as melanoma.

Figure 7: A melanoma with areas of normal skin and a cancerous lesion. This is the lesion shown in Fig. 6. A split screen with normal skin on the left and melanoma on the right.
Table 3 shows that the probability that the lesion is a melanoma increases when the OCT image is divided into two parts and each half is analyzed separately.
Subject | Probability of Melanoma (Left) | Probability of Melanoma (Right) |
Melanoma 1 | 10.66% | 94.43% |
Melanoma 2 | 16.14% | 99.73% |
Table 3: Melanoma split-image results, where one half of the lesion is classified as melanoma and the other half as normal skin.
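One straightforward way to implement the split-image analysis underlying Table 3 is to crop each OCT cross-section into left and right halves and score each half separately with the trained classifier. The sketch below is a hypothetical illustration that assumes a trained binary PyTorch model (such as the one sketched in the Methodology) and the same preprocessing used at training time; the file path in the usage comment is an invented example.

```python
import torch
from PIL import Image
from torchvision import transforms

# Evaluation-time preprocessing (values are illustrative, matching the training sketch).
eval_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_halves(model, image_path):
    """Return (left, right) melanoma probabilities for one OCT cross-section."""
    img = Image.open(image_path)
    w, h = img.size
    halves = [img.crop((0, 0, w // 2, h)), img.crop((w // 2, 0, w, h))]
    model.eval()
    probs = []
    with torch.no_grad():
        for half in halves:
            logit = model(eval_tf(half).unsqueeze(0)).squeeze()
            probs.append(torch.sigmoid(logit).item())
    return tuple(probs)

# Example usage with a trained model (file path is hypothetical):
# left_p, right_p = score_halves(model, "oct_images/test/melanoma_001.png")
```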
Discussion
The ability to noninvasively classify cancerous skin lesions in-vivo in 3D provides additional information that can be used by Dermatologists and Pathologists to facilitate lesion identification as well as the treatment of skin cancers. We have previously shown that epithelial cell-derived skin cancers are characterized by new cancer-associated fibroblasts with resonant frequencies higher than those of normal epithelial cells (80 Hz versus 50 Hz), new thin blood vessels with resonant frequencies of about 130 Hz and fibrous tissue with resonant frequencies between 250 and 260 Hz [17]. These resonant frequencies are not seen in normal skin and can be used to differentiate normal skin from skin cancers [17,18]. These changes in resonant frequencies are related to changes in the structure and texture of the tissue that can be indirectly derived from analysis of the OCT images using CNN models [19,20].
3D reconstructions of BCC and SCC skin cancers can be obtained noninvasively using the volume scan app found on the OptoScope. The app generates 128 serial cross-sections of each lesion that can be visualized and analyzed with CNN models to classify the lesion type. It can also be used to locate the clear edges of the lesion that appear to be cancer free. The image can be split in half and each half analyzed separately with the CNN. In addition to identifying the type and location of a lesion, OptoScope images and data may provide a means to identify benign lesions that do not need to be biopsied or excised. Noninvasive measurements would speed up patient evaluation, improve patient flow through the medical office and reduce the cost of care.
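The slice-by-slice review of the 128 cross-sections in a volume scan can be expressed as a simple loop over the exported image stack. The sketch below is illustrative only: it assumes the volume has been exported as PNG files in a directory, reuses the same hypothetical preprocessing as above and flags slices whose melanoma probability exceeds an arbitrary 0.5 threshold, with the remaining slices treated as candidate clear-edge regions.

```python
import glob
import torch
from PIL import Image
from torchvision import transforms

# Same evaluation-time preprocessing as in the previous sketch (values are illustrative).
eval_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_volume(model, slice_dir, threshold=0.5):
    """Classify each cross-section in a 128-slice OCT volume scan.

    Returns a list of (filename, melanoma_probability, flagged) tuples;
    unflagged slices are candidate cancer-free (clear-edge) regions.
    """
    results = []
    model.eval()
    with torch.no_grad():
        for path in sorted(glob.glob(f"{slice_dir}/*.png")):   # 128 serial B-scans
            x = eval_tf(Image.open(path)).unsqueeze(0)
            prob = torch.sigmoid(model(x).squeeze()).item()
            results.append((path, prob, prob >= threshold))
    return results

# Example usage (directory name is hypothetical):
# report = classify_volume(model, "volume_scan/lesion_01")
```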
The use of OCT to classify lesions takes about 2 minutes and can increase patient flow through the Dermatologist's office. It also provides additional information and facilitates identification of the location of the lesion edges as well as the extent to which the lesion protrudes into the dermis. This is useful in identifying difficult cases that may need referral to a plastic surgeon to avoid extensive scarring when an excision is made. It also helps a clinician gauge the width and depth of the lesion, which is key to planning the extent of treatment that may be needed. These advantages are especially relevant given the increasing number of skin cancers occurring throughout the world and the difficulty of scheduling Dermatologist office visits.
The results shown in this paper indicate that the use of AI to predict the probability that a lesion is cancerous can be verified visually by reviewing the green and blue subchannel images. Loss of green subchannel pixel intensity is associated with the formation of cellular aggregates in BCC, SCC and melanoma due to Mie scattering of the light deeper into the specimen [19,20]. This appears to occur in all epithelial-derived cancers of the skin and can be confirmed by CNN model predictions after reviewing the subchannel images. The loss of blue subchannel pixel intensity in cancerous lesions is associated with mutations in the intermediate filaments that are associated with loss of keratin and increases in the cytokeratin found in BCCs, SCCs and melanomas [19,20]. The cancer-free edges of the lesions can be verified both by viewing the lesion images and by CNN model predictions. This may provide a means to limit removal of normal skin and may save time during Mohs surgery while waiting for the results of histology conducted on frozen sections. In addition, the instrument can be run remotely, making it possible to collect images in areas where dermatologist visits are difficult to schedule. Dermoscopy and visual inspection are the most widely used methods to detect skin cancers [21,22]; however, more advanced noninvasive diagnostic techniques are being developed. Some of these methods can be employed remotely and the data transferred over the internet. Rapid noninvasive methods are also needed to improve the quality of patient skin checks, which can be achieved in part by improving the quality of teledermatology.
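The visual check described above, loss of green and blue subchannel pixel intensity over the lesion, can also be quantified. The sketch below is a rough illustration: it compares the mean green and blue subchannel intensities in a lesion region against a reference region of adjacent normal skin, and the fractional-drop threshold is an assumption that would need to be calibrated against histology.

```python
import numpy as np

def subchannel_loss(lesion, reference, drop=0.5):
    """Flag a region when its mean green and blue subchannel intensities
    fall below a fraction `drop` of those in a normal-skin reference.

    `lesion` and `reference` are dicts of 2-D arrays keyed by subchannel,
    e.g. {"green": ..., "blue": ...}. The 0.5 drop threshold is illustrative.
    """
    flags = {}
    for channel in ("green", "blue"):
        flags[channel] = lesion[channel].mean() < drop * reference[channel].mean()
    return all(flags.values()), flags

# Example with synthetic data: the lesion shows strong loss in both subchannels.
ref = {"green": np.full((64, 64), 80.0), "blue": np.full((64, 64), 120.0)}
les = {"green": np.full((64, 64), 20.0), "blue": np.full((64, 64), 40.0)}
print(subchannel_loss(les, ref))   # (True, {'green': True, 'blue': True})
```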
Teledermatology has been used remotely by analyzing camera photos or live video conferences of skin problems along with the associated clinical histories [23-26]. In the past, teledermatology was considered a supplement to a patient's total care and not a replacement for in-person doctor's visits [23-26]. New techniques such as optical coherence tomography, elastic scattering, Raman spectroscopy, high-frequency ultrasound, electrical impedance spectroscopy and reflectance confocal microscopy may provide more information on skin subsurface structure [26]. Advances in these technologies for the diagnosis of skin cancer are needed to optimize individual patient treatments [26]. Use of OCT and AI may promote the use of teledermatology, especially in remote regions and locations where Dermatologists are in short supply. Screening for skin cancer is especially important for subjects who have blue, green or hazel eye colors and Fitzpatrick skin types I and II. These patients are likely to develop one or more skin cancers by the age of 70. Improved screening can be achieved through the development of additional telemedicine techniques.
Conclusion
Using noninvasive 3D OCT images of skin lesions collected with the OptoScope, it is possible to identify the location and relative size of skin lesions without touching the skin. These 3D images can be broken down into a series of cross-sections that can be reviewed one by one. Each cross-section can be classified as benign or cancerous using the Convolutional Neural Network (CNN) models developed in this study. These models can identify cancerous regions as well as clear edges. Cancerous regions can be verified based on visual review of the color-coded images and the loss of the green and blue subchannel pixel intensities. By providing rapid OCT images of skin lesions, Dermatologists and Pathologists can work together to eliminate unnecessary biopsies and excisions and reduce the need to remove excessive normal skin. These images may provide a means to speed up lesion treatment and provide impetus to use topical treatments on smaller lesions. Noninvasive early lesion classification techniques may promote better patient screening, especially in older patients with blue, green or hazel eye colors who will likely have multiple cancerous lesions by the age of 70.
Conflicts of Interest
FHS is a stockholder and TD is an employee of OptoVibronex.
Authors’ Contributions
Conceptualization, F.H.S. and T.D.; methodology, F.H.S and H.N.; formal analysis, F.H.S., T.D. and H.N.; investigation, F.H.S. and H.N.; data curation, T.D.; writing—original draft preparation, F.H.S. and T.D.; writing—review and editing, F.H.S., T.D., H.N.; All authors have read and agreed to the published version of the manuscript.
Funding Details
Partial support for this project was provided by Ben Franklin Tech Partners during 2024-2025.
Acknowledgements
The authors thank Professor Emrah Bayrak and the Capstone team at Lehigh University composed of Kendalin Flores, Max Tran, Ernesto Sanchez Lopez, Nico Babbio and Julia Knox for their assistance in programming the convolutional neural network models.
Institutional Review Board Statement
The protocol was approved by the I.R.B. at Advarra on August 7, 2025, IRB Number CR 006782727.
Informed Consent Statement
All subjects provided consent.
Data Availability Statement
Data available at optovibronex.com.
References
1. https://www.cancer.org/cancer/types/basal-and-squamous-cell-skin-cancer/about/key-statistics.html
2. https://gco.iarc.fr/today/en/fact-sheets-cancers