Detection of concha bullosa using deep learning models in cone-beam computed tomography images: a feasibility study

Article information

Arch Craniofac Surg. 2025;26(1):19-28
Publication date (electronic) : 2025 February 20
doi: https://doi.org/10.7181/acfs.2024.00283
1Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
2Operational Research Center in Healthcare, Near East University, Nicosia, Turkey
3Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
4Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
5Department of Periodontics, A. B. Shetty Memorial Institute of Dental Sciences, Nitte (Deemed to be University), Mangalore, India
Correspondence: Dilber Uzun Ozsahin, Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, University City, Sharjah 27272, United Arab Emirates. E-mail: dozsahin@sharjah.ac.ae
Received 2024 May 20; Revised 2024 October 13; Accepted 2025 February 12.

Abstract

Background

Pneumatization of turbinates, also known as concha bullosa (CB), is associated with nasal septal deviation and sinonasal pathologies. This study aims to evaluate the performance of deep learning models in detecting CB in coronal cone-beam computed tomography (CBCT) images.

Methods

Standardized coronal images were obtained from 203 CBCT scans (84 with CB and 119 without CB) from the radiology archives of a dental teaching hospital. Of the 203 images, 162 were randomly assigned to the training set and 41 to the testing set. In the first phase, the images were preprocessed using a contrast enhancement (CE) technique before being input into pre-trained deep learning models, namely ResNet50, ResNet101, and MobileNet. The features extracted by each model were then flattened and input into a random forest (RF) classifier. In the second phase, the CE technique was refined by incorporating the discrete wavelet transform (DWT).

Results

CE-DWT-ResNet101-RF demonstrated the highest performance, achieving an accuracy of 91.7% and an area under the curve (AUC) of 98%. In contrast, CE-MobileNet-RF recorded the lowest accuracy at 82.46% and an AUC of 92%. The highest precision, recall, and F1 score (all 92%) were observed for CE-DWT-ResNet101-RF.

Conclusion

Deep learning models demonstrated high accuracy in detecting CB in CBCT images. However, to confirm these results, further studies involving larger sample sizes and various deep learning models are required.

INTRODUCTION

The presence of an air-filled cavity within the middle turbinates, known as concha bullosa (CB), is linked to various pathologies of the sinonasal region [1-3]. A previous study indicated that CB occurs in 50% of patients exhibiting nasal symptoms [4]. The association of CB with nasal septal deviation possibly explains its links with sinonasal pathologies [5-10]. Studies have also revealed a correlation between the dimensions of CB and sinonasal inflammation [11-13]. Over the past few years, computed tomography (CT) has been increasingly used for imaging sinonasal pathologies [14-16].

Artificial intelligence (AI) is not only effective in detecting complex patterns in imaging data but also in providing automated, quantitative image assessments. Therefore, incorporating AI into radiology practices could help radiologists make more accurate image assessments [17]. Convolutional neural networks (CNNs), a subtype of deep learning models, have been widely successful in image recognition tasks [18]. Recently, there has been an increase in radiology research involving CNNs for detecting, classifying, and segmenting lesions. CNNs have also been utilized for image reconstruction [19,20]. To the best of our knowledge, only one published study has explored the use of CNNs for detecting CB in CT images [21]. Research indicates that cone-beam CT (CBCT) provides imaging details comparable to CT in the sinonasal region but with a significantly lower radiation dose. However, the effectiveness of CNNs in detecting CB in CBCT images has not been previously studied. This study aims to assess the performance of deep learning models in detecting CB using CBCT images.

METHODS

We conducted a retrospective study using 203 CBCT scans from the radiology department of the University Dental Hospital, Sharjah. All scans had a large field of view (FOV) of 18 × 20 cm; 84 scans exhibited CB and 119 did not. The scans were acquired with a Planmeca Viso 7 CBCT unit (Planmeca, Finland) at a 0.2-mm resolution, operated at 95 kVp and 5 mA. Ethical approval for this study was granted by the Institutional Ethics Committee of the University of Sharjah (reference number: REC-21-01-10-01). All identifying data were anonymized in the CBCT images prior to processing with a CNN.

CBCT scans were obtained from patients aged 18 to 60 years who visited the hospital between June 2020 and June 2023. The scan parameters included a scan size of 18 × 20 cm, a slice thickness of 0.450 mm, 100 kVp, and 10 mA. The exclusion criteria were as follows: small- and medium-FOV scans, scans with errors or artifacts, and scans from patients with a history of facial trauma or surgery. Because large-volume CBCT scans are of limited availability in dental radiology archives, convenience sampling was employed to estimate the sample size. From the available scans, 203 large-FOV CBCT images were selected for the study. Of these, 162 images were randomly assigned to the training set and 41 to the testing set. To ensure uniformity, the coronal image fields of all CBCT images were cropped to consistent anatomical landmarks. The first coronal slice clearly showing the crista galli was used as the reference point for slicing, following a previously described method [15]. Each image was then cropped into a 200 × 400-pixel region extending from the crista galli superiorly to the hard palate inferiorly, and 5 mm laterally from the lateral nasal wall on both sides (Fig. 1). Annotations were made using the Visual Geometry Group Image Annotator, an open-source software program for manual annotation (Fig. 2) [22].

Fig. 1.

Cropped coronal cone-beam computed tomography image showing the presence of concha bullosa.

Fig. 2.

An image annotated using the Visual Geometry Group Image Annotator software showing bilateral concha bullosa.

Data preprocessing

The data underwent preprocessing through a method combining contrast enhancement and the discrete wavelet transform (CE-DWT) [23]. The images were first converted to grayscale. The "haar" wavelet was then used to apply the DWT, splitting each image into four sub-bands: an approximation component (cA), which captures the low-frequency content reflecting general trends, and three detail components (cH, cV, and cD), which capture high-frequency content such as textures and edges. The detail coefficients (cH, cV, and cD) were then enhanced by a factor of 1.5. Finally, the images were reconstructed using the inverse wavelet transform and converted back to RGB format (Fig. 3).
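For illustration, the following is a minimal sketch of this CE-DWT step, assuming the PyWavelets and OpenCV libraries; the 1.5× gain on the detail sub-bands follows the factor described above, while the function and variable names are our own.

```python
import cv2
import numpy as np
import pywt

def ce_dwt_enhance(image_bgr, gain=1.5):
    # Convert to grayscale before the wavelet decomposition.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Single-level 2D DWT with the Haar wavelet: cA is the low-frequency
    # approximation; cH, cV, cD are the high-frequency detail sub-bands.
    cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
    # Amplify the detail coefficients to enhance textures and edges.
    reconstructed = pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), "haar")
    # Clip to the valid intensity range and convert back to RGB.
    reconstructed = np.clip(reconstructed, 0, 255).astype(np.uint8)
    return cv2.cvtColor(reconstructed, cv2.COLOR_GRAY2RGB)
```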

Fig. 3.

Steps to produce contrast-enhanced, discrete wavelet transformed (CE-DWT) images. (A) Original image. (B) Grayscale image. (C) C1: approximation coefficients; C2: horizontal detail coefficients; C3: vertical detail coefficients; C4: diagonal detail coefficients. (D) D1: enhanced horizontal details; D2: enhanced vertical details; D3: enhanced diagonal details. (E) Reconstruction with enhanced coefficients.

We employed an ensemble learning method known as a random forest (RF), which integrates multiple decision trees to deliver predictions. Each decision tree is constructed from a random subset of the training set and a random selection of features. By aggregating the predictions of many trees, the RF reduces overfitting and improves generalization. The final outcome is determined by either a majority vote or an average of the predictions of the individual trees [24].
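As a small illustration of this majority-vote mechanism (a sketch on synthetic data, not the study's pipeline), scikit-learn exposes the individual trees of a fitted forest:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary classification data standing in for the image features.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=30, random_state=0).fit(X, y)

# Each tree votes on the first sample; the ensemble returns the majority class.
votes = np.array([tree.predict(X[:1])[0] for tree in rf.estimators_])
print("per-tree votes:", votes)
print("majority vote :", rf.predict(X[:1])[0])
```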

Models

In this study, several deep learning models were used to classify images as either normal or exhibiting CB. These models incorporated different image preprocessing and feature extraction techniques. In the first phase, the images were enhanced with a CE technique and input into pre-trained deep learning models, specifically ResNet50, ResNet101, and MobileNet. The features extracted by each model were then flattened and fed into an RF classifier. In the second phase, the CE technique was refined by integrating the DWT, and the resulting images were used to train the RF classifier in the same manner. Model performance was assessed using several evaluation metrics. To increase the total number of images, improve model performance, and reduce the risk of overfitting, the dataset was augmented before training. The dataset was split such that approximately 80% was used for training and 20% for testing. The RF hyperparameters were configured to use 30 trees. Training was performed in a software environment comprising Anaconda with Jupyter notebooks, Keras 2.0.8, and TensorFlow 1.4.0, on hardware consisting of an Nvidia GeForce GTX 1080 GPU, an Intel i7 processor, and 16 GB of RAM.
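A minimal sketch of this feature-extraction pipeline is shown below, written against the modern tf.keras API rather than the Keras 2.0.8/TensorFlow 1.4.0 versions used in the study; the array names (train_images, train_labels, and so on) are placeholders for the preprocessed CBCT crops.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras.applications import ResNet101
from tensorflow.keras.applications.resnet import preprocess_input

# Pre-trained backbone used as a fixed feature extractor; global average
# pooling flattens the convolutional features into one vector per image.
backbone = ResNet101(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3).
    return backbone.predict(preprocess_input(images), verbose=0)

# train_images/test_images and their labels are assumed preloaded arrays.
X_train = extract_features(train_images)
X_test = extract_features(test_images)

# RF classifier with 30 trees, matching the hyperparameter stated above.
clf = RandomForestClassifier(n_estimators=30, random_state=0)
clf.fit(X_train, train_labels)
print("test accuracy:", clf.score(X_test, test_labels))
```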

Discrete wavelet transform

The DWT is a robust mathematical tool for the time-frequency analysis of signals and is particularly effective with nonstationary time series data [25]. It decomposes a signal into a set of wavelets localized in both time and frequency. Unlike the conventional Fourier transform, which provides only frequency information, the DWT captures both frequency and time characteristics, enabling a more comprehensive analysis.

The DWT decomposes a time series X(t) into a series of wavelet coefficients at different scales and positions. The decomposition is achieved through successive high- and low-pass filtering. For a decomposition to level J, the DWT is formulated as follows:

$$X(t) = \sum_{k} cA_{J,k}\,\phi_{J,k}(t) + \sum_{j'=1}^{J}\sum_{k} cD_{j',k}\,\psi_{j',k}(t),$$

where $\phi_{J,k}(t)$ and $\psi_{j',k}(t)$ denote the scaling (approximation) and wavelet (detail) functions, respectively. They are weighted by the approximation coefficients $cA_{J,k}$ at the coarsest level $J$ and by the detail coefficients $cD_{j',k}$ at level $j'$ and position $k$.
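The decomposition and its perfect reconstruction can be verified with PyWavelets (an illustrative sketch; the test signal here is arbitrary):

```python
import numpy as np
import pywt

x = np.sin(np.linspace(0, 8 * np.pi, 256))   # an arbitrary test signal X(t)
# Three-level DWT: wavedec returns [cA3, cD3, cD2, cD1], i.e. one set of
# approximation coefficients at the coarsest level plus details per level.
cA3, cD3, cD2, cD1 = pywt.wavedec(x, "haar", level=3)
# The inverse transform reconstructs X(t) from the coefficients.
x_rec = pywt.waverec([cA3, cD3, cD2, cD1], "haar")
assert np.allclose(x, x_rec)
```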

Contrast enhancement

CE increases the contrast between regions of an image to improve its visual quality and the discriminative ability of the features extracted from it. It achieves this by extending the range of pixel intensity values. Common CE techniques include histogram equalization, adaptive histogram equalization, and contrast stretching [26].
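The sketch below illustrates the three techniques named above using OpenCV and NumPy; the input path is a hypothetical placeholder, and the percentile bounds for contrast stretching are an illustrative choice.

```python
import cv2
import numpy as np

gray = cv2.imread("cbct_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Histogram equalization: spreads the global intensity histogram.
heq = cv2.equalizeHist(gray)

# Adaptive histogram equalization (CLAHE): equalizes local tiles, with a
# clip limit to avoid over-amplifying noise.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

# Contrast stretching: linearly maps the 1st-99th percentile range to 0-255.
lo, hi = np.percentile(gray, (1, 99))
stretched = np.clip((gray - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)
```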

MobileNet

Depthwise separable convolution is the core component of MobileNet [27]. This operation factorizes a standard convolution into two layers: a depthwise convolution, which filters each input channel separately, and a pointwise (1 × 1) convolution, which combines the filtered channels. By way of comparison, in DenseNet the output feature maps of a preceding convolution layer are concatenated with the input feature maps of the subsequent dense block, and a transition layer containing a 1 × 1 convolution kernel sits between two dense blocks to reduce the number of input feature maps. MobileNet, in contrast, does not incorporate a transition layer; instead of a pooling layer, it uses a convolution layer with a stride of 2 that processes the output feature maps of the preceding pointwise convolution, effectively reducing the spatial dimension of the feature map.
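A minimal tf.keras sketch of the depthwise separable unit described above (our own illustrative block, not the exact MobileNet implementation):

```python
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_block(x, filters, stride=1):
    # Depthwise convolution: one 3x3 filter applied per input channel.
    x = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Pointwise (1x1) convolution: mixes channels and sets the output width.
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
# A stride of 2 halves the spatial dimensions in place of pooling.
outputs = depthwise_separable_block(inputs, filters=64, stride=2)
model = tf.keras.Model(inputs, outputs)
```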

ResNet

To address issues in computer vision, machine learning researchers have added more layers to deep CNNs. Although additional layers can enhance a model's performance, they can also trigger the degradation problem: as the network deepens, accuracy saturates and then declines. Importantly, this performance decline is not due to overfitting; factors such as the network configuration, the optimization strategy, and the vanishing gradient problem contribute to it. Deep residual networks address these issues through residual blocks [28,29]. By creating an alternative pathway for the gradient, residual blocks effectively tackle the vanishing gradient problem; they also allow the model to learn an identity function, ensuring that the higher layers perform at least as well as the lower ones, which facilitates faster convergence in complex scenarios. ResNet50 builds on the ResNet34 framework, with the key difference that each building block consists of three layers instead of two: every two-layer block of the earlier model is replaced with a three-layer bottleneck block, yielding a 50-layer structure that requires 3.8 billion FLOPs, a significant increase over ResNet34 [30].
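The following is a hedged tf.keras sketch of the three-layer bottleneck block described above (a simplified rendering of the ResNet50 unit, not the reference implementation):

```python
import tensorflow as tf
from tensorflow.keras import layers

def bottleneck_block(x, filters, stride=1):
    shortcut = x
    # 1x1 reduce -> 3x3 -> 1x1 expand: the three-layer bottleneck design.
    y = layers.Conv2D(filters, 1, strides=stride, use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(4 * filters, 1, use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    # Project the shortcut when the shape changes so the skip addition is valid.
    if stride != 1 or x.shape[-1] != 4 * filters:
        shortcut = layers.Conv2D(4 * filters, 1, strides=stride, use_bias=False)(x)
        shortcut = layers.BatchNormalization()(shortcut)
    # The identity (or projected) shortcut gives the gradient a direct path.
    return layers.ReLU()(layers.Add()([y, shortcut]))
```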

RESULTS

The demographic data for the study sample, which consisted of coronal CBCT scans, are presented in Table 1. The majority of the CBCT images were obtained from male patients. Model performance was evaluated in terms of accuracy, precision, recall, F1 score, and area under the curve (AUC), as shown in Table 2. CE-ResNet50-RF demonstrated an accuracy of 83.69%, precision of 84%, recall of 84%, F1 score of 83%, and AUC of 94% (Fig. 4). CE-ResNet101-RF showed improved results, with an accuracy of 84.25%, precision of 85%, recall of 84%, F1 score of 84%, and AUC of 94% (Fig. 5). Both CE-ResNet50-RF and CE-ResNet101-RF surpassed CE-MobileNet-RF, which recorded an accuracy of 82.46%, precision of 83%, recall of 83%, F1 score of 82%, and AUC of 92% (Fig. 6). The RF classifier trained on features from the hybrid CE-DWT preprocessing outperformed the models using CE preprocessing alone. CE-DWT-ResNet50-RF achieved an accuracy of 90.98%, precision of 91%, recall of 91%, F1 score of 91%, and AUC of 98% (Fig. 7). CE-DWT-ResNet101-RF outperformed all other models, achieving an accuracy of 91.7%, precision of 92%, recall of 92%, F1 score of 92%, and AUC of 98% (Fig. 8). CE-DWT-MobileNet-RF showed lower performance, with an accuracy of 87.56%, precision of 88%, recall of 88%, F1 score of 87%, and AUC of 97% (Fig. 9); this nevertheless represents an improvement of approximately 5 percentage points in accuracy over CE-MobileNet-RF. These results indicate that hybrid preprocessing significantly improved the performance of the RF classifier.
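For reference, metrics of this kind can be computed with scikit-learn as sketched below; clf, X_test, and y_test are placeholders for a fitted classifier and its held-out test data.

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_pred = clf.predict(X_test)              # predicted class labels
y_prob = clf.predict_proba(X_test)[:, 1]  # predicted probability of CB
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
print("AUC      :", roc_auc_score(y_test, y_prob))
```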



Fig. 4.

(A) Receiver operating characteristic (ROC) curve for CE-ResNet-50-RF. (B) Confusion matrix for CE-ResNet-50-RF. CE, contrast enhancement; RF, random forest.

Fig. 5.

(A) Receiver operating characteristic (ROC) curve for CE-ResNet-101-RF. (B) Confusion matrix for CE-ResNet-101-RF. CE, contrast enhancement; RF, random forest.

Fig. 6.

(A) Receiver operating characteristic (ROC) curve for CE-MobileNet-RF. (B) Confusion matrix for CE-MobileNet-RF. CE, contrast enhancement; RF, random forest.

Fig. 7.

(A) Receiver operating characteristic (ROC) curve for CE-DWT-ResNet-50-RF. (B) Confusion matrix for CE-DWT-ResNet-50-RF. CE-DWT, contrast enhancement and discrete wavelet transform; RF, random forest.

Fig. 8.

(A) Receiver operating characteristic (ROC) curve for CE-DWT-ResNet-101-RF. (B) Confusion matrix for CE-DWT-ResNet-101-RF. CE-DWT, contrast enhancement and discrete wavelet transform; RF, random forest.

Fig. 9.

(A) Receiver operating characteristic (ROC) curve for CE-DWT-MobileNet-RF. (B) Confusion matrix for CE-DWT-MobileNet-RF. CE-DWT, contrast enhancement and discrete wavelet transform; RF, random forest.

To compare the AUCs of the different models, we used the standard error of the difference and Z-test values (vassarstats.net) (Table 3). The AUCs of CE-DWT-ResNet-50-RF and CE-DWT-ResNet-101-RF were significantly higher than those of CE-ResNet-50-RF (p = 0.0331), CE-ResNet-101-RF (p = 0.0331), and CE-MobileNet-RF (p = 0.0066).
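A minimal sketch of this comparison, assuming a Z-test on the AUC difference given the standard error of that difference (as produced by the vassarstats.net calculator); the numbers reproduce the CE-DWT-ResNet-50-RF versus CE-ResNet-50-RF row of Table 3.

```python
from scipy.stats import norm

def compare_auc(auc1, auc2, se_diff):
    # Z statistic for the difference between two AUCs.
    z = abs(auc1 - auc2) / se_diff
    # One-tailed p-value under the standard normal distribution.
    p = 1 - norm.cdf(z)
    return z, p

z, p = compare_auc(0.98, 0.94, 0.0218)
print(round(z, 4), round(p, 4))  # ~1.8349, ~0.0333; Table 3 reports 1.8365 and
                                 # 0.0331, computed from the unrounded AUCs.
```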


DISCUSSION

Deviation of the nasal septum is thought to have a developmental link with CB, which may explain CB's role in sinonasal pathology and nasal airway obstruction [5,6]. Research has shown associations between CB and various pathologies, including long-standing rhinosinusitis, which results from obstruction of the ostiomeatal complex and disrupts the function of the nasal mucociliary unit in the mucosal lining [7-9]. A study from Turkey found that 65% of patients with septate CB had maxillary sinusitis [10]. Similarly, a recent study from Saudi Arabia, utilizing multidetector CT, found a correlation between extensive CB and maxillary sinusitis [11]. An Indian study reported comparable results concerning extensive CB and maxillary sinusitis [12]. Various surgical techniques have been employed to reduce CB size and turbinate thickness, which in turn improves airflow and reduces sinonasal inflammation [13].

CT is the most commonly used imaging modality for the nasal and paranasal regions [14]. A study utilizing CT imaging found that 49.3% of patients with CB also presented with evidence of maxillary sinusitis [15]. However, dissatisfaction with radiology reporting, particularly in the nasal and paranasal areas, has been expressed by many otolaryngology specialists [16]. This concern is underscored by findings from a recent study, which indicated that only one-third of the radiologists involved could accurately identify anatomical variations such as CB and paradoxical middle turbinates [16]. Experts suggest that the accuracy of radiologists’ diagnoses heavily depends on the quality of their training and their years of experience, factors that can sometimes introduce subjectivity into their assessments [17].

AI-based research has attracted considerable attention in the medical field [31], and almost all medical specialties have begun to adopt AI [32]. However, research on AI applications in otolaryngology remains limited [21].

Only one published study, by Parmar et al. [21], has evaluated the role of CNNs in detecting CB. They employed the Inception-V3 CNN, whereas we used ResNet50, ResNet101, and MobileNet with CE-DWT preprocessing.

The wavelet transform, which separates an image into a multi-resolution sub-band structure through a two-channel filter bank, has been widely used in image processing for decades. The process involves a multi-resolution decomposition of the image through repeated application of low-pass filtering, high-pass filtering, and downsampling in both the horizontal and vertical directions [33].

Parmar et al. [21] reported an accuracy of 81% and an AUC of 0.93 for the model used in their study. In our study, the CE-DWT-ResNet-101-RF model achieved a higher accuracy of 91.7% and an AUC of 0.98, while the CE-MobileNet-RF model attained an accuracy of 82.46%. The primary distinction between our study and that of Parmar et al. lies in the imaging modality: Parmar et al. used cropped CT images, whereas our study employed cropped CBCT images [15]. CBCT exposes patients to lower radiation doses than CT for the same anatomical volume [34]. CBCT-based deep learning models can therefore serve as low-dose alternatives to CT-based models.

Although only one published study has addressed AI-based detection of CB, several recent studies have assessed the accuracy of AI models in identifying sinonasal pathologies [21,35,36].

Kim et al. [35] employed a three-dimensional CNN (ResNet18) to detect maxillary sinus fungus balls in CT images, achieving an accuracy of 87.5%. The CE-DWT-MobileNet-RF model in this study reached a comparable accuracy (87.56%) in detecting CB. Additionally, Ozbay and Tunc [36] demonstrated that CNN-based automated segmentation of maxillary sinus pathologies in CT images achieved a high accuracy of 98.52%.

The absence of CB reporting by radiologists during image interpretation, together with its omission from routine surgical checklists by surgeons, results in ineffective detection and treatment of CB [21]. AI-based CB detection systems are therefore needed to assist physicians in making effective decisions [21].

The high accuracy and AUC demonstrated by the deep learning models used in this study support their potential for clinical application. However, this study is not without limitations. First, the small sample size significantly restricts the generalizability of the results. Second, the models were trained on two-dimensional images derived from three-dimensional CBCT scans. Future research involving larger sample sizes, three-dimensional imaging, and a variety of deep learning models could further validate and expand their clinical applicability.

Notes

Conflict of interest

No potential conflict of interest relevant to this article was reported.

Funding

None.

Ethical approval

All experimental protocols were approved by the research ethics committee of the University of Sharjah (Ref. no. REC-21-01-10-01). The requirement for written informed consent was waived because of the retrospective design of the study.

Author contributions

Conceptualization: Shishir Shetty, Auwalu Saleh Mubarak, Dilber Uzun Ozsahin, Leena R David. Data curation: Mhd Omar Al Jouhari, Wael Talaat, Sausan Al Kawas, Natheer Al-Rawi, Sunaina Shetty, Mamatha Shetty. Formal analysis: Shishir Shetty, Auwalu Saleh Mubarak, Dilber Uzun Ozsahin, Leena R David. Writing - original draft: Shishir Shetty, Auwalu Saleh Mubarak, Dilber Uzun Ozsahin, Leena R David. Writing - review & editing: Shishir Shetty, Auwalu Saleh Mubarak, Dilber Uzun Ozsahin, Leena R David. Resources: Mhd Omar Al Jouhari, Wael Talaat, Sausan Al Kawas, Natheer Al-Rawi, Sunaina Shetty, Mamatha Shetty. Software; Supervision; Validation: Shishir Shetty, Auwalu Saleh Mubarak, Dilber Uzun Ozsahin, Leena R David.

Abbreviations

AI

artificial intelligence

AUC

area under the curve

CB

concha bullosa

CBCT

cone-beam computed tomography

CE-DWT

contrast enhancement and discrete wavelet transform

CNN

convolutional neural network

CT

computed tomography

FOV

field-of-view

RF

random forest

ROC

receiver operating characteristic

References

1. Pittore B, Al Safi W, Jarvis SJ. Concha bullosa of the inferior turbinate: an unusual cause of nasal obstruction. Acta Otorhinolaryngol Ital 2011;31:47–9.
2. Shetty SR, Al Bayatti SW, Al-Rawi NH, Kamath V, Reddy S, Narasimhan S, et al. The effect of concha bullosa and nasal septal deviation on palatal dimensions: a cone beam computed tomography study. BMC Oral Health 2021;21:607.
3. Shetty SR, Al Bayatti SW, Al-Rawi NH, Marei H, Reddy S, Abdelmagyd HA, et al. Analysis of inferior nasal turbinate width and concha bullosa in subjects with nasal septum deviation: a cone beam tomography study. BMC Oral Health 2021;21:206.
4. Ozcan KM, Selcuk A, Ozcan I, Akdogan O, Dere H. Anatomical variations of nasal turbinates. J Craniofac Surg 2008;19:1678–82.
5. El-Taher M, AbdelHameed WA, Alam-Eldeen MH, Haridy A. Coincidence of concha bullosa with nasal septal deviation; radiological study. Indian J Otolaryngol Head Neck Surg 2019;71(Suppl 3):1918–22.
6. Shetty S, Al-Bayatti S, Alam MK, Al-Rawi NH, Kamath V, Tippu SR, et al. Analysis of inferior nasal turbinate volume in subjects with nasal septum deviation: a retrospective cone beam tomography study. PeerJ 2022;10:e14032.
7. Kucybala I, Janik KA, Ciuk S, Storman D, Urbanik A. Nasal septal deviation and concha bullosa: do they have an impact on maxillary sinus volumes and prevalence of maxillary sinusitis? Pol J Radiol 2017;82:126–33.
8. Ozkiris M, Karacavus S, Kapusuz Z, Saydam L. The impact of unilateral concha bullosa on mucociliary activity: an assessment by rhinoscintigraphy. Am J Rhinol Allergy 2013;27:54–7.
9. Shetty SR, Al-Bayatti SW, Al Kawas S, Al-Rawi NH, Kamath V, Shetty R, et al. A study on the association between the inferior nasal turbinate volume and the maxillary sinus mucosal lining using cone beam tomography. Heliyon 2022;8:e09190.
10. San T, San S, Gurkan E, Erdogan B. The role of septated concha bullosa on sinonasal pathologies. Eur Arch Otorhinolaryngol 2015;272:1417–21.
11. El-Din WA, Madani GA, Fattah IO, Mahmoud E, Essawy AS. Prevalence of the anatomical variations of concha bullosa and its relation with sinusitis among Saudi population: a computed tomography scan study. Anat Cell Biol 2021;54:193–201.
12. Kalaiarasi R, Ramakrishnan V, Poyyamoli S. Anatomical variations of the middle turbinate concha bullosa and its relationship with chronic sinusitis: a prospective radiologic study. Int Arch Otorhinolaryngol 2018;22:297–302.
13. Suratwala NB, Suratwala JN, Jadawala HD. Effectiveness of volumetric reduction of middle concha bullosa by crushing technique in chronic nasal obstruction. Indian J Otolaryngol Head Neck Surg 2022;74(Suppl 2):1009–16.
14. Ila K, Yilmaz N, Oner S, Basaran E, Oner Z. Evaluation of superior concha bullosa by computed tomography. Surg Radiol Anat 2018;40:841–6.
15. Smith KD, Edwards PC, Saini TS, Norton NS. The prevalence of concha bullosa and nasal septal deviation and their relationship to maxillary sinusitis by volumetric tomography. Int J Dent 2010;2010:404982.
16. Deutschmann MW, Yeung J, Bosch M, Lysack JT, Kingstone M, Kilty SJ, et al. Radiologic reporting for paranasal sinus computed tomography: a multi-institutional review of content and consistency. Laryngoscope 2013;123:1100–5.
17. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJ. Artificial intelligence in radiology. Nat Rev Cancer 2018;18:500–10.
18. European Society of Radiology (ESR). What the radiologist should know about artificial intelligence: an ESR white paper. Insights Imaging 2019;10:44.
19. Yamashita R, Nishio M, Do RK, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights Imaging 2018;9:611–29.
20. Chen MC, Ball RL, Yang L, Moradzadeh N, Chapman BE, Larson DB, et al. Deep learning to classify radiology free-text reports. Radiology 2018;286:845–52.
21. Parmar P, Habib AR, Mendis D, Daniel A, Duvnjak M, Ho J, et al. An artificial intelligence algorithm that identifies middle turbinate pneumatisation (concha bullosa) on sinus computed tomography scans. J Laryngol Otol 2020;134:328–31.
22. Dutta A, Gupta A, Zisserman A. VGG Image Annotator (VIA) [Internet]. Visual Geometry Group; c2024 [cited 2024 Aug 21]. Available from: https://www.robots.ox.ac.uk/~vgg/software/via.
23. Herman S. Computed tomography contrast enhancement principles and the use of high-concentration contrast media. J Comput Assist Tomogr 2004;28 Suppl 1:S7–11.
24. Abba SI, Yassin MA, Mubarak AS, Shah SM, Usman J, Oudah AY, et al. Drinking water resources suitability assessment based on pollution index of groundwater using improved explainable artificial intelligence. Sustainability 2023;15:15655.
25. Skodras AN. Discrete wavelet transform: an introduction. Hellenic Open University Technical Report HOU-CS-TR-2003-02-EN; 2003.
26. Arbelaez P, Maire M, Fowlkes C, Malik J. Contour detection and hierarchical image segmentation. IEEE Trans Pattern Anal Mach Intell 2011;33:898–916.
27. Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv [Preprint] 2017;Apr. 17. [cited 2024 May 20]. https://doi.org/10.48550/arXiv.1704.04861.
28. Mubarak AS, Serte S, Al-Turjman F, Ameen ZS, Ozsoz M. Local binary pattern and deep learning feature extraction fusion for COVID-19 detection on computed tomography images. Expert Syst 2022;39:e12842.
29. Ozsoz M, Mubarak A, Said Z, Aliyu R, Al-Turjman F, Serte S. Deep learning-based feature extraction coupled with multiclass SVM for COVID-19 detection in the IoT era. Int J Nanotechnol 2021;1:1–18.
30. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016; Las Vegas, NV, USA. p. 770-8.
31. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019;6:94–8.
32. Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care 2019;8:2328–31.
33. Ranjan R, Kumar P. An improved image compression algorithm using 2D DWT and PCA with canonical Huffman encoding. Entropy (Basel) 2023;25:1382.
34. Lorenzoni DC, Bolognese AM, Garib DG, Guedes FR, Sant’anna EF. Cone-beam computed tomography and radiographs in dentistry: aspects related to radiation dose. Int J Dent 2012;2012:813768.
35. Kim KS, Kim BK, Chung MJ, Cho HB, Cho BH, Jung YG. Detection of maxillary sinus fungal ball via 3-D CNN-based artificial intelligence: fully automated system and clinical validation. PLoS One 2022;17:e0263125.
36. Ozbay S, Tunc O. Deep learning in analysing paranasal sinuses. Elektron Elektrotech 2022;28:65–70.


Table 1.

Sex distribution of the study subjects

Sex distribution Male Female Total
With concha bullosa 70 14 84
Without concha bullosa 92 27 119
Total 162 41 203

Table 2.

Model performance in terms of accuracy, precision, recall, F1-score, and AUC

Model Accuracy (%) Precision (%) Recall (%) F1-score (%) AUC (%)
CE-DWT-ResNet-50-RF 90.98 91 91 91 98
CE-DWT-ResNet-101-RF 91.70 92 92 92 98
CE-DWT-MobileNet-RF 87.56 88 88 87 97
CE-ResNet-50-RF 83.69 84 84 83 94
CE-ResNet-101-RF 84.25 85 84 84 94
CE-MobileNet-RF 82.46 83 83 82 92

AUC, area under the curve; CE-DWT, contrast enhancement and discrete wavelet transform; RF, random forest; CE, contrast enhancement.

Table 3.

Comparison of the areas under the curve of the different models used in the study

Model 1 Model 2 Standard error Z value p-value
CE-DWT-ResNet-50-RF CE-DWT-ResNet-101-RF 0.0155 0.0000 0.5000
CE-DWT-ResNet-50-RF CE-DWT-MobileNet-RF 0.0173 0.5785 0.2814
CE-DWT-ResNet-50-RF CE-ResNet-50-RF 0.0218 1.8365 0.0331*
CE-DWT-ResNet-50-RF CE-ResNet-101-RF 0.0218 1.8365 0.0331*
CE-DWT-ResNet-50-RF CE-MobileNet-RF 0.0243 2.4731 0.0066*
CE-DWT-ResNet-101-RF CE-DWT-MobileNet-RF 0.0173 0.5785 0.2814
CE-DWT-ResNet-101-RF CE-ResNet-50-RF 0.0218 1.8365 0.0331*
CE-DWT-ResNet-101-RF CE-ResNet-101-RF 0.0218 1.8365 0.0331*
CE-DWT-ResNet-101-RF CE-MobileNet-RF 0.0243 2.4731 0.0066*
CE-DWT-MobileNet-RF CE-ResNet-50-RF 0.0231 1.2984 0.0970
CE-DWT-MobileNet-RF CE-ResNet-101-RF 0.0231 1.2984 0.0970
CE-DWT-MobileNet-RF CE-MobileNet-RF 0.0255 1.9641 0.0247*
CE-ResNet-50-RF CE-ResNet-101-RF 0.0266 0.0000 0.5000
CE-ResNet-50-RF CE-MobileNet-RF 0.0287 0.6969 0.2429
CE-ResNet-101-RF CE-MobileNet-RF 0.0287 0.6969 0.2429

CE-DWT, contrast enhancement and discrete wavelet transform; RF, random forest; CE, contrast enhancement.

*p < 0.05.