5.1. Performance Comparison
For prototype-based methods, CDLoss consistently improved AUROC. In GCPL, AUROC increased by 1.28% on CIFAR10 (from 84.29 to 85.57), by 1.17% on CIFAR+50 (from 88.40 to 89.57), and by 3.34% on TinyImageNet (from 69.89 to 73.23). Similarly, SLCPL showed improvements of 0.54% on CIFAR10, 0.82% on CIFAR+50, and 3.93% on TinyImageNet. These gains highlight CDLoss’s ability to refine decision boundaries in high-dimensional feature spaces.
For RPL and ARPL, which incorporate reciprocal points to separate KCs from UCs, CDLoss also showed notable improvements. RPL achieved AUROC increases of 0.36% on CIFAR10, 0.33% on CIFAR+50, and 3.97% on TinyImageNet. Similarly, ARPL improved AUROC by 0.28% on CIFAR10 and 0.34% on TinyImageNet.
Overall, the results demonstrate that CDLoss improves AUROC across all datasets and methods, with particularly significant gains in more challenging datasets like TinyImageNet.
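The AUROC figures above summarize how well a method's confidence score ranks KC samples above UC samples. As a minimal sketch of the computation, using hypothetical scores rather than the paper's model outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical confidence scores (higher = "more likely a known class").
# Labels: 1 = known-class (KC) sample, 0 = unknown-class (UC) sample.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.90, 0.80, 0.75, 0.60, 0.65, 0.40, 0.30, 0.10])

# AUROC = probability that a random KC sample outscores a random UC sample.
auroc = roc_auc_score(labels, scores)
print(f"AUROC: {auroc:.4f}")  # one of the 16 KC/UC pairs is misordered -> 0.9375
```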
For MSP, CDLoss increased OSCR by 0.87% on CIFAR10 (from 83.75 to 84.62), by 1.28% on CIFAR+50 (from 88.21 to 89.49), and by 0.86% on TinyImageNet. For GCPL, CDLoss yielded larger gains, with OSCR improvements of 1.45% on CIFAR10 (from 81.69 to 83.14), 1.26% on CIFAR+50 (from 86.47 to 87.73), and 6.87% on TinyImageNet (from 48.83 to 55.70). SLCPL also benefited from CDLoss, with OSCR increasing by 0.59% on CIFAR10, 1.04% on CIFAR+50, and 6.43% on TinyImageNet.
For RPL, CDLoss improved OSCR by 0.49% on CIFAR10 (from 83.29 to 83.78), 0.35% on CIFAR+50 (from 87.51 to 87.86), and 7.58% on TinyImageNet (from 48.92 to 56.50). ARPL exhibited smaller but consistent improvements, with OSCR increasing by 0.42% on CIFAR10, 0.12% on CIFAR+50, and 0.52% on TinyImageNet.
These results confirm that CDLoss reduces inter-class overlap while enhancing intra-class cohesion, leading to improved OSCR performance, particularly in complex datasets.
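OSCR is commonly computed as the area under the curve of the correct-classification rate (CCR) on KC samples versus the false-positive rate (FPR) on UC samples, swept over a confidence threshold. A hedged sketch under that common formulation (the paper's exact variant may differ):

```python
import numpy as np

def oscr(kc_scores, kc_correct, uc_scores):
    """Area under the CCR-vs-FPR curve (one common OSCR formulation).

    kc_scores:  max confidence for each known-class test sample
    kc_correct: bool array, True if the closed-set prediction was correct
    uc_scores:  max confidence for each unknown-class test sample
    """
    thresholds = np.sort(np.concatenate([kc_scores, uc_scores]))[::-1]
    ccr = np.array([(kc_correct & (kc_scores >= t)).mean() for t in thresholds])
    fpr = np.array([(uc_scores >= t).mean() for t in thresholds])
    # Trapezoidal integration of CCR over FPR.
    return float(np.sum(np.diff(fpr) * (ccr[1:] + ccr[:-1]) / 2))

# Toy check: perfectly separated and perfectly classified -> OSCR = 1.0
print(oscr(np.array([0.9, 0.8]), np.array([True, True]), np.array([0.2, 0.1])))
```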
Prototype-based methods exhibited more substantial gains. For GCPL, accuracy improved by 0.60% on CIFAR10 (from 93.90 to 94.50), 0.40% on CIFAR+50 (from 95.91 to 96.31), and a notable 7.18% on TinyImageNet (from 60.50 to 67.68). RPL also improved, with gains of 0.28% on CIFAR10, 0.05% on CIFAR+50, and a substantial 7.42% on TinyImageNet. ARPL and SLCPL likewise showed consistent gains, with accuracy improvements of 0.20% and 0.23% on CIFAR10, respectively, and further gains on TinyImageNet (0.38% for ARPL and 5.76% for SLCPL).
These findings demonstrate that CDLoss enhances the discriminative power of feature representations, leading to improved KC classification accuracy.
Despite these improvements, performance on TinyImageNet remains lower compared to other datasets. This discrepancy is attributed to the dataset’s inherent complexity, including diverse backgrounds, rich content, and high variability. Additionally, the large number of UCs in TinyImageNet increases the overlap with KCs in feature space, posing greater challenges for accurate recognition.
5.3. Visualization of the Impact of CDLoss on Feature Response
To investigate the impact of CDLoss on feature separation, t-SNE was used to visualize feature responses with and without CDLoss on the MNIST dataset. Digits 0, 1, 5, and 7 were designated as UCs, while the remaining digits served as KCs.
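A minimal sketch of how such a visualization can be produced with scikit-learn's t-SNE; the features and labels below are random placeholders standing in for the model's learned embeddings on MNIST:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))   # placeholder for learned embeddings
labels = rng.integers(0, 10, size=500)  # digit labels 0-9

unknown = {0, 1, 5, 7}  # UCs, as in the experiment setup
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

for d in range(10):
    m = labels == d
    plt.scatter(emb[m, 0], emb[m, 1], s=5,
                marker="x" if d in unknown else "o", label=str(d))
plt.legend(markerscale=2, fontsize=6)
plt.savefig("tsne_features.png", dpi=150)
```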
The introduction of CDLoss reduces overlap by aligning samples more closely with their one-hot encoding vectors. For example, in the MSP method, the incorporation of CDLoss pushes UCs (e.g., class 6) towards the periphery of the feature space, minimizing their overlap with KCs. Similarly, in GCPL, CDLoss reduces overlap between UCs and classes 1 and 4. In the SLCPL framework, overlap between UCs and class 2 is effectively minimized.
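As an illustration of this one-hot alignment idea only (not necessarily the paper's exact CDLoss definition), a cosine-distance penalty between a sample's predicted probability vector and its class's one-hot vector could be sketched as:

```python
import numpy as np

def cosine_distance_to_onehot(probs, targets, num_classes):
    """Illustrative alignment penalty: 1 - cos(probs, one-hot(target)),
    averaged over the batch. Hypothetical sketch, not the paper's CDLoss."""
    onehot = np.eye(num_classes)[targets]
    cos = np.sum(probs * onehot, axis=1) / (
        np.linalg.norm(probs, axis=1) * np.linalg.norm(onehot, axis=1))
    return float(np.mean(1.0 - cos))

# Perfectly aligned predictions incur zero penalty.
perfect = np.eye(3)[[0, 1]]  # probability vectors equal to the one-hot targets
print(cosine_distance_to_onehot(perfect, np.array([0, 1]), 3))  # -> 0.0
```

Less-confident predictions (e.g., near-uniform probabilities) incur a larger penalty, which is the pull toward the one-hot target described above.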
However, certain limitations remain. In the RPL method, CDLoss reduces overlap between UCs and classes 2 and 4, but overlap with classes 3 and 5 persists. Similarly, ARPL exhibits reduced overlap for some categories (e.g., classes 2 and 4), but overlap with other categories persists near the center of the feature space. These observations suggest that while CDLoss significantly enhances feature separation, further optimization is needed for certain complex categories.
5.4. Stability Analysis of the Proposed Method
On the TinyImageNet dataset, the stability of both ARPL and SLCPL decreases slightly, likely due to the dataset's inherent complexity. TinyImageNet's rich content and high variability make it challenging to optimize feature representations, potentially requiring further tuning of CDLoss parameters.
In conclusion, CDLoss enhances stability across most datasets and frameworks. Nonetheless, datasets like TinyImageNet highlight the need for additional optimization to address challenges posed by increased complexity and variability.
Xiaolin Li www.mdpi.com