
Please use this identifier to cite or link to this item: http://10.10.120.238:8080/xmlui/handle/123456789/686
Full metadata record
DC Field | Value | Language
dc.contributor.author | Panda M.K. | en_US
dc.contributor.author | Thangaraj V. | en_US
dc.contributor.author | Subudhi B.N. | en_US
dc.contributor.author | Jakhetiya V. | en_US
dc.date.accessioned | 2023-11-30T08:45:22Z | -
dc.date.available | 2023-11-30T08:45:22Z | -
dc.date.issued | 2023 | -
dc.identifier.issn | 0178-2789 | -
dc.identifier.other | EID(2-s2.0-85171985184) | -
dc.identifier.uri | https://dx.doi.org/10.1007/s00371-023-03078-4 | -
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/686 | -
dc.description.abstract | This article introduces a first-of-its-kind approach to the fusion of visible and infrared images based on multi-scale decomposition and salient feature map detection. The proposed technique integrates the bidimensional empirical mode decomposition (BEMD) strategy with a Bayesian probabilistic fusion strategy. The proposed mechanism effectively handles the uncertainty in challenging source pairs and retains the maximum detail of the sources across multiple scales. BEMD-level features are extracted and combined with the Bayesian probabilistic fusion strategy to obtain several salient feature maps from the infrared and visible sensor images; these maps preserve the common information and suppress the superfluous information of the source images at various scales. Combining these salient feature maps generates an image that conveys complete information about the target scene with reduced artifacts. The performance of the proposed algorithm is evaluated on the benchmark “TNO” database. The empirical results are assessed through both visual analysis and quantitative evaluation. The efficiency of the proposed technique is corroborated against seventeen existing state-of-the-art (SOTA) techniques and found to be effective. For the quantitative assessment, we use the four most-cited evaluation measures: mutual information of the discrete cosine features (FMI_dct), amount of artifacts added during the fusion process (N_abf), structural similarity index (SSIM_a), and edge preservation index (EPI_a). The proposed algorithm attains the best average values: Avg. FMI_dct = 0.39863, Avg. N_abf = 0.00102, Avg. SSIM_a = 0.77820, and Avg. EPI_a = 0.78404. The proposed scheme also outperforms the competing SOTA techniques on the considered quantitative evaluation measures, with gains ranging from at least 3% up to 94%. © 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature. | en_US
dc.language.iso | en | en_US
dc.publisher | Springer Science and Business Media Deutschland GmbH | en_US
dc.source | Visual Computer | en_US
dc.subject | Bayesian’s probabilistic fusion strategy | en_US
dc.subject | Infrared image | en_US
dc.subject | Multi-scale feature decomposition | en_US
dc.subject | Visual image | en_US
dc.title | Bayesian’s probabilistic strategy for feature fusion from visible and infrared images | en_US
dc.type | Journal Article | en_US
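
The abstract above describes a multi-scale "decompose, weight by salient feature maps, recombine" pipeline. The sketch below is only a minimal illustration of that general flow under simplifying assumptions: it replaces BEMD with a Gaussian base/detail split and the paper's Bayesian probabilistic salient feature maps with a simple local-energy weight. The function names (decompose, saliency_weights, fuse) and all parameter values are hypothetical and do not come from the paper.

```python
# Illustrative sketch only: NOT the method of Panda et al.  The paper uses
# bidimensional empirical mode decomposition (BEMD) and a Bayesian
# probabilistic weighting of salient feature maps; here those are replaced
# by a Gaussian base/detail split and a local detail-energy weight so the
# overall multi-scale fusion flow is easy to follow.
import numpy as np
from scipy.ndimage import gaussian_filter


def decompose(img, sigma=5.0):
    """Split an image into a smooth base layer and a detail residual."""
    base = gaussian_filter(img, sigma)
    return base, img - base


def saliency_weights(detail_vis, detail_ir, eps=1e-8):
    """Per-pixel weights from local detail energy (stand-in for the
    paper's Bayesian salient feature maps)."""
    e_vis = gaussian_filter(detail_vis ** 2, 3.0)
    e_ir = gaussian_filter(detail_ir ** 2, 3.0)
    w_vis = e_vis / (e_vis + e_ir + eps)
    return w_vis, 1.0 - w_vis


def fuse(visible, infrared):
    """Fuse a visible/infrared pair (float arrays in [0, 1], same shape)."""
    b_vis, d_vis = decompose(visible)
    b_ir, d_ir = decompose(infrared)
    w_vis, w_ir = saliency_weights(d_vis, d_ir)
    fused_base = 0.5 * (b_vis + b_ir)            # average the coarse structure
    fused_detail = w_vis * d_vis + w_ir * d_ir   # keep the locally stronger detail
    return np.clip(fused_base + fused_detail, 0.0, 1.0)


if __name__ == "__main__":
    # Random stand-ins for a registered visible/infrared pair.
    rng = np.random.default_rng(0)
    vis, ir = rng.random((128, 128)), rng.random((128, 128))
    print(fuse(vis, ir).shape)  # (128, 128)
```

In the paper itself, the base/detail split would be replaced by BEMD intrinsic mode functions and the weighting by the Bayesian probabilistic salient feature maps; the evaluation metrics cited in the abstract (FMI_dct, N_abf, SSIM_a, EPI_a) would then be computed on the fused output against the source pair.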
Appears in Collections: Journal Article

Files in This Item:
There are no files associated with this item.
