Please use this identifier to cite or link to this item: http://10.10.120.238:8080/xmlui/handle/123456789/803
Full metadata record

DC Field                 Value                                              Language
dc.contributor.author    Shirazi H.                                         en_US
dc.contributor.author    Bezawada B.                                        en_US
dc.contributor.author    Ray I.                                             en_US
dc.contributor.author    Anderson C.                                        en_US
dc.date.accessioned      2023-11-30T08:50:36Z                               -
dc.date.available        2023-11-30T08:50:36Z                               -
dc.date.issued           2021                                               -
dc.identifier.issn       0926227X                                           -
dc.identifier.other      EID(2-s2.0-85100784139)                            -
dc.identifier.uri        https://dx.doi.org/10.3233/JCS-191411              -
dc.identifier.uri        http://localhost:8080/xmlui/handle/123456789/803   -
dc.description.abstract  Phishing websites trick honest users into believing that they are interacting with a legitimate website, capturing sensitive information such as user names, passwords, credit card numbers, and other personal data. Machine learning is a promising technique for distinguishing between phishing and legitimate websites. However, machine learning approaches are susceptible to adversarial learning attacks, in which a phishing sample can bypass classifiers. Our experiments on publicly available datasets reveal that phishing detection mechanisms are vulnerable to adversarial learning attacks. We investigate the robustness of machine learning-based phishing detection in the face of such attacks and propose a practical approach to simulating them by generating adversarial samples through direct feature manipulation. To increase a sample's probability of success, we describe a clustering approach that guides an attacker to select the phishing samples most likely to bypass the classifier by appearing as legitimate samples. We define a vulnerability level for each dataset that measures the number of features that can be manipulated and the cost of such manipulation. Further, we cluster phishing samples and show that some clusters are more likely to exhibit higher vulnerability levels than others. This helps an adversary identify the best candidate phishing samples from which to generate adversarial samples at a lower cost. Our findings can be used to refine the dataset and develop better learning models that compensate for weak samples in the training dataset. © 2021 - IOS Press. All rights reserved.  en_US
dc.language.iso          en                                                 en_US
dc.publisher             IOS Press BV                                       en_US
dc.source                Journal of Computer Security                       en_US
dc.subject               adversarial sampling                               en_US
dc.subject               classifiers                                        en_US
dc.subject               machine learning                                   en_US
dc.subject               Phishing                                           en_US
dc.title                 Directed adversarial sampling attacks on phishing detection  en_US
dc.type                  Journal Article                                    en_US
Appears in Collections:Journal Article
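The abstract's core idea — an attacker manipulating attacker-controllable features, cheapest first, until a phishing sample is classified as legitimate, and ranking samples by the resulting "vulnerability level" — can be sketched as follows. The threshold classifier, feature costs, and greedy flipping strategy here are illustrative assumptions, not the authors' actual models or datasets.

```python
def classify(x, threshold=3):
    """Toy detector: flag as phishing when enough phishing-indicative
    binary features (value 1) are set."""
    return sum(x) >= threshold  # True = classified as phishing


def vulnerability(x, mutable, costs, threshold=3):
    """Greedily flip attacker-controllable features toward legitimate
    values, cheapest first, until the sample bypasses the classifier.
    Returns (num_flips, total_cost), or (None, None) if it cannot bypass."""
    x = list(x)
    flips, total = 0, 0
    for i in sorted(mutable, key=lambda i: costs[i]):
        if not classify(x, threshold):
            break
        if x[i] == 1:
            x[i] = 0          # manipulate this feature to look legitimate
            flips += 1
            total += costs[i]
    return (flips, total) if not classify(x, threshold) else (None, None)


# Two phishing samples over five features; only features 0, 1 and 4 are
# attacker-controllable. The second sample needs fewer, cheaper flips,
# so a directed attacker would select it first.
costs = [1, 1, 5, 5, 2]
mutable = [0, 1, 4]
hard = [1, 1, 1, 1, 1]
easy = [1, 1, 1, 0, 1]
print(vulnerability(hard, mutable, costs))  # (3, 4)
print(vulnerability(easy, mutable, costs))  # (2, 2)
```

Grouping samples by the returned flip count and cost mirrors the paper's clustering step: low-cost samples form the cluster an adversary would target, and, conversely, the cluster a defender should augment in the training data.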

Files in This Item:
There are no files associated with this item.
