
Please use this identifier to cite or link to this item: http://10.10.120.238:8080/xmlui/handle/123456789/202
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kumar G. | en_US
dc.contributor.author | Kumar A. | en_US
dc.contributor.author | Ahlawat S. | en_US
dc.contributor.author | Prasad Y. | en_US
dc.date.accessioned | 2023-11-30T08:13:17Z | -
dc.date.available | 2023-11-30T08:13:17Z | -
dc.date.issued | 2022 | -
dc.identifier.isbn | 978-3031215131 | -
dc.identifier.issn | 1865-0929 | -
dc.identifier.other | EID(2-s2.0-85145006868) | -
dc.identifier.uri | https://dx.doi.org/10.1007/978-3-031-21514-8_48 | -
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/202 | -
dc.description.abstract | Recently, deep learning frameworks have gained great importance in domains such as computer vision, natural language processing, and bioinformatics. The general architecture of a deep learning model is complex, involving many tunable hyper-parameters and millions or billions of learnable weight parameters. In many Deep Neural Network (DNN) models, a single forward pass requires billions of operations such as multiplication, addition, comparison, and exponentiation; it therefore demands long computation times and dissipates a large amount of power even in the inference/prediction phase. Owing to the success of DNN models in many application domains, area- and power-efficient hardware implementations of DNNs in resource-constrained systems have recently become highly desirable. To ensure programmable flexibility and shorten the development period, the field-programmable gate array (FPGA) is well suited to implementing DNN models. However, the limited bandwidth and small on-chip memory of FPGAs are bottlenecks for deploying DNNs on them for inference. In this paper, a Binary Particle Swarm Optimization (PSO) based approach is presented to reduce hardware cost in terms of memory and power consumption. The number of weight parameters of the model and the number of floating-point units are reduced without degrading generalization accuracy: it is observed that 85% of the weight parameters are removed with only a 1% loss in accuracy. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG. | en_US
dc.language.iso | en | en_US
dc.publisher | Springer Science and Business Media Deutschland GmbH | en_US
dc.source | Communications in Computer and Information Science | en_US
dc.subject | Deep neural network | en_US
dc.subject | FPGA | en_US
dc.subject | Optimization | en_US
dc.subject | PSO | en_US
dc.title | Low Cost Implementation of Deep Neural Network on Hardware | en_US
dc.type | Conference Paper | en_US
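The binary-PSO pruning approach described in the abstract can be sketched as follows. This is a hypothetical illustration on a toy linear model, not the paper's actual implementation: the particle encoding (a binary mask over weights), the fitness weighting, the swarm hyper-parameters, and the model itself are all assumptions made for the sketch.

```python
# Sketch of binary-PSO weight pruning on a toy model (assumed setup,
# not the paper's method): each particle is a 0/1 mask over the weights,
# and fitness trades off accuracy against the number of surviving weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: one dense layer on synthetic binary-classification data.
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(int)
w = true_w + 0.05 * rng.normal(size=16)  # stand-in for learned weights

def accuracy(mask):
    """Accuracy of the model after masking out (pruning) weights."""
    pred = (X @ (w * mask) > 0).astype(int)
    return (pred == y).mean()

def fitness(mask):
    # Hypothetical fitness: reward accuracy plus a small sparsity bonus.
    return accuracy(mask) + 0.1 * (1.0 - mask.mean())

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(n_particles=20, n_iters=50, dim=16):
    """Standard binary PSO: velocities map to bit-set probabilities."""
    pos = rng.integers(0, 2, size=(n_particles, dim)).astype(float)
    vel = rng.normal(scale=0.1, size=(n_particles, dim))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Binary PSO update: each bit is 1 with probability sigmoid(velocity).
        pos = (rng.random((n_particles, dim)) < sigmoid(vel)).astype(float)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest

mask = binary_pso()
print("kept weights:", int(mask.sum()), "of", mask.size)
print("pruned accuracy:", round(accuracy(mask), 3))
```

The sparsity bonus in the fitness function is what drives the swarm toward masks that keep few weights; in the paper this trade-off is reported as an 85% parameter reduction for about a 1% accuracy loss, though the weighting used there is not stated in this record.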
Appears in Collections:Conference Paper

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.