Variable length coding of quantized deep learning models / (Record no. 166860)
000 - LEADER | |
---|---|
fixed length control field | 04264namaa22004211i 4500 |
003 - CONTROL NUMBER IDENTIFIER | |
control field | OSt |
005 - DATE AND TIME OF LATEST TRANSACTION | |
control field | 20250223033238.0 |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION | |
fixed length control field | 240507s2023 |||a|||fr|m|| 000 0 eng d |
040 ## - CATALOGING SOURCE | |
Original cataloguing agency | EG-GICUC |
Language of cataloging | eng |
Transcribing agency | EG-GICUC |
Modifying agency | EG-GICUC |
Description conventions | rda |
041 0# - LANGUAGE CODE | |
Language code of text/sound track or separate title | eng |
Language code of summary or abstract | eng |
Language code of sung or spoken text | ara |
049 ## - Acquisition Source | |
Acquisition Source | Deposit |
082 04 - DEWEY DECIMAL CLASSIFICATION NUMBER | |
Classification number | 621.39 |
092 ## - LOCALLY ASSIGNED DEWEY CALL NUMBER (OCLC) | |
Classification number | 621.39 |
Edition number | 21 |
097 ## - Degree | |
Degree | M.Sc |
099 ## - LOCAL FREE-TEXT CALL NUMBER (OCLC) | |
Local Call Number | Cai01 13 06 M.Sc 2023 Re.V |
100 0# - MAIN ENTRY--PERSONAL NAME | |
Personal name | Reem Omar Mohamed El-Sayed Abdel-Salam, |
Relator term | preparation. |
245 10 - TITLE STATEMENT | |
Title | Variable length coding of quantized deep learning models / |
Statement of responsibility, etc. | By Reem Omar Mohamed El-Sayed Abdel-Salam ; Under the Supervision of Prof. Dr. Amr G. Wassal, Dr. Ahmed H. Abdel-Gawad |
246 15 - VARYING FORM OF TITLE | |
Title proper/short title | الترميز متغير الطول لنماذج التعلم العميق المكممة / |
264 #0 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE | |
Date of production, publication, distribution, manufacture, or copyright notice | 2023. |
300 ## - PHYSICAL DESCRIPTION | |
Extent | 67 pages : |
Other physical details | illustrations ; |
Dimensions | 30 cm. + |
Accompanying material | CD. |
336 ## - CONTENT TYPE | |
Content type term | text |
Source | rda content |
337 ## - MEDIA TYPE | |
Media type term | unmediated |
Source | rdamedia |
338 ## - CARRIER TYPE | |
Carrier type term | volume |
Source | rdacarrier |
502 ## - DISSERTATION NOTE | |
Dissertation note | Thesis (M.Sc.)--Cairo University, 2023. |
504 ## - BIBLIOGRAPHY, ETC. NOTE | |
Bibliography, etc. note | Bibliography: pages 65-67. |
520 ## - SUMMARY, ETC. | |
Summary, etc. | Quantization plays a crucial role in efficiently deploying deep learning models on resource-constrained devices. Most existing quantization approaches require access to the full dataset, or at least a small portion of it, in order to re-train the model, which may be impractical due to limited training resources or data-access restrictions. Current methods achieve high performance with INT8 (or wider) fixed-point integers; however, performance degrades at lower bit-widths. We therefore propose variable length coding of quantized deep learning models (VLCQ), a data-free quantization method that pushes the boundaries of quantization toward higher accuracy at lower bit-widths. VLCQ leverages the model's weight distribution during quantization to improve accuracy and to further compress the weights, thus achieving a lower bit-width. VLCQ attains nearly the same accuracy as FP32 at sub-6 bit-widths for the MobileNetV2 and ResNet18 models. In addition, compared to the state of the art, VLCQ achieves similar accuracy while saving 20% in bit-width. Finally, VLCQ quantizes the ResNet18 and ResNet50 models to sub-2 bit-widths with only 2-3% accuracy loss, making it the first data-free post-training quantization method to achieve good performance at very low bit-widths. |
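The summary above describes VLCQ only at a high level: data-free post-training quantization that exploits the weight distribution and then compresses the quantized weights with a variable-length code. The sketch below illustrates that general idea only; it is not the thesis's VLCQ algorithm. The uniform per-tensor quantizer, the synthetic Gaussian weights, and the use of Huffman coding are all assumptions made for illustration.

```python
# Illustrative sketch: quantize one weight tensor without any calibration data,
# then measure the average bits/weight a variable-length (Huffman) code would use.
import heapq
from collections import Counter
import numpy as np

def quantize_uniform(w, n_bits=4):
    """Symmetric uniform quantization of a weight tensor to signed integers."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / q_max
    q = np.clip(np.round(w / scale), -q_max - 1, q_max).astype(int)
    return q, scale

def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code built from counts."""
    counts = Counter(symbols)
    if len(counts) == 1:                       # degenerate case: a single symbol
        return {next(iter(counts)): 1}
    heap = [(c, i, [s]) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in counts}
    uid = len(heap)
    while len(heap) > 1:
        c1, _, s1 = heapq.heappop(heap)        # merge the two least frequent nodes
        c2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                      # every merge adds one bit of depth
            lengths[s] += 1
        heapq.heappush(heap, (c1 + c2, uid, s1 + s2))
        uid += 1
    return lengths

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=4096)           # stand-in for one layer's weights
q, scale = quantize_uniform(w, n_bits=4)
lengths = huffman_code_lengths(q.tolist())
counts = Counter(q.tolist())
avg_bits = sum(counts[s] * lengths[s] for s in counts) / q.size
print(f"fixed bits/weight: 4, variable-length average bits/weight: {avg_bits:.2f}")
```

In this toy setting the reported average is typically below the fixed 4-bit budget, because the quantized weights are far from uniformly distributed; exploiting that non-uniformity is the general effect variable-length coding of quantized weights relies on.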
520 ## - SUMMARY, ETC. | |
Summary, etc. | يلعب التكميم دورًا مهمًا في نشر نماذج التعلم العميق بكفاءة على الأجهزة محدودة الموارد. تتطلب معظم أساليب التكميم الحالية الوصول إلى مجموعة البيانات الكاملة أو إلى جزء صغير منها من أجل إعادة تدريب النموذج، وهو ما قد يكون صعبًا بسبب محدودية موارد التدريب أو قيود الوصول إلى البيانات. تحقق الأساليب الحالية أداءً عاليًا باستخدام الأعداد الصحيحة ذات النقطة الثابتة INT8 (أو أعلى)، غير أن الأداء يتدهور مع انخفاض عرض البت. لذلك نقترح الترميز متغير الطول لنماذج التعلم العميق المكممة (VLCQ)، وهي طريقة تكميم لا تحتاج إلى بيانات، تدفع حدود التكميم نحو دقة أعلى مع عرض بت أقل. تستفيد VLCQ من توزيع أوزان النموذج أثناء التكميم لتحسين الدقة ولزيادة ضغط الأوزان، وبالتالي تحقيق عرض بت أقل. |
530 ## - ADDITIONAL PHYSICAL FORM AVAILABLE NOTE | |
Issues CD | Issued also as CD |
546 ## - LANGUAGE NOTE | |
Text Language | Text in English and abstract in Arabic & English. |
650 #7 - SUBJECT ADDED ENTRY--TOPICAL TERM | |
Topical term or geographic name entry element | Computer Engineering |
Source of heading or term | qrmark |
653 #0 - INDEX TERM--UNCONTROLLED | |
Uncontrolled term | Quantization |
-- | Post-training Quantization |
-- | Deep learning |
-- | Variable length encoding |
-- | deep neural networks |
700 0# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Amr G. Wassal |
Relator term | thesis advisor. |
700 0# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Ahmed H. Abdel-Gawad |
Relator term | thesis advisor. |
900 ## - Thesis Information | |
Grant date | 01-01-2023 |
Supervisory body | Amr G. Wassal |
-- | Ahmed H. Abdel-Gawad |
Discussion body | Hoda Baraka |
-- | Ahmed F. Seddik |
Universities | Cairo University |
Faculties | Faculty of Engineering |
Department | Department of Computer Engineering |
905 ## - Cataloger and Reviser Names | |
Cataloger Name | Eman Ghareeb |
Reviser Names | Huda |
942 ## - ADDED ENTRY ELEMENTS (KOHA) | |
Source of classification or shelving scheme | Dewey Decimal Classification |
Koha item type | Thesis |
Edition | 21 |
Suppress in OPAC | No |
Source of classification or shelving scheme | Home library | Current library | Date acquired | Inventory number | Full call number | Barcode | Date last seen | Effective from | Koha item type |
---|---|---|---|---|---|---|---|---|---|
Dewey Decimal Classification | المكتبة المركزية الجديدة - جامعة القاهرة | قاعة الرسائل الجامعية - الدور الأول | 07.05.2024 | 88286 | Cai01 13 06 M.Sc 2023 Re.V | 01010110088286000 | 07.05.2024 | 07.05.2024 | Thesis