Paper: Not All Images are Worth 16x16 Words: Dynamic Transformers for Efficient Ima...
Paper: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Paper: Incorporating Convolution Designs into Visual Transformers
Paper: Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction withou...
Paper: Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
Paper: PeLK: Parameter-efficient Large Kernel ConvNets with Peripheral Convolution
Paper: FasterViT: Fast Vision Transformers with Hierarchical Attention
Paper: LORS: Low-rank Residual Structure for Parameter-Efficient Network Stacking
Paper: Conditional Positional Encodings for Vision Transformers
Paper: Training data-efficient image transformers & distillation through attention
Paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Paper: Dynamic Label Assignment for Object Detection by Combining Predicted and Anc...
Inference speed is critical when deploying models on mobile devices. Common ways to speed up inference include model pruning, weight quantization, knowledge distillation, efficient model design, and dynamic inference. Among these, dynamic inference adapts the model's structure to each input, reducing the overall...
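A common form of dynamic inference is confidence-based early exit: cheap classifier stages run first, and costlier stages are skipped once a prediction is confident enough. The sketch below is illustrative only; the function and parameter names (`early_exit_predict`, `threshold`) are assumptions, not taken from any of the papers listed here.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_predict(x, stages, threshold=0.9):
    """Run classifier stages in order of increasing cost; stop as soon as
    the max class probability exceeds `threshold`, skipping later stages.

    Returns (class probabilities, index of the stage that produced them).
    """
    logits = None
    for i, stage in enumerate(stages):
        logits = stage(x)               # each stage maps input -> class logits
        probs = softmax(logits)
        if max(probs) >= threshold:     # confident enough: exit early
            return probs, i
    return softmax(logits), len(stages) - 1
```

For example, with a first stage that returns near-uniform logits and a second that is strongly peaked, `early_exit_predict` falls through the first stage and exits at the second; an input the first stage is already sure about never pays for the later stages.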
Paper: CondenseNet V2: Sparse Feature Reactivation for Deep Networks
Paper: Energy-based Out-of-distribution Detection
Paper: LiftPool: Bidirectional ConvNet Pooling
Paper: EfficientNetV2: Smaller Models and Faster Training
Paper: Contrastive Learning based Hybrid Networks for Long-Tailed Image Classificat...
* **Paper link: https://arxiv.org/abs/2103.09460**
Paper: Is it Enough to Optimize CNN Architectures on ImageNet?