
mask_rcnn_swin-t-p4-w7_fpn_ms-crop-3x_coco.py

_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth'  # noqa

# Replace the ResNet-50 backbone from the base Mask R-CNN config with Swin-T,
# initialised from the ImageNet-pretrained checkpoint above.
model = dict(
    type='MaskRCNN',
    backbone=dict(
        _delete_=True,
        type='SwinTransformer',
        embed_dims=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        mlp_ratio=4,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.,
        attn_drop_rate=0.,
        drop_path_rate=0.2,
        patch_norm=True,
        out_indices=(0, 1, 2, 3),
        with_cp=False,
        convert_weights=True,
        init_cfg=dict(type='Pretrained', checkpoint=pretrained)),
    neck=dict(in_channels=[96, 192, 384, 768]))

img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)

# augmentation strategy originates from DETR / Sparse RCNN
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='AutoAugment',
        policies=[
            [
                dict(
                    type='Resize',
                    img_scale=[(480, 1333), (512, 1333), (544, 1333),
                               (576, 1333), (608, 1333), (640, 1333),
                               (672, 1333), (704, 1333), (736, 1333),
                               (768, 1333), (800, 1333)],
                    multiscale_mode='value',
                    keep_ratio=True)
            ],
            [
                dict(
                    type='Resize',
                    img_scale=[(400, 1333), (500, 1333), (600, 1333)],
                    multiscale_mode='value',
                    keep_ratio=True),
                dict(
                    type='RandomCrop',
                    crop_type='absolute_range',
                    crop_size=(384, 600),
                    allow_negative_crop=True),
                dict(
                    type='Resize',
                    img_scale=[(480, 1333), (512, 1333), (544, 1333),
                               (576, 1333), (608, 1333), (640, 1333),
                               (672, 1333), (704, 1333), (736, 1333),
                               (768, 1333), (800, 1333)],
                    multiscale_mode='value',
                    override=True,
                    keep_ratio=True)
            ]
        ]),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
data = dict(train=dict(pipeline=train_pipeline))

# Swap the base SGD optimizer for AdamW; norm layers and position-embedding
# tables are excluded from weight decay.
optimizer = dict(
    _delete_=True,
    type='AdamW',
    lr=0.0001,
    betas=(0.9, 0.999),
    weight_decay=0.05,
    paramwise_cfg=dict(
        custom_keys={
            'absolute_pos_embed': dict(decay_mult=0.),
            'relative_position_bias_table': dict(decay_mult=0.),
            'norm': dict(decay_mult=0.)
        }))

# 3x schedule: 36 epochs, learning rate stepped down at epochs 27 and 33.
lr_config = dict(warmup_iters=1000, step=[27, 33])
runner = dict(max_epochs=36)
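For context, a minimal sketch of how this config could be loaded and inspected in an MMDetection 2.x environment; the local path 'configs/swin/...' is an assumption about the checkout layout:

    # Minimal sketch, assuming MMDetection 2.x with mmcv-full installed.
    # The config path is an assumption about where this file lives locally.
    from mmcv import Config

    cfg = Config.fromfile(
        'configs/swin/mask_rcnn_swin-t-p4-w7_fpn_ms-crop-3x_coco.py')

    # The _base_ files are merged in, so the overrides above are visible
    # on the resolved config object.
    print(cfg.model.backbone.type)   # SwinTransformer
    print(cfg.optimizer.type)        # AdamW
    print(cfg.runner.max_epochs)     # 36

Training itself would typically be launched by passing this config path to MMDetection's tools/train.py (single GPU) or tools/dist_train.sh (multi-GPU).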
