# ImageNet Dataset

According to [http://image-net.org](http://image-net.org), ImageNet is a dataset of images organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a synonym set, or synset. ImageNet has roughly 100 K synsets, with an average of about 1,000 human-annotated images per synset. ImageNet stores only references to the images, while the images themselves reside at their original locations on the internet. In deep learning papers, ImageNet-1K refers to the dataset released as part of ImageNet's **Large Scale Visual Recognition Challenge** (**ILSVRC**) for classifying the dataset into 1,000 categories.

The 1,000 challenge categories can be found at the following URLs:

[http://image-net.org/challenges/LSVRC/2017/browse-synsets](http://image-net.org/challenges/LSVRC/2017/browse-synsets)

[http://image-net.org/challenges/LSVRC/2016/browse-synsets](http://image-net.org/challenges/LSVRC/2016/browse-synsets)

[http://image-net.org/challenges/LSVRC/2015/browse-synsets](http://image-net.org/challenges/LSVRC/2015/browse-synsets)

[http://image-net.org/challenges/LSVRC/2014/browse-synsets](http://image-net.org/challenges/LSVRC/2014/browse-synsets)

[http://image-net.org/challenges/LSVRC/2013/browse-synsets](http://image-net.org/challenges/LSVRC/2013/browse-synsets)

[http://image-net.org/challenges/LSVRC/2012/browse-synsets](http://image-net.org/challenges/LSVRC/2012/browse-synsets)

[http://image-net.org/challenges/LSVRC/2011/browse-synsets](http://image-net.org/challenges/LSVRC/2011/browse-synsets)

[http://image-net.org/challenges/LSVRC/2010/browse-synsets](http://image-net.org/challenges/LSVRC/2010/browse-synsets)

We wrote a custom function to download the ImageNet labels from Google:

```py
import urllib.request

# Method of the datasetslib imageNet class
def build_id2label(self):
    # Synset files published with the TensorFlow/models Inception code
    base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/research/inception/inception/data'
    synset_url = '{}/imagenet_lsvrc_2015_synsets.txt'.format(base_url)
    synset_to_human_url = '{}/imagenet_metadata.txt'.format(base_url)

    # The 1,000 synsets used in the ILSVRC challenge
    filename, _ = urllib.request.urlretrieve(synset_url)
    synset_list = [s.strip() for s in open(filename).readlines()]
    num_synsets_in_ilsvrc = len(synset_list)
    assert num_synsets_in_ilsvrc == 1000

    # Human-readable names for all ImageNet synsets
    filename, _ = urllib.request.urlretrieve(synset_to_human_url)
    synset_to_human_list = open(filename).readlines()
    num_synsets_in_all_imagenet = len(synset_to_human_list)
    assert num_synsets_in_all_imagenet == 21842

    # Map each synset id to its human-readable name
    synset2name = {}
    for s in synset_to_human_list:
        parts = s.strip().split('\t')
        assert len(parts) == 2
        synset = parts[0]
        name = parts[1]
        synset2name[synset] = name

    # Inception models reserve class id 0 for the background class
    if self.n_classes == 1001:
        id2label = {0: 'empty'}
        id = 1
    else:
        id2label = {}
        id = 0
    for synset in synset_list:
        label = synset2name[synset]
        id2label[id] = label
        id += 1

    return id2label
```

We load these labels into our Jupyter Notebook as follows:

```py
### Load ImageNet dataset for labels
from datasetslib.imagenet import imageNet

inet = imageNet()
inet.load_data(n_classes=1000)
# n_classes is 1001 for Inception models and 1000 for VGG models
```
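With the mapping in hand, a model's raw scores can be decoded into label names with a simple lookup. Below is a minimal sketch, assuming `logits` is a 1-D NumPy array of 1,000 class scores from a VGG-style model; `top_k_labels` is a hypothetical helper, not part of `datasetslib`:

```py
import numpy as np

def top_k_labels(logits, id2label, k=5):
    # Indices of the k highest-scoring classes, best first
    top_ids = np.argsort(logits)[::-1][:k]
    return [(id2label[i], float(logits[i])) for i in top_ids]

# e.g. top_k_labels(logits, id2label, k=5) with the dict built above
```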
Popular pre-trained image classification models trained on the ImageNet-1K dataset are listed in the following table:

| **Model Name** | **Top-1 Accuracy (%)** | **Top-5 Accuracy (%)** | **Top-5 Error Rate** | **Link to Original Paper** |
| --- | --- | --- | --- | --- |
| AlexNet | | | 15.3% | [https://www.cs.toronto.edu/~fritz/absps/imagenet.pdf](https://www.cs.toronto.edu/~fritz/absps/imagenet.pdf) |
| Inception, also known as Inception V1 | 69.8 | 89.6 | 6.67% | [https://arxiv.org/abs/1409.4842](https://arxiv.org/abs/1409.4842) |
| BN-Inception-V2, also known as Inception V2 | 73.9 | 91.8 | 4.9% | [https://arxiv.org/abs/1502.03167](https://arxiv.org/abs/1502.03167) |
| Inception V3 | 78.0 | 93.9 | 3.46% | [https://arxiv.org/abs/1512.00567](https://arxiv.org/abs/1512.00567) |
| Inception V4 | 80.2 | 95.2 | | [http://arxiv.org/abs/1602.07261](http://arxiv.org/abs/1602.07261) |
| Inception-ResNet-V2 | 80.4 | 95.2 | | [http://arxiv.org/abs/1602.07261](http://arxiv.org/abs/1602.07261) |
| VGG16 | 71.5 | 89.8 | 7.4% | [https://arxiv.org/abs/1409.1556](https://arxiv.org/abs/1409.1556) |
| VGG19 | 71.1 | 89.8 | 7.3% | [https://arxiv.org/abs/1409.1556](https://arxiv.org/abs/1409.1556) |
| ResNet V1 50 | 75.2 | 92.2 | 7.24% | [https://arxiv.org/abs/1512.03385](https://arxiv.org/abs/1512.03385) |
| ResNet V1 101 | 76.4 | 92.9 | | [https://arxiv.org/abs/1512.03385](https://arxiv.org/abs/1512.03385) |
| ResNet V1 152 | 76.8 | 93.2 | | [https://arxiv.org/abs/1512.03385](https://arxiv.org/abs/1512.03385) |
| ResNet V2 50 | 75.6 | 92.8 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| ResNet V2 101 | 77.0 | 93.7 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| ResNet V2 152 | 77.8 | 94.1 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| ResNet V2 200 | 79.9 | 95.2 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| Xception | 79.0 | 94.5 | | [https://arxiv.org/abs/1610.02357](https://arxiv.org/abs/1610.02357) |
| MobileNet V1 variants | 41.3 to 70.7 | 66.2 to 89.5 | | [https://arxiv.org/pdf/1704.04861.pdf](https://arxiv.org/pdf/1704.04861.pdf) |

In the preceding table, the Top-1 and Top-5 metrics refer to the model's performance on the ImageNet validation dataset (a short sketch of how these metrics are computed follows the MobileNets table below).

Google Research recently released a new family of models called MobileNets. MobileNets were developed with a mobile-first strategy, sacrificing accuracy in favor of low resource usage. MobileNets are designed to consume little power and deliver low latency, in order to provide a better experience on mobile and embedded devices. Google provides 16 pre-trained checkpoint files for the MobileNet models, each with a different number of parameters and **multiply-accumulates** (**MACs**). The higher the MACs and the parameter count, the higher the resource usage and latency. You can therefore choose between higher accuracy and higher resource usage/latency.

| **Model Checkpoint** | **Million MACs** | **Million Parameters** | **Top-1 Accuracy (%)** | **Top-5 Accuracy (%)** |
| --- | --- | --- | --- | --- |
| [MobileNet_v1_1.0_224](http://download.tensorflow.org/models/mobilenet_v1_1.0_224_2017_06_14.tar.gz) | 569 | 4.24 | 70.7 | 89.5 |
| [MobileNet_v1_1.0_192](http://download.tensorflow.org/models/mobilenet_v1_1.0_192_2017_06_14.tar.gz) | 418 | 4.24 | 69.3 | 88.9 |
| [MobileNet_v1_1.0_160](http://download.tensorflow.org/models/mobilenet_v1_1.0_160_2017_06_14.tar.gz) | 291 | 4.24 | 67.2 | 87.5 |
| [MobileNet_v1_1.0_128](http://download.tensorflow.org/models/mobilenet_v1_1.0_128_2017_06_14.tar.gz) | 186 | 4.24 | 64.1 | 85.3 |
| [MobileNet_v1_0.75_224](http://download.tensorflow.org/models/mobilenet_v1_0.75_224_2017_06_14.tar.gz) | 317 | 2.59 | 68.4 | 88.2 |
| [MobileNet_v1_0.75_192](http://download.tensorflow.org/models/mobilenet_v1_0.75_192_2017_06_14.tar.gz) | 233 | 2.59 | 67.4 | 87.3 |
| [MobileNet_v1_0.75_160](http://download.tensorflow.org/models/mobilenet_v1_0.75_160_2017_06_14.tar.gz) | 162 | 2.59 | 65.2 | 86.1 |
| [MobileNet_v1_0.75_128](http://download.tensorflow.org/models/mobilenet_v1_0.75_128_2017_06_14.tar.gz) | 104 | 2.59 | 61.8 | 83.6 |
| [MobileNet_v1_0.50_224](http://download.tensorflow.org/models/mobilenet_v1_0.50_224_2017_06_14.tar.gz) | 150 | 1.34 | 64.0 | 85.4 |
| [MobileNet_v1_0.50_192](http://download.tensorflow.org/models/mobilenet_v1_0.50_192_2017_06_14.tar.gz) | 110 | 1.34 | 62.1 | 84.0 |
| [MobileNet_v1_0.50_160](http://download.tensorflow.org/models/mobilenet_v1_0.50_160_2017_06_14.tar.gz) | 77 | 1.34 | 59.9 | 82.5 |
| [MobileNet_v1_0.50_128](http://download.tensorflow.org/models/mobilenet_v1_0.50_128_2017_06_14.tar.gz) | 49 | 1.34 | 56.2 | 79.6 |
| [MobileNet_v1_0.25_224](http://download.tensorflow.org/models/mobilenet_v1_0.25_224_2017_06_14.tar.gz) | 41 | 0.47 | 50.6 | 75.0 |
| [MobileNet_v1_0.25_192](http://download.tensorflow.org/models/mobilenet_v1_0.25_192_2017_06_14.tar.gz) | 34 | 0.47 | 49.0 | 73.6 |
| [MobileNet_v1_0.25_160](http://download.tensorflow.org/models/mobilenet_v1_0.25_160_2017_06_14.tar.gz) | 21 | 0.47 | 46.0 | 70.7 |
| [MobileNet_v1_0.25_128](http://download.tensorflow.org/models/mobilenet_v1_0.25_128_2017_06_14.tar.gz) | 14 | 0.47 | 41.3 | 66.2 |
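As referenced above, the Top-1 and Top-5 numbers in both tables can be computed directly from a model's scores on the validation set. Below is a minimal sketch, assuming `logits` is an `(N, 1000)` NumPy array of class scores for `N` validation images and `labels` is an `(N,)` array of ground-truth class ids (both names are illustrative, not from the book's code):

```py
import numpy as np

def top_k_accuracy(logits, labels, k=1):
    # k highest-scoring class ids per image, best first
    top_k = np.argsort(logits, axis=1)[:, ::-1][:, :k]
    # A prediction counts as a hit if the true label is anywhere in the top k
    hits = (top_k == labels[:, None]).any(axis=1)
    return float(hits.mean())

# Top-1 and Top-5 accuracy as reported in the tables above:
# top_k_accuracy(logits, labels, k=1), top_k_accuracy(logits, labels, k=5)
```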
For more information about MobileNets, visit the following resources:

[https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html](https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html)

[https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md)

[https://arxiv.org/pdf/1704.04861.pdf](https://arxiv.org/pdf/1704.04861.pdf)
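Each checkpoint link in the MobileNets table above points to a `.tar.gz` archive that can be fetched and unpacked with the Python standard library. A minimal sketch follows; the URL is copied from the first row of the table, and the destination directory is an arbitrary choice:

```py
import os
import tarfile
import urllib.request

# URL copied from the MobileNet_v1_1.0_224 row of the table above
url = ('http://download.tensorflow.org/models/'
       'mobilenet_v1_1.0_224_2017_06_14.tar.gz')
dest_dir = './mobilenet_v1_1.0_224'  # arbitrary local directory

os.makedirs(dest_dir, exist_ok=True)
archive, _ = urllib.request.urlretrieve(url)
with tarfile.open(archive, 'r:gz') as tar:
    tar.extractall(dest_dir)

print(os.listdir(dest_dir))  # checkpoint files for use with tf.train.Saver
```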