diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..9b0b743
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,94 @@
+# IntelliJ project files
+.idea
+*.iml
+out
+gen
+
+### Vim template
+[._]*.s[a-w][a-z]
+[._]s[a-w][a-z]
+*.un~
+Session.vim
+.netrwhist
+*~
+
+### IPythonNotebook template
+# Temporary data
+.ipynb_checkpoints/
+
+### Python template
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+#lib/
+#lib64/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*,cover
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+*.ipynb
+*.params
+*.json
+.vscode/
+
+lib/dataset/pycocotools/*.c
+lib/dataset/pycocotools/*.cpp
+lib/nms/*.c
+lib/nms/*.cpp
+
+data
+external
+output
+model
+
+.db
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..e2d5bcb
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2017 Microsoft
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..420db81
--- /dev/null
+++ b/README.md
@@ -0,0 +1,131 @@
+This is the official code for [Learning RoI Transformer for Detecting Oriented Objects in Aerial Images](https://arxiv.org/abs/1812.00155).
+
+This code is based on the Deformable ConvNets codebase.
+
+An mmdetection version is on the way.
+
+Since there are custom C++ operators, we need to compile MXNet from source.
+
+## Requirements: Software
+
+1. MXNet from [the official repository](https://github.com/dmlc/mxnet).
+
+2. Python 2.7. We recommend using Anaconda2 as it already includes many common packages. We do not support Python 3 yet; if you want to use Python 3, you need to modify the code to make it work.
+
+3. Python packages that might be missing: cython, opencv-python >= 3.2.0, easydict. If `pip` is set up on your system, those packages can be fetched and installed by running
+   ```
+   pip install -r requirements.txt
+   ```
+4. For Windows users, Visual Studio 2015 is needed to compile the Cython modules.
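+
+   A quick, optional way to check that the interpreter and the packages listed above are importable (this snippet is only illustrative and is not part of the repository):
+   ```
+   import sys
+   print(sys.version)        # expected: a 2.7.x interpreter
+   import cv2, Cython, easydict
+   print(cv2.__version__)    # expected: >= 3.2.0
+   print(Cython.__version__)
+   print(easydict.EasyDict)
+   ```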
+
+## Installation
+
+1. Clone the Deformable ConvNets repository, and we'll call the directory that you cloned Deformable-ConvNets as ${DCN_ROOT}.
+```
+git clone https://github.com/msracver/Deformable-ConvNets.git
+```
+
+2. For Windows users, run ``cmd .\init.bat``. For Linux users, run `sh ./init.sh`. The scripts will build the Cython modules automatically and create some folders.
+
+3. Install MXNet:
+
+    **Note: MXNet's Custom Op cannot execute in parallel on multiple GPUs after this [PR](https://github.com/apache/incubator-mxnet/pull/6928). We strongly suggest rolling back to [MXNet@(commit 998378a)](https://github.com/dmlc/mxnet/tree/998378a) for training (following Sections 3.2 - 3.5).**
+
+    ***Quick start***
+
+    3.1 Install MXNet and all dependencies by
+    ```
+    pip install -r requirements.txt
+    ```
+    If there is no other error message, MXNet should be installed successfully.
+
+    ***Build from source (alternative way)***
+
+    3.2 Clone MXNet and check out [MXNet@(commit 998378a)](https://github.com/dmlc/mxnet/tree/998378a) by
+    ```
+    git clone --recursive https://github.com/dmlc/mxnet.git
+    git checkout 998378a
+    git submodule update
+    # if it's the first time to checkout, just use: git submodule update --init --recursive
+    ```
+    3.3 Copy the C++ operators into the MXNet source tree
+    ```
+    cp fpn/operator_cxx/* mxnet/src/operator/contrib
+    ```
+    3.4 Compile MXNet
+    ```
+    cd ${MXNET_ROOT}
+    make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
+    ```
+    3.5 Install the MXNet Python (python 2.7) binding by
+
+    ***Note: If you will actively switch between different versions of MXNet, please follow 3.6 instead of 3.5***
+    ```
+    cd python
+    sudo python setup.py install
+    ```
+    3.6 For advanced users, you may put your Python package into `./external/mxnet/$(YOUR_MXNET_PACKAGE)`, and modify `MXNET_VERSION` in `./experiments/rfcn/cfgs/*.yaml` to `$(YOUR_MXNET_PACKAGE)`. Thus you can switch among different versions of MXNet quickly.
+
+4. Compile the DOTA devkit (`dota_kit`), which builds the `polyiou` extension used for result merging and evaluation.
+
+## Prepare DOTA Data
+
+1. Prepare script
+
+   Put your original DOTA data (before splitting) in `path_to_data` and make sure it looks like
+   ```
+   path_to_data/train/images
+   path_to_data/train/labelTxt
+   path_to_data/val/images
+   path_to_data/val/labelTxt
+   path_to_data/test/images
+   ```
+   then run
+   ```
+   cd prepare_data
+   python prepare_data.py --data_path path_to_data --num_process 32
+   ```
+
+2. Soft link
+   ```
+   mkdir data
+   cd data
+   ln -s path_to_data dota_1024
+   ```
+
+## Pretrained Models
+
+We provide trained convnet models.
+
+1. To use the demo with our pre-trained RoI Transformer models for DOTA, please download them manually from [Google Drive](https://drive.google.com/drive/folders/1kUBsH2v5DK6QjqDoMmyx16bW7gUlEgn1?usp=sharing) or [BaiduYun](https://pan.baidu.com/s/14KBADK41S5hOO8NQVQlbWA) (extraction code: fucc)
+   and put them under the following folder. Make sure it looks like this:
+   ```
+   ./output/rcnn/DOTA/resnet_v1_101_dota_RoITransformer_trainval_rcnn_end2end/train/rcnn_dota-0040.params
+   ./output/fpn/DOTA/resnet_v1_101_dota_rotbox_light_head_RoITransformer_trainval_fpn_end2end/train/fpn_DOTA_oriented-0008.params
+   ```
+
+## Training & Testing
+
+1. Training
+   ```
+   sh train_dota_light_fpn_RoITransformer.sh
+   ```
+
+2. Testing (a reference sketch for merging and evaluating the patch-level results appears after the license note below)
+   ```
+   sh test_dota_light_fpn_RoITransformer.sh
+   ```
+
+---------------------------------------------------
+
+© Microsoft, 2017. Licensed under an MIT license.
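+
+**Merging and evaluating Task1 results (reference sketch).** DOTA Task1 evaluation is performed on the original large images, so detections produced on the split 1024x1024 patches have to be merged back first. If you need to do this step yourself, the snippet below is a minimal sketch using the utilities bundled in `dota_kit`; the output paths are placeholders for your own experiment directory, and `valset.txt` is assumed to be a plain text file listing one image name per line (it is not shipped with the repository).
+```
+import os
+import sys
+sys.path.insert(0, 'dota_kit')  # so the evaluation module can resolve polyiou / dota_utils
+
+from dota_kit.ResultMerge_multi_process import mergebypoly
+from dota_kit.dota_evaluation_task1 import eval_DOTA_Task1
+
+# placeholder paths: adjust to your own output directory and validation split
+patch_results = './output/fpn/DOTA/your_experiment/test/Task1_results'       # per-patch Task1_<class>.txt files
+merged_results = './output/fpn/DOTA/your_experiment/test/Task1_results_nms'  # merged, NMS-ed full-image results
+if not os.path.exists(merged_results):
+    os.makedirs(merged_results)
+
+# map patch-level detections back to the original images and run polygon NMS across patches
+mergebypoly(patch_results, merged_results)
+
+# evaluate oriented detections (Task1) against the original (unsplit) val annotations
+detpath = merged_results + '/Task1_{:s}.txt'
+annopath = 'path_to_data/val/labelTxt/{:s}.txt'
+imagesetfile = 'path_to_data/val/valset.txt'
+mean_ap, class_aps = eval_DOTA_Task1(detpath, annopath, imagesetfile)
+print(mean_ap)
+```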
+ + +If you find RoI Transformer and DOTA data useful in your research, please consider citing: +``` +@inproceedings{ding2019learning, + title={Learning RoI Transformer for Oriented Object Detection in Aerial Images}, + author={Ding, Jian and Xue, Nan and Long, Yang and Xia, Gui-Song and Lu, Qikai}, + booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, + pages={2849--2858}, + year={2019} +} +@inproceedings{xia2018dota, + title={DOTA: A large-scale dataset for object detection in aerial images}, + author={Xia, Gui-Song and Bai, Xiang and Ding, Jian and Zhu, Zhen and Belongie, Serge and Luo, Jiebo and Datcu, Mihai and Pelillo, Marcello and Zhang, Liangpei}, + booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, + pages={3974--3983}, + year={2018} +} +``` + diff --git a/ThirdPartyNotices.txt b/ThirdPartyNotices.txt new file mode 100644 index 0000000..f1fee74 --- /dev/null +++ b/ThirdPartyNotices.txt @@ -0,0 +1,158 @@ +************************************************************************ + +THIRD-PARTY SOFTWARE NOTICES AND INFORMATION + +This project incorporates components from the projects listed below. The original copyright notices and the licenses under which Microsoft received such components are set forth below. Microsoft reserves all rights not expressly granted herein, whether by implication, estoppel or otherwise. + +1. MXNet (https://github.com/apache/incubator-mxnet) +2. Fast R-CNN (https://github.com/rbgirshick/fast-rcnn) +3. Faster R-CNN (https://github.com/rbgirshick/py-faster-rcnn) +4. Caffe (https://github.com/BVLC/caffe) +5. MS COCO API (https://github.com/cocodataset/cocoapi) + +MXNet + +Copyright (c) 2015-2016 by Contributors + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. + + +Fast R-CNN + +Copyright (c) Microsoft Corporation + +All rights reserved. + +MIT License + +Permission is hereby granted, free of charge, to any person obtaining a +copy of this software and associated documentation files (the "Software"), +to deal in the Software without restriction, including without limitation +the rights to use, copy, modify, merge, publish, distribute, sublicense, +and/or sell copies of the Software, and to permit persons to whom the +Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included +in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL +THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR +OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, +ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +OTHER DEALINGS IN THE SOFTWARE. 
+ + +Faster R-CNN + +The MIT License (MIT) + +Copyright (c) 2015 Microsoft Corporation + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. + + +Caffe + +COPYRIGHT + +All contributions by the University of California: +Copyright (c) 2014, 2015, The Regents of the University of California (Regents) +All rights reserved. + +All other contributions: +Copyright (c) 2014, 2015, the respective contributors +All rights reserved. + +Caffe uses a shared copyright model: each contributor holds copyright over +their contributions to Caffe. The project versioning records all such +contribution and copyright details. If a contributor wants to further mark +their specific copyright on a particular contribution, they should indicate +their copyright solely in the commit message of the change when it is +committed. + +LICENSE + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +CONTRIBUTION AGREEMENT + +By contributing to the BVLC/caffe repository through pull-request, comment, +or otherwise, the contributor releases their content to the +license and copyright terms herein. + + +MS COCO API + +Copyright (c) 2014, Piotr Dollar and Tsung-Yi Lin +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. 
Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +The views and conclusions contained in the software and documentation are those +of the authors and should not be interpreted as representing official policies, +either expressed or implied, of the FreeBSD Project. + diff --git a/dota_kit/.gitignore b/dota_kit/.gitignore new file mode 100644 index 0000000..e658533 --- /dev/null +++ b/dota_kit/.gitignore @@ -0,0 +1,13 @@ +build +polyiou_wrap.cxx +.idea/DOTAPreprocess.iml +*.so +*.jpg +**/.idea/ +*pyc +examplesplit +__pycache__/ +.ipynb_checkpoints/ +Task1_merge/*.txt +Task1/*.txt +restoredexample/* diff --git a/dota_kit/DOTA.py b/dota_kit/DOTA.py new file mode 100644 index 0000000..3889e9f --- /dev/null +++ b/dota_kit/DOTA.py @@ -0,0 +1,136 @@ +#The code is used for visulization, inspired from cocoapi +# Licensed under the Simplified BSD License [see bsd.txt] + +import os +import matplotlib.pyplot as plt +from matplotlib.collections import PatchCollection +from matplotlib.patches import Polygon, Circle +import numpy as np +import dota_utils as util +from collections import defaultdict +import cv2 +import pdb + +def _isArrayLike(obj): + if type(obj) == str: + return False + return hasattr(obj, '__iter__') and hasattr(obj, '__len__') + +class DOTA: + def __init__(self, basepath): + self.basepath = basepath + self.labelpath = os.path.join(basepath, 'labelTxt') + self.imagepath = os.path.join(basepath, 'images') + self.imgpaths = util.GetFileFromThisRootDir(self.labelpath) + self.imglist = [util.custombasename(x) for x in self.imgpaths] + self.catToImgs = defaultdict(list) + self.ImgToAnns = defaultdict(list) + self.createIndex() + + def createIndex(self): + for filename in self.imgpaths: + objects = util.parse_dota_poly(filename) + imgid = util.custombasename(filename) + self.ImgToAnns[imgid] = objects + for obj in objects: + cat = obj['name'] + self.catToImgs[cat].append(imgid) + + def getImgIds(self, catNms=[]): + """ + :param catNms: category names + :return: all the image ids contain the categories + """ + catNms = catNms if _isArrayLike(catNms) else [catNms] + if len(catNms) == 0: + return self.imglist + else: + imgids = [] + for i, cat in enumerate(catNms): + if i == 0: + imgids = set(self.catToImgs[cat]) + else: + imgids &= set(self.catToImgs[cat]) + return list(imgids) + + def loadAnns(self, catNms=[], imgId = None, difficult=None): + """ + :param catNms: category names + :param imgId: the img to load anns + :return: objects + """ + catNms = catNms 
if _isArrayLike(catNms) else [catNms] + objects = self.ImgToAnns[imgId] + if len(catNms) == 0: + return objects + pdb.set_trace() + outobjects = [obj for obj in objects if (obj['name'] in catNms)] + return outobjects + def showAnns(self, objects, imgId, range): + """ + :param catNms: category names + :param objects: objects to show + :param imgId: img to show + :param range: display range in the img + :return: + """ + img = self.loadImgs(imgId)[0] + plt.imshow(img) + plt.axis('off') + + ax = plt.gca() + ax.set_autoscale_on(False) + polygons = [] + color = [] + circles = [] + r = 5 + # pdb.set_trace() + for obj in objects: + if obj['difficult'] != '0': + continue + c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] + poly = obj['poly'] + import pdb + # pdb.set_trace() + polygons.append(Polygon(poly)) + color.append(c) + point = poly[0] + circle = Circle((point[0], point[1]), r) + circles.append(circle) + p = PatchCollection(polygons, facecolors=color, linewidths=0, alpha=0.4) + ax.add_collection(p) + p = PatchCollection(polygons, facecolors='none', edgecolors=color, linewidths=2) + ax.add_collection(p) + p = PatchCollection(circles, facecolors='red') + ax.add_collection(p) + plt.show() + def loadImgs(self, imgids=[]): + """ + :param imgids: integer ids specifying img + :return: loaded img objects + """ + print('isarralike:', _isArrayLike(imgids)) + imgids = imgids if _isArrayLike(imgids) else [imgids] + print('imgids:', imgids) + imgs = [] + for imgid in imgids: + filename = os.path.join(self.imagepath, imgid + '.png') + print('filename:', filename) + img = cv2.imread(filename) + imgs.append(img) + return imgs + +if __name__ == '__main__': + import argparse + + parser = argparse.ArgumentParser(description='visualization of DOTA benchmark') + parser.add_argument('--dataset',dest='dataset', + help='path of dataset', + type=str,required=True) + args = parser.parse_args() + examplesplit = DOTA(args.dataset) + imgids = examplesplit.getImgIds(catNms=['ship']) + img = examplesplit.loadImgs(imgids) + for imgid in imgids: + anns = examplesplit.loadAnns(imgId=imgid) + examplesplit.showAnns(anns, imgid, 2) \ No newline at end of file diff --git a/dota_kit/ImgSplit.py b/dota_kit/ImgSplit.py new file mode 100644 index 0000000..e2fcadc --- /dev/null +++ b/dota_kit/ImgSplit.py @@ -0,0 +1,285 @@ +import os +import codecs +import numpy as np +import math +from dota_utils import GetFileFromThisRootDir +import cv2 +import shapely.geometry as shgeo +import dota_utils as util +import copy +import pdb + +def choose_best_pointorder_fit_another(poly1, poly2): + """ + To make the two polygons best fit with each point + """ + x1 = poly1[0] + y1 = poly1[1] + x2 = poly1[2] + y2 = poly1[3] + x3 = poly1[4] + y3 = poly1[5] + x4 = poly1[6] + y4 = poly1[7] + combinate = [np.array([x1, y1, x2, y2, x3, y3, x4, y4]), np.array([x2, y2, x3, y3, x4, y4, x1, y1]), + np.array([x3, y3, x4, y4, x1, y1, x2, y2]), np.array([x4, y4, x1, y1, x2, y2, x3, y3])] + dst_coordinate = np.array(poly2) + distances = np.array([np.sum((coord - dst_coordinate)**2) for coord in combinate]) + sorted = distances.argsort() + return combinate[sorted[0]] + +def cal_line_length(point1, point2): + return math.sqrt( math.pow(point1[0] - point2[0], 2) + math.pow(point1[1] - point2[1], 2)) + + +class splitbase(): + def __init__(self, + basepath, + outpath, + code = 'utf-8', + gap=512, + subsize=1024, + thresh=0.7, + choosebestpoint=True, + ext = '.png', + padding=True + ): + """ + :param basepath: base path for dota data + :param outpath: output base path 
for dota data, + the basepath and outputpath have the similar subdirectory, 'images' and 'labelTxt' + :param code: encodeing format of txt file + :param gap: overlap between two patches + :param subsize: subsize of patch + :param thresh: the thresh determine whether to keep the instance if the instance is cut down in the process of split + :param choosebestpoint: used to choose the first point for the + :param ext: ext for the image format + :param padding: if to padding the images so that all the images have the same size + """ + self.basepath = basepath + self.outpath = outpath + self.code = code + self.gap = gap + self.subsize = subsize + self.slide = self.subsize - self.gap + self.thresh = thresh + self.imagepath = os.path.join(self.basepath, 'images') + self.labelpath = os.path.join(self.basepath, 'labelTxt') + self.outimagepath = os.path.join(self.outpath, 'images') + self.outlabelpath = os.path.join(self.outpath, 'labelTxt') + self.choosebestpoint = choosebestpoint + self.ext = ext + self.padding = padding + print('padding:', padding) + + # pdb.set_trace() + if not os.path.isdir(self.outpath): + os.mkdir(self.outpath) + if not os.path.isdir(self.outimagepath): + # pdb.set_trace() + os.mkdir(self.outimagepath) + if not os.path.isdir(self.outlabelpath): + os.mkdir(self.outlabelpath) + # pdb.set_trace() + ## point: (x, y), rec: (xmin, ymin, xmax, ymax) + # def __del__(self): + # self.f_sub.close() + ## grid --> (x, y) position of grids + def polyorig2sub(self, left, up, poly): + polyInsub = np.zeros(len(poly)) + for i in range(int(len(poly)/2)): + polyInsub[i * 2] = int(poly[i * 2] - left) + polyInsub[i * 2 + 1] = int(poly[i * 2 + 1] - up) + return polyInsub + + def calchalf_iou(self, poly1, poly2): + """ + It is not the iou on usual, the iou is the value of intersection over poly1 + """ + inter_poly = poly1.intersection(poly2) + inter_area = inter_poly.area + poly1_area = poly1.area + half_iou = inter_area / poly1_area + return inter_poly, half_iou + + def saveimagepatches(self, img, subimgname, left, up): + subimg = copy.deepcopy(img[up: (up + self.subsize), left: (left + self.subsize)]) + outdir = os.path.join(self.outimagepath, subimgname + self.ext) + h, w, c = np.shape(subimg) + if (self.padding): + outimg = np.zeros((self.subsize, self.subsize, 3)) + outimg[0:h, 0:w, :] = subimg + cv2.imwrite(outdir, outimg) + else: + cv2.imwrite(outdir, subimg) + + def GetPoly4FromPoly5(self, poly): + distances = [cal_line_length((poly[i * 2], poly[i * 2 + 1] ), (poly[(i + 1) * 2], poly[(i + 1) * 2 + 1])) for i in range(int(len(poly)/2 - 1))] + distances.append(cal_line_length((poly[0], poly[1]), (poly[8], poly[9]))) + pos = np.array(distances).argsort()[0] + count = 0 + outpoly = [] + while count < 5: + #print('count:', count) + if (count == pos): + outpoly.append((poly[count * 2] + poly[(count * 2 + 2)%10])/2) + outpoly.append((poly[(count * 2 + 1)%10] + poly[(count * 2 + 3)%10])/2) + count = count + 1 + elif (count == (pos + 1)%5): + count = count + 1 + continue + + else: + outpoly.append(poly[count * 2]) + outpoly.append(poly[count * 2 + 1]) + count = count + 1 + return outpoly + + def savepatches(self, resizeimg, objects, subimgname, left, up, right, down): + outdir = os.path.join(self.outlabelpath, subimgname + '.txt') + mask_poly = [] + imgpoly = shgeo.Polygon([(left, up), (right, up), (right, down), + (left, down)]) + with codecs.open(outdir, 'w', self.code) as f_out: + for obj in objects: + gtpoly = shgeo.Polygon([(obj['poly'][0], obj['poly'][1]), + (obj['poly'][2], obj['poly'][3]), + 
(obj['poly'][4], obj['poly'][5]), + (obj['poly'][6], obj['poly'][7])]) + if (gtpoly.area <= 0): + continue + inter_poly, half_iou = self.calchalf_iou(gtpoly, imgpoly) + + # print('writing...') + if (half_iou == 1): + polyInsub = self.polyorig2sub(left, up, obj['poly']) + outline = ' '.join(list(map(str, polyInsub))) + outline = outline + ' ' + obj['name'] + ' ' + str(obj['difficult']) + f_out.write(outline + '\n') + elif (half_iou > 0): + #elif (half_iou > self.thresh): + ## print('<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<') + inter_poly = shgeo.polygon.orient(inter_poly, sign=1) + out_poly = list(inter_poly.exterior.coords)[0: -1] + if len(out_poly) < 4: + continue + + out_poly2 = [] + for i in range(len(out_poly)): + out_poly2.append(out_poly[i][0]) + out_poly2.append(out_poly[i][1]) + + if (len(out_poly) == 5): + #print('==========================') + out_poly2 = self.GetPoly4FromPoly5(out_poly2) + elif (len(out_poly) > 5): + """ + if the cut instance is a polygon with points more than 5, we do not handle it currently + """ + continue + if (self.choosebestpoint): + out_poly2 = choose_best_pointorder_fit_another(out_poly2, obj['poly']) + + polyInsub = self.polyorig2sub(left, up, out_poly2) + + for index, item in enumerate(polyInsub): + if (item <= 1): + polyInsub[index] = 1 + elif (item >= self.subsize): + polyInsub[index] = self.subsize + outline = ' '.join(list(map(str, polyInsub))) + if (half_iou > self.thresh): + outline = outline + ' ' + obj['name'] + ' ' + str(obj['difficult']) + else: + ## if the left part is too small, label as '2' + outline = outline + ' ' + obj['name'] + ' ' + '2' + f_out.write(outline + '\n') + #else: + # mask_poly.append(inter_poly) + self.saveimagepatches(resizeimg, subimgname, left, up) + + def SplitSingle(self, name, rate, extent): + """ + split a single image and ground truth + :param name: image name + :param rate: the resize scale for the image + :param extent: the image format + :return: + """ + img = cv2.imread(os.path.join(self.imagepath, name + extent)) + if np.shape(img) == (): + return + fullname = os.path.join(self.labelpath, name + '.txt') + objects = util.parse_dota_poly2(fullname) + for obj in objects: + obj['poly'] = list(map(lambda x:rate*x, obj['poly'])) + #obj['poly'] = list(map(lambda x: ([2 * y for y in x]), obj['poly'])) + + if (rate != 1): + resizeimg = cv2.resize(img, None, fx=rate, fy=rate, interpolation = cv2.INTER_CUBIC) + else: + resizeimg = img + outbasename = name + '__' + str(rate) + '__' + weight = np.shape(resizeimg)[1] + height = np.shape(resizeimg)[0] + + left, up = 0, 0 + while (left < weight): + if (left + self.subsize >= weight): + left = max(weight - self.subsize, 0) + up = 0 + while (up < height): + if (up + self.subsize >= height): + up = max(height - self.subsize, 0) + right = min(left + self.subsize, weight - 1) + down = min(up + self.subsize, height - 1) + subimgname = outbasename + str(left) + '___' + str(up) + # self.f_sub.write(name + ' ' + subimgname + ' ' + str(left) + ' ' + str(up) + '\n') + self.savepatches(resizeimg, objects, subimgname, left, up, right, down) + if (up + self.subsize >= height): + break + else: + up = up + self.slide + if (left + self.subsize >= weight): + break + else: + left = left + self.slide + + def splitdata(self, rate): + """ + :param rate: resize rate before cut + """ + + imagelist = GetFileFromThisRootDir(self.imagepath) + imagenames = [util.custombasename(x) for x in imagelist if (util.custombasename(x) != 'Thumbs')] + for name in imagenames: + self.SplitSingle(name, rate, self.ext) + + 
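+# Clarifying note (added comments, not in the original script): each saved patch is named
+# '<original name>__<rate>__<left>___<up><ext>', e.g. 'P0003__1.0__0___512.png', where (left, up)
+# is the top-left corner of the patch in the (possibly resized) source image. Windows slide with a
+# stride of (subsize - gap), i.e. 512 with the defaults subsize=1024 and gap=512, and the last
+# window in each row/column is clamped to the image border. ResultMerge.py and
+# ResultMerge_multi_process.py parse the '__' and '___' separators in these names to map
+# patch-level detections back onto the original large images.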
+if __name__ == '__main__': + # example usage of ImgSplit + import argparse + + parser = argparse.ArgumentParser(description='Splitting the DOTA images into small patches') + parser.add_argument('--dataset',dest='dataset', + help='path of dataset', + type=str,required=True) + parser.add_argument('--dest', dest='dest', + help='path of splitted dataset', + type=str,required=True) + parser.add_argument('--scale', dest='scale', + help='scale for splitting', + type=float,default=1.0) + parser.add_argument('--pad', dest='pad', + help='pad image or not', + default=True, type=bool) + args = parser.parse_args() + + # dest = os.path.join(args.dest,'scale{}'.format(args.scale)) + print('args.dataset:', args.dataset) + print('args.dest:', args.dest) + print('args.padding:', args.pad) + split = splitbase(args.dataset, + args.dest, padding=args.pad) + split.splitdata(args.scale) + diff --git a/dota_kit/ResultMerge.py b/dota_kit/ResultMerge.py new file mode 100644 index 0000000..c77f239 --- /dev/null +++ b/dota_kit/ResultMerge.py @@ -0,0 +1,249 @@ +""" + To use the code, users should to config detpath, annopath and imagesetfile + detpath is the path for 15 result files, for the format, you can refer to "http://captain.whu.edu.cn/DOTAweb/tasks.html" + search for PATH_TO_BE_CONFIGURED to config the paths + Note, the evaluation is on the large scale images +""" +import os +import numpy as np +import re +import time +import sys +sys.path.insert(0,'..') +try: + import dota_utils as util +except: + import dota_kit.dota_utils as util +import dota_kit.polyiou as polyiou +import pdb +import math + +## the thresh for nms when merge image +nms_thresh = 0.1 + +def py_cpu_nms_poly(dets, thresh): + scores = dets[:, 8] + polys = [] + areas = [] + for i in range(len(dets)): + tm_polygon = polyiou.VectorDouble([dets[i][0], dets[i][1], + dets[i][2], dets[i][3], + dets[i][4], dets[i][5], + dets[i][6], dets[i][7]]) + polys.append(tm_polygon) + order = scores.argsort()[::-1] + + keep = [] + while order.size > 0: + ovr = [] + i = order[0] + keep.append(i) + for j in range(order.size - 1): + # pdb.set_trace() + iou = polyiou.iou_poly(polys[i], polys[order[j + 1]]) + ovr.append(iou) + ovr = np.array(ovr) + + # print('ovr: ', ovr) + # print('thresh: ', thresh) + try: + if math.isnan(ovr[0]): + pdb.set_trace() + except: + pass + inds = np.where(ovr <= thresh)[0] + # print('inds: ', inds) + + order = order[inds + 1] + + return keep + +def py_cpu_nms_poly_fast(dets, thresh): + obbs = dets[:, 0:-1] + x1 = np.min(obbs[:, 0::2], axis=1) + y1 = np.min(obbs[:, 1::2], axis=1) + x2 = np.max(obbs[:, 0::2], axis=1) + y2 = np.max(obbs[:, 1::2], axis=1) + scores = dets[:, 8] + areas = (x2 - x1 + 1) * (y2 - y1 + 1) + + polys = [] + for i in range(len(dets)): + tm_polygon = polyiou.VectorDouble([dets[i][0], dets[i][1], + dets[i][2], dets[i][3], + dets[i][4], dets[i][5], + dets[i][6], dets[i][7]]) + polys.append(tm_polygon) + order = scores.argsort()[::-1] + + keep = [] + while order.size > 0: + ovr = [] + i = order[0] + keep.append(i) + # if order.size == 0: + # break + xx1 = np.maximum(x1[i], x1[order[1:]]) + yy1 = np.maximum(y1[i], y1[order[1:]]) + xx2 = np.minimum(x2[i], x2[order[1:]]) + yy2 = np.minimum(y2[i], y2[order[1:]]) + # w = np.maximum(0.0, xx2 - xx1 + 1) + # h = np.maximum(0.0, yy2 - yy1 + 1) + w = np.maximum(0.0, xx2 - xx1) + h = np.maximum(0.0, yy2 - yy1) + hbb_inter = w * h + hbb_ovr = hbb_inter / (areas[i] + areas[order[1:]] - hbb_inter) + # h_keep_inds = np.where(hbb_ovr == 0)[0] + h_inds = np.where(hbb_ovr > 0)[0] + tmp_order 
= order[h_inds + 1] + for j in range(tmp_order.size): + iou = polyiou.iou_poly(polys[i], polys[tmp_order[j]]) + hbb_ovr[h_inds[j]] = iou + # ovr.append(iou) + # ovr_index.append(tmp_order[j]) + + # ovr = np.array(ovr) + # ovr_index = np.array(ovr_index) + # print('ovr: ', ovr) + # print('thresh: ', thresh) + try: + if math.isnan(ovr[0]): + pdb.set_trace() + except: + pass + inds = np.where(hbb_ovr <= thresh)[0] + + # order_obb = ovr_index[inds] + # print('inds: ', inds) + # order_hbb = order[h_keep_inds + 1] + order = order[inds + 1] + # pdb.set_trace() + # order = np.concatenate((order_obb, order_hbb), axis=0).astype(np.int) + return keep + +def py_cpu_nms(dets, thresh): + """Pure Python NMS baseline.""" + #print('dets:', dets) + x1 = dets[:, 0] + y1 = dets[:, 1] + x2 = dets[:, 2] + y2 = dets[:, 3] + scores = dets[:, 4] + + areas = (x2 - x1 + 1) * (y2 - y1 + 1) + ## index for dets + order = scores.argsort()[::-1] + + + keep = [] + while order.size > 0: + i = order[0] + keep.append(i) + xx1 = np.maximum(x1[i], x1[order[1:]]) + yy1 = np.maximum(y1[i], y1[order[1:]]) + xx2 = np.minimum(x2[i], x2[order[1:]]) + yy2 = np.minimum(y2[i], y2[order[1:]]) + + w = np.maximum(0.0, xx2 - xx1 + 1) + h = np.maximum(0.0, yy2 - yy1 + 1) + inter = w * h + ovr = inter / (areas[i] + areas[order[1:]] - inter) + + inds = np.where(ovr <= thresh)[0] + order = order[inds + 1] + + return keep + +def nmsbynamedict(nameboxdict, nms, thresh): + nameboxnmsdict = {x: [] for x in nameboxdict} + for imgname in nameboxdict: + #print('imgname:', imgname) + #keep = py_cpu_nms(np.array(nameboxdict[imgname]), thresh) + #print('type nameboxdict:', type(nameboxnmsdict)) + #print('type imgname:', type(imgname)) + #print('type nms:', type(nms)) + keep = nms(np.array(nameboxdict[imgname]), thresh) + #print('keep:', keep) + outdets = [] + #print('nameboxdict[imgname]: ', nameboxnmsdict[imgname]) + for index in keep: + # print('index:', index) + outdets.append(nameboxdict[imgname][index]) + nameboxnmsdict[imgname] = outdets + return nameboxnmsdict +def poly2origpoly(poly, x, y, rate): + origpoly = [] + for i in range(int(len(poly)/2)): + tmp_x = float(poly[i * 2] + x) / float(rate) + tmp_y = float(poly[i * 2 + 1] + y) / float(rate) + origpoly.append(tmp_x) + origpoly.append(tmp_y) + return origpoly + +def mergebase(srcpath, dstpath, nms): + filelist = util.GetFileFromThisRootDir(srcpath) + for fullname in filelist: + name = util.custombasename(fullname) + #print('name:', name) + dstname = os.path.join(dstpath, name + '.txt') + with open(fullname, 'r') as f_in: + nameboxdict = {} + lines = f_in.readlines() + splitlines = [x.strip().split(' ') for x in lines] + for splitline in splitlines: + subname = splitline[0] + splitname = subname.split('__') + oriname = splitname[0] + pattern1 = re.compile(r'__\d+___\d+') + #print('subname:', subname) + x_y = re.findall(pattern1, subname) + x_y_2 = re.findall(r'\d+', x_y[0]) + x, y = int(x_y_2[0]), int(x_y_2[1]) + + pattern2 = re.compile(r'__([\d+\.]+)__\d+___') + + rate = re.findall(pattern2, subname)[0] + + confidence = splitline[1] + poly = list(map(float, splitline[2:])) + origpoly = poly2origpoly(poly, x, y, rate) + det = origpoly + det.append(confidence) + det = list(map(float, det)) + if (oriname not in nameboxdict): + nameboxdict[oriname] = [] + nameboxdict[oriname].append(det) + nameboxnmsdict = nmsbynamedict(nameboxdict, nms, nms_thresh) + with open(dstname, 'w') as f_out: + for imgname in nameboxnmsdict: + for det in nameboxnmsdict[imgname]: + #print('det:', det) + confidence = det[-1] 
+ bbox = det[0:-1] + outline = imgname + ' ' + str(confidence) + ' ' + ' '.join(map(str, bbox)) + #print('outline:', outline) + f_out.write(outline + '\n') +def mergebyrec(srcpath, dstpath): + """ + srcpath: result files before merge and nms + dstpath: result files after merge and nms + """ + # srcpath = r'E:\bod-dataset\results\bod-v3_rfcn_2000000' + # dstpath = r'E:\bod-dataset\results\bod-v3_rfcn_2000000_nms' + + mergebase(srcpath, + dstpath, + py_cpu_nms) +def mergebypoly(srcpath, dstpath): + """ + srcpath: result files before merge and nms + dstpath: result files after merge and nms + """ + + mergebase(srcpath, + dstpath, + py_cpu_nms_poly) +if __name__ == '__main__': + mergebypoly(r'/home/dj/code/RoITransformer/output/rcnn/DOTA16/resnet_v1_101_dotacls16_RoITrans_end2end/test/Task1_results', + r'/home/dj/code/RoITransformer/output/rcnn/DOTA16/resnet_v1_101_dotacls16_RoITrans_end2end/test/Task1_results_0.1_nms_single') + # mergebyrec() \ No newline at end of file diff --git a/dota_kit/ResultMerge_multi_process.py b/dota_kit/ResultMerge_multi_process.py new file mode 100644 index 0000000..5e98bc3 --- /dev/null +++ b/dota_kit/ResultMerge_multi_process.py @@ -0,0 +1,263 @@ +""" + To use the code, users should to config detpath, annopath and imagesetfile + detpath is the path for 15 result files, for the format, you can refer to "http://captain.whu.edu.cn/DOTAweb/tasks.html" + search for PATH_TO_BE_CONFIGURED to config the paths + Note, the evaluation is on the large scale images +""" +import os +import numpy as np +import re +import time +import sys +sys.path.insert(0,'..') +try: + import dota_utils as util +except: + import dota_kit.dota_utils as util +import dota_kit.polyiou as polyiou +import pdb +import math +from multiprocessing import Pool +from functools import partial + +## the thresh for nms when merge image +nms_thresh = 0.1 + +def py_cpu_nms_poly(dets, thresh): + scores = dets[:, 8] + polys = [] + areas = [] + for i in range(len(dets)): + tm_polygon = polyiou.VectorDouble([dets[i][0], dets[i][1], + dets[i][2], dets[i][3], + dets[i][4], dets[i][5], + dets[i][6], dets[i][7]]) + polys.append(tm_polygon) + order = scores.argsort()[::-1] + + keep = [] + while order.size > 0: + ovr = [] + i = order[0] + keep.append(i) + for j in range(order.size - 1): + iou = polyiou.iou_poly(polys[i], polys[order[j + 1]]) + ovr.append(iou) + ovr = np.array(ovr) + + # print('ovr: ', ovr) + # print('thresh: ', thresh) + try: + if math.isnan(ovr[0]): + pdb.set_trace() + except: + pass + inds = np.where(ovr <= thresh)[0] + # print('inds: ', inds) + + order = order[inds + 1] + + return keep + +def py_cpu_nms_poly_fast(dets, thresh): + obbs = dets[:, 0:-1] + x1 = np.min(obbs[:, 0::2], axis=1) + y1 = np.min(obbs[:, 1::2], axis=1) + x2 = np.max(obbs[:, 0::2], axis=1) + y2 = np.max(obbs[:, 1::2], axis=1) + scores = dets[:, 8] + areas = (x2 - x1 + 1) * (y2 - y1 + 1) + + polys = [] + for i in range(len(dets)): + tm_polygon = polyiou.VectorDouble([dets[i][0], dets[i][1], + dets[i][2], dets[i][3], + dets[i][4], dets[i][5], + dets[i][6], dets[i][7]]) + polys.append(tm_polygon) + order = scores.argsort()[::-1] + + keep = [] + while order.size > 0: + ovr = [] + i = order[0] + keep.append(i) + # if order.size == 0: + # break + xx1 = np.maximum(x1[i], x1[order[1:]]) + yy1 = np.maximum(y1[i], y1[order[1:]]) + xx2 = np.minimum(x2[i], x2[order[1:]]) + yy2 = np.minimum(y2[i], y2[order[1:]]) + # w = np.maximum(0.0, xx2 - xx1 + 1) + # h = np.maximum(0.0, yy2 - yy1 + 1) + w = np.maximum(0.0, xx2 - xx1) + h = np.maximum(0.0, 
yy2 - yy1) + hbb_inter = w * h + hbb_ovr = hbb_inter / (areas[i] + areas[order[1:]] - hbb_inter) + # h_keep_inds = np.where(hbb_ovr == 0)[0] + h_inds = np.where(hbb_ovr > 0)[0] + tmp_order = order[h_inds + 1] + for j in range(tmp_order.size): + iou = polyiou.iou_poly(polys[i], polys[tmp_order[j]]) + hbb_ovr[h_inds[j]] = iou + # ovr.append(iou) + # ovr_index.append(tmp_order[j]) + + # ovr = np.array(ovr) + # ovr_index = np.array(ovr_index) + # print('ovr: ', ovr) + # print('thresh: ', thresh) + try: + if math.isnan(ovr[0]): + pdb.set_trace() + except: + pass + inds = np.where(hbb_ovr <= thresh)[0] + + # order_obb = ovr_index[inds] + # print('inds: ', inds) + # order_hbb = order[h_keep_inds + 1] + order = order[inds + 1] + # pdb.set_trace() + # order = np.concatenate((order_obb, order_hbb), axis=0).astype(np.int) + return keep + +def py_cpu_nms(dets, thresh): + """Pure Python NMS baseline.""" + #print('dets:', dets) + x1 = dets[:, 0] + y1 = dets[:, 1] + x2 = dets[:, 2] + y2 = dets[:, 3] + scores = dets[:, 4] + + areas = (x2 - x1 + 1) * (y2 - y1 + 1) + ## index for dets + order = scores.argsort()[::-1] + + + keep = [] + while order.size > 0: + i = order[0] + keep.append(i) + xx1 = np.maximum(x1[i], x1[order[1:]]) + yy1 = np.maximum(y1[i], y1[order[1:]]) + xx2 = np.minimum(x2[i], x2[order[1:]]) + yy2 = np.minimum(y2[i], y2[order[1:]]) + + w = np.maximum(0.0, xx2 - xx1 + 1) + h = np.maximum(0.0, yy2 - yy1 + 1) + inter = w * h + ovr = inter / (areas[i] + areas[order[1:]] - inter) + + inds = np.where(ovr <= thresh)[0] + order = order[inds + 1] + + return keep + +def nmsbynamedict(nameboxdict, nms, thresh): + nameboxnmsdict = {x: [] for x in nameboxdict} + for imgname in nameboxdict: + #print('imgname:', imgname) + #keep = py_cpu_nms(np.array(nameboxdict[imgname]), thresh) + #print('type nameboxdict:', type(nameboxnmsdict)) + #print('type imgname:', type(imgname)) + #print('type nms:', type(nms)) + keep = nms(np.array(nameboxdict[imgname]), thresh) + #print('keep:', keep) + outdets = [] + #print('nameboxdict[imgname]: ', nameboxnmsdict[imgname]) + for index in keep: + # print('index:', index) + outdets.append(nameboxdict[imgname][index]) + nameboxnmsdict[imgname] = outdets + return nameboxnmsdict +def poly2origpoly(poly, x, y, rate): + origpoly = [] + for i in range(int(len(poly)/2)): + tmp_x = float(poly[i * 2] + x) / float(rate) + tmp_y = float(poly[i * 2 + 1] + y) / float(rate) + origpoly.append(tmp_x) + origpoly.append(tmp_y) + return origpoly + +def mergesingle(dstpath, nms, fullname): + name = util.custombasename(fullname) + #print('name:', name) + dstname = os.path.join(dstpath, name + '.txt') + with open(fullname, 'r') as f_in: + nameboxdict = {} + lines = f_in.readlines() + splitlines = [x.strip().split(' ') for x in lines] + for splitline in splitlines: + subname = splitline[0] + splitname = subname.split('__') + oriname = splitname[0] + pattern1 = re.compile(r'__\d+___\d+') + #print('subname:', subname) + x_y = re.findall(pattern1, subname) + x_y_2 = re.findall(r'\d+', x_y[0]) + x, y = int(x_y_2[0]), int(x_y_2[1]) + + pattern2 = re.compile(r'__([\d+\.]+)__\d+___') + + rate = re.findall(pattern2, subname)[0] + + confidence = splitline[1] + poly = list(map(float, splitline[2:])) + origpoly = poly2origpoly(poly, x, y, rate) + det = origpoly + det.append(confidence) + det = list(map(float, det)) + if (oriname not in nameboxdict): + nameboxdict[oriname] = [] + nameboxdict[oriname].append(det) + nameboxnmsdict = nmsbynamedict(nameboxdict, nms, nms_thresh) + with open(dstname, 'w') as f_out: + 
for imgname in nameboxnmsdict: + for det in nameboxnmsdict[imgname]: + #print('det:', det) + confidence = det[-1] + bbox = det[0:-1] + outline = imgname + ' ' + str(confidence) + ' ' + ' '.join(map(str, bbox)) + #print('outline:', outline) + f_out.write(outline + '\n') + +def mergebase(srcpath, dstpath, nms): + pool = Pool(32) + filelist = util.GetFileFromThisRootDir(srcpath) + + mergesingle_fn = partial(mergesingle, dstpath, nms) + # pdb.set_trace() + pool.map(mergesingle_fn, filelist) + + +def mergebyrec(srcpath, dstpath): + """ + srcpath: result files before merge and nms + dstpath: result files after merge and nms + """ + # srcpath = r'E:\bod-dataset\results\bod-v3_rfcn_2000000' + # dstpath = r'E:\bod-dataset\results\bod-v3_rfcn_2000000_nms' + + mergebase(srcpath, + dstpath, + py_cpu_nms) +def mergebypoly(srcpath, dstpath): + """ + srcpath: result files before merge and nms + dstpath: result files after merge and nms + """ + # srcpath = r'/home/dingjian/evaluation_task1/result/faster-rcnn-59/comp4_test_results' + # dstpath = r'/home/dingjian/evaluation_task1/result/faster-rcnn-59/testtime' + + # mergebase(srcpath, + # dstpath, + # py_cpu_nms_poly) + mergebase(srcpath, + dstpath, + py_cpu_nms_poly_fast) +if __name__ == '__main__': + mergebypoly(r'/home/dj/code/RoITransformer/output/rcnn/DOTA16/resnet_v1_101_dotacls16_RoITrans_end2end/test/Task1_results', + r'/home/dj/code/RoITransformer/output/rcnn/DOTA16/resnet_v1_101_dotacls16_RoITrans_end2end/test/Task1_results_0.1_nms_single') + # mergebyrec() \ No newline at end of file diff --git a/dota_kit/SplitOnlyImage.py b/dota_kit/SplitOnlyImage.py new file mode 100644 index 0000000..bbbf6ac --- /dev/null +++ b/dota_kit/SplitOnlyImage.py @@ -0,0 +1,70 @@ +import os +import numpy as np +import cv2 +import copy +import dota_utils as util + +class splitbase(): + def __init__(self, + srcpath, + dstpath, + gap=512, + subsize=1024, + ext='.png'): + # self.srcpath = srcpath + # self.outpath = dstpath + self.gap = gap + self.subsize = subsize + self.slide = self.subsize - self.gap + self.srcpath = srcpath + self.dstpath = dstpath + self.ext = ext + def saveimagepatches(self, img, subimgname, left, up, ext='.png'): + subimg = copy.deepcopy(img[up: (up + self.subsize), left: (left + self.subsize)]) + outdir = os.path.join(self.dstpath, subimgname + ext) + cv2.imwrite(outdir, subimg) + + def SplitSingle(self, name, rate, extent): + img = cv2.imread(os.path.join(self.srcpath, name + extent)) + assert np.shape(img) != () + + if (rate != 1): + resizeimg = cv2.resize(img, None, fx=rate, fy=rate, interpolation = cv2.INTER_CUBIC) + else: + resizeimg = img + outbasename = name + '__' + str(rate) + '__' + + weight = np.shape(resizeimg)[1] + height = np.shape(resizeimg)[0] + + left, up = 0, 0 + while (left < weight): + if (left + self.subsize >= weight): + left = max(weight - self.subsize, 0) + up = 0 + while (up < height): + if (up + self.subsize >= height): + up = max(height - self.subsize, 0) + subimgname = outbasename + str(left) + '___' + str(up) + self.saveimagepatches(resizeimg, subimgname, left, up) + if (up + self.subsize >= height): + break + else: + up = up + self.slide + if (left + self.subsize >= weight): + break + else: + left = left + self.slide + + def splitdata(self, rate): + + imagelist = util.GetFileFromThisRootDir(self.srcpath) + imagenames = [util.custombasename(x) for x in imagelist if (util.custombasename(x) != 'Thumbs')] + for name in imagenames: + self.SplitSingle(name, rate, self.ext) +if __name__ == '__main__': + split = 
splitbase(r'/data/dota_new/dota/test/images', + r'/data/dota_new/dota/split-1024/test/images') + split.splitdata(1) + split.splitdata(2) + split.splitdata(0.5) \ No newline at end of file diff --git a/dota_kit/__init__.py b/dota_kit/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/dota_kit/dota_evaluation_task1.py b/dota_kit/dota_evaluation_task1.py new file mode 100644 index 0000000..d6e7c7d --- /dev/null +++ b/dota_kit/dota_evaluation_task1.py @@ -0,0 +1,489 @@ +# -------------------------------------------------------- +# dota_evaluation_task1 +# Licensed under The MIT License [see LICENSE for details] +# Written by Jian Ding, based on code from Bharath Hariharan +# -------------------------------------------------------- + +""" + To use the code, users should to config detpath, annopath and imagesetfile + detpath is the path for 15 result files, for the format, you can refer to "http://captain.whu.edu.cn/DOTAweb/tasks.html" + search for PATH_TO_BE_CONFIGURED to config the paths + Note, the evaluation is on the large scale images +""" +import xml.etree.ElementTree as ET +import os +#TODO: finish it +#import cPickle +import numpy as np +import matplotlib.pyplot as plt +import polyiou +from multiprocessing import Pool +from functools import partial + +def parse_gt(filename): + """ + + :param filename: ground truth file to parse + :return: all instances in a picture + """ + objects = [] + with open(filename, 'r') as f: + while True: + line = f.readline() + if line: + splitlines = line.strip().split(' ') + object_struct = {} + if (len(splitlines) < 9): + continue + object_struct['name'] = splitlines[8] + + if (len(splitlines) == 9): + object_struct['difficult'] = 0 + elif (len(splitlines) == 10): + object_struct['difficult'] = int(splitlines[9]) + object_struct['bbox'] = [float(splitlines[0]), + float(splitlines[1]), + float(splitlines[2]), + float(splitlines[3]), + float(splitlines[4]), + float(splitlines[5]), + float(splitlines[6]), + float(splitlines[7])] + objects.append(object_struct) + else: + break + return objects +def voc_ap(rec, prec, use_07_metric=False): + """ ap = voc_ap(rec, prec, [use_07_metric]) + Compute VOC AP given precision and recall. + If use_07_metric is true, uses the + VOC 07 11 point method (default:False). + """ + if use_07_metric: + # 11 point metric + ap = 0. + for t in np.arange(0., 1.1, 0.1): + if np.sum(rec >= t) == 0: + p = 0 + else: + p = np.max(prec[rec >= t]) + ap = ap + p / 11. + else: + # correct AP calculation + # first append sentinel values at the end + mrec = np.concatenate(([0.], rec, [1.])) + mpre = np.concatenate(([0.], prec, [0.])) + + # compute the precision envelope + for i in range(mpre.size - 1, 0, -1): + mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) + + # to calculate area under PR curve, look for points + # where X axis (recall) changes value + i = np.where(mrec[1:] != mrec[:-1])[0] + + # and sum (\Delta recall) * prec + ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) + return ap + + +def voc_eval(detpath, + annopath, + imagesetfile, + classname, + # cachedir, + ovthresh=0.5, + use_07_metric=False): + """rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + [ovthresh], + [use_07_metric]) + Top level function that does the PASCAL VOC evaluation. + detpath: Path to detections + detpath.format(classname) should produce the detection results file. + annopath: Path to annotations + annopath.format(imagename) should be the xml annotations file. 
+ imagesetfile: Text file containing the list of images, one image per line. + classname: Category name (duh) + cachedir: Directory for caching the annotations + [ovthresh]: Overlap threshold (default = 0.5) + [use_07_metric]: Whether to use VOC07's 11 point AP computation + (default False) + """ + # assumes detections are in detpath.format(classname) + # assumes annotations are in annopath.format(imagename) + # assumes imagesetfile is a text file with each line an image name + # cachedir caches the annotations in a pickle file + + # first load gt + #if not os.path.isdir(cachedir): + # os.mkdir(cachedir) + #cachefile = os.path.join(cachedir, 'annots.pkl') + # read list of images + print('eval ' + classname) + with open(imagesetfile, 'r') as f: + lines = f.readlines() + imagenames = [x.strip() for x in lines] + #print('imagenames: ', imagenames) + #if not os.path.isfile(cachefile): + # load annots + recs = {} + for i, imagename in enumerate(imagenames): + #print('parse_files name: ', annopath.format(imagename)) + recs[imagename] = parse_gt(annopath.format(imagename)) + #if i % 100 == 0: + # print ('Reading annotation for {:d}/{:d}'.format( + # i + 1, len(imagenames)) ) + # save + #print ('Saving cached annotations to {:s}'.format(cachefile)) + #with open(cachefile, 'w') as f: + # cPickle.dump(recs, f) + #else: + # load + #with open(cachefile, 'r') as f: + # recs = cPickle.load(f) + + # extract gt objects for this class + class_recs = {} + npos = 0 + for imagename in imagenames: + R = [obj for obj in recs[imagename] if obj['name'] == classname] + bbox = np.array([x['bbox'] for x in R]) + difficult = np.array([x['difficult'] for x in R]).astype(np.bool) + det = [False] * len(R) + npos = npos + sum(~difficult) + class_recs[imagename] = {'bbox': bbox, + 'difficult': difficult, + 'det': det} + + # read dets from Task1* files + detfile = detpath.format(classname) + with open(detfile, 'r') as f: + lines = f.readlines() + + splitlines = [x.strip().split(' ') for x in lines] + image_ids = [x[0] for x in splitlines] + confidence = np.array([float(x[1]) for x in splitlines]) + + #print('check confidence: ', confidence) + + BB = np.array([[float(z) for z in x[2:]] for x in splitlines]) + + # sort by confidence + sorted_ind = np.argsort(-confidence) + sorted_scores = np.sort(-confidence) + + #print('check sorted_scores: ', sorted_scores) + #print('check sorted_ind: ', sorted_ind) + + ## note the usage only in numpy not for list + BB = BB[sorted_ind, :] + image_ids = [image_ids[x] for x in sorted_ind] + #print('check imge_ids: ', image_ids) + #print('imge_ids len:', len(image_ids)) + # go down dets and mark TPs and FPs + nd = len(image_ids) + tp = np.zeros(nd) + fp = np.zeros(nd) + for d in range(nd): + R = class_recs[image_ids[d]] + bb = BB[d, :].astype(float) + ovmax = -np.inf + BBGT = R['bbox'].astype(float) + + ## compute det bb with each BBGT + + if BBGT.size > 0: + # compute overlaps + # intersection + + def calcoverlaps(BBGT, bb): + overlaps = [] + for index, GT in enumerate(BBGT): + + overlap = polyiou.iou_poly(polyiou.VectorDouble(BBGT[index]), polyiou.VectorDouble(bb)) + overlaps.append(overlap) + return overlaps + overlaps = calcoverlaps(BBGT, bb) + + ovmax = np.max(overlaps) + jmax = np.argmax(overlaps) + + if ovmax > ovthresh: + if not R['difficult'][jmax]: + if not R['det'][jmax]: + tp[d] = 1. + R['det'][jmax] = 1 + else: + fp[d] = 1. + else: + fp[d] = 1. 
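+    # Clarifying note (added comments): the loop above performs greedy matching in descending
+    # confidence order. A detection is a true positive only if its polygon IoU with a not-yet-matched,
+    # non-difficult ground truth exceeds ovthresh; each ground truth can be matched at most once
+    # (R['det'][jmax] marks it as used), so duplicate detections of the same object count as false
+    # positives. Detections whose best match is a 'difficult' ground truth are ignored (neither tp
+    # nor fp), and npos counts only non-difficult ground truths, so the recall below is tp / npos.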
+ + # compute precision recall + + print('check fp:', fp) + print('check tp', tp) + + + print('npos num:', npos) + fp = np.cumsum(fp) + tp = np.cumsum(tp) + + rec = tp / float(npos) + # avoid divide by zero in case the first detection matches a difficult + # ground truth + prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps) + ap = voc_ap(rec, prec, use_07_metric) + + return rec, prec, ap + +def single_voc_eval_warp(detpath, + annopath, + imagesetfile, + classname, + # cachedir, + ovthresh=0.5, + use_07_metric=False): + rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + # cachedir, + ovthresh=ovthresh, + use_07_metric=use_07_metric) + return ap +def main(): + + # detpath = r'/home/dingjian/evaluation_task1/result/faster-rcnn-59/comp4_testnms_c_extension_0.1/comp4_det_test_{:s}.txt' + # annopath = r'/home/dingjian/evaluation_task1/testset/wordlabel-utf-8/{:s}.txt' + # imagesetfile = r'/home/dingjian/evaluation_task1/testset/testset.txt' + # classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + # 'basketball-court', 'storage-tank', 'soccer-ball-field', 'turntable', 'harbor', 'swimming-pool', 'helicopter'] + + detpath = r'PATH_TO_BE_CONFIGURED/Task1_{:s}.txt' + annopath = r'PATH_TO_BE_CONFIGURED/{:s}.txt' # change the directory to the path of val/labelTxt, if you want to do evaluation on the valset + imagesetfile = r'PATH_TO_BE_CONFIGURED/valset.txt' + + classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter'] + + classaps = [] + map = 0 + for classname in classnames: + print('classname:', classname) + rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + ovthresh=0.5, + use_07_metric=True) + map = map + ap + #print('rec: ', rec, 'prec: ', prec, 'ap: ', ap) + print('ap: ', ap) + classaps.append(ap) + + # umcomment to show p-r curve of each category + # plt.figure(figsize=(8,4)) + # plt.xlabel('recall') + # plt.ylabel('precision') + # plt.plot(rec, prec) + # plt.show() + map = map/len(classnames) + print('map:', map) + classaps = 100*np.array(classaps) + print('classaps: ', classaps) + +def eval_DOTA_Task1(detpath, annopath, imagesetfile): + + # detpath = r'/home/dingjian/evaluation_task1/result/faster-rcnn-59/comp4_testnms_c_extension_0.1/comp4_det_test_{:s}.txt' + # annopath = r'/home/dingjian/evaluation_task1/testset/wordlabel-utf-8/{:s}.txt' + # imagesetfile = r'/home/dingjian/evaluation_task1/testset/testset.txt' + # classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + # 'basketball-court', 'storage-tank', 'soccer-ball-field', 'turntable', 'harbor', 'swimming-pool', 'helicopter'] + + # detpath = r'PATH_TO_BE_CONFIGURED/Task1_{:s}.txt' + # annopath = r'PATH_TO_BE_CONFIGURED/{:s}.txt' # change the directory to the path of val/labelTxt, if you want to do evaluation on the valset + # imagesetfile = r'PATH_TO_BE_CONFIGURED/valset.txt' + classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter'] + + classaps = [] + map = 0 + # TODO: change it to pool + for classname in classnames: + print('classname:', 
classname) + rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + ovthresh=0.5, + use_07_metric=True) + map = map + ap + #print('rec: ', rec, 'prec: ', prec, 'ap: ', ap) + print('ap: ', ap) + classaps.append(ap) + + # umcomment to show p-r curve of each category + # plt.figure(figsize=(8,4)) + # plt.xlabel('recall') + # plt.ylabel('precision') + # plt.plot(rec, prec) + # plt.savefig + #plt.show() + map = map/len(classnames) + print('map:', map) + classaps = 100*np.array(classaps) + print('classaps: ', classaps) + # with open(detpath + '/mAP.txt', 'w') as f_out: + # f_out.write('mAP: ' + str(map) + '\n') + # f_out.write('classaps: ' + str(classaps)) + return map, classaps + +def eval_HRSC_L1(detpath, annopath, imagesetfile): + + # detpath = r'/home/dingjian/evaluation_task1/result/faster-rcnn-59/comp4_testnms_c_extension_0.1/comp4_det_test_{:s}.txt' + # annopath = r'/home/dingjian/evaluation_task1/testset/wordlabel-utf-8/{:s}.txt' + # imagesetfile = r'/home/dingjian/evaluation_task1/testset/testset.txt' + # classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + # 'basketball-court', 'storage-tank', 'soccer-ball-field', 'turntable', 'harbor', 'swimming-pool', 'helicopter'] + + # detpath = r'PATH_TO_BE_CONFIGURED/Task1_{:s}.txt' + # annopath = r'PATH_TO_BE_CONFIGURED/{:s}.txt' # change the directory to the path of val/labelTxt, if you want to do evaluation on the valset + # imagesetfile = r'PATH_TO_BE_CONFIGURED/valset.txt' + classnames = ['ship'] + classaps = [] + map = 0 + # TODO: change it to pool + for classname in classnames: + print('classname:', classname) + rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + ovthresh=0.5, + use_07_metric=True) + map = map + ap + #print('rec: ', rec, 'prec: ', prec, 'ap: ', ap) + print('ap: ', ap) + classaps.append(ap) + + # umcomment to show p-r curve of each category + # plt.figure(figsize=(8,4)) + # plt.xlabel('recall') + # plt.ylabel('precision') + # plt.plot(rec, prec) + # plt.savefig + #plt.show() + map = map/len(classnames) + print('map:', map) + classaps = 100*np.array(classaps) + print('classaps: ', classaps) + # with open(detpath + '/mAP.txt', 'w') as f_out: + # f_out.write('mAP: ' + str(map) + '\n') + # f_out.write('classaps: ' + str(classaps)) + return map, classaps + + +def eval_vehicle(detpath, annopath, imagesetfile): + + # detpath = r'/home/dingjian/evaluation_task1/result/faster-rcnn-59/comp4_testnms_c_extension_0.1/comp4_det_test_{:s}.txt' + # annopath = r'/home/dingjian/evaluation_task1/testset/wordlabel-utf-8/{:s}.txt' + # imagesetfile = r'/home/dingjian/evaluation_task1/testset/testset.txt' + # classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + # 'basketball-court', 'storage-tank', 'soccer-ball-field', 'turntable', 'harbor', 'swimming-pool', 'helicopter'] + + # detpath = r'PATH_TO_BE_CONFIGURED/Task1_{:s}.txt' + # annopath = r'PATH_TO_BE_CONFIGURED/{:s}.txt' # change the directory to the path of val/labelTxt, if you want to do evaluation on the valset + # imagesetfile = r'PATH_TO_BE_CONFIGURED/valset.txt' + classnames = ['vehicle'] + classaps = [] + map = 0 + # TODO: change it to pool + for classname in classnames: + print('classname:', classname) + rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + ovthresh=0.5, + use_07_metric=True) + map = map + ap + #print('rec: ', rec, 'prec: ', prec, 'ap: ', 
ap) + print('ap: ', ap) + classaps.append(ap) + + # umcomment to show p-r curve of each category + # plt.figure(figsize=(8,4)) + # plt.xlabel('recall') + # plt.ylabel('precision') + # plt.plot(rec, prec) + # plt.savefig + #plt.show() + map = map/len(classnames) + print('map:', map) + classaps = 100*np.array(classaps) + print('classaps: ', classaps) + # with open(detpath + '/mAP.txt', 'w') as f_out: + # f_out.write('mAP: ' + str(map) + '\n') + # f_out.write('classaps: ' + str(classaps)) + return map, classaps + +def eval_DOTA_Task1_multi_process(detpath, annopath, imagesetfile): + + # detpath = r'/home/dingjian/evaluation_task1/result/faster-rcnn-59/comp4_testnms_c_extension_0.1/comp4_det_test_{:s}.txt' + # annopath = r'/home/dingjian/evaluation_task1/testset/wordlabel-utf-8/{:s}.txt' + # imagesetfile = r'/home/dingjian/evaluation_task1/testset/testset.txt' + # classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + # 'basketball-court', 'storage-tank', 'soccer-ball-field', 'turntable', 'harbor', 'swimming-pool', 'helicopter'] + # detpath = r'PATH_TO_BE_CONFIGURED/Task1_{:s}.txt' + # annopath = r'PATH_TO_BE_CONFIGURED/{:s}.txt' # change the directory to the path of val/labelTxt, if you want to do evaluation on the valset + # imagesetfile = r'PATH_TO_BE_CONFIGURED/valset.txt' + classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter'] + pool = Pool(80) + classaps = [] + mAP = 0 + # TODO: change it to pool + eval_fn = partial(single_voc_eval_warp, detpath, annopath, imagesetfile, ovthresh=0.5, use_07_metric=True) + aps = pool.map(eval_fn, classnames) + # for classname in classnames: + # print('classname:', classname) + # rec, prec, ap = voc_eval(detpath, + # annopath, + # imagesetfile, + # classname, + # ovthresh=0.5, + # use_07_metric=True) + # map = map + ap + # #print('rec: ', rec, 'prec: ', prec, 'ap: ', ap) + # print('ap: ', ap) + # classaps.append(ap) + # + # # umcomment to show p-r curve of each category + # # plt.figure(figsize=(8,4)) + # # plt.xlabel('recall') + # # plt.ylabel('precision') + # # plt.plot(rec, prec) + # # plt.savefig + # #plt.show() + for i in range(len(classnames)): + print('classname:', classnames[i]) + mAP = mAP + aps[i] + print('ap: ', aps[i]) + mAP = mAP/len(classnames) + print('map:', mAP) + classaps = 100*np.array(aps) + print('classaps: ', classaps) + # with open(detpath + '/mAP.txt', 'w') as f_out: + # f_out.write('mAP: ' + str(map) + '\n') + # f_out.write('classaps: ' + str(classaps)) + return mAP, classaps + +if __name__ == '__main__': + # main() + detpath = os.path.join(r'/home/dingjian/jianding/code/RoITransformer/output/rcnn/DOTA/resnet_v1_101_dota_light_head_best_point_trainval_rcnn_end2end_deform_real_psroi_v3/test/Task1_results_0.1_nms') + '/Task1_{:s}.txt' + annopath = r'/home/dingjian/jianding/code/RoITransformer/data/test/labelTxt/{:s}.txt' + imagesetfile = r'/home/dingjian/jianding/code/RoITransformer/data/test/testset.txt' + eval_DOTA_Task1_multi_process(detpath, annopath, imagesetfile) \ No newline at end of file diff --git a/dota_kit/dota_evaluation_task2.py b/dota_kit/dota_evaluation_task2.py new file mode 100644 index 0000000..56822e1 --- /dev/null +++ b/dota_kit/dota_evaluation_task2.py @@ -0,0 +1,268 @@ +# -------------------------------------------------------- +# 
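`eval_DOTA_Task1_multi_process` farms the 15 per-class `voc_eval` calls out to a `multiprocessing.Pool` through `functools.partial` (its worker, `single_voc_eval_warp`, is defined earlier in the file). A minimal sketch of the same pattern, with a hypothetical wrapper and a configurable pool size in place of the hard-coded `Pool(80)`:

```
from functools import partial
from multiprocessing import Pool

def eval_one_class(detpath, annopath, imagesetfile, classname,
                   ovthresh=0.5, use_07_metric=True):
    # Thin wrapper so Pool.map only varies the class name; returns the AP.
    # Assumes voc_eval (defined above in this file) is in scope.
    _, _, ap = voc_eval(detpath, annopath, imagesetfile, classname,
                        ovthresh=ovthresh, use_07_metric=use_07_metric)
    return ap

def eval_classes_parallel(detpath, annopath, imagesetfile, classnames, workers=4):
    eval_fn = partial(eval_one_class, detpath, annopath, imagesetfile)
    pool = Pool(workers)              # one process per worker
    aps = pool.map(eval_fn, classnames)
    pool.close()
    pool.join()
    return sum(aps) / len(aps), [100 * ap for ap in aps]
```

Keeping the pool size close to the number of classes (or CPU cores) avoids spawning far more processes than there is work for.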
dota_evaluation_task1 +# Licensed under The MIT License [see LICENSE for details] +# Written by Jian Ding, based on code from Bharath Hariharan +# -------------------------------------------------------- + +""" + To use the code, users should to config detpath, annopath and imagesetfile + detpath is the path for 15 result files, for the format, you can refer to "http://captain.whu.edu.cn/DOTAweb/tasks.html" + search for PATH_TO_BE_CONFIGURED to config the paths + Note, the evaluation is on the large scale images +""" +import xml.etree.ElementTree as ET +import os +#import cPickle +import numpy as np +import matplotlib.pyplot as plt + +def parse_gt(filename): + objects = [] + with open(filename, 'r') as f: + lines = f.readlines() + splitlines = [x.strip().split(' ') for x in lines] + for splitline in splitlines: + object_struct = {} + object_struct['name'] = splitline[8] + if (len(splitline) == 9): + object_struct['difficult'] = 0 + elif (len(splitline) == 10): + object_struct['difficult'] = int(splitline[9]) + # object_struct['difficult'] = 0 + object_struct['bbox'] = [int(float(splitline[0])), + int(float(splitline[1])), + int(float(splitline[4])), + int(float(splitline[5]))] + w = int(float(splitline[4])) - int(float(splitline[0])) + h = int(float(splitline[5])) - int(float(splitline[1])) + object_struct['area'] = w * h + #print('area:', object_struct['area']) + # if object_struct['area'] < (15 * 15): + # #print('area:', object_struct['area']) + # object_struct['difficult'] = 1 + objects.append(object_struct) + return objects +def voc_ap(rec, prec, use_07_metric=False): + """ ap = voc_ap(rec, prec, [use_07_metric]) + Compute VOC AP given precision and recall. + If use_07_metric is true, uses the + VOC 07 11 point method (default:False). + """ + if use_07_metric: + # 11 point metric + ap = 0. + for t in np.arange(0., 1.1, 0.1): + if np.sum(rec >= t) == 0: + p = 0 + else: + p = np.max(prec[rec >= t]) + ap = ap + p / 11. + else: + # correct AP calculation + # first append sentinel values at the end + mrec = np.concatenate(([0.], rec, [1.])) + mpre = np.concatenate(([0.], prec, [0.])) + + # compute the precision envelope + for i in range(mpre.size - 1, 0, -1): + mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) + + # to calculate area under PR curve, look for points + # where X axis (recall) changes value + i = np.where(mrec[1:] != mrec[:-1])[0] + + # and sum (\Delta recall) * prec + ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) + return ap + +def voc_eval(detpath, + annopath, + imagesetfile, + classname, + # cachedir, + ovthresh=0.5, + use_07_metric=False): + """rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + [ovthresh], + [use_07_metric]) + Top level function that does the PASCAL VOC evaluation. + detpath: Path to detections + detpath.format(classname) should produce the detection results file. + annopath: Path to annotations + annopath.format(imagename) should be the xml annotations file. + imagesetfile: Text file containing the list of images, one image per line. 
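`voc_ap` implements both AP rules used for PASCAL VOC: the 11-point interpolation (`use_07_metric=True`, which is what the `eval_*` helpers above pass) and the exact area under the interpolated precision-recall curve. A small worked example with made-up numbers, assuming `voc_ap` from this file is in scope:

```
import numpy as np

# Hypothetical precision/recall points for four detections.
rec  = np.array([0.25, 0.50, 0.75, 1.00])
prec = np.array([1.00, 0.50, 0.66, 0.50])

# "All points" rule: the precision envelope at these recalls is
# [1.0, 0.66, 0.66, 0.5], so AP = 0.25*(1.0 + 0.66 + 0.66 + 0.5) = 0.705.
print(voc_ap(rec, prec, use_07_metric=False))

# VOC07 rule: average, over recall thresholds 0.0, 0.1, ..., 1.0, of the best
# precision achieved at recall >= each threshold.
print(voc_ap(rec, prec, use_07_metric=True))
```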
+ classname: Category name (duh) + cachedir: Directory for caching the annotations + [ovthresh]: Overlap threshold (default = 0.5) + [use_07_metric]: Whether to use VOC07's 11 point AP computation + (default False) + """ + # assumes detections are in detpath.format(classname) + # assumes annotations are in annopath.format(imagename) + # assumes imagesetfile is a text file with each line an image name + # cachedir caches the annotations in a pickle file + + # first load gt + #if not os.path.isdir(cachedir): + # os.mkdir(cachedir) + #cachefile = os.path.join(cachedir, 'annots.pkl') + # read list of images + with open(imagesetfile, 'r') as f: + lines = f.readlines() + imagenames = [x.strip() for x in lines] + #print('imagenames: ', imagenames) + #if not os.path.isfile(cachefile): + # load annots + recs = {} + for i, imagename in enumerate(imagenames): + #print('parse_files name: ', annopath.format(imagename)) + recs[imagename] = parse_gt(annopath.format(imagename)) + #if i % 100 == 0: + # print ('Reading annotation for {:d}/{:d}'.format( + # i + 1, len(imagenames)) ) + # save + #print ('Saving cached annotations to {:s}'.format(cachefile)) + #with open(cachefile, 'w') as f: + # cPickle.dump(recs, f) + #else: + # load + #with open(cachefile, 'r') as f: + # recs = cPickle.load(f) + + # extract gt objects for this class + class_recs = {} + npos = 0 + for imagename in imagenames: + R = [obj for obj in recs[imagename] if obj['name'] == classname] + bbox = np.array([x['bbox'] for x in R]) + difficult = np.array([x['difficult'] for x in R]).astype(np.bool) + det = [False] * len(R) + npos = npos + sum(~difficult) + class_recs[imagename] = {'bbox': bbox, + 'difficult': difficult, + 'det': det} + + # read dets + detfile = detpath.format(classname) + with open(detfile, 'r') as f: + lines = f.readlines() + + splitlines = [x.strip().split(' ') for x in lines] + image_ids = [x[0] for x in splitlines] + confidence = np.array([float(x[1]) for x in splitlines]) + + #print('check confidence: ', confidence) + + BB = np.array([[float(z) for z in x[2:]] for x in splitlines]) + + # sort by confidence + sorted_ind = np.argsort(-confidence) + sorted_scores = np.sort(-confidence) + + #print('check sorted_scores: ', sorted_scores) + #print('check sorted_ind: ', sorted_ind) + BB = BB[sorted_ind, :] + image_ids = [image_ids[x] for x in sorted_ind] + #print('check imge_ids: ', image_ids) + #print('imge_ids len:', len(image_ids)) + # go down dets and mark TPs and FPs + nd = len(image_ids) + tp = np.zeros(nd) + fp = np.zeros(nd) + for d in range(nd): + R = class_recs[image_ids[d]] + bb = BB[d, :].astype(float) + ovmax = -np.inf + BBGT = R['bbox'].astype(float) + + if BBGT.size > 0: + # compute overlaps + # intersection + ixmin = np.maximum(BBGT[:, 0], bb[0]) + iymin = np.maximum(BBGT[:, 1], bb[1]) + ixmax = np.minimum(BBGT[:, 2], bb[2]) + iymax = np.minimum(BBGT[:, 3], bb[3]) + iw = np.maximum(ixmax - ixmin + 1., 0.) + ih = np.maximum(iymax - iymin + 1., 0.) + inters = iw * ih + + # union + uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) + + (BBGT[:, 2] - BBGT[:, 0] + 1.) * + (BBGT[:, 3] - BBGT[:, 1] + 1.) - inters) + + overlaps = inters / uni + ovmax = np.max(overlaps) + ## if there exist 2 + jmax = np.argmax(overlaps) + + if ovmax > ovthresh: + if not R['difficult'][jmax]: + if not R['det'][jmax]: + tp[d] = 1. + R['det'][jmax] = 1 + else: + fp[d] = 1. + # print('filename:', image_ids[d]) + else: + fp[d] = 1. 
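    # In short: detections are scanned in descending confidence order and greedily
    # matched to the ground-truth box with the highest IoU. A match above ovthresh
    # to an unclaimed, non-difficult box is a true positive (and claims that box);
    # a duplicate match or a sub-threshold match is a false positive; detections
    # whose best-matching box is marked difficult are ignored.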
+ + # compute precision recall + + print('check fp:', fp) + print('check tp', tp) + + + print('npos num:', npos) + fp = np.cumsum(fp) + tp = np.cumsum(tp) + + rec = tp / float(npos) + # avoid divide by zero in case the first detection matches a difficult + # ground truth + prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps) + ap = voc_ap(rec, prec, use_07_metric) + + return rec, prec, ap + +def main(): + # detpath = r'E:\documentation\OneDrive\documentation\DotaEvaluation\evluation_task2\evluation_task2\faster-rcnn-nms_0.3_task2\nms_0.3_task\Task2_{:s}.txt' + # annopath = r'I:\dota\testset\ReclabelTxt-utf-8\{:s}.txt' + # imagesetfile = r'I:\dota\testset\va.txt' + + detpath = r'PATH_TO_BE_CONFIGURED/Task1_{:s}.txt' + annopath = r'PATH_TO_BE_CONFIGURED/{:s}.txt'# change the directory to the path of val/labelTxt, if you want to do evaluation on the valset + imagesetfile = r'PATH_TO_BE_CONFIGURED/valset.txt' + + classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter'] + classaps = [] + map = 0 + for classname in classnames: + print('classname:', classname) + rec, prec, ap = voc_eval(detpath, + annopath, + imagesetfile, + classname, + ovthresh=0.5, + use_07_metric=True) + map = map + ap + #print('rec: ', rec, 'prec: ', prec, 'ap: ', ap) + print('ap: ', ap) + classaps.append(ap) + + ## uncomment to plot p-r curve for each category + # plt.figure(figsize=(8,4)) + # plt.xlabel('recall') + # plt.ylabel('precision') + # plt.plot(rec, prec) + # plt.show() + map = map/len(classnames) + print('map:', map) + classaps = 100*np.array(classaps) + print('classaps: ', classaps) +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/dota_kit/dota_utils.py b/dota_kit/dota_utils.py new file mode 100644 index 0000000..c8c58f7 --- /dev/null +++ b/dota_kit/dota_utils.py @@ -0,0 +1,227 @@ +import sys +import codecs +import numpy as np +import shapely.geometry as shgeo +import os +import re +import math +""" + some basic functions which are useful for process DOTA data +""" + +wordname_15 = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter'] + +def custombasename(fullname): + return os.path.basename(os.path.splitext(fullname)[0]) + +def GetFileFromThisRootDir(dir,ext = None): + allfiles = [] + needExtFilter = (ext != None) + for root,dirs,files in os.walk(dir): + for filespath in files: + filepath = os.path.join(root, filespath) + extension = os.path.splitext(filepath)[1][1:] + if needExtFilter and extension in ext: + allfiles.append(filepath) + elif not needExtFilter: + allfiles.append(filepath) + return allfiles + +def TuplePoly2Poly(poly): + outpoly = [poly[0][0], poly[0][1], + poly[1][0], poly[1][1], + poly[2][0], poly[2][1], + poly[3][0], poly[3][1] + ] + return outpoly + +def parse_dota_poly(filename): + """ + parse the dota ground truth in the format: + [(x1, y1), (x2, y2), (x3, y3), (x4, y4)] + """ + objects = [] + #print('filename:', filename) + f = [] + if (sys.version_info >= (3, 5)): + fd = open(filename, 'r') + f = fd + elif (sys.version_info >= 2.7): + fd = codecs.open(filename, 'r') + f = fd + # count = 0 + while True: + line = f.readline() + # count = count + 1 + # if count < 2: + # continue + if line: + 
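            # Expected shape of each annotation line (inferred from the parsing
            # below): "x1 y1 x2 y2 x3 y3 x4 y4 category [difficult]" -- the four
            # corner points of the oriented box, a class name, and an optional
            # difficulty flag. Lines with fewer than 9 fields are skipped.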
splitlines = line.strip().split(' ') + object_struct = {} + ### clear the wrong name after check all the data + #if (len(splitlines) >= 9) and (splitlines[8] in classname): + if (len(splitlines) < 9): + continue + if (len(splitlines) >= 9): + object_struct['name'] = splitlines[8] + if (len(splitlines) == 9): + object_struct['difficult'] = '0' + elif (len(splitlines) >= 10): + # if splitlines[9] == '1': + # if (splitlines[9] == 'tr'): + # object_struct['difficult'] = '1' + # else: + object_struct['difficult'] = splitlines[9] + # else: + # object_struct['difficult'] = 0 + object_struct['poly'] = [(float(splitlines[0]), float(splitlines[1])), + (float(splitlines[2]), float(splitlines[3])), + (float(splitlines[4]), float(splitlines[5])), + (float(splitlines[6]), float(splitlines[7])) + ] + gtpoly = shgeo.Polygon(object_struct['poly']) + object_struct['area'] = gtpoly.area + # poly = list(map(lambda x:np.array(x), object_struct['poly'])) + # object_struct['long-axis'] = max(distance(poly[0], poly[1]), distance(poly[1], poly[2])) + # object_struct['short-axis'] = min(distance(poly[0], poly[1]), distance(poly[1], poly[2])) + # if (object_struct['long-axis'] < 15): + # object_struct['difficult'] = '1' + # global small_count + # small_count = small_count + 1 + objects.append(object_struct) + else: + break + return objects + +def parse_dota_poly2(filename): + """ + parse the dota ground truth in the format: + [x1, y1, x2, y2, x3, y3, x4, y4] + """ + objects = parse_dota_poly(filename) + for obj in objects: + obj['poly'] = TuplePoly2Poly(obj['poly']) + obj['poly'] = list(map(int, obj['poly'])) + return objects + +def parse_dota_rec(filename): + """ + parse the dota ground truth in the bounding box format: + "xmin, ymin, xmax, ymax" + """ + objects = parse_dota_poly(filename) + for obj in objects: + poly = obj['poly'] + bbox = dots4ToRec4(poly) + obj['bndbox'] = bbox + return objects +## bounding box transfer for varies format + +def dots4ToRec4(poly): + xmin, xmax, ymin, ymax = min(poly[0][0], min(poly[1][0], min(poly[2][0], poly[3][0]))), \ + max(poly[0][0], max(poly[1][0], max(poly[2][0], poly[3][0]))), \ + min(poly[0][1], min(poly[1][1], min(poly[2][1], poly[3][1]))), \ + max(poly[0][1], max(poly[1][1], max(poly[2][1], poly[3][1]))) + return xmin, ymin, xmax, ymax +def dots4ToRec8(poly): + xmin, ymin, xmax, ymax = dots4ToRec4(poly) + return xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax + #return dots2ToRec8(dots4ToRec4(poly)) +def dots2ToRec8(rec): + xmin, ymin, xmax, ymax = rec[0], rec[1], rec[2], rec[3] + return xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax + +def groundtruth2Task1(srcpath, dstpath): + filelist = GetFileFromThisRootDir(srcpath) + # names = [custombasename(x.strip())for x in filelist] + filedict = {} + for cls in wordname_15: + fd = open(os.path.join(dstpath, 'Task1_') + cls + r'.txt', 'w') + filedict[cls] = fd + for filepath in filelist: + objects = parse_dota_poly2(filepath) + + subname = custombasename(filepath) + pattern2 = re.compile(r'__([\d+\.]+)__\d+___') + rate = re.findall(pattern2, subname)[0] + + for obj in objects: + category = obj['name'] + difficult = obj['difficult'] + poly = obj['poly'] + if difficult == '2': + continue + if rate == '0.5': + outline = custombasename(filepath) + ' ' + '1' + ' ' + ' '.join(map(str, poly)) + elif rate == '1': + outline = custombasename(filepath) + ' ' + '0.8' + ' ' + ' '.join(map(str, poly)) + elif rate == '2': + outline = custombasename(filepath) + ' ' + '0.6' + ' ' + ' '.join(map(str, poly)) + + filedict[category].write(outline 
+ '\n') + +def Task2groundtruth_poly(srcpath, dstpath): + thresh = 0.1 + filedict = {} + Tasklist = GetFileFromThisRootDir(srcpath, '.txt') + + for Taskfile in Tasklist: + idname = custombasename(Taskfile).split('_')[-1] + # idname = datamap_inverse[idname] + f = open(Taskfile, 'r') + lines = f.readlines() + for line in lines: + if len(line) == 0: + continue + # print('line:', line) + splitline = line.strip().split(' ') + filename = splitline[0] + confidence = splitline[1] + bbox = splitline[2:] + if float(confidence) > thresh: + if filename not in filedict: + # filedict[filename] = codecs.open(os.path.join(dstpath, filename + '.txt'), 'w', 'utf_16') + filedict[filename] = codecs.open(os.path.join(dstpath, filename + '.txt'), 'w') + # poly = util.dots2ToRec8(bbox) + poly = bbox + # filedict[filename].write(' '.join(poly) + ' ' + idname + '_' + str(round(float(confidence), 2)) + '\n') + # print('idname:', idname) + + # filedict[filename].write(' '.join(poly) + ' ' + idname + '_' + str(round(float(confidence), 2)) + '\n') + + filedict[filename].write(' '.join(poly) + ' ' + idname + '\n') + + +def polygonToRotRectangle(bbox): + """ + :param bbox: The polygon stored in format [x1, y1, x2, y2, x3, y3, x4, y4] + :return: Rotated Rectangle in format [cx, cy, w, h, theta] + """ + bbox = np.array(bbox,dtype=np.float32) + bbox = np.reshape(bbox,newshape=(2,4),order='F') + angle = math.atan2(-(bbox[0,1]-bbox[0,0]),bbox[1,1]-bbox[1,0]) + + center = [[0],[0]] + + for i in range(4): + center[0] += bbox[0,i] + center[1] += bbox[1,i] + + center = np.array(center,dtype=np.float32)/4.0 + + R = np.array([[math.cos(angle), -math.sin(angle)], [math.sin(angle), math.cos(angle)]], dtype=np.float32) + + normalized = np.matmul(R.transpose(),bbox-center) + + xmin = np.min(normalized[0,:]) + xmax = np.max(normalized[0,:]) + ymin = np.min(normalized[1,:]) + ymax = np.max(normalized[1,:]) + + w = xmax - xmin + 1 + h = ymax - ymin + 1 + + return [float(center[0]),float(center[1]),w,h,angle] + + diff --git a/dota_kit/poly_nms_gpu/Makefile b/dota_kit/poly_nms_gpu/Makefile new file mode 100644 index 0000000..a482398 --- /dev/null +++ b/dota_kit/poly_nms_gpu/Makefile @@ -0,0 +1,3 @@ +all: + python setup.py build_ext --inplace + rm -rf build diff --git a/dota_kit/poly_nms_gpu/__init__.py b/dota_kit/poly_nms_gpu/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/dota_kit/poly_nms_gpu/nms.py b/dota_kit/poly_nms_gpu/nms.py new file mode 100644 index 0000000..ce2ef0c --- /dev/null +++ b/dota_kit/poly_nms_gpu/nms.py @@ -0,0 +1,14 @@ +import numpy as np + +from poly_nms import poly_gpu_nms +from poly_overlaps import poly_overlaps + +def poly_gpu_nms_wrapper(thresh, device_id): + def _nms(dets): + return poly_gpu_nms(dets, thresh, device_id) + return _nms + +def poly_overlaps_nms_wrapper(device_id): + def _overlaps(boxes, query_boxes): + return poly_overlaps(boxes, query_boxes, device_id) + return _overlaps \ No newline at end of file diff --git a/dota_kit/poly_nms_gpu/nms_wrapper.py b/dota_kit/poly_nms_gpu/nms_wrapper.py new file mode 100644 index 0000000..768b8a3 --- /dev/null +++ b/dota_kit/poly_nms_gpu/nms_wrapper.py @@ -0,0 +1,17 @@ +# -------------------------------------------------------- +# Fast R-CNN +# Copyright (c) 2015 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Ross Girshick +# -------------------------------------------------------- + +# from nms.gpu_nms import gpu_nms +# from nms.cpu_nms import cpu_nms +from .poly_nms import poly_gpu_nms +def 
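`polygonToRotRectangle` in `dota_utils.py` rotates the four corners into the frame defined by the first edge and returns `[cx, cy, w, h, theta]`, using the same inclusive `+ 1` width/height convention as the evaluation code. A quick worked example (run from inside `dota_kit/`, or adjust the import to your setup):

```
from dota_utils import polygonToRotRectangle

# Axis-aligned 4x2 box centred at (2, 1); the corners are ordered so that the
# first edge points along +y, which makes the recovered angle (roughly) zero.
poly = [0, 0, 0, 2, 4, 2, 4, 0]   # x1 y1 x2 y2 x3 y3 x4 y4
cx, cy, w, h, theta = polygonToRotRectangle(poly)
print(cx, cy, w, h, theta)   # ~ 2.0 1.0 5.0 3.0 0.0   (w/h carry the +1)
```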
poly_nms_gpu(dets, thresh, force_cpu=False): + """Dispatch to either CPU or GPU NMS implementations.""" + + if dets.shape[0] == 0: + return [] + return poly_gpu_nms(dets, thresh, device_id=0) + diff --git a/dota_kit/poly_nms_gpu/poly_nms.cpp b/dota_kit/poly_nms_gpu/poly_nms.cpp new file mode 100644 index 0000000..fc33c9f --- /dev/null +++ b/dota_kit/poly_nms_gpu/poly_nms.cpp @@ -0,0 +1,7899 @@ +/* Generated by Cython 0.25.2 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. +#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03020000) + #error Cython requires Python 2.6+ or Python 3.2+. +#else +#define CYTHON_ABI "0_25_2" +#include +#ifndef offsetof + #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) +#endif +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#endif +#ifndef DL_IMPORT + #define DL_IMPORT(t) t +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#ifndef HAVE_LONG_LONG + #if PY_VERSION_HEX >= 0x03030000 || (PY_MAJOR_VERSION == 2 && PY_VERSION_HEX >= 0x02070000) + #define HAVE_LONG_LONG + #endif +#endif +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef Py_HUGE_VAL + #define Py_HUGE_VAL HUGE_VAL +#endif +#ifdef PYPY_VERSION + #define CYTHON_COMPILING_IN_PYPY 1 + #define CYTHON_COMPILING_IN_PYSTON 0 + #define CYTHON_COMPILING_IN_CPYTHON 0 + #undef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 0 + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #undef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 0 + #undef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 0 + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #undef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 1 + #undef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 0 + #undef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 0 + #undef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 0 + #undef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 0 +#elif defined(PYSTON_VERSION) + #define CYTHON_COMPILING_IN_PYPY 0 + #define CYTHON_COMPILING_IN_PYSTON 1 + #define CYTHON_COMPILING_IN_CPYTHON 0 + #ifndef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 1 + #endif + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #undef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 0 + #ifndef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 1 + #endif + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #ifndef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 0 + #endif + #ifndef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 1 + #endif + #ifndef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 1 + #endif + #undef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 0 + #undef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 0 +#else + #define CYTHON_COMPILING_IN_PYPY 0 + #define CYTHON_COMPILING_IN_PYSTON 0 + #define CYTHON_COMPILING_IN_CPYTHON 1 + #ifndef CYTHON_USE_TYPE_SLOTS + #define 
CYTHON_USE_TYPE_SLOTS 1 + #endif + #if PY_MAJOR_VERSION < 3 + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #elif !defined(CYTHON_USE_ASYNC_SLOTS) + #define CYTHON_USE_ASYNC_SLOTS 1 + #endif + #if PY_VERSION_HEX < 0x02070000 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #elif !defined(CYTHON_USE_PYLONG_INTERNALS) + #define CYTHON_USE_PYLONG_INTERNALS 1 + #endif + #ifndef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 1 + #endif + #ifndef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 1 + #endif + #if PY_VERSION_HEX < 0x030300F0 + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #elif !defined(CYTHON_USE_UNICODE_WRITER) + #define CYTHON_USE_UNICODE_WRITER 1 + #endif + #ifndef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 0 + #endif + #ifndef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 1 + #endif + #ifndef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 1 + #endif + #ifndef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 1 + #endif + #ifndef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 1 + #endif +#endif +#if !defined(CYTHON_FAST_PYCCALL) +#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) +#endif +#if CYTHON_USE_PYLONG_INTERNALS + #include "longintrepr.h" + #undef SHIFT + #undef BASE + #undef MASK +#endif +#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) + #define Py_OptimizeFlag 0 +#endif +#define __PYX_BUILD_PY_SSIZE_T "n" +#define CYTHON_FORMAT_SSIZE_T "z" +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) + #define __Pyx_DefaultClassType PyClass_Type +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) + #define __Pyx_DefaultClassType PyType_Type +#endif +#ifndef Py_TPFLAGS_CHECKTYPES + #define Py_TPFLAGS_CHECKTYPES 0 +#endif +#ifndef Py_TPFLAGS_HAVE_INDEX + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif +#ifndef Py_TPFLAGS_HAVE_NEWBUFFER + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif +#ifndef Py_TPFLAGS_HAVE_FINALIZE + #define Py_TPFLAGS_HAVE_FINALIZE 0 +#endif +#ifndef METH_FASTCALL + #define METH_FASTCALL 0x80 + typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args, + Py_ssize_t nargs, PyObject *kwnames); +#else + #define __Pyx_PyCFunctionFast _PyCFunctionFast +#endif +#if CYTHON_FAST_PYCCALL +#define __Pyx_PyFastCFunction_Check(func)\ + ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST))))) +#else +#define __Pyx_PyFastCFunction_Check(func) 0 +#endif +#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) + #define CYTHON_PEP393_ENABLED 1 + #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ + 0 : _PyUnicode_Ready((PyObject *)(op))) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) + #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) + #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) + #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) + #define __Pyx_PyUnicode_READ(k, d, i) 
PyUnicode_READ(k, d, i) + #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) +#else + #define CYTHON_PEP393_ENABLED 0 + #define PyUnicode_1BYTE_KIND 1 + #define PyUnicode_2BYTE_KIND 2 + #define PyUnicode_4BYTE_KIND 4 + #define __Pyx_PyUnicode_READY(op) (0) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) + #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111) + #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) + #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) + #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) + #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) +#endif +#if CYTHON_COMPILING_IN_PYPY + #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) +#else + #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ + PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) + #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) + #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) + #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) + #define PyObject_Malloc(s) PyMem_Malloc(s) + #define PyObject_Free(p) PyMem_Free(p) + #define PyObject_Realloc(p) PyMem_Realloc(p) +#endif +#if CYTHON_COMPILING_IN_PYSTON + #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) + #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) +#else + #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) + #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) +#endif +#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) +#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) +#else + #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) +#endif +#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) + #define PyObject_ASCII(o) PyObject_Repr(o) +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyStringObject PyUnicodeObject + #define PyString_Type PyUnicode_Type + #define PyString_Check PyUnicode_Check + #define PyString_CheckExact PyUnicode_CheckExact +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) + #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) +#else + #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) + #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) +#endif +#ifndef PySet_CheckExact + #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) +#endif +#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) +#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) +#if PY_MAJOR_VERSION >= 3 + #define PyIntObject PyLongObject + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define PyNumber_Int PyNumber_Long +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBoolObject PyLongObject +#endif +#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY + #ifndef PyUnicode_InternFromString + #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) + #endif +#endif +#if PY_VERSION_HEX < 0x030200A4 + typedef long Py_hash_t; + #define __Pyx_PyInt_FromHash_t PyInt_FromLong + #define __Pyx_PyInt_AsHash_t PyInt_AsLong +#else + #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t + #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
PyMethod_New(func, self) : PyInstanceMethod_New(func)) +#else + #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) +#endif +#if CYTHON_USE_ASYNC_SLOTS + #if PY_VERSION_HEX >= 0x030500B1 + #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods + #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) + #else + typedef struct { + unaryfunc am_await; + unaryfunc am_aiter; + unaryfunc am_anext; + } __Pyx_PyAsyncMethodsStruct; + #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) + #endif +#else + #define __Pyx_PyType_AsAsync(obj) NULL +#endif +#ifndef CYTHON_RESTRICT + #if defined(__GNUC__) + #define CYTHON_RESTRICT __restrict__ + #elif defined(_MSC_VER) && _MSC_VER >= 1400 + #define CYTHON_RESTRICT __restrict + #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define CYTHON_RESTRICT restrict + #else + #define CYTHON_RESTRICT + #endif +#endif +#ifndef CYTHON_UNUSED +# if defined(__GNUC__) +# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +#endif +#ifndef CYTHON_MAYBE_UNUSED_VAR +# if defined(__cplusplus) + template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } +# else +# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) +# endif +#endif +#ifndef CYTHON_NCP_UNUSED +# if CYTHON_COMPILING_IN_CPYTHON +# define CYTHON_NCP_UNUSED +# else +# define CYTHON_NCP_UNUSED CYTHON_UNUSED +# endif +#endif +#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) + +#ifndef __cplusplus + #error "Cython files generated with the C++ option must be compiled with a C++ compiler." 
+#endif +#ifndef CYTHON_INLINE + #if defined(__clang__) + #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) + #else + #define CYTHON_INLINE inline + #endif +#endif +template +void __Pyx_call_destructor(T& x) { + x.~T(); +} +template +class __Pyx_FakeReference { + public: + __Pyx_FakeReference() : ptr(NULL) { } + __Pyx_FakeReference(const T& ref) : ptr(const_cast(&ref)) { } + T *operator->() { return ptr; } + T *operator&() { return ptr; } + operator T&() { return *ptr; } + template bool operator ==(U other) { return *ptr == other; } + template bool operator !=(U other) { return *ptr != other; } + private: + T *ptr; +}; + +#if defined(WIN32) || defined(MS_WINDOWS) + #define _USE_MATH_DEFINES +#endif +#include +#ifdef NAN +#define __PYX_NAN() ((float) NAN) +#else +static CYTHON_INLINE float __PYX_NAN() { + float value; + memset(&value, 0xFF, sizeof(value)); + return value; +} +#endif +#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) +#define __Pyx_truncl trunc +#else +#define __Pyx_truncl truncl +#endif + + +#define __PYX_ERR(f_index, lineno, Ln_error) \ +{ \ + __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \ +} + +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) +#endif + +#ifndef __PYX_EXTERN_C + #ifdef __cplusplus + #define __PYX_EXTERN_C extern "C" + #else + #define __PYX_EXTERN_C extern + #endif +#endif + +#define __PYX_HAVE__poly_nms +#define __PYX_HAVE_API__poly_nms +#include +#include +#include +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" +#include "poly_nms.hpp" +#ifdef _OPENMP +#include +#endif /* _OPENMP */ + +#ifdef PYREX_WITHOUT_ASSERTIONS +#define CYTHON_WITHOUT_ASSERTIONS +#endif + +typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; + const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; + +#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 +#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0 +#define __PYX_DEFAULT_STRING_ENCODING "" +#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString +#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#define __Pyx_uchar_cast(c) ((unsigned char)c) +#define __Pyx_long_cast(x) ((long)x) +#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ + (sizeof(type) < sizeof(Py_ssize_t)) ||\ + (sizeof(type) > sizeof(Py_ssize_t) &&\ + likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX) &&\ + (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ + v == (type)PY_SSIZE_T_MIN))) ||\ + (sizeof(type) == sizeof(Py_ssize_t) &&\ + (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX))) ) +#if defined (__cplusplus) && __cplusplus >= 201103L + #include + #define __Pyx_sst_abs(value) std::abs(value) +#elif SIZEOF_INT >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) abs(value) +#elif SIZEOF_LONG >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) labs(value) +#elif defined (_MSC_VER) && defined (_M_X64) + #define __Pyx_sst_abs(value) _abs64(value) +#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define __Pyx_sst_abs(value) llabs(value) +#elif defined (__GNUC__) + #define __Pyx_sst_abs(value) __builtin_llabs(value) +#else + #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) +#endif +static CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject*); +static CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); +#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) +#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); +#if PY_MAJOR_VERSION < 3 + #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#else + #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize +#endif +#define __Pyx_PyObject_AsSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_AsUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) +#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) +#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) +#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) +#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) +#if PY_MAJOR_VERSION < 3 +static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) +{ + const Py_UNICODE *u_end = u; + while (*u_end++) ; + return (size_t)(u_end - u - 1); +} +#else +#define __Pyx_Py_UNICODE_strlen Py_UNICODE_strlen +#endif +#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) +#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode +#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode +#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) +#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) +#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +#if CYTHON_ASSUME_SAFE_MACROS +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) +#else +#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) +#endif +#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) +#if PY_MAJOR_VERSION >= 3 +#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) +#else +#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) +#endif +#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x)) +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII +static int __Pyx_sys_getdefaultencoding_not_ascii; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + PyObject* ascii_chars_u = NULL; + PyObject* ascii_chars_b = NULL; + const char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + if (strcmp(default_encoding_c, "ascii") == 0) { + __Pyx_sys_getdefaultencoding_not_ascii = 0; + } else { + char ascii_chars[128]; + int c; + for (c = 0; c < 128; c++) { + ascii_chars[c] = c; + } + __Pyx_sys_getdefaultencoding_not_ascii = 1; + ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); + if (!ascii_chars_u) goto bad; + ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); + if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { + PyErr_Format( + PyExc_ValueError, + "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", + default_encoding_c); + goto bad; + } + Py_DECREF(ascii_chars_u); + Py_DECREF(ascii_chars_b); + } + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + Py_XDECREF(ascii_chars_u); + Py_XDECREF(ascii_chars_b); + return -1; +} +#endif +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) +#else +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT +static char* __PYX_DEFAULT_STRING_ENCODING; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c)); + if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; + strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + return -1; +} +#endif +#endif + + +/* Test for GCC > 2.95 */ +#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) + #define likely(x) __builtin_expect(!!(x), 1) + #define unlikely(x) __builtin_expect(!!(x), 0) +#else /* !__GNUC__ or GCC < 2.95 */ + #define likely(x) (x) + #define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_d; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static PyObject *__pyx_empty_unicode; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; + +/* Header.proto */ +#if !defined(CYTHON_CCOMPLEX) + #if defined(__cplusplus) + #define CYTHON_CCOMPLEX 1 + #elif 
defined(_Complex_I) + #define CYTHON_CCOMPLEX 1 + #else + #define CYTHON_CCOMPLEX 0 + #endif +#endif +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #include + #else + #include + #endif +#endif +#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) + #undef _Complex_I + #define _Complex_I 1.0fj +#endif + + +static const char *__pyx_f[] = { + "poly_nms.pyx", + "__init__.pxd", + "type.pxd", +}; +/* BufferFormatStructs.proto */ +#define IS_UNSIGNED(type) (((type) -1) > 0) +struct __Pyx_StructField_; +#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) +typedef struct { + const char* name; + struct __Pyx_StructField_* fields; + size_t size; + size_t arraysize[8]; + int ndim; + char typegroup; + char is_unsigned; + int flags; +} __Pyx_TypeInfo; +typedef struct __Pyx_StructField_ { + __Pyx_TypeInfo* type; + const char* name; + size_t offset; +} __Pyx_StructField; +typedef struct { + __Pyx_StructField* field; + size_t parent_offset; +} __Pyx_BufFmt_StackElem; +typedef struct { + __Pyx_StructField root; + __Pyx_BufFmt_StackElem* head; + size_t fmt_offset; + size_t new_count, enc_count; + size_t struct_alignment; + int is_complex; + char enc_type; + char new_packmode; + char enc_packmode; + char is_valid_array; +} __Pyx_BufFmt_Context; + + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":725 + * # in Cython to enable them only on the right systems. + * + * ctypedef npy_int8 int8_t # <<<<<<<<<<<<<< + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + */ +typedef npy_int8 __pyx_t_5numpy_int8_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":726 + * + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t # <<<<<<<<<<<<<< + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t + */ +typedef npy_int16 __pyx_t_5numpy_int16_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":727 + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t # <<<<<<<<<<<<<< + * ctypedef npy_int64 int64_t + * #ctypedef npy_int96 int96_t + */ +typedef npy_int32 __pyx_t_5numpy_int32_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":728 + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t # <<<<<<<<<<<<<< + * #ctypedef npy_int96 int96_t + * #ctypedef npy_int128 int128_t + */ +typedef npy_int64 __pyx_t_5numpy_int64_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":732 + * #ctypedef npy_int128 int128_t + * + * ctypedef npy_uint8 uint8_t # <<<<<<<<<<<<<< + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + */ +typedef npy_uint8 __pyx_t_5numpy_uint8_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":733 + * + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t # <<<<<<<<<<<<<< + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t + */ +typedef npy_uint16 __pyx_t_5numpy_uint16_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":734 + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t # <<<<<<<<<<<<<< + * ctypedef npy_uint64 uint64_t + * #ctypedef npy_uint96 uint96_t + */ +typedef npy_uint32 __pyx_t_5numpy_uint32_t; + +/* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":735 + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t # <<<<<<<<<<<<<< + * #ctypedef npy_uint96 uint96_t + * #ctypedef npy_uint128 uint128_t + */ +typedef npy_uint64 __pyx_t_5numpy_uint64_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":739 + * #ctypedef npy_uint128 uint128_t + * + * ctypedef npy_float32 float32_t # <<<<<<<<<<<<<< + * ctypedef npy_float64 float64_t + * #ctypedef npy_float80 float80_t + */ +typedef npy_float32 __pyx_t_5numpy_float32_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":740 + * + * ctypedef npy_float32 float32_t + * ctypedef npy_float64 float64_t # <<<<<<<<<<<<<< + * #ctypedef npy_float80 float80_t + * #ctypedef npy_float128 float128_t + */ +typedef npy_float64 __pyx_t_5numpy_float64_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":749 + * # The int types are mapped a bit surprising -- + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t + */ +typedef npy_long __pyx_t_5numpy_int_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":750 + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong longlong_t + * + */ +typedef npy_longlong __pyx_t_5numpy_long_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":751 + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_ulong uint_t + */ +typedef npy_longlong __pyx_t_5numpy_longlong_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":753 + * ctypedef npy_longlong longlong_t + * + * ctypedef npy_ulong uint_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t + */ +typedef npy_ulong __pyx_t_5numpy_uint_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":754 + * + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulonglong_t + * + */ +typedef npy_ulonglong __pyx_t_5numpy_ulong_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":755 + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_intp intp_t + */ +typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":757 + * ctypedef npy_ulonglong ulonglong_t + * + * ctypedef npy_intp intp_t # <<<<<<<<<<<<<< + * ctypedef npy_uintp uintp_t + * + */ +typedef npy_intp __pyx_t_5numpy_intp_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":758 + * + * ctypedef npy_intp intp_t + * ctypedef npy_uintp uintp_t # <<<<<<<<<<<<<< + * + * ctypedef npy_double float_t + */ +typedef npy_uintp __pyx_t_5numpy_uintp_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":760 + * ctypedef 
npy_uintp uintp_t + * + * ctypedef npy_double float_t # <<<<<<<<<<<<<< + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t + */ +typedef npy_double __pyx_t_5numpy_float_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":761 + * + * ctypedef npy_double float_t + * ctypedef npy_double double_t # <<<<<<<<<<<<<< + * ctypedef npy_longdouble longdouble_t + * + */ +typedef npy_double __pyx_t_5numpy_double_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":762 + * ctypedef npy_double float_t + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cfloat cfloat_t + */ +typedef npy_longdouble __pyx_t_5numpy_longdouble_t; +/* Declarations.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< float > __pyx_t_float_complex; + #else + typedef float _Complex __pyx_t_float_complex; + #endif +#else + typedef struct { float real, imag; } __pyx_t_float_complex; +#endif +static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float); + +/* Declarations.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< double > __pyx_t_double_complex; + #else + typedef double _Complex __pyx_t_double_complex; + #endif +#else + typedef struct { double real, imag; } __pyx_t_double_complex; +#endif +static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); + + +/*--- Type declarations ---*/ + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":764 + * ctypedef npy_longdouble longdouble_t + * + * ctypedef npy_cfloat cfloat_t # <<<<<<<<<<<<<< + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble clongdouble_t + */ +typedef npy_cfloat __pyx_t_5numpy_cfloat_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":765 + * + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t # <<<<<<<<<<<<<< + * ctypedef npy_clongdouble clongdouble_t + * + */ +typedef npy_cdouble __pyx_t_5numpy_cdouble_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":766 + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble clongdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cdouble complex_t + */ +typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":768 + * ctypedef npy_clongdouble clongdouble_t + * + * ctypedef npy_cdouble complex_t # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew1(a): + */ +typedef npy_cdouble __pyx_t_5numpy_complex_t; + +/* --- Runtime support code (head) --- */ +/* Refnanny.proto */ +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); + #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; +#ifdef WITH_THREAD + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ 
+ if (acquire_gil) {\ + PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + PyGILState_Release(__pyx_gilstate_save);\ + } else {\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + } +#else + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) +#endif + #define __Pyx_RefNannyFinishContext()\ + __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) + #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) + #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) +#else + #define __Pyx_RefNannyDeclarations + #define __Pyx_RefNannySetupContext(name, acquire_gil) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XINCREF(r) Py_XINCREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) + #define __Pyx_XGOTREF(r) + #define __Pyx_XGIVEREF(r) +#endif +#define __Pyx_XDECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_XDECREF(tmp);\ + } while (0) +#define __Pyx_DECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_DECREF(tmp);\ + } while (0) +#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) +#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) + +/* RaiseArgTupleInvalid.proto */ +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); + +/* RaiseDoubleKeywords.proto */ +static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); + +/* ParseKeywords.proto */ +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ + PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ + const char* function_name); + +/* ArgTypeTest.proto */ +static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact); + +/* BufferFormatCheck.proto */ +static CYTHON_INLINE int __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj, + __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack); +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info); +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type); // PROTO + +/* PyObjectGetAttrStr.proto */ +#if CYTHON_USE_TYPE_SLOTS +static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { + PyTypeObject* tp = Py_TYPE(obj); + if (likely(tp->tp_getattro)) + return tp->tp_getattro(obj, attr_name); +#if PY_MAJOR_VERSION < 3 + if 
(likely(tp->tp_getattr)) + return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); +#endif + return PyObject_GetAttr(obj, attr_name); +} +#else +#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) +#endif + +/* GetBuiltinName.proto */ +static PyObject *__Pyx_GetBuiltinName(PyObject *name); + +/* GetModuleGlobalName.proto */ +static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name); + +/* PyObjectCall.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); +#else +#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) +#endif + +/* ExtTypeTest.proto */ +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); + +/* PyCFunctionFastCall.proto */ +#if CYTHON_FAST_PYCCALL +static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); +#else +#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) +#endif + +/* PyFunctionFastCall.proto */ +#if CYTHON_FAST_PYCALL +#define __Pyx_PyFunction_FastCall(func, args, nargs)\ + __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) +#if 1 || PY_VERSION_HEX < 0x030600B1 +static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs); +#else +#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) +#endif +#endif + +/* PyObjectCallMethO.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); +#endif + +/* PyObjectCallOneArg.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); + +/* PyObjectCallNoArg.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); +#else +#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL) +#endif + +/* BufferIndexError.proto */ +static void __Pyx_RaiseBufferIndexError(int axis); + +#define __Pyx_BufPtrStrided1d(type, buf, i0, s0) (type)((char*)buf + i0 * s0) +#define __Pyx_BufPtrStrided2d(type, buf, i0, s0, i1, s1) (type)((char*)buf + i0 * s0 + i1 * s1) +/* SliceObject.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice( + PyObject* obj, Py_ssize_t cstart, Py_ssize_t cstop, + PyObject** py_start, PyObject** py_stop, PyObject** py_slice, + int has_cstart, int has_cstop, int wraparound); + +/* BufferFallbackError.proto */ +static void __Pyx_RaiseBufferFallbackError(void); + +/* PyThreadStateGet.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; +#define __Pyx_PyThreadState_assign __pyx_tstate = PyThreadState_GET(); +#else +#define __Pyx_PyThreadState_declare +#define __Pyx_PyThreadState_assign +#endif + +/* PyErrFetchRestore.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); +static CYTHON_INLINE void 
__Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#else +#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) +#endif + +/* RaiseException.proto */ +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); + +/* DictGetItem.proto */ +#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY +static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) { + PyObject *value; + value = PyDict_GetItemWithError(d, key); + if (unlikely(!value)) { + if (!PyErr_Occurred()) { + PyObject* args = PyTuple_Pack(1, key); + if (likely(args)) + PyErr_SetObject(PyExc_KeyError, args); + Py_XDECREF(args); + } + return NULL; + } + Py_INCREF(value); + return value; +} +#else + #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key) +#endif + +/* RaiseTooManyValuesToUnpack.proto */ +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); + +/* RaiseNeedMoreValuesToUnpack.proto */ +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); + +/* RaiseNoneIterError.proto */ +static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); + +/* SaveResetException.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); +#else +#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) +#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) +#endif + +/* PyErrExceptionMatches.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) +static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); +#else +#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) +#endif + +/* GetException.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) +static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#else +static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); +#endif + +/* Import.proto */ +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); + +/* CodeObjectCache.proto */ +typedef struct { + PyCodeObject* code_object; + int code_line; +} __Pyx_CodeObjectCacheEntry; +struct __Pyx_CodeObjectCache { + int count; + int max_count; + __Pyx_CodeObjectCacheEntry* entries; +}; +static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; +static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); +static PyCodeObject *__pyx_find_code_object(int code_line); +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); + +/* AddTraceback.proto */ +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, 
const char *filename); + +/* BufferStructDeclare.proto */ +typedef struct { + Py_ssize_t shape, strides, suboffsets; +} __Pyx_Buf_DimInfo; +typedef struct { + size_t refcount; + Py_buffer pybuffer; +} __Pyx_Buffer; +typedef struct { + __Pyx_Buffer *rcbuffer; + char *data; + __Pyx_Buf_DimInfo diminfo[8]; +} __Pyx_LocalBuf_ND; + +#if PY_MAJOR_VERSION < 3 + static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); + static void __Pyx_ReleaseBuffer(Py_buffer *view); +#else + #define __Pyx_GetBuffer PyObject_GetBuffer + #define __Pyx_ReleaseBuffer PyBuffer_Release +#endif + + +/* None.proto */ +static Py_ssize_t __Pyx_zeros[] = {0, 0, 0, 0, 0, 0, 0, 0}; +static Py_ssize_t __Pyx_minusones[] = {-1, -1, -1, -1, -1, -1, -1, -1}; + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); + +/* RealImag.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #define __Pyx_CREAL(z) ((z).real()) + #define __Pyx_CIMAG(z) ((z).imag()) + #else + #define __Pyx_CREAL(z) (__real__(z)) + #define __Pyx_CIMAG(z) (__imag__(z)) + #endif +#else + #define __Pyx_CREAL(z) ((z).real) + #define __Pyx_CIMAG(z) ((z).imag) +#endif +#if defined(__cplusplus) && CYTHON_CCOMPLEX\ + && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103) + #define __Pyx_SET_CREAL(z,x) ((z).real(x)) + #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) +#else + #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) + #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) +#endif + +/* Arithmetic.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq_float(a, b) ((a)==(b)) + #define __Pyx_c_sum_float(a, b) ((a)+(b)) + #define __Pyx_c_diff_float(a, b) ((a)-(b)) + #define __Pyx_c_prod_float(a, b) ((a)*(b)) + #define __Pyx_c_quot_float(a, b) ((a)/(b)) + #define __Pyx_c_neg_float(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero_float(z) ((z)==(float)0) + #define __Pyx_c_conj_float(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_abs_float(z) (::std::abs(z)) + #define __Pyx_c_pow_float(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zero_float(z) ((z)==0) + #define __Pyx_c_conj_float(z) (conjf(z)) + #if 1 + #define __Pyx_c_abs_float(z) (cabsf(z)) + #define __Pyx_c_pow_float(a, b) (cpowf(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex); + static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex); + #if 1 + static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex); + #endif +#endif + +/* Arithmetic.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq_double(a, b) ((a)==(b)) + #define __Pyx_c_sum_double(a, b) ((a)+(b)) + #define __Pyx_c_diff_double(a, b) ((a)-(b)) + #define __Pyx_c_prod_double(a, b) ((a)*(b)) + #define 
__Pyx_c_quot_double(a, b) ((a)/(b)) + #define __Pyx_c_neg_double(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero_double(z) ((z)==(double)0) + #define __Pyx_c_conj_double(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_abs_double(z) (::std::abs(z)) + #define __Pyx_c_pow_double(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zero_double(z) ((z)==0) + #define __Pyx_c_conj_double(z) (conj(z)) + #if 1 + #define __Pyx_c_abs_double(z) (cabs(z)) + #define __Pyx_c_pow_double(a, b) (cpow(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex); + static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex); + #if 1 + static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex); + #endif +#endif + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value); + +/* CIntFromPy.proto */ +static CYTHON_INLINE npy_int32 __Pyx_PyInt_As_npy_int32(PyObject *); + +/* CIntFromPy.proto */ +static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); + +/* CIntFromPy.proto */ +static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); + +/* CheckBinaryVersion.proto */ +static int __Pyx_check_binary_version(void); + +/* PyIdentifierFromString.proto */ +#if !defined(__Pyx_PyIdentifier_FromString) +#if PY_MAJOR_VERSION < 3 + #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s) +#else + #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s) +#endif +#endif + +/* ModuleImport.proto */ +static PyObject *__Pyx_ImportModule(const char *name); + +/* TypeImport.proto */ +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict); + +/* InitStrings.proto */ +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); + + +/* Module declarations from 'cpython.buffer' */ + +/* Module declarations from 'libc.string' */ + +/* Module declarations from 'libc.stdio' */ + +/* Module declarations from '__builtin__' */ + +/* Module declarations from 'cpython.type' */ +static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0; + +/* Module declarations from 'cpython' */ + +/* Module declarations from 'cpython.object' */ + +/* Module declarations from 'cpython.ref' */ + +/* Module declarations from 'libc.stdlib' */ + +/* Module declarations from 'numpy' */ + +/* Module declarations from 'numpy' */ +static PyTypeObject *__pyx_ptype_5numpy_dtype = 0; +static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0; +static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0; +static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0; +static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0; +static CYTHON_INLINE char 
*__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/ + +/* Module declarations from 'poly_nms' */ +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t = { "float32_t", NULL, sizeof(__pyx_t_5numpy_float32_t), { 0 }, 0, 'R', 0, 0 }; +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t = { "int32_t", NULL, sizeof(__pyx_t_5numpy_int32_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_int32_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_int32_t), 0 }; +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_int_t = { "int_t", NULL, sizeof(__pyx_t_5numpy_int_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_int_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_int_t), 0 }; +#define __Pyx_MODULE_NAME "poly_nms" +int __pyx_module_is_main_poly_nms = 0; + +/* Implementation of 'poly_nms' */ +static PyObject *__pyx_builtin_ValueError; +static PyObject *__pyx_builtin_range; +static PyObject *__pyx_builtin_RuntimeError; +static PyObject *__pyx_builtin_ImportError; +static const char __pyx_k_np[] = "np"; +static const char __pyx_k_dets[] = "dets"; +static const char __pyx_k_keep[] = "keep"; +static const char __pyx_k_main[] = "__main__"; +static const char __pyx_k_test[] = "__test__"; +static const char __pyx_k_dtype[] = "dtype"; +static const char __pyx_k_int32[] = "int32"; +static const char __pyx_k_numpy[] = "numpy"; +static const char __pyx_k_order[] = "order"; +static const char __pyx_k_range[] = "range"; +static const char __pyx_k_zeros[] = "zeros"; +static const char __pyx_k_import[] = "__import__"; +static const char __pyx_k_scores[] = "scores"; +static const char __pyx_k_thresh[] = "thresh"; +static const char __pyx_k_argsort[] = "argsort"; +static const char __pyx_k_num_out[] = "num_out"; +static const char __pyx_k_poly_nms[] = "poly_nms"; +static const char __pyx_k_boxes_dim[] = "boxes_dim"; +static const char __pyx_k_boxes_num[] = "boxes_num"; +static const char __pyx_k_device_id[] = "device_id"; +static const char __pyx_k_ValueError[] = "ValueError"; +static const char __pyx_k_ImportError[] = "ImportError"; +static const char __pyx_k_sorted_dets[] = "sorted_dets"; +static const char __pyx_k_RuntimeError[] = "RuntimeError"; +static const char __pyx_k_poly_gpu_nms[] = "poly_gpu_nms"; +static const char __pyx_k_ndarray_is_not_C_contiguous[] = "ndarray is not C contiguous"; +static const char __pyx_k_home_dingjian_code_DOTA_devkit[] = "/home/dingjian/code/DOTA_devkit/poly_nms_gpu/poly_nms.pyx"; +static const char __pyx_k_numpy_core_multiarray_failed_to[] = "numpy.core.multiarray failed to import"; +static const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = "unknown dtype code in numpy.pxd (%d)"; +static const char __pyx_k_Format_string_allocated_too_shor[] = "Format string allocated too short, see comment in numpy.pxd"; +static const char __pyx_k_Non_native_byte_order_not_suppor[] = "Non-native byte order not supported"; +static const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = "ndarray is not Fortran contiguous"; +static const char __pyx_k_numpy_core_umath_failed_to_impor[] = "numpy.core.umath failed to import"; +static const char __pyx_k_Format_string_allocated_too_shor_2[] = "Format string allocated too short."; +static PyObject *__pyx_kp_u_Format_string_allocated_too_shor; +static PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2; +static PyObject *__pyx_n_s_ImportError; +static PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor; +static PyObject *__pyx_n_s_RuntimeError; +static PyObject *__pyx_n_s_ValueError; +static PyObject *__pyx_n_s_argsort; 
+static PyObject *__pyx_n_s_boxes_dim; +static PyObject *__pyx_n_s_boxes_num; +static PyObject *__pyx_n_s_dets; +static PyObject *__pyx_n_s_device_id; +static PyObject *__pyx_n_s_dtype; +static PyObject *__pyx_kp_s_home_dingjian_code_DOTA_devkit; +static PyObject *__pyx_n_s_import; +static PyObject *__pyx_n_s_int32; +static PyObject *__pyx_n_s_keep; +static PyObject *__pyx_n_s_main; +static PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous; +static PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou; +static PyObject *__pyx_n_s_np; +static PyObject *__pyx_n_s_num_out; +static PyObject *__pyx_n_s_numpy; +static PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to; +static PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor; +static PyObject *__pyx_n_s_order; +static PyObject *__pyx_n_s_poly_gpu_nms; +static PyObject *__pyx_n_s_poly_nms; +static PyObject *__pyx_n_s_range; +static PyObject *__pyx_n_s_scores; +static PyObject *__pyx_n_s_sorted_dets; +static PyObject *__pyx_n_s_test; +static PyObject *__pyx_n_s_thresh; +static PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd; +static PyObject *__pyx_n_s_zeros; +static PyObject *__pyx_pf_8poly_nms_poly_gpu_nms(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dets, PyObject *__pyx_v_thresh, __pyx_t_5numpy_int32_t __pyx_v_device_id); /* proto */ +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ +static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */ +static PyObject *__pyx_int_8; +static PyObject *__pyx_int_neg_1; +static PyObject *__pyx_slice_; +static PyObject *__pyx_slice__3; +static PyObject *__pyx_slice__4; +static PyObject *__pyx_tuple__2; +static PyObject *__pyx_tuple__5; +static PyObject *__pyx_tuple__6; +static PyObject *__pyx_tuple__7; +static PyObject *__pyx_tuple__8; +static PyObject *__pyx_tuple__9; +static PyObject *__pyx_tuple__10; +static PyObject *__pyx_tuple__11; +static PyObject *__pyx_tuple__12; +static PyObject *__pyx_tuple__13; +static PyObject *__pyx_tuple__14; +static PyObject *__pyx_codeobj__15; + +/* "poly_nms.pyx":9 + * void _poly_nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def poly_gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + +/* Python wrapper */ +static PyObject *__pyx_pw_8poly_nms_1poly_gpu_nms(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyMethodDef __pyx_mdef_8poly_nms_1poly_gpu_nms = {"poly_gpu_nms", (PyCFunction)__pyx_pw_8poly_nms_1poly_gpu_nms, METH_VARARGS|METH_KEYWORDS, 0}; +static PyObject *__pyx_pw_8poly_nms_1poly_gpu_nms(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyArrayObject *__pyx_v_dets = 0; + PyObject *__pyx_v_thresh = 0; + __pyx_t_5numpy_int32_t __pyx_v_device_id; + PyObject *__pyx_r = 0; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("poly_gpu_nms (wrapper)", 0); + { + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dets,&__pyx_n_s_thresh,&__pyx_n_s_device_id,0}; + PyObject* values[3] = {0,0,0}; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args; + const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); + switch (pos_args) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: 
goto __pyx_L5_argtuple_error; + } + kw_args = PyDict_Size(__pyx_kwds); + switch (pos_args) { + case 0: + if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_dets)) != 0)) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_thresh)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("poly_gpu_nms", 0, 2, 3, 1); __PYX_ERR(0, 9, __pyx_L3_error) + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_device_id); + if (value) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "poly_gpu_nms") < 0)) __PYX_ERR(0, 9, __pyx_L3_error) + } + } else { + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + __pyx_v_dets = ((PyArrayObject *)values[0]); + __pyx_v_thresh = ((PyObject*)values[1]); + if (values[2]) { + __pyx_v_device_id = __Pyx_PyInt_As_npy_int32(values[2]); if (unlikely((__pyx_v_device_id == ((npy_int32)-1)) && PyErr_Occurred())) __PYX_ERR(0, 10, __pyx_L3_error) + } else { + __pyx_v_device_id = ((__pyx_t_5numpy_int32_t)0); + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("poly_gpu_nms", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 9, __pyx_L3_error) + __pyx_L3_error:; + __Pyx_AddTraceback("poly_nms.poly_gpu_nms", __pyx_clineno, __pyx_lineno, __pyx_filename); + __Pyx_RefNannyFinishContext(); + return NULL; + __pyx_L4_argument_unpacking_done:; + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dets), __pyx_ptype_5numpy_ndarray, 1, "dets", 0))) __PYX_ERR(0, 9, __pyx_L1_error) + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_thresh), (&PyFloat_Type), 1, "thresh", 1))) __PYX_ERR(0, 9, __pyx_L1_error) + __pyx_r = __pyx_pf_8poly_nms_poly_gpu_nms(__pyx_self, __pyx_v_dets, __pyx_v_thresh, __pyx_v_device_id); + + /* function exit code */ + goto __pyx_L0; + __pyx_L1_error:; + __pyx_r = NULL; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyObject *__pyx_pf_8poly_nms_poly_gpu_nms(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dets, PyObject *__pyx_v_thresh, __pyx_t_5numpy_int32_t __pyx_v_device_id) { + int __pyx_v_boxes_num; + int __pyx_v_boxes_dim; + int __pyx_v_num_out; + PyArrayObject *__pyx_v_keep = 0; + PyArrayObject *__pyx_v_scores = 0; + PyArrayObject *__pyx_v_order = 0; + PyArrayObject *__pyx_v_sorted_dets = 0; + __Pyx_LocalBuf_ND __pyx_pybuffernd_dets; + __Pyx_Buffer __pyx_pybuffer_dets; + __Pyx_LocalBuf_ND __pyx_pybuffernd_keep; + __Pyx_Buffer __pyx_pybuffer_keep; + __Pyx_LocalBuf_ND __pyx_pybuffernd_order; + __Pyx_Buffer __pyx_pybuffer_order; + __Pyx_LocalBuf_ND __pyx_pybuffernd_scores; + __Pyx_Buffer __pyx_pybuffer_scores; + __Pyx_LocalBuf_ND __pyx_pybuffernd_sorted_dets; + __Pyx_Buffer __pyx_pybuffer_sorted_dets; + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + PyArrayObject *__pyx_t_6 = NULL; + PyArrayObject *__pyx_t_7 = NULL; + PyArrayObject *__pyx_t_8 = NULL; + PyArrayObject *__pyx_t_9 = NULL; + Py_ssize_t __pyx_t_10; + int __pyx_t_11; + Py_ssize_t __pyx_t_12; + Py_ssize_t __pyx_t_13; 
+ float __pyx_t_14; + PyObject *__pyx_t_15 = NULL; + PyObject *__pyx_t_16 = NULL; + PyObject *__pyx_t_17 = NULL; + __Pyx_RefNannySetupContext("poly_gpu_nms", 0); + __pyx_pybuffer_keep.pybuffer.buf = NULL; + __pyx_pybuffer_keep.refcount = 0; + __pyx_pybuffernd_keep.data = NULL; + __pyx_pybuffernd_keep.rcbuffer = &__pyx_pybuffer_keep; + __pyx_pybuffer_scores.pybuffer.buf = NULL; + __pyx_pybuffer_scores.refcount = 0; + __pyx_pybuffernd_scores.data = NULL; + __pyx_pybuffernd_scores.rcbuffer = &__pyx_pybuffer_scores; + __pyx_pybuffer_order.pybuffer.buf = NULL; + __pyx_pybuffer_order.refcount = 0; + __pyx_pybuffernd_order.data = NULL; + __pyx_pybuffernd_order.rcbuffer = &__pyx_pybuffer_order; + __pyx_pybuffer_sorted_dets.pybuffer.buf = NULL; + __pyx_pybuffer_sorted_dets.refcount = 0; + __pyx_pybuffernd_sorted_dets.data = NULL; + __pyx_pybuffernd_sorted_dets.rcbuffer = &__pyx_pybuffer_sorted_dets; + __pyx_pybuffer_dets.pybuffer.buf = NULL; + __pyx_pybuffer_dets.refcount = 0; + __pyx_pybuffernd_dets.data = NULL; + __pyx_pybuffernd_dets.rcbuffer = &__pyx_pybuffer_dets; + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_dets.rcbuffer->pybuffer, (PyObject*)__pyx_v_dets, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 9, __pyx_L1_error) + } + __pyx_pybuffernd_dets.diminfo[0].strides = __pyx_pybuffernd_dets.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_dets.diminfo[0].shape = __pyx_pybuffernd_dets.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_dets.diminfo[1].strides = __pyx_pybuffernd_dets.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_dets.diminfo[1].shape = __pyx_pybuffernd_dets.rcbuffer->pybuffer.shape[1]; + + /* "poly_nms.pyx":11 + * def poly_gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] # <<<<<<<<<<<<<< + * cdef int boxes_dim = dets.shape[1] + * cdef int num_out + */ + __pyx_v_boxes_num = (__pyx_v_dets->dimensions[0]); + + /* "poly_nms.pyx":12 + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + * cdef int boxes_dim = dets.shape[1] # <<<<<<<<<<<<<< + * cdef int num_out + * cdef np.ndarray[np.int32_t, ndim=1] \ + */ + __pyx_v_boxes_dim = (__pyx_v_dets->dimensions[1]); + + /* "poly_nms.pyx":15 + * cdef int num_out + * cdef np.ndarray[np.int32_t, ndim=1] \ + * keep = np.zeros(boxes_num, dtype=np.int32) # <<<<<<<<<<<<<< + * cdef np.ndarray[np.float32_t, ndim=1] \ + * scores = dets[:, 8] + */ + __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_boxes_num); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, 
__pyx_n_s_int32); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 15, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 15, __pyx_L1_error) + __pyx_t_6 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_keep.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + __pyx_v_keep = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_keep.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 14, __pyx_L1_error) + } else {__pyx_pybuffernd_keep.diminfo[0].strides = __pyx_pybuffernd_keep.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_keep.diminfo[0].shape = __pyx_pybuffernd_keep.rcbuffer->pybuffer.shape[0]; + } + } + __pyx_t_6 = 0; + __pyx_v_keep = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "poly_nms.pyx":17 + * keep = np.zeros(boxes_num, dtype=np.int32) + * cdef np.ndarray[np.float32_t, ndim=1] \ + * scores = dets[:, 8] # <<<<<<<<<<<<<< + * cdef np.ndarray[np.int_t, ndim=1] \ + * order = scores.argsort()[::-1] + */ + __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_dets), __pyx_tuple__2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 17, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 17, __pyx_L1_error) + __pyx_t_7 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_scores.rcbuffer->pybuffer, (PyObject*)__pyx_t_7, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + __pyx_v_scores = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_scores.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 16, __pyx_L1_error) + } else {__pyx_pybuffernd_scores.diminfo[0].strides = __pyx_pybuffernd_scores.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_scores.diminfo[0].shape = __pyx_pybuffernd_scores.rcbuffer->pybuffer.shape[0]; + } + } + __pyx_t_7 = 0; + __pyx_v_scores = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "poly_nms.pyx":19 + * scores = dets[:, 8] + * cdef np.ndarray[np.int_t, ndim=1] \ + * order = scores.argsort()[::-1] # <<<<<<<<<<<<<< + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_scores), __pyx_n_s_argsort); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = NULL; + if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { + __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); + if (likely(__pyx_t_3)) { + PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); + __Pyx_INCREF(__pyx_t_3); + __Pyx_INCREF(function); + __Pyx_DECREF_SET(__pyx_t_1, function); + } + } + if (__pyx_t_3) { + __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_3); if 
(unlikely(!__pyx_t_5)) __PYX_ERR(0, 19, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + } else { + __pyx_t_5 = __Pyx_PyObject_CallNoArg(__pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 19, __pyx_L1_error) + } + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_GetItem(__pyx_t_5, __pyx_slice__3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 19, __pyx_L1_error) + __pyx_t_8 = ((PyArrayObject *)__pyx_t_1); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_order.rcbuffer->pybuffer, (PyObject*)__pyx_t_8, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + __pyx_v_order = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_order.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 18, __pyx_L1_error) + } else {__pyx_pybuffernd_order.diminfo[0].strides = __pyx_pybuffernd_order.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_order.diminfo[0].shape = __pyx_pybuffernd_order.rcbuffer->pybuffer.shape[0]; + } + } + __pyx_t_8 = 0; + __pyx_v_order = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "poly_nms.pyx":21 + * order = scores.argsort()[::-1] + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] # <<<<<<<<<<<<<< + * _poly_nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + * keep = keep[:num_out] + */ + __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 21, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_v_order)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_order)); + PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_v_order)); + __Pyx_INCREF(__pyx_slice__4); + __Pyx_GIVEREF(__pyx_slice__4); + PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_slice__4); + __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_dets), __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 21, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 21, __pyx_L1_error) + __pyx_t_9 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer, (PyObject*)__pyx_t_9, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { + __pyx_v_sorted_dets = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 20, __pyx_L1_error) + } else {__pyx_pybuffernd_sorted_dets.diminfo[0].strides = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_sorted_dets.diminfo[0].shape = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_sorted_dets.diminfo[1].strides = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_sorted_dets.diminfo[1].shape = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.shape[1]; + } + } + __pyx_t_9 = 0; + __pyx_v_sorted_dets = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "poly_nms.pyx":22 + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] + * _poly_nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, 
boxes_dim, thresh, device_id) # <<<<<<<<<<<<<< + * keep = keep[:num_out] + * return list(order[keep]) + */ + __pyx_t_10 = 0; + __pyx_t_11 = -1; + if (__pyx_t_10 < 0) { + __pyx_t_10 += __pyx_pybuffernd_keep.diminfo[0].shape; + if (unlikely(__pyx_t_10 < 0)) __pyx_t_11 = 0; + } else if (unlikely(__pyx_t_10 >= __pyx_pybuffernd_keep.diminfo[0].shape)) __pyx_t_11 = 0; + if (unlikely(__pyx_t_11 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_11); + __PYX_ERR(0, 22, __pyx_L1_error) + } + __pyx_t_12 = 0; + __pyx_t_13 = 0; + __pyx_t_11 = -1; + if (__pyx_t_12 < 0) { + __pyx_t_12 += __pyx_pybuffernd_sorted_dets.diminfo[0].shape; + if (unlikely(__pyx_t_12 < 0)) __pyx_t_11 = 0; + } else if (unlikely(__pyx_t_12 >= __pyx_pybuffernd_sorted_dets.diminfo[0].shape)) __pyx_t_11 = 0; + if (__pyx_t_13 < 0) { + __pyx_t_13 += __pyx_pybuffernd_sorted_dets.diminfo[1].shape; + if (unlikely(__pyx_t_13 < 0)) __pyx_t_11 = 1; + } else if (unlikely(__pyx_t_13 >= __pyx_pybuffernd_sorted_dets.diminfo[1].shape)) __pyx_t_11 = 1; + if (unlikely(__pyx_t_11 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_11); + __PYX_ERR(0, 22, __pyx_L1_error) + } + __pyx_t_14 = __pyx_PyFloat_AsFloat(__pyx_v_thresh); if (unlikely((__pyx_t_14 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 22, __pyx_L1_error) + _poly_nms((&(*__Pyx_BufPtrStrided1d(__pyx_t_5numpy_int32_t *, __pyx_pybuffernd_keep.rcbuffer->pybuffer.buf, __pyx_t_10, __pyx_pybuffernd_keep.diminfo[0].strides))), (&__pyx_v_num_out), (&(*__Pyx_BufPtrStrided2d(__pyx_t_5numpy_float32_t *, __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.buf, __pyx_t_12, __pyx_pybuffernd_sorted_dets.diminfo[0].strides, __pyx_t_13, __pyx_pybuffernd_sorted_dets.diminfo[1].strides))), __pyx_v_boxes_num, __pyx_v_boxes_dim, __pyx_t_14, __pyx_v_device_id); + + /* "poly_nms.pyx":23 + * sorted_dets = dets[order, :] + * _poly_nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + * keep = keep[:num_out] # <<<<<<<<<<<<<< + * return list(order[keep]) + */ + __pyx_t_5 = __Pyx_PyObject_GetSlice(((PyObject *)__pyx_v_keep), 0, __pyx_v_num_out, NULL, NULL, NULL, 0, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 23, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 23, __pyx_L1_error) + __pyx_t_6 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_keep.rcbuffer->pybuffer); + __pyx_t_11 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_keep.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack); + if (unlikely(__pyx_t_11 < 0)) { + PyErr_Fetch(&__pyx_t_15, &__pyx_t_16, &__pyx_t_17); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_keep.rcbuffer->pybuffer, (PyObject*)__pyx_v_keep, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_15); Py_XDECREF(__pyx_t_16); Py_XDECREF(__pyx_t_17); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_15, __pyx_t_16, __pyx_t_17); + } + } + __pyx_pybuffernd_keep.diminfo[0].strides = __pyx_pybuffernd_keep.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_keep.diminfo[0].shape = __pyx_pybuffernd_keep.rcbuffer->pybuffer.shape[0]; + if (unlikely(__pyx_t_11 < 0)) __PYX_ERR(0, 23, __pyx_L1_error) + } + __pyx_t_6 = 0; + __Pyx_DECREF_SET(__pyx_v_keep, ((PyArrayObject *)__pyx_t_5)); + __pyx_t_5 = 0; + + /* "poly_nms.pyx":24 
+ * _poly_nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + * keep = keep[:num_out] + * return list(order[keep]) # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_order), ((PyObject *)__pyx_v_keep)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 24, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_1 = PySequence_List(__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 24, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "poly_nms.pyx":9 + * void _poly_nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def poly_gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dets.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_keep.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_order.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_scores.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("poly_nms.poly_gpu_nms", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dets.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_keep.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_order.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_scores.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer); + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_keep); + __Pyx_XDECREF((PyObject *)__pyx_v_scores); + __Pyx_XDECREF((PyObject *)__pyx_v_order); + __Pyx_XDECREF((PyObject *)__pyx_v_sorted_dets); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":197 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. 
+ */ + +/* Python wrapper */ +static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ +static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_r; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); + __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_v_copy_shape; + int __pyx_v_i; + int __pyx_v_ndim; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + int __pyx_v_t; + char *__pyx_v_f; + PyArray_Descr *__pyx_v_descr = 0; + int __pyx_v_offset; + int __pyx_v_hasfields; + int __pyx_r; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + PyObject *__pyx_t_6 = NULL; + char *__pyx_t_7; + __Pyx_RefNannySetupContext("__getbuffer__", 0); + if (__pyx_v_info != NULL) { + __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(__pyx_v_info->obj); + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":203 + * # of flags + * + * if info == NULL: return # <<<<<<<<<<<<<< + * + * cdef int copy_shape, i, ndim + */ + __pyx_t_1 = ((__pyx_v_info == NULL) != 0); + if (__pyx_t_1) { + __pyx_r = 0; + goto __pyx_L0; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":206 + * + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + */ + __pyx_v_endian_detector = 1; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":207 + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * + * ndim = PyArray_NDIM(self) + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":209 + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<< + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_v_ndim = PyArray_NDIM(__pyx_v_self); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":211 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":212 + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * copy_shape = 1 # <<<<<<<<<<<<<< + * else: + * copy_shape = 0 + */ + __pyx_v_copy_shape = 1; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":211 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + goto __pyx_L4; + } + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":214 + * copy_shape = 1 + * else: + * copy_shape = 0 # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + */ + /*else*/ { + __pyx_v_copy_shape = 0; + } + __pyx_L4:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L6_bool_binop_done; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":217 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not C contiguous") + * + */ + __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L6_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":218 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 218, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 218, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L9_bool_binop_done; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":221 + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not Fortran contiguous") + * + */ + __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0); + __pyx_t_1 
= __pyx_t_2; + __pyx_L9_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":222 + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 222, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 222, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":224 + * raise ValueError(u"ndarray is not Fortran contiguous") + * + * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<< + * info.ndim = ndim + * if copy_shape: + */ + __pyx_v_info->buf = PyArray_DATA(__pyx_v_self); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":225 + * + * info.buf = PyArray_DATA(self) + * info.ndim = ndim # <<<<<<<<<<<<<< + * if copy_shape: + * # Allocate new buffer for strides and shape info. + */ + __pyx_v_info->ndim = __pyx_v_ndim; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":226 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + */ + __pyx_t_1 = (__pyx_v_copy_shape != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":229 + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) # <<<<<<<<<<<<<< + * info.shape = info.strides + ndim + * for i in range(ndim): + */ + __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * ((size_t)__pyx_v_ndim)) * 2))); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":230 + * # This is allocated as one block, strides first. 
+ * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim # <<<<<<<<<<<<<< + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + */ + __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":231 + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim + * for i in range(ndim): # <<<<<<<<<<<<<< + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] + */ + __pyx_t_4 = __pyx_v_ndim; + for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { + __pyx_v_i = __pyx_t_5; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":232 + * info.shape = info.strides + ndim + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<< + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + */ + (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":233 + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<< + * else: + * info.strides = PyArray_STRIDES(self) + */ + (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]); + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":226 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + */ + goto __pyx_L11; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":235 + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<< + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + */ + /*else*/ { + __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self)); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":236 + * else: + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<< + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + */ + __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self)); + } + __pyx_L11:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":237 + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL # <<<<<<<<<<<<<< + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) + */ + __pyx_v_info->suboffsets = NULL; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":238 + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<< + * info.readonly = not PyArray_ISWRITEABLE(self) + * + */ + __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":239 + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<< + * + * cdef int t + */ + __pyx_v_info->readonly = 
(!(PyArray_ISWRITEABLE(__pyx_v_self) != 0)); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":242 + * + * cdef int t + * cdef char* f = NULL # <<<<<<<<<<<<<< + * cdef dtype descr = self.descr + * cdef int offset + */ + __pyx_v_f = NULL; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":243 + * cdef int t + * cdef char* f = NULL + * cdef dtype descr = self.descr # <<<<<<<<<<<<<< + * cdef int offset + * + */ + __pyx_t_3 = ((PyObject *)__pyx_v_self->descr); + __Pyx_INCREF(__pyx_t_3); + __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3); + __pyx_t_3 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":246 + * cdef int offset + * + * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<< + * + * if not hasfields and not copy_shape: + */ + __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":248 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + __pyx_t_2 = ((!(__pyx_v_hasfields != 0)) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L15_bool_binop_done; + } + __pyx_t_2 = ((!(__pyx_v_copy_shape != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L15_bool_binop_done:; + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":250 + * if not hasfields and not copy_shape: + * # do not call releasebuffer + * info.obj = None # <<<<<<<<<<<<<< + * else: + * # need to call releasebuffer + */ + __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(Py_None); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = Py_None; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":248 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + goto __pyx_L14; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":253 + * else: + * # need to call releasebuffer + * info.obj = self # <<<<<<<<<<<<<< + * + * if not hasfields: + */ + /*else*/ { + __Pyx_INCREF(((PyObject *)__pyx_v_self)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = ((PyObject *)__pyx_v_self); + } + __pyx_L14:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":255 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + */ + __pyx_t_1 = ((!(__pyx_v_hasfields != 0)) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":256 + * + * if not hasfields: + * t = descr.type_num # <<<<<<<<<<<<<< + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + */ + __pyx_t_4 = __pyx_v_descr->type_num; + __pyx_v_t = __pyx_t_4; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or 
# <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0); + if (!__pyx_t_2) { + goto __pyx_L20_next_or; + } else { + } + __pyx_t_2 = (__pyx_v_little_endian != 0); + if (!__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L19_bool_binop_done; + } + __pyx_L20_next_or:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":258 + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + */ + __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L19_bool_binop_done; + } + __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L19_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":259 + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 259, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 259, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":260 + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + */ + switch (__pyx_v_t) { + case NPY_BYTE: + __pyx_v_f = ((char *)"b"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":261 + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + */ + case NPY_UBYTE: + __pyx_v_f = ((char *)"B"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":262 + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + */ + case NPY_SHORT: + __pyx_v_f = ((char *)"h"); + break; + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":263 + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + */ + case NPY_USHORT: + __pyx_v_f = ((char *)"H"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":264 + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + */ + case NPY_INT: + __pyx_v_f = ((char *)"i"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":265 + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + */ + case NPY_UINT: + __pyx_v_f = ((char *)"I"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":266 + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + */ + case NPY_LONG: + __pyx_v_f = ((char *)"l"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":267 + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + */ + case NPY_ULONG: + __pyx_v_f = ((char *)"L"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":268 + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + */ + case NPY_LONGLONG: + __pyx_v_f = ((char *)"q"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":269 + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + */ + case NPY_ULONGLONG: + __pyx_v_f = ((char *)"Q"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":270 + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + */ + case NPY_FLOAT: + __pyx_v_f = ((char *)"f"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":271 + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + */ + case NPY_DOUBLE: + __pyx_v_f = ((char *)"d"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":272 + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + */ + case NPY_LONGDOUBLE: + __pyx_v_f = ((char *)"g"); + break; + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":273 + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + */ + case NPY_CFLOAT: + __pyx_v_f = ((char *)"Zf"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":274 + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" + */ + case NPY_CDOUBLE: + __pyx_v_f = ((char *)"Zd"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":275 + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f = "O" + * else: + */ + case NPY_CLONGDOUBLE: + __pyx_v_f = ((char *)"Zg"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":276 + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + case NPY_OBJECT: + __pyx_v_f = ((char *)"O"); + break; + default: + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":278 + * elif t == NPY_OBJECT: f = "O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * info.format = f + * return + */ + __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_6 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_6); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); + __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_6, 0, 0, 0); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __PYX_ERR(1, 278, __pyx_L1_error) + break; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":279 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f # <<<<<<<<<<<<<< + * return + * else: + */ + __pyx_v_info->format = __pyx_v_f; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":280 + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f + * return # <<<<<<<<<<<<<< + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + */ + __pyx_r = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":255 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + */ + } + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":282 + * return + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) # <<<<<<<<<<<<<< + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 + */ + /*else*/ { + __pyx_v_info->format = ((char *)malloc(0xFF)); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":283 + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = c'^' # Native data types, manual alignment # <<<<<<<<<<<<<< + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, + */ + (__pyx_v_info->format[0]) = '^'; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":284 + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 # <<<<<<<<<<<<<< + * f = _util_dtypestring(descr, info.format + 1, + * info.format + _buffer_format_string_len, + */ + __pyx_v_offset = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":285 + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, # <<<<<<<<<<<<<< + * info.format + _buffer_format_string_len, + * &offset) + */ + __pyx_t_7 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_7 == NULL)) __PYX_ERR(1, 285, __pyx_L1_error) + __pyx_v_f = __pyx_t_7; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":288 + * info.format + _buffer_format_string_len, + * &offset) + * f[0] = c'\0' # Terminate format string # <<<<<<<<<<<<<< + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + */ + (__pyx_v_f[0]) = '\x00'; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":197 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. 
+ */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_AddTraceback("numpy.ndarray.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + if (__pyx_v_info != NULL && __pyx_v_info->obj != NULL) { + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL; + } + goto __pyx_L2; + __pyx_L0:; + if (__pyx_v_info != NULL && __pyx_v_info->obj == Py_None) { + __Pyx_GOTREF(Py_None); + __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL; + } + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_descr); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":290 + * f[0] = c'\0' # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + +/* Python wrapper */ +static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/ +static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__releasebuffer__ (wrapper)", 0); + __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info)); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) { + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("__releasebuffer__", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":291 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":292 + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) # <<<<<<<<<<<<<< + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) + */ + free(__pyx_v_info->format); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":291 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":293 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":294 + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) # <<<<<<<<<<<<<< + * # info.shape was stored after info.strides in the same block + * + */ + free(__pyx_v_info->strides); + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":293 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":290 + * f[0] = c'\0' # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":770 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew1", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":771 + * + * cdef inline object PyArray_MultiIterNew1(a): + * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew2(a, b): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 771, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":770 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew1", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":773 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew2", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":774 + * + * cdef inline object PyArray_MultiIterNew2(a, b): + * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 774, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":773 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + + 
/* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":776 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew3", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":777 + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 777, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":776 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew3", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":779 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew4", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":780 + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 780, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":779 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew4", 
__pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":782 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew5", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":783 + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 783, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":782 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew5", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":785 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. The new location in the format string is returned. 
+ */ + +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) { + PyArray_Descr *__pyx_v_child = 0; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + PyObject *__pyx_v_fields = 0; + PyObject *__pyx_v_childname = NULL; + PyObject *__pyx_v_new_offset = NULL; + PyObject *__pyx_v_t = NULL; + char *__pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + Py_ssize_t __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + int __pyx_t_5; + int __pyx_t_6; + int __pyx_t_7; + long __pyx_t_8; + char *__pyx_t_9; + __Pyx_RefNannySetupContext("_util_dtypestring", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":790 + * + * cdef dtype child + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * cdef tuple fields + */ + __pyx_v_endian_detector = 1; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":791 + * cdef dtype child + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * cdef tuple fields + * + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":794 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + if (unlikely(__pyx_v_descr->names == Py_None)) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); + __PYX_ERR(1, 794, __pyx_L1_error) + } + __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; + for (;;) { + if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; + #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS + __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 794, __pyx_L1_error) + #else + __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 794, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + #endif + __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3); + __pyx_t_3 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":795 + * + * for childname in descr.names: + * fields = descr.fields[childname] # <<<<<<<<<<<<<< + * child, new_offset = fields + * + */ + if (unlikely(__pyx_v_descr->fields == Py_None)) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); + __PYX_ERR(1, 795, __pyx_L1_error) + } + __pyx_t_3 = __Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 795, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(1, 795, __pyx_L1_error) + __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3)); + __pyx_t_3 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":796 + * for childname in descr.names: + * fields = descr.fields[childname] + * child, new_offset = fields # <<<<<<<<<<<<<< + * + * if (end - f) - (new_offset - offset[0]) < 15: + */ + if (likely(__pyx_v_fields != Py_None)) { + PyObject* sequence = 
__pyx_v_fields; + #if !CYTHON_COMPILING_IN_PYPY + Py_ssize_t size = Py_SIZE(sequence); + #else + Py_ssize_t size = PySequence_Size(sequence); + #endif + if (unlikely(size != 2)) { + if (size > 2) __Pyx_RaiseTooManyValuesError(2); + else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); + __PYX_ERR(1, 796, __pyx_L1_error) + } + #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS + __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); + __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); + __Pyx_INCREF(__pyx_t_3); + __Pyx_INCREF(__pyx_t_4); + #else + __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + #endif + } else { + __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 796, __pyx_L1_error) + } + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_XDECREF_SET(__pyx_v_child, ((PyArray_Descr *)__pyx_t_3)); + __pyx_t_3 = 0; + __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4); + __pyx_t_4 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":798 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0); + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":799 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + * + * if ((child.byteorder == c'>' and little_endian) or + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 799, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 799, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":798 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0); + if (!__pyx_t_7) { + goto __pyx_L8_next_or; + } else { + } + __pyx_t_7 = 
(__pyx_v_little_endian != 0); + if (!__pyx_t_7) { + } else { + __pyx_t_6 = __pyx_t_7; + goto __pyx_L7_bool_binop_done; + } + __pyx_L8_next_or:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":802 + * + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * # One could encode it in the format string and have Cython + */ + __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0); + if (__pyx_t_7) { + } else { + __pyx_t_6 = __pyx_t_7; + goto __pyx_L7_bool_binop_done; + } + __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0); + __pyx_t_6 = __pyx_t_7; + __pyx_L7_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":803 + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 803, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 803, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":813 + * + * # Output padding bytes + * while offset[0] < new_offset: # <<<<<<<<<<<<<< + * f[0] = 120 # "x"; pad byte + * f += 1 + */ + while (1) { + __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!__pyx_t_6) break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":814 + * # Output padding bytes + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<< + * f += 1 + * offset[0] += 1 + */ + (__pyx_v_f[0]) = 0x78; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":815 + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte + * f += 1 # <<<<<<<<<<<<<< + * offset[0] += 1 + * + */ + 
__pyx_v_f = (__pyx_v_f + 1); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":816 + * f[0] = 120 # "x"; pad byte + * f += 1 + * offset[0] += 1 # <<<<<<<<<<<<<< + * + * offset[0] += child.itemsize + */ + __pyx_t_8 = 0; + (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1); + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":818 + * offset[0] += 1 + * + * offset[0] += child.itemsize # <<<<<<<<<<<<<< + * + * if not PyDataType_HASFIELDS(child): + */ + __pyx_t_8 = 0; + (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":820 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + __pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0); + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":821 + * + * if not PyDataType_HASFIELDS(child): + * t = child.type_num # <<<<<<<<<<<<<< + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") + */ + __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 821, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4); + __pyx_t_4 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":822 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0); + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":823 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 823, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_Raise(__pyx_t_4, 0, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __PYX_ERR(1, 823, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":822 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":826 + * + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 98; + 
goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":827 + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 66; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":828 + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x68; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":829 + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 72; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":830 + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x69; 
+ goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":831 + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 73; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":832 + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x6C; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":833 + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 76; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":834 + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x71; 
+ goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":835 + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 81; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":836 + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x66; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":837 + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x64; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":838 + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 838, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 838, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 
838, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x67; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":839 + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x66; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":840 + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x64; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":841 + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x67; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":842 + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + __pyx_t_4 = 
__Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 79; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":844 + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * f += 1 + * else: + */ + /*else*/ { + __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 844, __pyx_L1_error) + } + __pyx_L15:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":845 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * f += 1 # <<<<<<<<<<<<<< + * else: + * # Cython ignores struct boundary information ("T{...}"), + */ + __pyx_v_f = (__pyx_v_f + 1); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":820 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + goto __pyx_L13; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":849 + * # Cython ignores struct boundary information ("T{...}"), + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<< + * return f + * + */ + /*else*/ { + __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == NULL)) __PYX_ERR(1, 849, __pyx_L1_error) + __pyx_v_f = __pyx_t_9; + } + __pyx_L13:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":794 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":850 + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) + * return f # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = __pyx_v_f; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":785 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. 
The new location in the format string is returned. + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("numpy._util_dtypestring", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XDECREF((PyObject *)__pyx_v_child); + __Pyx_XDECREF(__pyx_v_fields); + __Pyx_XDECREF(__pyx_v_childname); + __Pyx_XDECREF(__pyx_v_new_offset); + __Pyx_XDECREF(__pyx_v_t); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":966 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) { + PyObject *__pyx_v_baseptr; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + int __pyx_t_2; + __Pyx_RefNannySetupContext("set_array_base", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":968 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + __pyx_t_1 = (__pyx_v_base == Py_None); + __pyx_t_2 = (__pyx_t_1 != 0); + if (__pyx_t_2) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":969 + * cdef PyObject* baseptr + * if base is None: + * baseptr = NULL # <<<<<<<<<<<<<< + * else: + * Py_INCREF(base) # important to do this before decref below! + */ + __pyx_v_baseptr = NULL; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":968 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + goto __pyx_L3; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":971 + * baseptr = NULL + * else: + * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<< + * baseptr = base + * Py_XDECREF(arr.base) + */ + /*else*/ { + Py_INCREF(__pyx_v_base); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":972 + * else: + * Py_INCREF(base) # important to do this before decref below! + * baseptr = base # <<<<<<<<<<<<<< + * Py_XDECREF(arr.base) + * arr.base = baseptr + */ + __pyx_v_baseptr = ((PyObject *)__pyx_v_base); + } + __pyx_L3:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":973 + * Py_INCREF(base) # important to do this before decref below! 
+ * baseptr = base + * Py_XDECREF(arr.base) # <<<<<<<<<<<<<< + * arr.base = baseptr + * + */ + Py_XDECREF(__pyx_v_arr->base); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":974 + * baseptr = base + * Py_XDECREF(arr.base) + * arr.base = baseptr # <<<<<<<<<<<<<< + * + * cdef inline object get_array_base(ndarray arr): + */ + __pyx_v_arr->base = __pyx_v_baseptr; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":966 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":976 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("get_array_base", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":977 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + __pyx_t_1 = ((__pyx_v_arr->base == NULL) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":978 + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: + * return None # <<<<<<<<<<<<<< + * else: + * return arr.base + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(Py_None); + __pyx_r = Py_None; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":977 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":980 + * return None + * else: + * return arr.base # <<<<<<<<<<<<<< + * + * + */ + /*else*/ { + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_arr->base)); + __pyx_r = ((PyObject *)__pyx_v_arr->base); + goto __pyx_L0; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":976 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + + /* function exit code */ + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":985 + * # Versions of the import_* functions which are more suitable for + * # Cython code. 
+ * cdef inline int import_array() except -1: # <<<<<<<<<<<<<< + * try: + * _import_array() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_array(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_array", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":986 + * # Cython code. + * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * _import_array() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":987 + * cdef inline int import_array() except -1: + * try: + * _import_array() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") + */ + __pyx_t_4 = _import_array(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 987, __pyx_L3_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":986 + * # Cython code. + * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * _import_array() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto __pyx_L10_try_end; + __pyx_L3_error:; + __Pyx_PyThreadState_assign + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":988 + * try: + * _import_array() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.multiarray failed to import") + * + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_array", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 988, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":989 + * _import_array() + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_umath() except -1: + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 989, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 989, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":986 + * # Cython code. 
+ * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * _import_array() + * except Exception: + */ + __Pyx_PyThreadState_assign + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L10_try_end:; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":985 + * # Versions of the import_* functions which are more suitable for + * # Cython code. + * cdef inline int import_array() except -1: # <<<<<<<<<<<<<< + * try: + * _import_array() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_array", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":991 + * raise ImportError("numpy.core.multiarray failed to import") + * + * cdef inline int import_umath() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_umath", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":992 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":993 + * cdef inline int import_umath() except -1: + * try: + * _import_umath() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 993, __pyx_L3_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":992 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto __pyx_L10_try_end; + __pyx_L3_error:; + __Pyx_PyThreadState_assign + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":994 + * try: + * _import_umath() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.umath failed to import") + * + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_umath", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 994, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":995 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_ufunc() except -1: + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 995, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 995, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":992 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + __Pyx_PyThreadState_assign + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L10_try_end:; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":991 + * raise ImportError("numpy.core.multiarray failed to import") + * + * cdef inline int import_umath() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_umath", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":997 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_ufunc", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":998 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":999 + * cdef inline int import_ufunc() except -1: + * try: + * _import_umath() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 999, __pyx_L3_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":998 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + 
goto __pyx_L10_try_end; + __pyx_L3_error:; + __Pyx_PyThreadState_assign + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":1000 + * try: + * _import_umath() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_ufunc", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 1000, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":1001 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 1001, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 1001, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":998 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + __Pyx_PyThreadState_assign + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L10_try_end:; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":997 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_ufunc", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyMethodDef __pyx_methods[] = { + {0, 0, 0, 0} +}; + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + #if PY_VERSION_HEX < 0x03020000 + { PyObject_HEAD_INIT(NULL) NULL, 0, NULL }, + #else + PyModuleDef_HEAD_INIT, + #endif + "poly_nms", + 0, /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_u_Format_string_allocated_too_shor, __pyx_k_Format_string_allocated_too_shor, sizeof(__pyx_k_Format_string_allocated_too_shor), 0, 1, 0, 0}, + {&__pyx_kp_u_Format_string_allocated_too_shor_2, __pyx_k_Format_string_allocated_too_shor_2, sizeof(__pyx_k_Format_string_allocated_too_shor_2), 0, 1, 0, 0}, + {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, + {&__pyx_kp_u_Non_native_byte_order_not_suppor, __pyx_k_Non_native_byte_order_not_suppor, sizeof(__pyx_k_Non_native_byte_order_not_suppor), 0, 1, 0, 0}, + {&__pyx_n_s_RuntimeError, __pyx_k_RuntimeError, sizeof(__pyx_k_RuntimeError), 0, 0, 1, 1}, + {&__pyx_n_s_ValueError, 
__pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, + {&__pyx_n_s_argsort, __pyx_k_argsort, sizeof(__pyx_k_argsort), 0, 0, 1, 1}, + {&__pyx_n_s_boxes_dim, __pyx_k_boxes_dim, sizeof(__pyx_k_boxes_dim), 0, 0, 1, 1}, + {&__pyx_n_s_boxes_num, __pyx_k_boxes_num, sizeof(__pyx_k_boxes_num), 0, 0, 1, 1}, + {&__pyx_n_s_dets, __pyx_k_dets, sizeof(__pyx_k_dets), 0, 0, 1, 1}, + {&__pyx_n_s_device_id, __pyx_k_device_id, sizeof(__pyx_k_device_id), 0, 0, 1, 1}, + {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1}, + {&__pyx_kp_s_home_dingjian_code_DOTA_devkit, __pyx_k_home_dingjian_code_DOTA_devkit, sizeof(__pyx_k_home_dingjian_code_DOTA_devkit), 0, 0, 1, 0}, + {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, + {&__pyx_n_s_int32, __pyx_k_int32, sizeof(__pyx_k_int32), 0, 0, 1, 1}, + {&__pyx_n_s_keep, __pyx_k_keep, sizeof(__pyx_k_keep), 0, 0, 1, 1}, + {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, + {&__pyx_kp_u_ndarray_is_not_C_contiguous, __pyx_k_ndarray_is_not_C_contiguous, sizeof(__pyx_k_ndarray_is_not_C_contiguous), 0, 1, 0, 0}, + {&__pyx_kp_u_ndarray_is_not_Fortran_contiguou, __pyx_k_ndarray_is_not_Fortran_contiguou, sizeof(__pyx_k_ndarray_is_not_Fortran_contiguou), 0, 1, 0, 0}, + {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, + {&__pyx_n_s_num_out, __pyx_k_num_out, sizeof(__pyx_k_num_out), 0, 0, 1, 1}, + {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1}, + {&__pyx_kp_s_numpy_core_multiarray_failed_to, __pyx_k_numpy_core_multiarray_failed_to, sizeof(__pyx_k_numpy_core_multiarray_failed_to), 0, 0, 1, 0}, + {&__pyx_kp_s_numpy_core_umath_failed_to_impor, __pyx_k_numpy_core_umath_failed_to_impor, sizeof(__pyx_k_numpy_core_umath_failed_to_impor), 0, 0, 1, 0}, + {&__pyx_n_s_order, __pyx_k_order, sizeof(__pyx_k_order), 0, 0, 1, 1}, + {&__pyx_n_s_poly_gpu_nms, __pyx_k_poly_gpu_nms, sizeof(__pyx_k_poly_gpu_nms), 0, 0, 1, 1}, + {&__pyx_n_s_poly_nms, __pyx_k_poly_nms, sizeof(__pyx_k_poly_nms), 0, 0, 1, 1}, + {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, + {&__pyx_n_s_scores, __pyx_k_scores, sizeof(__pyx_k_scores), 0, 0, 1, 1}, + {&__pyx_n_s_sorted_dets, __pyx_k_sorted_dets, sizeof(__pyx_k_sorted_dets), 0, 0, 1, 1}, + {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, + {&__pyx_n_s_thresh, __pyx_k_thresh, sizeof(__pyx_k_thresh), 0, 0, 1, 1}, + {&__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_k_unknown_dtype_code_in_numpy_pxd, sizeof(__pyx_k_unknown_dtype_code_in_numpy_pxd), 0, 1, 0, 0}, + {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 218, __pyx_L1_error) + __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(1, 231, __pyx_L1_error) + __pyx_builtin_RuntimeError = __Pyx_GetBuiltinName(__pyx_n_s_RuntimeError); if (!__pyx_builtin_RuntimeError) __PYX_ERR(1, 799, __pyx_L1_error) + __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(1, 989, __pyx_L1_error) + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitCachedConstants(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); + + /* "poly_nms.pyx":17 + * keep = np.zeros(boxes_num, dtype=np.int32) + * cdef np.ndarray[np.float32_t, ndim=1] \ + * 
scores = dets[:, 8] # <<<<<<<<<<<<<< + * cdef np.ndarray[np.int_t, ndim=1] \ + * order = scores.argsort()[::-1] + */ + __pyx_slice_ = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice_)) __PYX_ERR(0, 17, __pyx_L1_error) + __Pyx_GOTREF(__pyx_slice_); + __Pyx_GIVEREF(__pyx_slice_); + __pyx_tuple__2 = PyTuple_Pack(2, __pyx_slice_, __pyx_int_8); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(0, 17, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__2); + __Pyx_GIVEREF(__pyx_tuple__2); + + /* "poly_nms.pyx":19 + * scores = dets[:, 8] + * cdef np.ndarray[np.int_t, ndim=1] \ + * order = scores.argsort()[::-1] # <<<<<<<<<<<<<< + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] + */ + __pyx_slice__3 = PySlice_New(Py_None, Py_None, __pyx_int_neg_1); if (unlikely(!__pyx_slice__3)) __PYX_ERR(0, 19, __pyx_L1_error) + __Pyx_GOTREF(__pyx_slice__3); + __Pyx_GIVEREF(__pyx_slice__3); + + /* "poly_nms.pyx":21 + * order = scores.argsort()[::-1] + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] # <<<<<<<<<<<<<< + * _poly_nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + * keep = keep[:num_out] + */ + __pyx_slice__4 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__4)) __PYX_ERR(0, 21, __pyx_L1_error) + __Pyx_GOTREF(__pyx_slice__4); + __Pyx_GIVEREF(__pyx_slice__4); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":218 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_C_contiguous); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 218, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__5); + __Pyx_GIVEREF(__pyx_tuple__5); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":222 + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_Fortran_contiguou); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 222, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__6); + __Pyx_GIVEREF(__pyx_tuple__6); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":259 + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 259, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__7); + __Pyx_GIVEREF(__pyx_tuple__7); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":799 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + * + * if ((child.byteorder == c'>' and little_endian) or + */ + __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 799, 
__pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__8); + __Pyx_GIVEREF(__pyx_tuple__8); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":803 + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 803, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__9); + __Pyx_GIVEREF(__pyx_tuple__9); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":823 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor_2); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 823, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__10); + __Pyx_GIVEREF(__pyx_tuple__10); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":989 + * _import_array() + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_umath() except -1: + */ + __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_multiarray_failed_to); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 989, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__11); + __Pyx_GIVEREF(__pyx_tuple__11); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":995 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_ufunc() except -1: + */ + __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 995, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__12); + __Pyx_GIVEREF(__pyx_tuple__12); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":1001 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + */ + __pyx_tuple__13 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 1001, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__13); + __Pyx_GIVEREF(__pyx_tuple__13); + + /* "poly_nms.pyx":9 + * void _poly_nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def poly_gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + __pyx_tuple__14 = PyTuple_Pack(10, __pyx_n_s_dets, __pyx_n_s_thresh, __pyx_n_s_device_id, __pyx_n_s_boxes_num, __pyx_n_s_boxes_dim, __pyx_n_s_num_out, __pyx_n_s_keep, __pyx_n_s_scores, __pyx_n_s_order, __pyx_n_s_sorted_dets); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 9, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__14); + __Pyx_GIVEREF(__pyx_tuple__14); + __pyx_codeobj__15 = (PyObject*)__Pyx_PyCode_New(3, 0, 10, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__14, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_dingjian_code_DOTA_devkit, __pyx_n_s_poly_gpu_nms, 9, 
__pyx_empty_bytes); if (unlikely(!__pyx_codeobj__15)) __PYX_ERR(0, 9, __pyx_L1_error) + __Pyx_RefNannyFinishContext(); + return 0; + __pyx_L1_error:; + __Pyx_RefNannyFinishContext(); + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + __pyx_int_8 = PyInt_FromLong(8); if (unlikely(!__pyx_int_8)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initpoly_nms(void); /*proto*/ +PyMODINIT_FUNC initpoly_nms(void) +#else +PyMODINIT_FUNC PyInit_poly_nms(void); /*proto*/ +PyMODINIT_FUNC PyInit_poly_nms(void) +#endif +{ + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannyDeclarations + #if CYTHON_REFNANNY + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + #endif + __Pyx_RefNannySetupContext("PyMODINIT_FUNC PyInit_poly_nms(void)", 0); + if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) + #ifdef __Pyx_CyFunction_USED + if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_FusedFunction_USED + if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Coroutine_USED + if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Generator_USED + if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_StopAsyncIteration_USED + if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? */ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4("poly_nms", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) + Py_INCREF(__pyx_d); + __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) + #if CYTHON_COMPILING_IN_PYPY + Py_INCREF(__pyx_b); + #endif + if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + /*--- Initialize various global constants etc. 
---*/ + if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) + if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + if (__pyx_module_is_main_poly_nms) { + if (PyObject_SetAttrString(__pyx_m, "__name__", __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + } + #if PY_MAJOR_VERSION >= 3 + { + PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) + if (!PyDict_GetItemString(modules, "poly_nms")) { + if (unlikely(PyDict_SetItemString(modules, "poly_nms", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) + } + } + #endif + /*--- Builtin init code ---*/ + if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + /*--- Constants init code ---*/ + if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + /*--- Global init code ---*/ + /*--- Variable export code ---*/ + /*--- Function export code ---*/ + /*--- Type init code ---*/ + /*--- Type import code ---*/ + __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, "type", + #if CYTHON_COMPILING_IN_PYPY + sizeof(PyTypeObject), + #else + sizeof(PyHeapTypeObject), + #endif + 0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error) + __pyx_ptype_5numpy_dtype = __Pyx_ImportType("numpy", "dtype", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) __PYX_ERR(1, 155, __pyx_L1_error) + __pyx_ptype_5numpy_flatiter = __Pyx_ImportType("numpy", "flatiter", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) __PYX_ERR(1, 168, __pyx_L1_error) + __pyx_ptype_5numpy_broadcast = __Pyx_ImportType("numpy", "broadcast", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) __PYX_ERR(1, 172, __pyx_L1_error) + __pyx_ptype_5numpy_ndarray = __Pyx_ImportType("numpy", "ndarray", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) __PYX_ERR(1, 181, __pyx_L1_error) + __pyx_ptype_5numpy_ufunc = __Pyx_ImportType("numpy", "ufunc", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) __PYX_ERR(1, 861, __pyx_L1_error) + /*--- Variable import code ---*/ + /*--- Function import code ---*/ + /*--- Execution code ---*/ + #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) + if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + + /* "poly_nms.pyx":1 + * import numpy as np # <<<<<<<<<<<<<< + * cimport numpy as np + * + */ + __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "poly_nms.pyx":4 + * cimport numpy as np + * + * assert sizeof(int) == sizeof(np.int32_t) # <<<<<<<<<<<<<< + * + * cdef extern from "poly_nms.hpp": + */ + #ifndef CYTHON_WITHOUT_ASSERTIONS + if (unlikely(!Py_OptimizeFlag)) { + if (unlikely(!(((sizeof(int)) == (sizeof(__pyx_t_5numpy_int32_t))) != 0))) { + PyErr_SetNone(PyExc_AssertionError); + __PYX_ERR(0, 4, __pyx_L1_error) + } + } + #endif + + /* "poly_nms.pyx":9 + * void _poly_nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def poly_gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + __pyx_t_1 = 
PyCFunction_NewEx(&__pyx_mdef_8poly_nms_1poly_gpu_nms, NULL, __pyx_n_s_poly_nms); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 9, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_poly_gpu_nms, __pyx_t_1) < 0) __PYX_ERR(0, 9, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "poly_nms.pyx":1 + * import numpy as np # <<<<<<<<<<<<<< + * cimport numpy as np + * + */ + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":997 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /*--- Wrapped vars code ---*/ + + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + if (__pyx_m) { + if (__pyx_d) { + __Pyx_AddTraceback("init poly_nms", __pyx_clineno, __pyx_lineno, __pyx_filename); + } + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init poly_nms"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +/* --- Runtime support code --- */ +/* Refnanny */ +#if CYTHON_REFNANNY +static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); +end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; +} +#endif + +/* RaiseArgTupleInvalid */ +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *more_or_less; + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + PyErr_Format(PyExc_TypeError, + "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", + func_name, more_or_less, num_expected, + (num_expected == 1) ? 
"" : "s", num_found); +} + +/* RaiseDoubleKeywords */ +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword argument '%U'", func_name, kw_name); + #else + "%s() got multiple values for keyword argument '%s'", func_name, + PyString_AsString(kw_name)); + #endif +} + +/* ParseKeywords */ +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + continue; + } + name = first_kw_arg; + #if PY_MAJOR_VERSION < 3 + if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) { + while (*name) { + if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) + && _PyString_Eq(**name, key)) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + if ((**argname == key) || ( + (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) + && _PyString_Eq(**argname, key))) { + goto arg_passed_twice; + } + argname++; + } + } + } else + #endif + if (likely(PyUnicode_Check(key))) { + while (*name) { + int cmp = (**name == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : + #endif + PyUnicode_Compare(**name, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + int cmp = (**argname == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 : + #endif + PyUnicode_Compare(**argname, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) goto arg_passed_twice; + argname++; + } + } + } else + goto invalid_keyword_type; + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, key); + goto bad; +invalid_keyword_type: + PyErr_Format(PyExc_TypeError, + "%.200s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%.200s() got an unexpected keyword argument '%.200s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +/* ArgTypeTest */ +static void __Pyx_RaiseArgumentTypeInvalid(const char* name, PyObject *obj, PyTypeObject *type) { + PyErr_Format(PyExc_TypeError, + "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", + name, type->tp_name, Py_TYPE(obj)->tp_name); +} +static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact) +{ + if (unlikely(!type)) { + PyErr_SetString(PyExc_SystemError, "Missing type object"); + return 0; + } + if (none_allowed && obj == Py_None) return 1; + else if (exact) { + if (likely(Py_TYPE(obj) == type)) return 1; + #if PY_MAJOR_VERSION == 2 + else if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; + #endif + } + else { + if (likely(PyObject_TypeCheck(obj, type))) return 1; + } + __Pyx_RaiseArgumentTypeInvalid(name, obj, type); + return 0; +} + +/* BufferFormatCheck */ +static CYTHON_INLINE int __Pyx_IsLittleEndian(void) { + unsigned int n = 1; + return *(unsigned char*)(&n) != 0; +} +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type) { + stack[0].field = &ctx->root; + stack[0].parent_offset = 0; + ctx->root.type = type; + ctx->root.name = "buffer dtype"; + ctx->root.offset = 0; + ctx->head = stack; + ctx->head->field = &ctx->root; + ctx->fmt_offset = 0; + ctx->head->parent_offset = 0; + ctx->new_packmode = '@'; + ctx->enc_packmode = '@'; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->is_complex = 0; + ctx->is_valid_array = 0; + ctx->struct_alignment = 0; + while (type->typegroup == 'S') { + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = 0; + type = type->fields->type; + } +} +static int __Pyx_BufFmt_ParseNumber(const char** ts) { + int count; + const char* t = *ts; + if (*t < '0' || *t > '9') { + return -1; + } else { + count = *t++ - '0'; + while (*t >= '0' && *t < '9') { + count *= 10; + count += *t++ - '0'; + } + } + *ts = t; + return count; +} +static int __Pyx_BufFmt_ExpectNumber(const char **ts) { + int number = __Pyx_BufFmt_ParseNumber(ts); + if (number == -1) + PyErr_Format(PyExc_ValueError,\ + "Does not understand character buffer dtype format string ('%c')", **ts); + return number; +} +static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { + PyErr_Format(PyExc_ValueError, + "Unexpected format string character: '%c'", ch); +} +static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { + switch (ch) { + case 'c': return "'char'"; + case 'b': return "'signed char'"; + case 'B': return "'unsigned char'"; + case 'h': return "'short'"; + case 'H': return "'unsigned short'"; + case 'i': return 
"'int'"; + case 'I': return "'unsigned int'"; + case 'l': return "'long'"; + case 'L': return "'unsigned long'"; + case 'q': return "'long long'"; + case 'Q': return "'unsigned long long'"; + case 'f': return (is_complex ? "'complex float'" : "'float'"); + case 'd': return (is_complex ? "'complex double'" : "'double'"); + case 'g': return (is_complex ? "'complex long double'" : "'long double'"); + case 'T': return "a struct"; + case 'O': return "Python object"; + case 'P': return "a pointer"; + case 's': case 'p': return "a string"; + case 0: return "end"; + default: return "unparseable format string"; + } +} +static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return 2; + case 'i': case 'I': case 'l': case 'L': return 4; + case 'q': case 'Q': return 8; + case 'f': return (is_complex ? 8 : 4); + case 'd': return (is_complex ? 16 : 8); + case 'g': { + PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); + return 0; + } + case 'O': case 'P': return sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { + switch (ch) { + case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(short); + case 'i': case 'I': return sizeof(int); + case 'l': case 'L': return sizeof(long); + #ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(PY_LONG_LONG); + #endif + case 'f': return sizeof(float) * (is_complex ? 2 : 1); + case 'd': return sizeof(double) * (is_complex ? 2 : 1); + case 'g': return sizeof(long double) * (is_complex ? 2 : 1); + case 'O': case 'P': return sizeof(void*); + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +typedef struct { char c; short x; } __Pyx_st_short; +typedef struct { char c; int x; } __Pyx_st_int; +typedef struct { char c; long x; } __Pyx_st_long; +typedef struct { char c; float x; } __Pyx_st_float; +typedef struct { char c; double x; } __Pyx_st_double; +typedef struct { char c; long double x; } __Pyx_st_longdouble; +typedef struct { char c; void *x; } __Pyx_st_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_st_float) - sizeof(float); + case 'd': return sizeof(__Pyx_st_double) - sizeof(double); + case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +/* These are for computing the padding at the end of the struct to align + on the first member of the struct. This will probably the same as above, + but we don't have any guarantees. 
+ */ +typedef struct { short x; char c; } __Pyx_pad_short; +typedef struct { int x; char c; } __Pyx_pad_int; +typedef struct { long x; char c; } __Pyx_pad_long; +typedef struct { float x; char c; } __Pyx_pad_float; +typedef struct { double x; char c; } __Pyx_pad_double; +typedef struct { long double x; char c; } __Pyx_pad_longdouble; +typedef struct { void *x; char c; } __Pyx_pad_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); + case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); + case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { + switch (ch) { + case 'c': + return 'H'; + case 'b': case 'h': case 'i': + case 'l': case 'q': case 's': case 'p': + return 'I'; + case 'B': case 'H': case 'I': case 'L': case 'Q': + return 'U'; + case 'f': case 'd': case 'g': + return (is_complex ? 'C' : 'R'); + case 'O': + return 'O'; + case 'P': + return 'P'; + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { + if (ctx->head == NULL || ctx->head->field == &ctx->root) { + const char* expected; + const char* quote; + if (ctx->head == NULL) { + expected = "end"; + quote = ""; + } else { + expected = ctx->head->field->type->name; + quote = "'"; + } + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected %s%s%s but got %s", + quote, expected, quote, + __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); + } else { + __Pyx_StructField* field = ctx->head->field; + __Pyx_StructField* parent = (ctx->head - 1)->field; + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", + field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), + parent->type->name, field->name); + } +} +static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { + char group; + size_t size, offset, arraysize = 1; + if (ctx->enc_type == 0) return 0; + if (ctx->head->field->type->arraysize[0]) { + int i, ndim = 0; + if (ctx->enc_type == 's' || ctx->enc_type == 'p') { + ctx->is_valid_array = ctx->head->field->type->ndim == 1; + ndim = 1; + if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { + PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %zu", + ctx->head->field->type->arraysize[0], ctx->enc_count); + return -1; + } + } + if (!ctx->is_valid_array) { + PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", + ctx->head->field->type->ndim, ndim); + return -1; + } + for (i = 0; i < ctx->head->field->type->ndim; i++) { + arraysize *= ctx->head->field->type->arraysize[i]; + } + ctx->is_valid_array = 0; + ctx->enc_count = 1; + } + group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, 
ctx->is_complex); + do { + __Pyx_StructField* field = ctx->head->field; + __Pyx_TypeInfo* type = field->type; + if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { + size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); + } else { + size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); + } + if (ctx->enc_packmode == '@') { + size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); + size_t align_mod_offset; + if (align_at == 0) return -1; + align_mod_offset = ctx->fmt_offset % align_at; + if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; + if (ctx->struct_alignment == 0) + ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, + ctx->is_complex); + } + if (type->size != size || type->typegroup != group) { + if (type->typegroup == 'C' && type->fields != NULL) { + size_t parent_offset = ctx->head->parent_offset + field->offset; + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = parent_offset; + continue; + } + if ((type->typegroup == 'H' || group == 'H') && type->size == size) { + } else { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + } + offset = ctx->head->parent_offset + field->offset; + if (ctx->fmt_offset != offset) { + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", + (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); + return -1; + } + ctx->fmt_offset += size; + if (arraysize) + ctx->fmt_offset += (arraysize - 1) * size; + --ctx->enc_count; + while (1) { + if (field == &ctx->root) { + ctx->head = NULL; + if (ctx->enc_count != 0) { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + break; + } + ctx->head->field = ++field; + if (field->type == NULL) { + --ctx->head; + field = ctx->head->field; + continue; + } else if (field->type->typegroup == 'S') { + size_t parent_offset = ctx->head->parent_offset + field->offset; + if (field->type->fields->type == NULL) continue; + field = field->type->fields; + ++ctx->head; + ctx->head->field = field; + ctx->head->parent_offset = parent_offset; + break; + } else { + break; + } + } + } while (ctx->enc_count); + ctx->enc_type = 0; + ctx->is_complex = 0; + return 0; +} +static CYTHON_INLINE PyObject * +__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) +{ + const char *ts = *tsp; + int i = 0, number; + int ndim = ctx->head->field->type->ndim; +; + ++ts; + if (ctx->new_count != 1) { + PyErr_SetString(PyExc_ValueError, + "Cannot handle repeated arrays in format string"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + while (*ts && *ts != ')') { + switch (*ts) { + case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; + default: break; + } + number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) + return PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %d", + ctx->head->field->type->arraysize[i], number); + if (*ts != ',' && *ts != ')') + return PyErr_Format(PyExc_ValueError, + "Expected a comma in format string, got '%c'", *ts); + if (*ts == ',') ts++; + i++; + } + if (i != ndim) + return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", + ctx->head->field->type->ndim, i); + if (!*ts) { + PyErr_SetString(PyExc_ValueError, + "Unexpected end of format string, expected ')'"); + return NULL; + } + 
ctx->is_valid_array = 1; + ctx->new_count = 1; + *tsp = ++ts; + return Py_None; +} +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { + int got_Z = 0; + while (1) { + switch(*ts) { + case 0: + if (ctx->enc_type != 0 && ctx->head == NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + if (ctx->head != NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + return ts; + case ' ': + case '\r': + case '\n': + ++ts; + break; + case '<': + if (!__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '>': + case '!': + if (__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '=': + case '@': + case '^': + ctx->new_packmode = *ts++; + break; + case 'T': + { + const char* ts_after_sub; + size_t i, struct_count = ctx->new_count; + size_t struct_alignment = ctx->struct_alignment; + ctx->new_count = 1; + ++ts; + if (*ts != '{') { + PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + ctx->enc_count = 0; + ctx->struct_alignment = 0; + ++ts; + ts_after_sub = ts; + for (i = 0; i != struct_count; ++i) { + ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); + if (!ts_after_sub) return NULL; + } + ts = ts_after_sub; + if (struct_alignment) ctx->struct_alignment = struct_alignment; + } + break; + case '}': + { + size_t alignment = ctx->struct_alignment; + ++ts; + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + if (alignment && ctx->fmt_offset % alignment) { + ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); + } + } + return ts; + case 'x': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->fmt_offset += ctx->new_count; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->enc_packmode = ctx->new_packmode; + ++ts; + break; + case 'Z': + got_Z = 1; + ++ts; + if (*ts != 'f' && *ts != 'd' && *ts != 'g') { + __Pyx_BufFmt_RaiseUnexpectedChar('Z'); + return NULL; + } + case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': + case 'l': case 'L': case 'q': case 'Q': + case 'f': case 'd': case 'g': + case 'O': case 'p': + if (ctx->enc_type == *ts && got_Z == ctx->is_complex && + ctx->enc_packmode == ctx->new_packmode) { + ctx->enc_count += ctx->new_count; + ctx->new_count = 1; + got_Z = 0; + ++ts; + break; + } + case 's': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_count = ctx->new_count; + ctx->enc_packmode = ctx->new_packmode; + ctx->enc_type = *ts; + ctx->is_complex = got_Z; + ++ts; + ctx->new_count = 1; + got_Z = 0; + break; + case ':': + ++ts; + while(*ts != ':') ++ts; + ++ts; + break; + case '(': + if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; + break; + default: + { + int number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + ctx->new_count = (size_t)number; + } + } + } +} +static CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) { + buf->buf = NULL; + buf->obj = NULL; + buf->strides = __Pyx_zeros; + buf->shape = __Pyx_zeros; + buf->suboffsets = __Pyx_minusones; +} +static CYTHON_INLINE int __Pyx_GetBufferAndValidate( + Py_buffer* buf, 
PyObject* obj, __Pyx_TypeInfo* dtype, int flags, + int nd, int cast, __Pyx_BufFmt_StackElem* stack) +{ + if (obj == Py_None || obj == NULL) { + __Pyx_ZeroBuffer(buf); + return 0; + } + buf->buf = NULL; + if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail; + if (buf->ndim != nd) { + PyErr_Format(PyExc_ValueError, + "Buffer has wrong number of dimensions (expected %d, got %d)", + nd, buf->ndim); + goto fail; + } + if (!cast) { + __Pyx_BufFmt_Context ctx; + __Pyx_BufFmt_Init(&ctx, stack, dtype); + if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail; + } + if ((unsigned)buf->itemsize != dtype->size) { + PyErr_Format(PyExc_ValueError, + "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "d byte%s) does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "d byte%s)", + buf->itemsize, (buf->itemsize > 1) ? "s" : "", + dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? "s" : ""); + goto fail; + } + if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones; + return 0; +fail:; + __Pyx_ZeroBuffer(buf); + return -1; +} +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) { + if (info->buf == NULL) return; + if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL; + __Pyx_ReleaseBuffer(info); +} + +/* GetBuiltinName */ + static PyObject *__Pyx_GetBuiltinName(PyObject *name) { + PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); + if (unlikely(!result)) { + PyErr_Format(PyExc_NameError, +#if PY_MAJOR_VERSION >= 3 + "name '%U' is not defined", name); +#else + "name '%.200s' is not defined", PyString_AS_STRING(name)); +#endif + } + return result; +} + +/* GetModuleGlobalName */ + static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) { + PyObject *result; +#if !CYTHON_AVOID_BORROWED_REFS + result = PyDict_GetItem(__pyx_d, name); + if (likely(result)) { + Py_INCREF(result); + } else { +#else + result = PyObject_GetItem(__pyx_d, name); + if (!result) { + PyErr_Clear(); +#endif + result = __Pyx_GetBuiltinName(name); + } + return result; +} + +/* PyObjectCall */ + #if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { + PyObject *result; + ternaryfunc call = func->ob_type->tp_call; + if (unlikely(!call)) + return PyObject_Call(func, arg, kw); + if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) + return NULL; + result = (*call)(func, arg, kw); + Py_LeaveRecursiveCall(); + if (unlikely(!result) && unlikely(!PyErr_Occurred())) { + PyErr_SetString( + PyExc_SystemError, + "NULL result without error in PyObject_Call"); + } + return result; +} +#endif + +/* ExtTypeTest */ + static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { + if (unlikely(!type)) { + PyErr_SetString(PyExc_SystemError, "Missing type object"); + return 0; + } + if (likely(PyObject_TypeCheck(obj, type))) + return 1; + PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", + Py_TYPE(obj)->tp_name, type->tp_name); + return 0; +} + +/* PyCFunctionFastCall */ + #if CYTHON_FAST_PYCCALL +static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { + PyCFunctionObject *func = (PyCFunctionObject*)func_obj; + PyCFunction meth = PyCFunction_GET_FUNCTION(func); + PyObject *self = PyCFunction_GET_SELF(func); + assert(PyCFunction_Check(func)); + assert(METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST))); + assert(nargs >= 0); + assert(nargs == 0 || args != NULL); + 
/* _PyCFunction_FastCallDict() must not be called with an exception set, + because it may clear it (directly or indirectly) and so the + caller loses its exception */ + assert(!PyErr_Occurred()); + return (*((__Pyx_PyCFunctionFast)meth)) (self, args, nargs, NULL); +} +#endif // CYTHON_FAST_PYCCALL + +/* PyFunctionFastCall */ + #if CYTHON_FAST_PYCALL +#include "frameobject.h" +static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, + PyObject *globals) { + PyFrameObject *f; + PyThreadState *tstate = PyThreadState_GET(); + PyObject **fastlocals; + Py_ssize_t i; + PyObject *result; + assert(globals != NULL); + /* XXX Perhaps we should create a specialized + PyFrame_New() that doesn't take locals, but does + take builtins without sanity checking them. + */ + assert(tstate != NULL); + f = PyFrame_New(tstate, co, globals, NULL); + if (f == NULL) { + return NULL; + } + fastlocals = f->f_localsplus; + for (i = 0; i < na; i++) { + Py_INCREF(*args); + fastlocals[i] = *args++; + } + result = PyEval_EvalFrameEx(f,0); + ++tstate->recursion_depth; + Py_DECREF(f); + --tstate->recursion_depth; + return result; +} +#if 1 || PY_VERSION_HEX < 0x030600B1 +static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs) { + PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); + PyObject *globals = PyFunction_GET_GLOBALS(func); + PyObject *argdefs = PyFunction_GET_DEFAULTS(func); + PyObject *closure; +#if PY_MAJOR_VERSION >= 3 + PyObject *kwdefs; +#endif + PyObject *kwtuple, **k; + PyObject **d; + Py_ssize_t nd; + Py_ssize_t nk; + PyObject *result; + assert(kwargs == NULL || PyDict_Check(kwargs)); + nk = kwargs ? PyDict_Size(kwargs) : 0; + if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { + return NULL; + } + if ( +#if PY_MAJOR_VERSION >= 3 + co->co_kwonlyargcount == 0 && +#endif + likely(kwargs == NULL || nk == 0) && + co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { + if (argdefs == NULL && co->co_argcount == nargs) { + result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); + goto done; + } + else if (nargs == 0 && argdefs != NULL + && co->co_argcount == Py_SIZE(argdefs)) { + /* function called with no arguments, but all parameters have + a default value: use default values as arguments .*/ + args = &PyTuple_GET_ITEM(argdefs, 0); + result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); + goto done; + } + } + if (kwargs != NULL) { + Py_ssize_t pos, i; + kwtuple = PyTuple_New(2 * nk); + if (kwtuple == NULL) { + result = NULL; + goto done; + } + k = &PyTuple_GET_ITEM(kwtuple, 0); + pos = i = 0; + while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { + Py_INCREF(k[i]); + Py_INCREF(k[i+1]); + i += 2; + } + nk = i / 2; + } + else { + kwtuple = NULL; + k = NULL; + } + closure = PyFunction_GET_CLOSURE(func); +#if PY_MAJOR_VERSION >= 3 + kwdefs = PyFunction_GET_KW_DEFAULTS(func); +#endif + if (argdefs != NULL) { + d = &PyTuple_GET_ITEM(argdefs, 0); + nd = Py_SIZE(argdefs); + } + else { + d = NULL; + nd = 0; + } +#if PY_MAJOR_VERSION >= 3 + result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, + args, nargs, + k, (int)nk, + d, (int)nd, kwdefs, closure); +#else + result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, + args, nargs, + k, (int)nk, + d, (int)nd, closure); +#endif + Py_XDECREF(kwtuple); +done: + Py_LeaveRecursiveCall(); + return result; +} +#endif // CPython < 3.6 +#endif // CYTHON_FAST_PYCALL + +/* PyObjectCallMethO */ + #if 
CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { + PyObject *self, *result; + PyCFunction cfunc; + cfunc = PyCFunction_GET_FUNCTION(func); + self = PyCFunction_GET_SELF(func); + if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) + return NULL; + result = cfunc(self, arg); + Py_LeaveRecursiveCall(); + if (unlikely(!result) && unlikely(!PyErr_Occurred())) { + PyErr_SetString( + PyExc_SystemError, + "NULL result without error in PyObject_Call"); + } + return result; +} +#endif + +/* PyObjectCallOneArg */ + #if CYTHON_COMPILING_IN_CPYTHON +static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { + PyObject *result; + PyObject *args = PyTuple_New(1); + if (unlikely(!args)) return NULL; + Py_INCREF(arg); + PyTuple_SET_ITEM(args, 0, arg); + result = __Pyx_PyObject_Call(func, args, NULL); + Py_DECREF(args); + return result; +} +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { +#if CYTHON_FAST_PYCALL + if (PyFunction_Check(func)) { + return __Pyx_PyFunction_FastCall(func, &arg, 1); + } +#endif +#ifdef __Pyx_CyFunction_USED + if (likely(PyCFunction_Check(func) || PyObject_TypeCheck(func, __pyx_CyFunctionType))) { +#else + if (likely(PyCFunction_Check(func))) { +#endif + if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { + return __Pyx_PyObject_CallMethO(func, arg); +#if CYTHON_FAST_PYCCALL + } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) { + return __Pyx_PyCFunction_FastCall(func, &arg, 1); +#endif + } + } + return __Pyx__PyObject_CallOneArg(func, arg); +} +#else +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { + PyObject *result; + PyObject *args = PyTuple_Pack(1, arg); + if (unlikely(!args)) return NULL; + result = __Pyx_PyObject_Call(func, args, NULL); + Py_DECREF(args); + return result; +} +#endif + +/* PyObjectCallNoArg */ + #if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { +#if CYTHON_FAST_PYCALL + if (PyFunction_Check(func)) { + return __Pyx_PyFunction_FastCall(func, NULL, 0); + } +#endif +#ifdef __Pyx_CyFunction_USED + if (likely(PyCFunction_Check(func) || PyObject_TypeCheck(func, __pyx_CyFunctionType))) { +#else + if (likely(PyCFunction_Check(func))) { +#endif + if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { + return __Pyx_PyObject_CallMethO(func, NULL); + } + } + return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL); +} +#endif + +/* BufferIndexError */ + static void __Pyx_RaiseBufferIndexError(int axis) { + PyErr_Format(PyExc_IndexError, + "Out of bounds on buffer access (axis %d)", axis); +} + +/* SliceObject */ + static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice(PyObject* obj, + Py_ssize_t cstart, Py_ssize_t cstop, + PyObject** _py_start, PyObject** _py_stop, PyObject** _py_slice, + int has_cstart, int has_cstop, CYTHON_UNUSED int wraparound) { +#if CYTHON_USE_TYPE_SLOTS + PyMappingMethods* mp; +#if PY_MAJOR_VERSION < 3 + PySequenceMethods* ms = Py_TYPE(obj)->tp_as_sequence; + if (likely(ms && ms->sq_slice)) { + if (!has_cstart) { + if (_py_start && (*_py_start != Py_None)) { + cstart = __Pyx_PyIndex_AsSsize_t(*_py_start); + if ((cstart == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; + } else + cstart = 0; + } + if (!has_cstop) { + if (_py_stop && (*_py_stop != Py_None)) { + cstop = __Pyx_PyIndex_AsSsize_t(*_py_stop); + if ((cstop == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; + } else + cstop = 
PY_SSIZE_T_MAX; + } + if (wraparound && unlikely((cstart < 0) | (cstop < 0)) && likely(ms->sq_length)) { + Py_ssize_t l = ms->sq_length(obj); + if (likely(l >= 0)) { + if (cstop < 0) { + cstop += l; + if (cstop < 0) cstop = 0; + } + if (cstart < 0) { + cstart += l; + if (cstart < 0) cstart = 0; + } + } else { + if (!PyErr_ExceptionMatches(PyExc_OverflowError)) + goto bad; + PyErr_Clear(); + } + } + return ms->sq_slice(obj, cstart, cstop); + } +#endif + mp = Py_TYPE(obj)->tp_as_mapping; + if (likely(mp && mp->mp_subscript)) +#endif + { + PyObject* result; + PyObject *py_slice, *py_start, *py_stop; + if (_py_slice) { + py_slice = *_py_slice; + } else { + PyObject* owned_start = NULL; + PyObject* owned_stop = NULL; + if (_py_start) { + py_start = *_py_start; + } else { + if (has_cstart) { + owned_start = py_start = PyInt_FromSsize_t(cstart); + if (unlikely(!py_start)) goto bad; + } else + py_start = Py_None; + } + if (_py_stop) { + py_stop = *_py_stop; + } else { + if (has_cstop) { + owned_stop = py_stop = PyInt_FromSsize_t(cstop); + if (unlikely(!py_stop)) { + Py_XDECREF(owned_start); + goto bad; + } + } else + py_stop = Py_None; + } + py_slice = PySlice_New(py_start, py_stop, Py_None); + Py_XDECREF(owned_start); + Py_XDECREF(owned_stop); + if (unlikely(!py_slice)) goto bad; + } +#if CYTHON_USE_TYPE_SLOTS + result = mp->mp_subscript(obj, py_slice); +#else + result = PyObject_GetItem(obj, py_slice); +#endif + if (!_py_slice) { + Py_DECREF(py_slice); + } + return result; + } + PyErr_Format(PyExc_TypeError, + "'%.200s' object is unsliceable", Py_TYPE(obj)->tp_name); +bad: + return NULL; +} + +/* BufferFallbackError */ + static void __Pyx_RaiseBufferFallbackError(void) { + PyErr_SetString(PyExc_ValueError, + "Buffer acquisition failed on assignment; and then reacquiring the old buffer failed too!"); +} + +/* PyErrFetchRestore */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} +static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} +#endif + +/* RaiseException */ + #if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, + CYTHON_UNUSED PyObject *cause) { + __Pyx_PyThreadState_declare + Py_XINCREF(type); + if (!value || value == Py_None) + value = NULL; + else + Py_INCREF(value); + if (!tb || tb == Py_None) + tb = NULL; + else { + Py_INCREF(tb); + if (!PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + } + if (PyType_Check(type)) { +#if CYTHON_COMPILING_IN_PYPY + if (!value) { + Py_INCREF(Py_None); + value = Py_None; + } +#endif + PyErr_NormalizeException(&type, &value, &tb); + } else { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + value = type; + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if 
(!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + } + __Pyx_PyThreadState_assign + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} +#else +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { + PyObject* owned_instance = NULL; + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = type; + type = (PyObject*) Py_TYPE(value); + } else if (PyExceptionClass_Check(type)) { + PyObject *instance_class = NULL; + if (value && PyExceptionInstance_Check(value)) { + instance_class = (PyObject*) Py_TYPE(value); + if (instance_class != type) { + int is_subclass = PyObject_IsSubclass(instance_class, type); + if (!is_subclass) { + instance_class = NULL; + } else if (unlikely(is_subclass == -1)) { + goto bad; + } else { + type = instance_class; + } + } + } + if (!instance_class) { + PyObject *args; + if (!value) + args = PyTuple_New(0); + else if (PyTuple_Check(value)) { + Py_INCREF(value); + args = value; + } else + args = PyTuple_Pack(1, value); + if (!args) + goto bad; + owned_instance = PyObject_Call(type, args, NULL); + Py_DECREF(args); + if (!owned_instance) + goto bad; + value = owned_instance; + if (!PyExceptionInstance_Check(value)) { + PyErr_Format(PyExc_TypeError, + "calling %R should have returned an instance of " + "BaseException, not %R", + type, Py_TYPE(value)); + goto bad; + } + } + } else { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } +#if PY_VERSION_HEX >= 0x03030000 + if (cause) { +#else + if (cause && cause != Py_None) { +#endif + PyObject *fixed_cause; + if (cause == Py_None) { + fixed_cause = NULL; + } else if (PyExceptionClass_Check(cause)) { + fixed_cause = PyObject_CallObject(cause, NULL); + if (fixed_cause == NULL) + goto bad; + } else if (PyExceptionInstance_Check(cause)) { + fixed_cause = cause; + Py_INCREF(fixed_cause); + } else { + PyErr_SetString(PyExc_TypeError, + "exception causes must derive from " + "BaseException"); + goto bad; + } + PyException_SetCause(value, fixed_cause); + } + PyErr_SetObject(type, value); + if (tb) { +#if CYTHON_COMPILING_IN_PYPY + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); + Py_INCREF(tb); + PyErr_Restore(tmp_type, tmp_value, tb); + Py_XDECREF(tmp_tb); +#else + PyThreadState *tstate = PyThreadState_GET(); + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } +#endif + } +bad: + Py_XDECREF(owned_instance); + return; +} +#endif + +/* RaiseTooManyValuesToUnpack */ + static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { + PyErr_Format(PyExc_ValueError, + "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); +} + +/* RaiseNeedMoreValuesToUnpack */ + static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { + PyErr_Format(PyExc_ValueError, + "need more than %" CYTHON_FORMAT_SSIZE_T "d 
value%.1s to unpack", + index, (index == 1) ? "" : "s"); +} + +/* RaiseNoneIterError */ + static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); +} + +/* SaveResetException */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { + *type = tstate->exc_type; + *value = tstate->exc_value; + *tb = tstate->exc_traceback; + Py_XINCREF(*type); + Py_XINCREF(*value); + Py_XINCREF(*tb); +} +static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + tmp_type = tstate->exc_type; + tmp_value = tstate->exc_value; + tmp_tb = tstate->exc_traceback; + tstate->exc_type = type; + tstate->exc_value = value; + tstate->exc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} +#endif + +/* PyErrExceptionMatches */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { + PyObject *exc_type = tstate->curexc_type; + if (exc_type == err) return 1; + if (unlikely(!exc_type)) return 0; + return PyErr_GivenExceptionMatches(exc_type, err); +} +#endif + +/* GetException */ + #if CYTHON_FAST_THREAD_STATE +static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { +#else +static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) { +#endif + PyObject *local_type, *local_value, *local_tb; +#if CYTHON_FAST_THREAD_STATE + PyObject *tmp_type, *tmp_value, *tmp_tb; + local_type = tstate->curexc_type; + local_value = tstate->curexc_value; + local_tb = tstate->curexc_traceback; + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +#else + PyErr_Fetch(&local_type, &local_value, &local_tb); +#endif + PyErr_NormalizeException(&local_type, &local_value, &local_tb); +#if CYTHON_FAST_THREAD_STATE + if (unlikely(tstate->curexc_type)) +#else + if (unlikely(PyErr_Occurred())) +#endif + goto bad; + #if PY_MAJOR_VERSION >= 3 + if (local_tb) { + if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) + goto bad; + } + #endif + Py_XINCREF(local_tb); + Py_XINCREF(local_type); + Py_XINCREF(local_value); + *type = local_type; + *value = local_value; + *tb = local_tb; +#if CYTHON_FAST_THREAD_STATE + tmp_type = tstate->exc_type; + tmp_value = tstate->exc_value; + tmp_tb = tstate->exc_traceback; + tstate->exc_type = local_type; + tstate->exc_value = local_value; + tstate->exc_traceback = local_tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +#else + PyErr_SetExcInfo(local_type, local_value, local_tb); +#endif + return 0; +bad: + *type = 0; + *value = 0; + *tb = 0; + Py_XDECREF(local_type); + Py_XDECREF(local_value); + Py_XDECREF(local_tb); + return -1; +} + +/* Import */ + static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_import; + py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); + if (!py_import) + goto bad; + #endif + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if 
(!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + { + #if PY_MAJOR_VERSION >= 3 + if (level == -1) { + if (strchr(__Pyx_MODULE_NAME, '.')) { + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_level = PyInt_FromLong(1); + if (!py_level) + goto bad; + module = PyObject_CallFunctionObjArgs(py_import, + name, global_dict, empty_dict, list, py_level, NULL); + Py_DECREF(py_level); + #else + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, 1); + #endif + if (!module) { + if (!PyErr_ExceptionMatches(PyExc_ImportError)) + goto bad; + PyErr_Clear(); + } + } + level = 0; + } + #endif + if (!module) { + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_level = PyInt_FromLong(level); + if (!py_level) + goto bad; + module = PyObject_CallFunctionObjArgs(py_import, + name, global_dict, empty_dict, list, py_level, NULL); + Py_DECREF(py_level); + #else + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, level); + #endif + } + } +bad: + #if PY_VERSION_HEX < 0x03030000 + Py_XDECREF(py_import); + #endif + Py_XDECREF(empty_list); + Py_XDECREF(empty_dict); + return module; +} + +/* CodeObjectCache */ + static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { + int start = 0, mid = 0, end = count - 1; + if (end >= 0 && code_line > entries[end].code_line) { + return count; + } + while (start < end) { + mid = start + (end - start) / 2; + if (code_line < entries[mid].code_line) { + end = mid; + } else if (code_line > entries[mid].code_line) { + start = mid + 1; + } else { + return mid; + } + } + if (code_line <= entries[mid].code_line) { + return mid; + } else { + return mid + 1; + } +} +static PyCodeObject *__pyx_find_code_object(int code_line) { + PyCodeObject* code_object; + int pos; + if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { + return NULL; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { + return NULL; + } + code_object = __pyx_code_cache.entries[pos].code_object; + Py_INCREF(code_object); + return code_object; +} +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { + int pos, i; + __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; + if (unlikely(!code_line)) { + return; + } + if (unlikely(!entries)) { + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); + if (likely(entries)) { + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = 64; + __pyx_code_cache.count = 1; + entries[0].code_line = code_line; + entries[0].code_object = code_object; + Py_INCREF(code_object); + } + return; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { + PyCodeObject* tmp = entries[pos].code_object; + entries[pos].code_object = code_object; + Py_DECREF(tmp); + return; + } + if (__pyx_code_cache.count == __pyx_code_cache.max_count) { + int new_max = __pyx_code_cache.max_count + 64; + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( + __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry)); + if (unlikely(!entries)) { + return; + } + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = new_max; + } + for 
(i=__pyx_code_cache.count; i>pos; i--) { + entries[i] = entries[i-1]; + } + entries[pos].code_line = code_line; + entries[pos].code_object = code_object; + __pyx_code_cache.count++; + Py_INCREF(code_object); +} + +/* AddTraceback */ + #include "compile.h" +#include "frameobject.h" +#include "traceback.h" +static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( + const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(filename); + #else + py_srcfile = PyUnicode_FromString(filename); + #endif + if (!py_srcfile) goto bad; + if (c_line) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_code = __Pyx_PyCode_New( + 0, + 0, + 0, + 0, + 0, + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + py_line, + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + Py_DECREF(py_srcfile); + Py_DECREF(py_funcname); + return py_code; +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + return NULL; +} +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + py_code = __pyx_find_code_object(c_line ? c_line : py_line); + if (!py_code) { + py_code = __Pyx_CreateCodeObjectForTraceback( + funcname, c_line, py_line, filename); + if (!py_code) goto bad; + __pyx_insert_code_object(c_line ? 
c_line : py_line, py_code); + } + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + __pyx_d, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + __Pyx_PyFrame_SetLineNumber(py_frame, py_line); + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +#if PY_MAJOR_VERSION < 3 +static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { + if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); + if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pw_5numpy_7ndarray_1__getbuffer__(obj, view, flags); + PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); + return -1; +} +static void __Pyx_ReleaseBuffer(Py_buffer *view) { + PyObject *obj = view->obj; + if (!obj) return; + if (PyObject_CheckBuffer(obj)) { + PyBuffer_Release(view); + return; + } + if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) { __pyx_pw_5numpy_7ndarray_3__releasebuffer__(obj, view); return; } + Py_DECREF(obj); + view->obj = NULL; +} +#endif + + + /* CIntFromPyVerify */ + #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) +#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) +#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ + {\ + func_type value = func_value;\ + if (sizeof(target_type) < sizeof(func_type)) {\ + if (unlikely(value != (func_type) (target_type) value)) {\ + func_type zero = 0;\ + if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ + return (target_type) -1;\ + if (is_unsigned && unlikely(value < zero))\ + goto raise_neg_overflow;\ + else\ + goto raise_overflow;\ + }\ + }\ + return (target_type) value;\ + } + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { + const int neg_one = (int) -1, const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(int) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(int) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(int) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(int), + little, !is_unsigned); + } +} + +/* Declarations */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return ::std::complex< float >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return x + y*(__pyx_t_float_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + __pyx_t_float_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* Arithmetic */ + #if CYTHON_CCOMPLEX +#else + static 
CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + #if 1 + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + if (b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real); + } else if (fabsf(b.real) >= fabsf(b.imag)) { + if (b.real == 0 && b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.imag); + } else { + float r = b.imag / b.real; + float s = 1.0 / (b.real + b.imag * r); + return __pyx_t_float_complex_from_parts( + (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); + } + } else { + float r = b.real / b.imag; + float s = 1.0 / (b.imag + b.real * r); + return __pyx_t_float_complex_from_parts( + (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); + } + } + #else + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + if (b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real); + } else { + float denom = b.real * b.real + b.imag * b.imag; + return __pyx_t_float_complex_from_parts( + (a.real * b.real + a.imag * b.imag) / denom, + (a.imag * b.real - a.real * b.imag) / denom); + } + } + #endif + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } + #if 1 + static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrtf(z.real*z.real + z.imag*z.imag); + #else + return hypotf(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + float r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + float denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(a, a); + case 3: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(z, a); + case 4: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } else if (b.imag == 0) { + z.real = powf(a.real, b.real); + z.imag = 0; 
+ return z; + } else if (a.real > 0) { + r = a.real; + theta = 0; + } else { + r = -a.real; + theta = atan2f(0, -1); + } + } else { + r = __Pyx_c_abs_float(a); + theta = atan2f(a.imag, a.real); + } + lnr = logf(r); + z_r = expf(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cosf(z_theta); + z.imag = z_r * sinf(z_theta); + return z; + } + #endif +#endif + +/* Declarations */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return ::std::complex< double >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return x + y*(__pyx_t_double_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + __pyx_t_double_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* Arithmetic */ + #if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + #if 1 + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + if (b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); + } else if (fabs(b.real) >= fabs(b.imag)) { + if (b.real == 0 && b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag); + } else { + double r = b.imag / b.real; + double s = 1.0 / (b.real + b.imag * r); + return __pyx_t_double_complex_from_parts( + (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); + } + } else { + double r = b.real / b.imag; + double s = 1.0 / (b.imag + b.real * r); + return __pyx_t_double_complex_from_parts( + (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); + } + } + #else + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + if (b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); + } else { + double denom = b.real * b.real + b.imag * b.imag; + return __pyx_t_double_complex_from_parts( + (a.real * b.real + a.imag * b.imag) / denom, + (a.imag * b.real - a.real * b.imag) / denom); + } + } + #endif + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = a.real; + z.imag = -a.imag; + return 
z; + } + #if 1 + static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrt(z.real*z.real + z.imag*z.imag); + #else + return hypot(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + double denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(a, a); + case 3: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(z, a); + case 4: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } else if (b.imag == 0) { + z.real = pow(a.real, b.real); + z.imag = 0; + return z; + } else if (a.real > 0) { + r = a.real; + theta = 0; + } else { + r = -a.real; + theta = atan2(0, -1); + } + } else { + r = __Pyx_c_abs_double(a); + theta = atan2(a.imag, a.real); + } + lnr = log(r); + z_r = exp(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cos(z_theta); + z.imag = z_r * sin(z_theta); + return z; + } + #endif +#endif + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value) { + const enum NPY_TYPES neg_one = (enum NPY_TYPES) -1, const_zero = (enum NPY_TYPES) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(enum NPY_TYPES) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(enum NPY_TYPES) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(enum NPY_TYPES) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(enum NPY_TYPES), + little, !is_unsigned); + } +} + +/* CIntFromPy */ + static CYTHON_INLINE npy_int32 __Pyx_PyInt_As_npy_int32(PyObject *x) { + const npy_int32 neg_one = (npy_int32) -1, const_zero = (npy_int32) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(npy_int32) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (npy_int32) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (npy_int32) 0; + case 1: __PYX_VERIFY_RETURN_INT(npy_int32, digit, digits[0]) + case 2: + if (8 * sizeof(npy_int32) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, 
(((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 2 * PyLong_SHIFT) { + return (npy_int32) (((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(npy_int32) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 3 * PyLong_SHIFT) { + return (npy_int32) (((((((npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(npy_int32) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 4 * PyLong_SHIFT) { + return (npy_int32) (((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (npy_int32) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(npy_int32) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(npy_int32) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (npy_int32) 0; + case -1: __PYX_VERIFY_RETURN_INT(npy_int32, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(npy_int32, digit, +digits[0]) + case -2: + if (8 * sizeof(npy_int32) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(npy_int32) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + return (npy_int32) ((((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((((npy_int32)digits[2]) << PyLong_SHIFT) | 
(npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(npy_int32) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + return (npy_int32) ((((((((npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 4 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(npy_int32) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 4 * PyLong_SHIFT) { + return (npy_int32) ((((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + } +#endif + if (sizeof(npy_int32) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(npy_int32) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + npy_int32 val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (npy_int32) -1; + } + } else { + npy_int32 val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (npy_int32) -1; + val = __Pyx_PyInt_As_npy_int32(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to npy_int32"); + return (npy_int32) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to npy_int32"); + return (npy_int32) -1; +} + +/* CIntFromPy */ + static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { + const int neg_one = (int) -1, const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(int) < sizeof(long)) { + 
__PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (int) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (int) 0; + case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { + return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { + return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { + return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (int) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(int) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (int) 0; + case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) + case -2: + if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + if (8 * 
sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + } +#endif + if (sizeof(int) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + int val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (int) -1; + } + } else { + int val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (int) -1; + val = __Pyx_PyInt_As_int(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to int"); + return (int) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to int"); + return (int) -1; +} + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { + const long neg_one = (long) -1, const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; + if 
(is_unsigned) { + if (sizeof(long) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(long) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(long) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(long), + little, !is_unsigned); + } +} + +/* CIntFromPy */ + static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { + const long neg_one = (long) -1, const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(long) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (long) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { + return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { + return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { + return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (long) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(long) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if 
CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) + case -2: + if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + } +#endif + if (sizeof(long) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, 
cannot convert large numbers"); +#else + long val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (long) -1; + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (long) -1; + val = __Pyx_PyInt_As_long(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to long"); + return (long) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long) -1; +} + +/* CheckBinaryVersion */ + static int __Pyx_check_binary_version(void) { + char ctversion[4], rtversion[4]; + PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); + PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); + if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { + char message[200]; + PyOS_snprintf(message, sizeof(message), + "compiletime version %s of module '%.100s' " + "does not match runtime version %s", + ctversion, __Pyx_MODULE_NAME, rtversion); + return PyErr_WarnEx(NULL, message, 1); + } + return 0; +} + +/* ModuleImport */ + #ifndef __PYX_HAVE_RT_ImportModule +#define __PYX_HAVE_RT_ImportModule +static PyObject *__Pyx_ImportModule(const char *name) { + PyObject *py_name = 0; + PyObject *py_module = 0; + py_name = __Pyx_PyIdentifier_FromString(name); + if (!py_name) + goto bad; + py_module = PyImport_Import(py_name); + Py_DECREF(py_name); + return py_module; +bad: + Py_XDECREF(py_name); + return 0; +} +#endif + +/* TypeImport */ + #ifndef __PYX_HAVE_RT_ImportType +#define __PYX_HAVE_RT_ImportType +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, + size_t size, int strict) +{ + PyObject *py_module = 0; + PyObject *result = 0; + PyObject *py_name = 0; + char warning[200]; + Py_ssize_t basicsize; +#ifdef Py_LIMITED_API + PyObject *py_basicsize; +#endif + py_module = __Pyx_ImportModule(module_name); + if (!py_module) + goto bad; + py_name = __Pyx_PyIdentifier_FromString(class_name); + if (!py_name) + goto bad; + result = PyObject_GetAttr(py_module, py_name); + Py_DECREF(py_name); + py_name = 0; + Py_DECREF(py_module); + py_module = 0; + if (!result) + goto bad; + if (!PyType_Check(result)) { + PyErr_Format(PyExc_TypeError, + "%.200s.%.200s is not a type object", + module_name, class_name); + goto bad; + } +#ifndef Py_LIMITED_API + basicsize = ((PyTypeObject *)result)->tp_basicsize; +#else + py_basicsize = PyObject_GetAttrString(result, "__basicsize__"); + if (!py_basicsize) + goto bad; + basicsize = PyLong_AsSsize_t(py_basicsize); + Py_DECREF(py_basicsize); + py_basicsize = 0; + if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred()) + goto bad; +#endif + if (!strict && (size_t)basicsize > size) { + PyOS_snprintf(warning, sizeof(warning), + "%s.%s size changed, may indicate binary incompatibility. Expected %zd, got %zd", + module_name, class_name, basicsize, size); + if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad; + } + else if ((size_t)basicsize != size) { + PyErr_Format(PyExc_ValueError, + "%.200s.%.200s has the wrong size, try recompiling. 
Expected %zd, got %zd", + module_name, class_name, basicsize, size); + goto bad; + } + return (PyTypeObject *)result; +bad: + Py_XDECREF(py_module); + Py_XDECREF(result); + return NULL; +} +#endif + +/* InitStrings */ + static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { + return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); +} +static CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject* o) { + Py_ssize_t ignore; + return __Pyx_PyObject_AsStringAndSize(o, &ignore); +} +static CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { +#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) + if ( +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + __Pyx_sys_getdefaultencoding_not_ascii && +#endif + PyUnicode_Check(o)) { +#if PY_VERSION_HEX < 0x03030000 + char* defenc_c; + PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); + if (!defenc) return NULL; + defenc_c = PyBytes_AS_STRING(defenc); +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + { + char* end = defenc_c + PyBytes_GET_SIZE(defenc); + char* c; + for (c = defenc_c; c < end; c++) { + if ((unsigned char) (*c) >= 128) { + PyUnicode_AsASCIIString(o); + return NULL; + } + } + } +#endif + *length = PyBytes_GET_SIZE(defenc); + return defenc_c; +#else + if (__Pyx_PyUnicode_READY(o) == -1) return NULL; +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + if (PyUnicode_IS_ASCII(o)) { + *length = PyUnicode_GET_LENGTH(o); + return PyUnicode_AsUTF8(o); + } else { + PyUnicode_AsASCIIString(o); + return NULL; + } +#else + return PyUnicode_AsUTF8AndSize(o, length); +#endif +#endif + } else +#endif +#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) + if (PyByteArray_Check(o)) { + *length = PyByteArray_GET_SIZE(o); + return PyByteArray_AS_STRING(o); + } else +#endif + { + char* result; + int r = PyBytes_AsStringAndSize(o, &result, length); + if (unlikely(r < 0)) { + return NULL; + } else { + return result; + } + } +} +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + int is_true = x == Py_True; + if (is_true | (x == Py_False) | (x == Py_None)) return is_true; + else return PyObject_IsTrue(x); +} +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { +#if CYTHON_USE_TYPE_SLOTS + PyNumberMethods *m; +#endif + const char *name = NULL; + PyObject *res = NULL; +#if PY_MAJOR_VERSION < 3 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if (PyLong_Check(x)) +#endif + return __Pyx_NewRef(x); +#if CYTHON_USE_TYPE_SLOTS + m = Py_TYPE(x)->tp_as_number; + #if PY_MAJOR_VERSION < 3 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } 
+ #else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } + #endif +#else + res = PyNumber_Int(x); +#endif + if (res) { +#if PY_MAJOR_VERSION < 3 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%.4s__ returned non-%.4s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject *x; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_CheckExact(b))) { + if (sizeof(Py_ssize_t) >= sizeof(long)) + return PyInt_AS_LONG(b); + else + return PyInt_AsSsize_t(x); + } +#endif + if (likely(PyLong_CheckExact(b))) { + #if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)b)->ob_digit; + const Py_ssize_t size = Py_SIZE(b); + if (likely(__Pyx_sst_abs(size) <= 1)) { + ival = likely(size) ? digits[0] : 0; + if (size == -1) ival = -ival; + return ival; + } else { + switch (size) { + case 2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + } + } + #endif + return PyLong_AsSsize_t(b); + } + x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { + return PyInt_FromSize_t(ival); +} + + +#endif /* Py_PYTHON_H */ diff --git a/dota_kit/poly_nms_gpu/poly_nms.hpp b/dota_kit/poly_nms_gpu/poly_nms.hpp new file mode 100644 index 0000000..61f2df7 --- /dev/null +++ b/dota_kit/poly_nms_gpu/poly_nms.hpp @@ -0,0 +1,12 @@ +// +// Created by dingjian on 18-5-24. 
+// + +#ifndef DOTA_DEVKIT_POLY_NMS_HPP +#define DOTA_DEVKIT_POLY_NMS_HPP + + +void _poly_nms(int* keep_out, int* num_out, const float* polys_host, int polys_num, + int polys_dim, float nms_overlap_thresh, int device_id); + +#endif //DOTA_DEVKIT_POLY_NMS_HPP diff --git a/dota_kit/poly_nms_gpu/poly_nms.pyx b/dota_kit/poly_nms_gpu/poly_nms.pyx new file mode 100644 index 0000000..100ec33 --- /dev/null +++ b/dota_kit/poly_nms_gpu/poly_nms.pyx @@ -0,0 +1,24 @@ +import numpy as np +cimport numpy as np + +assert sizeof(int) == sizeof(np.int32_t) + +cdef extern from "poly_nms.hpp": + void _poly_nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + +def poly_gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, + np.int32_t device_id=0): + cdef int boxes_num = dets.shape[0] + cdef int boxes_dim = dets.shape[1] + cdef int num_out + cdef np.ndarray[np.int32_t, ndim=1] \ + keep = np.zeros(boxes_num, dtype=np.int32) + cdef np.ndarray[np.float32_t, ndim=1] \ + scores = dets[:, 8] + cdef np.ndarray[np.int_t, ndim=1] \ + order = scores.argsort()[::-1] + cdef np.ndarray[np.float32_t, ndim=2] \ + sorted_dets = dets[order, :] + _poly_nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + keep = keep[:num_out] + return list(order[keep]) diff --git a/dota_kit/poly_nms_gpu/poly_nms_kernel.cu b/dota_kit/poly_nms_gpu/poly_nms_kernel.cu new file mode 100644 index 0000000..8f6a2cb --- /dev/null +++ b/dota_kit/poly_nms_gpu/poly_nms_kernel.cu @@ -0,0 +1,346 @@ + +#include "poly_nms.hpp" +#include +#include +#include +#include +#include + +using namespace std; + +//##define CUDA_CHECK(condition)\ +// +// do { +// cudaError_t error = condition; +// if (error != cudaSuccess) { +// +// } +// } + +#define CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + if (error != cudaSuccess) { \ + std::cout << cudaGetErrorString(error) << std::endl; \ + } \ + } while (0) + +#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0)) +int const threadsPerBlock = sizeof(unsigned long long) * 8; + + +#define maxn 51 +const double eps=1E-8; + +__device__ inline int sig(float d){ + return(d>eps)-(d<-eps); +} +// struct Point{ +// double x,y; Point(){} +// Point(double x,double y):x(x),y(y){} +// bool operator==(const Point&p)const{ +// return sig(x-p.x)==0&&sig(y-p.y)==0; +// } +// }; + +__device__ inline int point_eq(const float2 a, const float2 b) { + return sig(a.x - b.x) == 0 && sig(a.y - b.y)==0; +} + +__device__ inline void point_swap(float2 *a, float2 *b) { + float2 temp = *a; + *a = *b; + *b = temp; +} + +__device__ inline void point_reverse(float2 *first, float2* last) +{ + while ((first!=last)&&(first!=--last)) { + point_swap (first,last); + ++first; + } +} +// void point_reverse(Point* first, Point* last) +// { +// while ((first!=last)&&(first!=--last)) { +// point_swap (first,last); +// ++first; +// } +// } + + +__device__ inline float cross(float2 o,float2 a,float2 b){ //叉积 + return(a.x-o.x)*(b.y-o.y)-(b.x-o.x)*(a.y-o.y); +} +__device__ inline float area(float2* ps,int n){ + ps[n]=ps[0]; + float res=0; + for(int i=0;i0) pp[m++]=p[i]; +// if(sig(cross(a,b,p[i]))!=sig(cross(a,b,p[i+1]))) +// lineCross(a,b,p[i],p[i+1],pp[m++]); +// } +// n=0; +// for(int i=0;i1&&p[n-1]==p[0])n--; +// while(n>1&&point_eq(p[n-1], p[0]))n--; +// } + +__device__ inline void polygon_cut(float2*p,int&n,float2 a,float2 b, float2* pp){ + + int m=0;p[n]=p[0]; + for(int i=0;i0) pp[m++]=p[i]; + 
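For reference, a minimal usage sketch for the `poly_gpu_nms` wrapper defined in `poly_nms.pyx` above; the module name `poly_nms` and an available GPU 0 are assumptions, not taken from the repository. `dets` is an (N, 9) float32 array holding the four polygon corners (x1, y1, ..., x4, y4) followed by the score in column 8, and the call returns indices into the original, unsorted `dets`.

```
import numpy as np
from poly_nms import poly_gpu_nms  # module name assumed from the .pyx filename

dets = np.array([
    [0, 0, 10, 0, 10, 10, 0, 10, 0.9],      # 10x10 square, highest score
    [1, 1, 11, 1, 11, 11, 1, 11, 0.6],      # heavily overlapping square, suppressed
    [50, 50, 60, 50, 60, 60, 50, 60, 0.8],  # disjoint square, kept
], dtype=np.float32)

keep = poly_gpu_nms(dets, 0.3)  # IoU threshold 0.3, default device_id=0
print(keep)                     # expected: the 0.9 and 0.8 boxes, i.e. [0, 2]
```

The wrapper sorts by the score column, runs `_poly_nms` on the sorted polygons, and maps the kept positions back through `order`, so the returned indices refer to the caller's original ordering.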
if(sig(cross(a,b,p[i]))!=sig(cross(a,b,p[i+1]))) + lineCross(a,b,p[i],p[i+1],pp[m++]); + } + n=0; + for(int i=0;i1&&p[n-1]==p[0])n--; + while(n>1&&point_eq(p[n-1], p[0]))n--; +} + +//---------------华丽的分隔线-----------------// +//返回三角形oab和三角形ocd的有向交面积,o是原点// +__device__ inline float intersectArea(float2 a,float2 b,float2 c,float2 d){ + float2 o = make_float2(0,0); + int s1=sig(cross(o,a,b)); + int s2=sig(cross(o,c,d)); + if(s1==0||s2==0)return 0.0;//退化,面积为0 + // if(s1==-1) swap(a,b); + // if(s2==-1) swap(c,d); + if (s1 == -1) point_swap(&a, &b); + if (s2 == -1) point_swap(&c, &d); + float2 p[10]={o,a,b}; + int n=3; + float2 pp[maxn]; + polygon_cut(p,n,o,c,pp); + polygon_cut(p,n,c,d,pp); + polygon_cut(p,n,d,o,pp); + float res=fabs(area(p,n)); + if(s1*s2==-1) res=-res;return res; +} +//求两多边形的交面积 +__device__ inline float intersectArea(float2*ps1,int n1,float2*ps2,int n2){ + if(area(ps1,n1)<0) point_reverse(ps1,ps1+n1); + if(area(ps2,n2)<0) point_reverse(ps2,ps2+n2); + ps1[n1]=ps1[0]; + ps2[n2]=ps2[0]; + float res=0; + for(int i=0;i p, vector q) { +// Point ps1[maxn],ps2[maxn]; +// int n1 = 4; +// int n2 = 4; +// for (int i = 0; i < 4; i++) { +// ps1[i].x = p[i * 2]; +// ps1[i].y = p[i * 2 + 1]; +// +// ps2[i].x = q[i * 2]; +// ps2[i].y = q[i * 2 + 1]; +// } +// double inter_area = intersectArea(ps1, n1, ps2, n2); +// double union_area = fabs(area(ps1, n1)) + fabs(area(ps2, n2)) - inter_area; +// double iou = inter_area / union_area; +// +//// cout << "inter_area:" << inter_area << endl; +//// cout << "union_area:" << union_area << endl; +//// cout << "iou:" << iou << endl; +// +// return iou; +//} + +__device__ inline float devPolyIoU(float const * const p, float const * const q) { + float2 ps1[maxn], ps2[maxn]; + int n1 = 4; + int n2 = 4; + for (int i = 0; i < 4; i++) { + ps1[i].x = p[i * 2]; + ps1[i].y = p[i * 2 + 1]; + + ps2[i].x = q[i * 2]; + ps2[i].y = q[i * 2 + 1]; + } + float inter_area = intersectArea(ps1, n1, ps2, n2); + float union_area = fabs(area(ps1, n1)) + fabs(area(ps2, n2)) - inter_area; + float iou = 0; + if (union_area == 0) { + iou = (inter_area + 1) / (union_area + 1); + } else { + iou = inter_area / union_area; + } + return iou; +} + +__global__ void poly_nms_kernel(const int n_polys, const float nms_overlap_thresh, + const float *dev_polys, unsigned long long *dev_mask) { + const int row_start = blockIdx.y; + const int col_start = blockIdx.x; + + const int row_size = + min(n_polys - row_start * threadsPerBlock, threadsPerBlock); + const int cols_size = + min(n_polys - col_start * threadsPerBlock, threadsPerBlock); + + __shared__ float block_polys[threadsPerBlock * 9]; + if (threadIdx.x < cols_size) { + block_polys[threadIdx.x * 9 + 0] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 0]; + block_polys[threadIdx.x * 9 + 1] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 1]; + block_polys[threadIdx.x * 9 + 2] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 2]; + block_polys[threadIdx.x * 9 + 3] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 3]; + block_polys[threadIdx.x * 9 + 4] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 4]; + block_polys[threadIdx.x * 9 + 5] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 5]; + block_polys[threadIdx.x * 9 + 6] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 6]; + block_polys[threadIdx.x * 9 + 7] = + dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 7]; + block_polys[threadIdx.x * 9 + 8] = + dev_polys[(threadsPerBlock 
* col_start + threadIdx.x) * 9 + 8]; + } + __syncthreads(); + + if (threadIdx.x < row_size) { + const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x; + const float *cur_box = dev_polys + cur_box_idx * 9; + int i = 0; + unsigned long long t = 0; + int start = 0; + if (row_start == col_start) { + start = threadIdx.x + 1; + } + for (i = start; i < cols_size; i++) { + if (devPolyIoU(cur_box, block_polys + i * 9) > nms_overlap_thresh) { + t |= 1ULL << i; + } + } + const int col_blocks = DIVUP(n_polys, threadsPerBlock); + dev_mask[cur_box_idx * col_blocks + col_start] = t; + } +} + +void _set_device(int device_id) { + int current_device; + CUDA_CHECK(cudaGetDevice(¤t_device)); + if (current_device == device_id) { + return; + } + // The call to cudaSetDevice must come before any calls to Get, which + // may perform initailization using the GPU. + CUDA_CHECK(cudaSetDevice(device_id)); +} + +void _poly_nms(int* keep_out, int* num_out, const float* polys_host, int polys_num, + int polys_dim, float nms_overlap_thresh, int device_id) { + float* polys_dev = NULL; + unsigned long long* mask_dev = NULL; + const int col_blocks = DIVUP(polys_num, threadsPerBlock); + + CUDA_CHECK(cudaMalloc(&polys_dev, + polys_num * polys_dim * sizeof(float))); + CUDA_CHECK(cudaMemcpy(polys_dev, + polys_host, + polys_num * polys_dim * sizeof(float), + cudaMemcpyHostToDevice)); + + CUDA_CHECK(cudaMalloc(&mask_dev, + polys_num * col_blocks * sizeof(unsigned long long))); + + dim3 blocks(DIVUP(polys_num, threadsPerBlock), + DIVUP(polys_num, threadsPerBlock)); + dim3 threads(threadsPerBlock); +// __global__ void poly_nms_kernel(const int n_polys, const float nms_overlap_thresh, +// const float *dev_polys, unsigned long long *dev_mask) + poly_nms_kernel<<>>(polys_num, + nms_overlap_thresh, + polys_dev, + mask_dev); + + std::vector mask_host(polys_num * col_blocks); + CUDA_CHECK(cudaMemcpy(&mask_host[0], + mask_dev, + sizeof(unsigned long long) * polys_num * col_blocks, + cudaMemcpyDeviceToHost)); + + std::vector remv(col_blocks); + memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks); + // TODO: figure out it + int num_to_keep = 0; + for (int i = 0; i < polys_num; i++) { + int nblock = i / threadsPerBlock; + int inblock = i % threadsPerBlock; + + if (!(remv[nblock] & (1ULL << inblock))) { + keep_out[num_to_keep++] = i; + unsigned long long *p = &mask_host[0] + i * col_blocks; + for (int j = nblock; j < col_blocks; j++) { + remv[j] |= p[j]; + } + } + } + *num_out = num_to_keep; + + CUDA_CHECK(cudaFree(polys_dev)); + CUDA_CHECK(cudaFree(mask_dev)); +} + +// +//int main(){ +// double p[8] = {0, 0, 1, 0, 1, 1, 0, 1}; +// double q[8] = {0.5, 0.5, 1.5, 0.5, 1.5, 1.5, 0.5, 1.5}; +// vector P(p, p + 8); +// vector Q(q, q + 8); +// iou_poly(P, Q); +// return 0; +//} + +//int main(){ +// double p[8] = {0, 0, 1, 0, 1, 1, 0, 1}; +// double q[8] = {0.5, 0.5, 1.5, 0.5, 1.5, 1.5, 0.5, 1.5}; +// iou_poly(p, q); +// return 0; +//} \ No newline at end of file diff --git a/dota_kit/poly_nms_gpu/poly_nms_test.py b/dota_kit/poly_nms_gpu/poly_nms_test.py new file mode 100644 index 0000000..e69de29 diff --git a/dota_kit/poly_nms_gpu/poly_overlaps.cpp b/dota_kit/poly_nms_gpu/poly_overlaps.cpp new file mode 100644 index 0000000..ed194e2 --- /dev/null +++ b/dota_kit/poly_nms_gpu/poly_overlaps.cpp @@ -0,0 +1,7325 @@ +/* Generated by Cython 0.25.2 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. 
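As background for the device helpers in `poly_nms_kernel.cu` above: `cross(o, a, b)` is the 2-D cross product of (a - o) and (b - o), `area()` returns the signed polygon area (its sign encodes vertex orientation), and `intersectArea(ps1, n1, ps2, n2)` uses that sign to normalize both quadrilaterals to counter-clockwise order (via `point_reverse`) before clipping them against each other. A short illustrative Python sketch of the signed-area building block (not repository code):

```
def cross(o, a, b):
    # 2-D cross product of (a - o) and (b - o); positive for a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (b[0] - o[0]) * (a[1] - o[1])

def signed_area(pts):
    # Fan-triangulate from pts[0]: the sum of signed triangle areas equals the
    # signed polygon area (positive for counter-clockwise vertex order).
    return sum(cross(pts[0], pts[i], pts[i + 1]) for i in range(1, len(pts) - 1)) / 2.0

square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
assert signed_area(square) == 100.0          # counter-clockwise ring
assert signed_area(square[::-1]) == -100.0   # reversing the ring flips the sign
```

`devPolyIoU` then divides the clipped intersection area by the union of the two absolute areas; its `(inter_area + 1) / (union_area + 1)` branch only guards the degenerate zero-union case.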
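The host-side loop in `_poly_nms` (the part marked `// TODO: figure out it`) is a greedy reduction over the bit mask written by `poly_nms_kernel`: `threadsPerBlock` is 64, bit `j` of row `i` in the mask means IoU(box i, box j) exceeds the threshold, and because the boxes arrive sorted by score, a box survives only if no previously kept box has suppressed it. A hedged NumPy sketch of the same reduction (names are illustrative, not from the repository):

```
import numpy as np

def reduce_mask(mask, n_boxes, threads_per_block=64):
    # mask: (n_boxes, col_blocks) uint64; row i holds the overlap bits for
    # (score-sorted) box i, packed 64 boxes per block as in the CUDA kernel.
    col_blocks = (n_boxes + threads_per_block - 1) // threads_per_block  # DIVUP
    remv = np.zeros(col_blocks, dtype=np.uint64)   # bits of suppressed boxes
    keep = []
    for i in range(n_boxes):
        nblock, inblock = divmod(i, threads_per_block)
        if not ((int(remv[nblock]) >> inblock) & 1):   # not yet suppressed
            keep.append(i)
            # OR-ing every block (the C++ loop starts at j = nblock) is
            # equivalent, because earlier boxes have already been decided.
            remv |= mask[i]
    return keep
```

`keep_out` and `num_out` carry exactly this result back to the Cython wrapper, which then restores the original box indices.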
+#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03020000) + #error Cython requires Python 2.6+ or Python 3.2+. +#else +#define CYTHON_ABI "0_25_2" +#include +#ifndef offsetof + #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) +#endif +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#endif +#ifndef DL_IMPORT + #define DL_IMPORT(t) t +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#ifndef HAVE_LONG_LONG + #if PY_VERSION_HEX >= 0x03030000 || (PY_MAJOR_VERSION == 2 && PY_VERSION_HEX >= 0x02070000) + #define HAVE_LONG_LONG + #endif +#endif +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef Py_HUGE_VAL + #define Py_HUGE_VAL HUGE_VAL +#endif +#ifdef PYPY_VERSION + #define CYTHON_COMPILING_IN_PYPY 1 + #define CYTHON_COMPILING_IN_PYSTON 0 + #define CYTHON_COMPILING_IN_CPYTHON 0 + #undef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 0 + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #undef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 0 + #undef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 0 + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #undef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 1 + #undef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 0 + #undef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 0 + #undef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 0 + #undef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 0 +#elif defined(PYSTON_VERSION) + #define CYTHON_COMPILING_IN_PYPY 0 + #define CYTHON_COMPILING_IN_PYSTON 1 + #define CYTHON_COMPILING_IN_CPYTHON 0 + #ifndef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 1 + #endif + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #undef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 0 + #ifndef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 1 + #endif + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #ifndef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 0 + #endif + #ifndef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 1 + #endif + #ifndef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 1 + #endif + #undef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 0 + #undef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 0 +#else + #define CYTHON_COMPILING_IN_PYPY 0 + #define CYTHON_COMPILING_IN_PYSTON 0 + #define CYTHON_COMPILING_IN_CPYTHON 1 + #ifndef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 1 + #endif + #if PY_MAJOR_VERSION < 3 + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #elif !defined(CYTHON_USE_ASYNC_SLOTS) + #define CYTHON_USE_ASYNC_SLOTS 1 + #endif + #if PY_VERSION_HEX < 0x02070000 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #elif !defined(CYTHON_USE_PYLONG_INTERNALS) + #define CYTHON_USE_PYLONG_INTERNALS 1 + #endif + #ifndef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 1 + #endif + #ifndef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 1 + #endif + #if PY_VERSION_HEX 
< 0x030300F0 + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #elif !defined(CYTHON_USE_UNICODE_WRITER) + #define CYTHON_USE_UNICODE_WRITER 1 + #endif + #ifndef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 0 + #endif + #ifndef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 1 + #endif + #ifndef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 1 + #endif + #ifndef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 1 + #endif + #ifndef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 1 + #endif +#endif +#if !defined(CYTHON_FAST_PYCCALL) +#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) +#endif +#if CYTHON_USE_PYLONG_INTERNALS + #include "longintrepr.h" + #undef SHIFT + #undef BASE + #undef MASK +#endif +#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) + #define Py_OptimizeFlag 0 +#endif +#define __PYX_BUILD_PY_SSIZE_T "n" +#define CYTHON_FORMAT_SSIZE_T "z" +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) + #define __Pyx_DefaultClassType PyClass_Type +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) + #define __Pyx_DefaultClassType PyType_Type +#endif +#ifndef Py_TPFLAGS_CHECKTYPES + #define Py_TPFLAGS_CHECKTYPES 0 +#endif +#ifndef Py_TPFLAGS_HAVE_INDEX + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif +#ifndef Py_TPFLAGS_HAVE_NEWBUFFER + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif +#ifndef Py_TPFLAGS_HAVE_FINALIZE + #define Py_TPFLAGS_HAVE_FINALIZE 0 +#endif +#ifndef METH_FASTCALL + #define METH_FASTCALL 0x80 + typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args, + Py_ssize_t nargs, PyObject *kwnames); +#else + #define __Pyx_PyCFunctionFast _PyCFunctionFast +#endif +#if CYTHON_FAST_PYCCALL +#define __Pyx_PyFastCFunction_Check(func)\ + ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST))))) +#else +#define __Pyx_PyFastCFunction_Check(func) 0 +#endif +#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) + #define CYTHON_PEP393_ENABLED 1 + #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ + 0 : _PyUnicode_Ready((PyObject *)(op))) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) + #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) + #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) + #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) + #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) + #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) +#else + #define CYTHON_PEP393_ENABLED 0 + #define PyUnicode_1BYTE_KIND 1 + #define PyUnicode_2BYTE_KIND 2 + #define PyUnicode_4BYTE_KIND 4 + #define __Pyx_PyUnicode_READY(op) (0) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) + #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) + #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) + #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) + #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) + #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) +#endif +#if CYTHON_COMPILING_IN_PYPY + #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) +#else + #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ + PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) + #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) + #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) + #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) + #define PyObject_Malloc(s) PyMem_Malloc(s) + #define PyObject_Free(p) PyMem_Free(p) + #define PyObject_Realloc(p) PyMem_Realloc(p) +#endif +#if CYTHON_COMPILING_IN_PYSTON + #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) + #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) +#else + #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) + #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) +#endif +#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) +#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) +#else + #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) +#endif +#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) + #define PyObject_ASCII(o) PyObject_Repr(o) +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyStringObject PyUnicodeObject + #define PyString_Type PyUnicode_Type + #define PyString_Check PyUnicode_Check + #define PyString_CheckExact PyUnicode_CheckExact +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) + #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) +#else + #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) + #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) +#endif +#ifndef PySet_CheckExact + #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) +#endif +#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) +#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) +#if PY_MAJOR_VERSION >= 3 + #define PyIntObject PyLongObject + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define PyNumber_Int PyNumber_Long +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBoolObject PyLongObject +#endif +#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY + #ifndef PyUnicode_InternFromString + #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) + #endif +#endif +#if PY_VERSION_HEX < 0x030200A4 + typedef long Py_hash_t; + #define __Pyx_PyInt_FromHash_t PyInt_FromLong + #define __Pyx_PyInt_AsHash_t PyInt_AsLong +#else + #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t + #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
PyMethod_New(func, self) : PyInstanceMethod_New(func)) +#else + #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) +#endif +#if CYTHON_USE_ASYNC_SLOTS + #if PY_VERSION_HEX >= 0x030500B1 + #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods + #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) + #else + typedef struct { + unaryfunc am_await; + unaryfunc am_aiter; + unaryfunc am_anext; + } __Pyx_PyAsyncMethodsStruct; + #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) + #endif +#else + #define __Pyx_PyType_AsAsync(obj) NULL +#endif +#ifndef CYTHON_RESTRICT + #if defined(__GNUC__) + #define CYTHON_RESTRICT __restrict__ + #elif defined(_MSC_VER) && _MSC_VER >= 1400 + #define CYTHON_RESTRICT __restrict + #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define CYTHON_RESTRICT restrict + #else + #define CYTHON_RESTRICT + #endif +#endif +#ifndef CYTHON_UNUSED +# if defined(__GNUC__) +# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +#endif +#ifndef CYTHON_MAYBE_UNUSED_VAR +# if defined(__cplusplus) + template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } +# else +# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) +# endif +#endif +#ifndef CYTHON_NCP_UNUSED +# if CYTHON_COMPILING_IN_CPYTHON +# define CYTHON_NCP_UNUSED +# else +# define CYTHON_NCP_UNUSED CYTHON_UNUSED +# endif +#endif +#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) + +#ifndef __cplusplus + #error "Cython files generated with the C++ option must be compiled with a C++ compiler." 
+#endif +#ifndef CYTHON_INLINE + #if defined(__clang__) + #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) + #else + #define CYTHON_INLINE inline + #endif +#endif +template +void __Pyx_call_destructor(T& x) { + x.~T(); +} +template +class __Pyx_FakeReference { + public: + __Pyx_FakeReference() : ptr(NULL) { } + __Pyx_FakeReference(const T& ref) : ptr(const_cast(&ref)) { } + T *operator->() { return ptr; } + T *operator&() { return ptr; } + operator T&() { return *ptr; } + template bool operator ==(U other) { return *ptr == other; } + template bool operator !=(U other) { return *ptr != other; } + private: + T *ptr; +}; + +#if defined(WIN32) || defined(MS_WINDOWS) + #define _USE_MATH_DEFINES +#endif +#include +#ifdef NAN +#define __PYX_NAN() ((float) NAN) +#else +static CYTHON_INLINE float __PYX_NAN() { + float value; + memset(&value, 0xFF, sizeof(value)); + return value; +} +#endif +#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) +#define __Pyx_truncl trunc +#else +#define __Pyx_truncl truncl +#endif + + +#define __PYX_ERR(f_index, lineno, Ln_error) \ +{ \ + __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \ +} + +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) +#endif + +#ifndef __PYX_EXTERN_C + #ifdef __cplusplus + #define __PYX_EXTERN_C extern "C" + #else + #define __PYX_EXTERN_C extern + #endif +#endif + +#define __PYX_HAVE__poly_overlaps +#define __PYX_HAVE_API__poly_overlaps +#include +#include +#include +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" +#include "poly_overlaps.hpp" +#ifdef _OPENMP +#include +#endif /* _OPENMP */ + +#ifdef PYREX_WITHOUT_ASSERTIONS +#define CYTHON_WITHOUT_ASSERTIONS +#endif + +typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; + const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; + +#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 +#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0 +#define __PYX_DEFAULT_STRING_ENCODING "" +#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString +#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#define __Pyx_uchar_cast(c) ((unsigned char)c) +#define __Pyx_long_cast(x) ((long)x) +#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ + (sizeof(type) < sizeof(Py_ssize_t)) ||\ + (sizeof(type) > sizeof(Py_ssize_t) &&\ + likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX) &&\ + (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ + v == (type)PY_SSIZE_T_MIN))) ||\ + (sizeof(type) == sizeof(Py_ssize_t) &&\ + (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX))) ) +#if defined (__cplusplus) && __cplusplus >= 201103L + #include + #define __Pyx_sst_abs(value) std::abs(value) +#elif SIZEOF_INT >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) abs(value) +#elif SIZEOF_LONG >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) labs(value) +#elif defined (_MSC_VER) && defined (_M_X64) + #define __Pyx_sst_abs(value) _abs64(value) +#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define __Pyx_sst_abs(value) llabs(value) +#elif defined (__GNUC__) + #define __Pyx_sst_abs(value) __builtin_llabs(value) +#else + #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) +#endif +static CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject*); +static CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); +#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) +#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); +#if PY_MAJOR_VERSION < 3 + #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#else + #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize +#endif +#define __Pyx_PyObject_AsSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_AsUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) +#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) +#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) +#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) +#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) +#if PY_MAJOR_VERSION < 3 +static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) +{ + const Py_UNICODE *u_end = u; + while (*u_end++) ; + return (size_t)(u_end - u - 1); +} +#else +#define __Pyx_Py_UNICODE_strlen Py_UNICODE_strlen +#endif +#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) +#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode +#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode +#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) +#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) +#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +#if CYTHON_ASSUME_SAFE_MACROS +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) +#else +#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) +#endif +#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) +#if PY_MAJOR_VERSION >= 3 +#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) +#else +#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) +#endif +#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x)) +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII +static int __Pyx_sys_getdefaultencoding_not_ascii; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + PyObject* ascii_chars_u = NULL; + PyObject* ascii_chars_b = NULL; + const char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + if (strcmp(default_encoding_c, "ascii") == 0) { + __Pyx_sys_getdefaultencoding_not_ascii = 0; + } else { + char ascii_chars[128]; + int c; + for (c = 0; c < 128; c++) { + ascii_chars[c] = c; + } + __Pyx_sys_getdefaultencoding_not_ascii = 1; + ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); + if (!ascii_chars_u) goto bad; + ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); + if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { + PyErr_Format( + PyExc_ValueError, + "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", + default_encoding_c); + goto bad; + } + Py_DECREF(ascii_chars_u); + Py_DECREF(ascii_chars_b); + } + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + Py_XDECREF(ascii_chars_u); + Py_XDECREF(ascii_chars_b); + return -1; +} +#endif +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) +#else +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT +static char* __PYX_DEFAULT_STRING_ENCODING; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c)); + if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; + strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + return -1; +} +#endif +#endif + + +/* Test for GCC > 2.95 */ +#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) + #define likely(x) __builtin_expect(!!(x), 1) + #define unlikely(x) __builtin_expect(!!(x), 0) +#else /* !__GNUC__ or GCC < 2.95 */ + #define likely(x) (x) + #define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_d; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static PyObject *__pyx_empty_unicode; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; + +/* Header.proto */ +#if !defined(CYTHON_CCOMPLEX) + #if defined(__cplusplus) + #define CYTHON_CCOMPLEX 1 + #elif 
defined(_Complex_I) + #define CYTHON_CCOMPLEX 1 + #else + #define CYTHON_CCOMPLEX 0 + #endif +#endif +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #include + #else + #include + #endif +#endif +#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) + #undef _Complex_I + #define _Complex_I 1.0fj +#endif + + +static const char *__pyx_f[] = { + "poly_overlaps.pyx", + "__init__.pxd", + "type.pxd", +}; +/* BufferFormatStructs.proto */ +#define IS_UNSIGNED(type) (((type) -1) > 0) +struct __Pyx_StructField_; +#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) +typedef struct { + const char* name; + struct __Pyx_StructField_* fields; + size_t size; + size_t arraysize[8]; + int ndim; + char typegroup; + char is_unsigned; + int flags; +} __Pyx_TypeInfo; +typedef struct __Pyx_StructField_ { + __Pyx_TypeInfo* type; + const char* name; + size_t offset; +} __Pyx_StructField; +typedef struct { + __Pyx_StructField* field; + size_t parent_offset; +} __Pyx_BufFmt_StackElem; +typedef struct { + __Pyx_StructField root; + __Pyx_BufFmt_StackElem* head; + size_t fmt_offset; + size_t new_count, enc_count; + size_t struct_alignment; + int is_complex; + char enc_type; + char new_packmode; + char enc_packmode; + char is_valid_array; +} __Pyx_BufFmt_Context; + + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":725 + * # in Cython to enable them only on the right systems. + * + * ctypedef npy_int8 int8_t # <<<<<<<<<<<<<< + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + */ +typedef npy_int8 __pyx_t_5numpy_int8_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":726 + * + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t # <<<<<<<<<<<<<< + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t + */ +typedef npy_int16 __pyx_t_5numpy_int16_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":727 + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t # <<<<<<<<<<<<<< + * ctypedef npy_int64 int64_t + * #ctypedef npy_int96 int96_t + */ +typedef npy_int32 __pyx_t_5numpy_int32_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":728 + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t # <<<<<<<<<<<<<< + * #ctypedef npy_int96 int96_t + * #ctypedef npy_int128 int128_t + */ +typedef npy_int64 __pyx_t_5numpy_int64_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":732 + * #ctypedef npy_int128 int128_t + * + * ctypedef npy_uint8 uint8_t # <<<<<<<<<<<<<< + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + */ +typedef npy_uint8 __pyx_t_5numpy_uint8_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":733 + * + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t # <<<<<<<<<<<<<< + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t + */ +typedef npy_uint16 __pyx_t_5numpy_uint16_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":734 + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t # <<<<<<<<<<<<<< + * ctypedef npy_uint64 uint64_t + * #ctypedef npy_uint96 uint96_t + */ +typedef npy_uint32 __pyx_t_5numpy_uint32_t; + +/* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":735 + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t # <<<<<<<<<<<<<< + * #ctypedef npy_uint96 uint96_t + * #ctypedef npy_uint128 uint128_t + */ +typedef npy_uint64 __pyx_t_5numpy_uint64_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":739 + * #ctypedef npy_uint128 uint128_t + * + * ctypedef npy_float32 float32_t # <<<<<<<<<<<<<< + * ctypedef npy_float64 float64_t + * #ctypedef npy_float80 float80_t + */ +typedef npy_float32 __pyx_t_5numpy_float32_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":740 + * + * ctypedef npy_float32 float32_t + * ctypedef npy_float64 float64_t # <<<<<<<<<<<<<< + * #ctypedef npy_float80 float80_t + * #ctypedef npy_float128 float128_t + */ +typedef npy_float64 __pyx_t_5numpy_float64_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":749 + * # The int types are mapped a bit surprising -- + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t + */ +typedef npy_long __pyx_t_5numpy_int_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":750 + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong longlong_t + * + */ +typedef npy_longlong __pyx_t_5numpy_long_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":751 + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_ulong uint_t + */ +typedef npy_longlong __pyx_t_5numpy_longlong_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":753 + * ctypedef npy_longlong longlong_t + * + * ctypedef npy_ulong uint_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t + */ +typedef npy_ulong __pyx_t_5numpy_uint_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":754 + * + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulonglong_t + * + */ +typedef npy_ulonglong __pyx_t_5numpy_ulong_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":755 + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_intp intp_t + */ +typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":757 + * ctypedef npy_ulonglong ulonglong_t + * + * ctypedef npy_intp intp_t # <<<<<<<<<<<<<< + * ctypedef npy_uintp uintp_t + * + */ +typedef npy_intp __pyx_t_5numpy_intp_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":758 + * + * ctypedef npy_intp intp_t + * ctypedef npy_uintp uintp_t # <<<<<<<<<<<<<< + * + * ctypedef npy_double float_t + */ +typedef npy_uintp __pyx_t_5numpy_uintp_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":760 + * ctypedef 
npy_uintp uintp_t + * + * ctypedef npy_double float_t # <<<<<<<<<<<<<< + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t + */ +typedef npy_double __pyx_t_5numpy_float_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":761 + * + * ctypedef npy_double float_t + * ctypedef npy_double double_t # <<<<<<<<<<<<<< + * ctypedef npy_longdouble longdouble_t + * + */ +typedef npy_double __pyx_t_5numpy_double_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":762 + * ctypedef npy_double float_t + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cfloat cfloat_t + */ +typedef npy_longdouble __pyx_t_5numpy_longdouble_t; +/* Declarations.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< float > __pyx_t_float_complex; + #else + typedef float _Complex __pyx_t_float_complex; + #endif +#else + typedef struct { float real, imag; } __pyx_t_float_complex; +#endif +static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float); + +/* Declarations.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< double > __pyx_t_double_complex; + #else + typedef double _Complex __pyx_t_double_complex; + #endif +#else + typedef struct { double real, imag; } __pyx_t_double_complex; +#endif +static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); + + +/*--- Type declarations ---*/ + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":764 + * ctypedef npy_longdouble longdouble_t + * + * ctypedef npy_cfloat cfloat_t # <<<<<<<<<<<<<< + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble clongdouble_t + */ +typedef npy_cfloat __pyx_t_5numpy_cfloat_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":765 + * + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t # <<<<<<<<<<<<<< + * ctypedef npy_clongdouble clongdouble_t + * + */ +typedef npy_cdouble __pyx_t_5numpy_cdouble_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":766 + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble clongdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cdouble complex_t + */ +typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t; + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":768 + * ctypedef npy_clongdouble clongdouble_t + * + * ctypedef npy_cdouble complex_t # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew1(a): + */ +typedef npy_cdouble __pyx_t_5numpy_complex_t; + +/* --- Runtime support code (head) --- */ +/* Refnanny.proto */ +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); + #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; +#ifdef WITH_THREAD + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ 
+ if (acquire_gil) {\ + PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + PyGILState_Release(__pyx_gilstate_save);\ + } else {\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + } +#else + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) +#endif + #define __Pyx_RefNannyFinishContext()\ + __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) + #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) + #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) +#else + #define __Pyx_RefNannyDeclarations + #define __Pyx_RefNannySetupContext(name, acquire_gil) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XINCREF(r) Py_XINCREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) + #define __Pyx_XGOTREF(r) + #define __Pyx_XGIVEREF(r) +#endif +#define __Pyx_XDECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_XDECREF(tmp);\ + } while (0) +#define __Pyx_DECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_DECREF(tmp);\ + } while (0) +#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) +#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) + +/* RaiseArgTupleInvalid.proto */ +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); + +/* RaiseDoubleKeywords.proto */ +static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); + +/* ParseKeywords.proto */ +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ + PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ + const char* function_name); + +/* ArgTypeTest.proto */ +static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact); + +/* BufferFormatCheck.proto */ +static CYTHON_INLINE int __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj, + __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack); +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info); +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type); // PROTO + +/* PyObjectGetAttrStr.proto */ +#if CYTHON_USE_TYPE_SLOTS +static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { + PyTypeObject* tp = Py_TYPE(obj); + if (likely(tp->tp_getattro)) + return tp->tp_getattro(obj, attr_name); +#if PY_MAJOR_VERSION < 3 + if 
(likely(tp->tp_getattr)) + return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); +#endif + return PyObject_GetAttr(obj, attr_name); +} +#else +#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) +#endif + +/* GetBuiltinName.proto */ +static PyObject *__Pyx_GetBuiltinName(PyObject *name); + +/* GetModuleGlobalName.proto */ +static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name); + +/* PyObjectCall.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); +#else +#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) +#endif + +/* ExtTypeTest.proto */ +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); + +/* BufferIndexError.proto */ +static void __Pyx_RaiseBufferIndexError(int axis); + +#define __Pyx_BufPtrStrided2d(type, buf, i0, s0, i1, s1) (type)((char*)buf + i0 * s0 + i1 * s1) +/* PyThreadStateGet.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; +#define __Pyx_PyThreadState_assign __pyx_tstate = PyThreadState_GET(); +#else +#define __Pyx_PyThreadState_declare +#define __Pyx_PyThreadState_assign +#endif + +/* PyErrFetchRestore.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); +static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#else +#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) +#endif + +/* RaiseException.proto */ +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); + +/* DictGetItem.proto */ +#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY +static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) { + PyObject *value; + value = PyDict_GetItemWithError(d, key); + if (unlikely(!value)) { + if (!PyErr_Occurred()) { + PyObject* args = PyTuple_Pack(1, key); + if (likely(args)) + PyErr_SetObject(PyExc_KeyError, args); + Py_XDECREF(args); + } + return NULL; + } + Py_INCREF(value); + return value; +} +#else + #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key) +#endif + +/* RaiseTooManyValuesToUnpack.proto */ +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); + +/* RaiseNeedMoreValuesToUnpack.proto */ +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); + +/* RaiseNoneIterError.proto */ +static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); + +/* SaveResetException.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); 
+#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); +#else +#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) +#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) +#endif + +/* PyErrExceptionMatches.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) +static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); +#else +#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) +#endif + +/* GetException.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) +static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#else +static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); +#endif + +/* Import.proto */ +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); + +/* CodeObjectCache.proto */ +typedef struct { + PyCodeObject* code_object; + int code_line; +} __Pyx_CodeObjectCacheEntry; +struct __Pyx_CodeObjectCache { + int count; + int max_count; + __Pyx_CodeObjectCacheEntry* entries; +}; +static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; +static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); +static PyCodeObject *__pyx_find_code_object(int code_line); +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); + +/* AddTraceback.proto */ +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, const char *filename); + +/* BufferStructDeclare.proto */ +typedef struct { + Py_ssize_t shape, strides, suboffsets; +} __Pyx_Buf_DimInfo; +typedef struct { + size_t refcount; + Py_buffer pybuffer; +} __Pyx_Buffer; +typedef struct { + __Pyx_Buffer *rcbuffer; + char *data; + __Pyx_Buf_DimInfo diminfo[8]; +} __Pyx_LocalBuf_ND; + +#if PY_MAJOR_VERSION < 3 + static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); + static void __Pyx_ReleaseBuffer(Py_buffer *view); +#else + #define __Pyx_GetBuffer PyObject_GetBuffer + #define __Pyx_ReleaseBuffer PyBuffer_Release +#endif + + +/* None.proto */ +static Py_ssize_t __Pyx_zeros[] = {0, 0, 0, 0, 0, 0, 0, 0}; +static Py_ssize_t __Pyx_minusones[] = {-1, -1, -1, -1, -1, -1, -1, -1}; + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); + +/* RealImag.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #define __Pyx_CREAL(z) ((z).real()) + #define __Pyx_CIMAG(z) ((z).imag()) + #else + #define __Pyx_CREAL(z) (__real__(z)) + #define __Pyx_CIMAG(z) (__imag__(z)) + #endif +#else + #define __Pyx_CREAL(z) ((z).real) + #define __Pyx_CIMAG(z) ((z).imag) +#endif +#if defined(__cplusplus) && CYTHON_CCOMPLEX\ + && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103) + #define __Pyx_SET_CREAL(z,x) ((z).real(x)) + #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) +#else + #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) + #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) +#endif + +/* Arithmetic.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq_float(a, b) ((a)==(b)) + #define __Pyx_c_sum_float(a, b) 
((a)+(b)) + #define __Pyx_c_diff_float(a, b) ((a)-(b)) + #define __Pyx_c_prod_float(a, b) ((a)*(b)) + #define __Pyx_c_quot_float(a, b) ((a)/(b)) + #define __Pyx_c_neg_float(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero_float(z) ((z)==(float)0) + #define __Pyx_c_conj_float(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_abs_float(z) (::std::abs(z)) + #define __Pyx_c_pow_float(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zero_float(z) ((z)==0) + #define __Pyx_c_conj_float(z) (conjf(z)) + #if 1 + #define __Pyx_c_abs_float(z) (cabsf(z)) + #define __Pyx_c_pow_float(a, b) (cpowf(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex); + static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex); + #if 1 + static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex); + #endif +#endif + +/* Arithmetic.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq_double(a, b) ((a)==(b)) + #define __Pyx_c_sum_double(a, b) ((a)+(b)) + #define __Pyx_c_diff_double(a, b) ((a)-(b)) + #define __Pyx_c_prod_double(a, b) ((a)*(b)) + #define __Pyx_c_quot_double(a, b) ((a)/(b)) + #define __Pyx_c_neg_double(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero_double(z) ((z)==(double)0) + #define __Pyx_c_conj_double(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_abs_double(z) (::std::abs(z)) + #define __Pyx_c_pow_double(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zero_double(z) ((z)==0) + #define __Pyx_c_conj_double(z) (conj(z)) + #if 1 + #define __Pyx_c_abs_double(z) (cabs(z)) + #define __Pyx_c_pow_double(a, b) (cpow(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex); + static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex); + #if 1 + static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex); + #endif +#endif + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value); + +/* 
CIntFromPy.proto */ +static CYTHON_INLINE npy_int32 __Pyx_PyInt_As_npy_int32(PyObject *); + +/* CIntFromPy.proto */ +static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); + +/* CIntFromPy.proto */ +static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); + +/* CheckBinaryVersion.proto */ +static int __Pyx_check_binary_version(void); + +/* PyIdentifierFromString.proto */ +#if !defined(__Pyx_PyIdentifier_FromString) +#if PY_MAJOR_VERSION < 3 + #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s) +#else + #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s) +#endif +#endif + +/* ModuleImport.proto */ +static PyObject *__Pyx_ImportModule(const char *name); + +/* TypeImport.proto */ +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict); + +/* InitStrings.proto */ +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); + + +/* Module declarations from 'cpython.buffer' */ + +/* Module declarations from 'libc.string' */ + +/* Module declarations from 'libc.stdio' */ + +/* Module declarations from '__builtin__' */ + +/* Module declarations from 'cpython.type' */ +static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0; + +/* Module declarations from 'cpython' */ + +/* Module declarations from 'cpython.object' */ + +/* Module declarations from 'cpython.ref' */ + +/* Module declarations from 'libc.stdlib' */ + +/* Module declarations from 'numpy' */ + +/* Module declarations from 'numpy' */ +static PyTypeObject *__pyx_ptype_5numpy_dtype = 0; +static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0; +static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0; +static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0; +static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0; +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/ + +/* Module declarations from 'poly_overlaps' */ +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t = { "float32_t", NULL, sizeof(__pyx_t_5numpy_float32_t), { 0 }, 0, 'R', 0, 0 }; +#define __Pyx_MODULE_NAME "poly_overlaps" +int __pyx_module_is_main_poly_overlaps = 0; + +/* Implementation of 'poly_overlaps' */ +static PyObject *__pyx_builtin_ValueError; +static PyObject *__pyx_builtin_range; +static PyObject *__pyx_builtin_RuntimeError; +static PyObject *__pyx_builtin_ImportError; +static const char __pyx_k_K[] = "K"; +static const char __pyx_k_N[] = "N"; +static const char __pyx_k_np[] = "np"; +static const char __pyx_k_main[] = "__main__"; +static const char __pyx_k_test[] = "__test__"; +static const char __pyx_k_boxes[] = "boxes"; +static const char __pyx_k_dtype[] = "dtype"; +static const char __pyx_k_numpy[] = "numpy"; +static const char __pyx_k_range[] = "range"; +static const char __pyx_k_zeros[] = "zeros"; +static const char __pyx_k_import[] = "__import__"; +static const char __pyx_k_float32[] = "float32"; +static const char __pyx_k_overlaps[] = "overlaps"; +static const char __pyx_k_device_id[] = "device_id"; +static const char __pyx_k_ValueError[] = "ValueError"; +static const char __pyx_k_ImportError[] = "ImportError"; +static const char __pyx_k_query_boxes[] = "query_boxes"; +static const char __pyx_k_RuntimeError[] = "RuntimeError"; +static const char __pyx_k_poly_overlaps[] = "poly_overlaps"; +static const char __pyx_k_ndarray_is_not_C_contiguous[] = "ndarray is not C contiguous"; +static const char 
__pyx_k_home_dingjian_code_DOTA_devkit[] = "/home/dingjian/code/DOTA_devkit/poly_nms_gpu/poly_overlaps.pyx"; +static const char __pyx_k_numpy_core_multiarray_failed_to[] = "numpy.core.multiarray failed to import"; +static const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = "unknown dtype code in numpy.pxd (%d)"; +static const char __pyx_k_Format_string_allocated_too_shor[] = "Format string allocated too short, see comment in numpy.pxd"; +static const char __pyx_k_Non_native_byte_order_not_suppor[] = "Non-native byte order not supported"; +static const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = "ndarray is not Fortran contiguous"; +static const char __pyx_k_numpy_core_umath_failed_to_impor[] = "numpy.core.umath failed to import"; +static const char __pyx_k_Format_string_allocated_too_shor_2[] = "Format string allocated too short."; +static PyObject *__pyx_kp_u_Format_string_allocated_too_shor; +static PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2; +static PyObject *__pyx_n_s_ImportError; +static PyObject *__pyx_n_s_K; +static PyObject *__pyx_n_s_N; +static PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor; +static PyObject *__pyx_n_s_RuntimeError; +static PyObject *__pyx_n_s_ValueError; +static PyObject *__pyx_n_s_boxes; +static PyObject *__pyx_n_s_device_id; +static PyObject *__pyx_n_s_dtype; +static PyObject *__pyx_n_s_float32; +static PyObject *__pyx_kp_s_home_dingjian_code_DOTA_devkit; +static PyObject *__pyx_n_s_import; +static PyObject *__pyx_n_s_main; +static PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous; +static PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou; +static PyObject *__pyx_n_s_np; +static PyObject *__pyx_n_s_numpy; +static PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to; +static PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor; +static PyObject *__pyx_n_s_overlaps; +static PyObject *__pyx_n_s_poly_overlaps; +static PyObject *__pyx_n_s_query_boxes; +static PyObject *__pyx_n_s_range; +static PyObject *__pyx_n_s_test; +static PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd; +static PyObject *__pyx_n_s_zeros; +static PyObject *__pyx_pf_13poly_overlaps_poly_overlaps(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_boxes, PyArrayObject *__pyx_v_query_boxes, __pyx_t_5numpy_int32_t __pyx_v_device_id); /* proto */ +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ +static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */ +static PyObject *__pyx_tuple_; +static PyObject *__pyx_tuple__2; +static PyObject *__pyx_tuple__3; +static PyObject *__pyx_tuple__4; +static PyObject *__pyx_tuple__5; +static PyObject *__pyx_tuple__6; +static PyObject *__pyx_tuple__7; +static PyObject *__pyx_tuple__8; +static PyObject *__pyx_tuple__9; +static PyObject *__pyx_tuple__10; +static PyObject *__pyx_codeobj__11; + +/* "poly_overlaps.pyx":7 + * void _overlaps(np.float32_t*, np.float32_t*, np.float32_t*, int, int, int) + * + * def poly_overlaps (np.ndarray[np.float32_t, ndim=2] boxes, np.ndarray[np.float32_t, ndim=2] query_boxes, np.int32_t device_id=0): # <<<<<<<<<<<<<< + * cdef int N = boxes.shape[0] + * cdef int K = query_boxes.shape[0] + */ + +/* Python wrapper */ +static PyObject *__pyx_pw_13poly_overlaps_1poly_overlaps(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyMethodDef __pyx_mdef_13poly_overlaps_1poly_overlaps = {"poly_overlaps", 
(PyCFunction)__pyx_pw_13poly_overlaps_1poly_overlaps, METH_VARARGS|METH_KEYWORDS, 0}; +static PyObject *__pyx_pw_13poly_overlaps_1poly_overlaps(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyArrayObject *__pyx_v_boxes = 0; + PyArrayObject *__pyx_v_query_boxes = 0; + __pyx_t_5numpy_int32_t __pyx_v_device_id; + PyObject *__pyx_r = 0; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("poly_overlaps (wrapper)", 0); + { + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_boxes,&__pyx_n_s_query_boxes,&__pyx_n_s_device_id,0}; + PyObject* values[3] = {0,0,0}; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args; + const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); + switch (pos_args) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + kw_args = PyDict_Size(__pyx_kwds); + switch (pos_args) { + case 0: + if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_boxes)) != 0)) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_query_boxes)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("poly_overlaps", 0, 2, 3, 1); __PYX_ERR(0, 7, __pyx_L3_error) + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_device_id); + if (value) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "poly_overlaps") < 0)) __PYX_ERR(0, 7, __pyx_L3_error) + } + } else { + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + __pyx_v_boxes = ((PyArrayObject *)values[0]); + __pyx_v_query_boxes = ((PyArrayObject *)values[1]); + if (values[2]) { + __pyx_v_device_id = __Pyx_PyInt_As_npy_int32(values[2]); if (unlikely((__pyx_v_device_id == ((npy_int32)-1)) && PyErr_Occurred())) __PYX_ERR(0, 7, __pyx_L3_error) + } else { + __pyx_v_device_id = ((__pyx_t_5numpy_int32_t)0); + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("poly_overlaps", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 7, __pyx_L3_error) + __pyx_L3_error:; + __Pyx_AddTraceback("poly_overlaps.poly_overlaps", __pyx_clineno, __pyx_lineno, __pyx_filename); + __Pyx_RefNannyFinishContext(); + return NULL; + __pyx_L4_argument_unpacking_done:; + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_boxes), __pyx_ptype_5numpy_ndarray, 1, "boxes", 0))) __PYX_ERR(0, 7, __pyx_L1_error) + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_query_boxes), __pyx_ptype_5numpy_ndarray, 1, "query_boxes", 0))) __PYX_ERR(0, 7, __pyx_L1_error) + __pyx_r = __pyx_pf_13poly_overlaps_poly_overlaps(__pyx_self, __pyx_v_boxes, __pyx_v_query_boxes, __pyx_v_device_id); + + /* function exit code */ + goto __pyx_L0; + __pyx_L1_error:; + __pyx_r = NULL; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyObject *__pyx_pf_13poly_overlaps_poly_overlaps(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_boxes, PyArrayObject *__pyx_v_query_boxes, __pyx_t_5numpy_int32_t __pyx_v_device_id) { + int __pyx_v_N; + int __pyx_v_K; + PyArrayObject *__pyx_v_overlaps = 0; + 
__Pyx_LocalBuf_ND __pyx_pybuffernd_boxes; + __Pyx_Buffer __pyx_pybuffer_boxes; + __Pyx_LocalBuf_ND __pyx_pybuffernd_overlaps; + __Pyx_Buffer __pyx_pybuffer_overlaps; + __Pyx_LocalBuf_ND __pyx_pybuffernd_query_boxes; + __Pyx_Buffer __pyx_pybuffer_query_boxes; + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + PyArrayObject *__pyx_t_6 = NULL; + Py_ssize_t __pyx_t_7; + Py_ssize_t __pyx_t_8; + int __pyx_t_9; + Py_ssize_t __pyx_t_10; + Py_ssize_t __pyx_t_11; + Py_ssize_t __pyx_t_12; + Py_ssize_t __pyx_t_13; + __Pyx_RefNannySetupContext("poly_overlaps", 0); + __pyx_pybuffer_overlaps.pybuffer.buf = NULL; + __pyx_pybuffer_overlaps.refcount = 0; + __pyx_pybuffernd_overlaps.data = NULL; + __pyx_pybuffernd_overlaps.rcbuffer = &__pyx_pybuffer_overlaps; + __pyx_pybuffer_boxes.pybuffer.buf = NULL; + __pyx_pybuffer_boxes.refcount = 0; + __pyx_pybuffernd_boxes.data = NULL; + __pyx_pybuffernd_boxes.rcbuffer = &__pyx_pybuffer_boxes; + __pyx_pybuffer_query_boxes.pybuffer.buf = NULL; + __pyx_pybuffer_query_boxes.refcount = 0; + __pyx_pybuffernd_query_boxes.data = NULL; + __pyx_pybuffernd_query_boxes.rcbuffer = &__pyx_pybuffer_query_boxes; + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer, (PyObject*)__pyx_v_boxes, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 7, __pyx_L1_error) + } + __pyx_pybuffernd_boxes.diminfo[0].strides = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_boxes.diminfo[0].shape = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_boxes.diminfo[1].strides = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_boxes.diminfo[1].shape = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.shape[1]; + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer, (PyObject*)__pyx_v_query_boxes, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 7, __pyx_L1_error) + } + __pyx_pybuffernd_query_boxes.diminfo[0].strides = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_boxes.diminfo[0].shape = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_query_boxes.diminfo[1].strides = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_query_boxes.diminfo[1].shape = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.shape[1]; + + /* "poly_overlaps.pyx":8 + * + * def poly_overlaps (np.ndarray[np.float32_t, ndim=2] boxes, np.ndarray[np.float32_t, ndim=2] query_boxes, np.int32_t device_id=0): + * cdef int N = boxes.shape[0] # <<<<<<<<<<<<<< + * cdef int K = query_boxes.shape[0] + * cdef np.ndarray[np.float32_t, ndim=2] overlaps = np.zeros((N, K), dtype = np.float32) + */ + __pyx_v_N = (__pyx_v_boxes->dimensions[0]); + + /* "poly_overlaps.pyx":9 + * def poly_overlaps (np.ndarray[np.float32_t, ndim=2] boxes, np.ndarray[np.float32_t, ndim=2] query_boxes, np.int32_t device_id=0): + * cdef int N = boxes.shape[0] + * cdef int K = query_boxes.shape[0] # <<<<<<<<<<<<<< + * cdef np.ndarray[np.float32_t, ndim=2] overlaps = np.zeros((N, K), dtype = np.float32) + * _overlaps(&overlaps[0, 0], &boxes[0, 0], &query_boxes[0, 0], N, K, device_id) + */ + __pyx_v_K = 
(__pyx_v_query_boxes->dimensions[0]); + + /* "poly_overlaps.pyx":10 + * cdef int N = boxes.shape[0] + * cdef int K = query_boxes.shape[0] + * cdef np.ndarray[np.float32_t, ndim=2] overlaps = np.zeros((N, K), dtype = np.float32) # <<<<<<<<<<<<<< + * _overlaps(&overlaps[0, 0], &boxes[0, 0], &query_boxes[0, 0], N, K, device_id) + * return overlaps + */ + __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_N); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_K); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3); + __pyx_t_1 = 0; + __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); + __pyx_t_4 = 0; + __pyx_t_4 = PyDict_New(); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float32); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 10, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 10, __pyx_L1_error) + __pyx_t_6 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_overlaps.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { + __pyx_v_overlaps = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 10, __pyx_L1_error) + } else {__pyx_pybuffernd_overlaps.diminfo[0].strides = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_overlaps.diminfo[0].shape = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_overlaps.diminfo[1].strides = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_overlaps.diminfo[1].shape = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.shape[1]; + } + } + __pyx_t_6 = 0; + __pyx_v_overlaps = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "poly_overlaps.pyx":11 + * cdef int K = query_boxes.shape[0] + * cdef np.ndarray[np.float32_t, 
ndim=2] overlaps = np.zeros((N, K), dtype = np.float32) + * _overlaps(&overlaps[0, 0], &boxes[0, 0], &query_boxes[0, 0], N, K, device_id) # <<<<<<<<<<<<<< + * return overlaps + * + */ + __pyx_t_7 = 0; + __pyx_t_8 = 0; + __pyx_t_9 = -1; + if (__pyx_t_7 < 0) { + __pyx_t_7 += __pyx_pybuffernd_overlaps.diminfo[0].shape; + if (unlikely(__pyx_t_7 < 0)) __pyx_t_9 = 0; + } else if (unlikely(__pyx_t_7 >= __pyx_pybuffernd_overlaps.diminfo[0].shape)) __pyx_t_9 = 0; + if (__pyx_t_8 < 0) { + __pyx_t_8 += __pyx_pybuffernd_overlaps.diminfo[1].shape; + if (unlikely(__pyx_t_8 < 0)) __pyx_t_9 = 1; + } else if (unlikely(__pyx_t_8 >= __pyx_pybuffernd_overlaps.diminfo[1].shape)) __pyx_t_9 = 1; + if (unlikely(__pyx_t_9 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_9); + __PYX_ERR(0, 11, __pyx_L1_error) + } + __pyx_t_10 = 0; + __pyx_t_11 = 0; + __pyx_t_9 = -1; + if (__pyx_t_10 < 0) { + __pyx_t_10 += __pyx_pybuffernd_boxes.diminfo[0].shape; + if (unlikely(__pyx_t_10 < 0)) __pyx_t_9 = 0; + } else if (unlikely(__pyx_t_10 >= __pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_9 = 0; + if (__pyx_t_11 < 0) { + __pyx_t_11 += __pyx_pybuffernd_boxes.diminfo[1].shape; + if (unlikely(__pyx_t_11 < 0)) __pyx_t_9 = 1; + } else if (unlikely(__pyx_t_11 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_9 = 1; + if (unlikely(__pyx_t_9 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_9); + __PYX_ERR(0, 11, __pyx_L1_error) + } + __pyx_t_12 = 0; + __pyx_t_13 = 0; + __pyx_t_9 = -1; + if (__pyx_t_12 < 0) { + __pyx_t_12 += __pyx_pybuffernd_query_boxes.diminfo[0].shape; + if (unlikely(__pyx_t_12 < 0)) __pyx_t_9 = 0; + } else if (unlikely(__pyx_t_12 >= __pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_9 = 0; + if (__pyx_t_13 < 0) { + __pyx_t_13 += __pyx_pybuffernd_query_boxes.diminfo[1].shape; + if (unlikely(__pyx_t_13 < 0)) __pyx_t_9 = 1; + } else if (unlikely(__pyx_t_13 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_9 = 1; + if (unlikely(__pyx_t_9 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_9); + __PYX_ERR(0, 11, __pyx_L1_error) + } + _overlaps((&(*__Pyx_BufPtrStrided2d(__pyx_t_5numpy_float32_t *, __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.buf, __pyx_t_7, __pyx_pybuffernd_overlaps.diminfo[0].strides, __pyx_t_8, __pyx_pybuffernd_overlaps.diminfo[1].strides))), (&(*__Pyx_BufPtrStrided2d(__pyx_t_5numpy_float32_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_10, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_11, __pyx_pybuffernd_boxes.diminfo[1].strides))), (&(*__Pyx_BufPtrStrided2d(__pyx_t_5numpy_float32_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_12, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_13, __pyx_pybuffernd_query_boxes.diminfo[1].strides))), __pyx_v_N, __pyx_v_K, __pyx_v_device_id); + + /* "poly_overlaps.pyx":12 + * cdef np.ndarray[np.float32_t, ndim=2] overlaps = np.zeros((N, K), dtype = np.float32) + * _overlaps(&overlaps[0, 0], &boxes[0, 0], &query_boxes[0, 0], N, K, device_id) + * return overlaps # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_overlaps)); + __pyx_r = ((PyObject *)__pyx_v_overlaps); + goto __pyx_L0; + + /* "poly_overlaps.pyx":7 + * void _overlaps(np.float32_t*, np.float32_t*, np.float32_t*, int, int, int) + * + * def poly_overlaps (np.ndarray[np.float32_t, ndim=2] boxes, np.ndarray[np.float32_t, ndim=2] query_boxes, np.int32_t device_id=0): # <<<<<<<<<<<<<< + * cdef int N = boxes.shape[0] + * cdef int K = query_boxes.shape[0] + */ + + /* function exit code */ + __pyx_L1_error:; + 
__Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_overlaps.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("poly_overlaps.poly_overlaps", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_overlaps.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer); + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_overlaps); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":197 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. + */ + +/* Python wrapper */ +static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ +static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_r; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); + __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_v_copy_shape; + int __pyx_v_i; + int __pyx_v_ndim; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + int __pyx_v_t; + char *__pyx_v_f; + PyArray_Descr *__pyx_v_descr = 0; + int __pyx_v_offset; + int __pyx_v_hasfields; + int __pyx_r; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + PyObject *__pyx_t_6 = NULL; + char *__pyx_t_7; + __Pyx_RefNannySetupContext("__getbuffer__", 0); + if (__pyx_v_info != NULL) { + __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(__pyx_v_info->obj); + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":203 + * # of flags + * + * if info == NULL: return # <<<<<<<<<<<<<< + * + * cdef int copy_shape, i, ndim + */ + __pyx_t_1 = ((__pyx_v_info == NULL) != 0); + if (__pyx_t_1) { + __pyx_r = 0; + goto __pyx_L0; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":206 + * + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + */ + __pyx_v_endian_detector = 1; + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":207 + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * + * ndim = PyArray_NDIM(self) + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":209 + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<< + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_v_ndim = PyArray_NDIM(__pyx_v_self); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":211 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":212 + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * copy_shape = 1 # <<<<<<<<<<<<<< + * else: + * copy_shape = 0 + */ + __pyx_v_copy_shape = 1; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":211 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + goto __pyx_L4; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":214 + * copy_shape = 1 + * else: + * copy_shape = 0 # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + */ + /*else*/ { + __pyx_v_copy_shape = 0; + } + __pyx_L4:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L6_bool_binop_done; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":217 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not C contiguous") + * + */ + __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L6_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":218 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, 
__pyx_tuple_, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 218, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 218, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L9_bool_binop_done; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":221 + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not Fortran contiguous") + * + */ + __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L9_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":222 + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 222, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 222, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":224 + * raise ValueError(u"ndarray is not Fortran contiguous") + * + * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<< + * info.ndim = ndim + * if copy_shape: + */ + __pyx_v_info->buf = PyArray_DATA(__pyx_v_self); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":225 + * + * info.buf = PyArray_DATA(self) + * info.ndim = ndim # <<<<<<<<<<<<<< + * if copy_shape: + * # Allocate new buffer for strides and shape info. 
+ */ + __pyx_v_info->ndim = __pyx_v_ndim; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":226 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + */ + __pyx_t_1 = (__pyx_v_copy_shape != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":229 + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) # <<<<<<<<<<<<<< + * info.shape = info.strides + ndim + * for i in range(ndim): + */ + __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * ((size_t)__pyx_v_ndim)) * 2))); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":230 + * # This is allocated as one block, strides first. + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim # <<<<<<<<<<<<<< + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + */ + __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":231 + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim + * for i in range(ndim): # <<<<<<<<<<<<<< + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] + */ + __pyx_t_4 = __pyx_v_ndim; + for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { + __pyx_v_i = __pyx_t_5; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":232 + * info.shape = info.strides + ndim + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<< + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + */ + (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":233 + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<< + * else: + * info.strides = PyArray_STRIDES(self) + */ + (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]); + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":226 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. 
+ */ + goto __pyx_L11; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":235 + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<< + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + */ + /*else*/ { + __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self)); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":236 + * else: + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<< + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + */ + __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self)); + } + __pyx_L11:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":237 + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL # <<<<<<<<<<<<<< + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) + */ + __pyx_v_info->suboffsets = NULL; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":238 + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<< + * info.readonly = not PyArray_ISWRITEABLE(self) + * + */ + __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":239 + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<< + * + * cdef int t + */ + __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0)); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":242 + * + * cdef int t + * cdef char* f = NULL # <<<<<<<<<<<<<< + * cdef dtype descr = self.descr + * cdef int offset + */ + __pyx_v_f = NULL; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":243 + * cdef int t + * cdef char* f = NULL + * cdef dtype descr = self.descr # <<<<<<<<<<<<<< + * cdef int offset + * + */ + __pyx_t_3 = ((PyObject *)__pyx_v_self->descr); + __Pyx_INCREF(__pyx_t_3); + __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3); + __pyx_t_3 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":246 + * cdef int offset + * + * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<< + * + * if not hasfields and not copy_shape: + */ + __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":248 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + __pyx_t_2 = ((!(__pyx_v_hasfields != 0)) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L15_bool_binop_done; + } + __pyx_t_2 = ((!(__pyx_v_copy_shape != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L15_bool_binop_done:; + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":250 + * if not hasfields and not copy_shape: + * # do not call releasebuffer + * info.obj = None # <<<<<<<<<<<<<< + * else: + * # need to call releasebuffer + */ + 
__Pyx_INCREF(Py_None); + __Pyx_GIVEREF(Py_None); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = Py_None; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":248 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + goto __pyx_L14; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":253 + * else: + * # need to call releasebuffer + * info.obj = self # <<<<<<<<<<<<<< + * + * if not hasfields: + */ + /*else*/ { + __Pyx_INCREF(((PyObject *)__pyx_v_self)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = ((PyObject *)__pyx_v_self); + } + __pyx_L14:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":255 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + */ + __pyx_t_1 = ((!(__pyx_v_hasfields != 0)) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":256 + * + * if not hasfields: + * t = descr.type_num # <<<<<<<<<<<<<< + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + */ + __pyx_t_4 = __pyx_v_descr->type_num; + __pyx_v_t = __pyx_t_4; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0); + if (!__pyx_t_2) { + goto __pyx_L20_next_or; + } else { + } + __pyx_t_2 = (__pyx_v_little_endian != 0); + if (!__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L19_bool_binop_done; + } + __pyx_L20_next_or:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":258 + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + */ + __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L19_bool_binop_done; + } + __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L19_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":259 + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_t_3 = 
__Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 259, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 259, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":260 + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + */ + switch (__pyx_v_t) { + case NPY_BYTE: + __pyx_v_f = ((char *)"b"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":261 + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + */ + case NPY_UBYTE: + __pyx_v_f = ((char *)"B"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":262 + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + */ + case NPY_SHORT: + __pyx_v_f = ((char *)"h"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":263 + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + */ + case NPY_USHORT: + __pyx_v_f = ((char *)"H"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":264 + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + */ + case NPY_INT: + __pyx_v_f = ((char *)"i"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":265 + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + */ + case NPY_UINT: + __pyx_v_f = ((char *)"I"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":266 + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + */ + case NPY_LONG: + __pyx_v_f = ((char *)"l"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":267 + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + */ + case NPY_ULONG: + __pyx_v_f = ((char *)"L"); + break; + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":268 + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + */ + case NPY_LONGLONG: + __pyx_v_f = ((char *)"q"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":269 + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + */ + case NPY_ULONGLONG: + __pyx_v_f = ((char *)"Q"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":270 + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + */ + case NPY_FLOAT: + __pyx_v_f = ((char *)"f"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":271 + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + */ + case NPY_DOUBLE: + __pyx_v_f = ((char *)"d"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":272 + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + */ + case NPY_LONGDOUBLE: + __pyx_v_f = ((char *)"g"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":273 + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + */ + case NPY_CFLOAT: + __pyx_v_f = ((char *)"Zf"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":274 + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" + */ + case NPY_CDOUBLE: + __pyx_v_f = ((char *)"Zd"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":275 + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f = "O" + * else: + */ + case NPY_CLONGDOUBLE: + __pyx_v_f = ((char *)"Zg"); + break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":276 + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + case NPY_OBJECT: + __pyx_v_f = ((char *)"O"); + break; + default: + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":278 + * elif t == NPY_OBJECT: f = "O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * info.format = f + * return + */ + __pyx_t_3 = 
__Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_6 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_6); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); + __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_6, 0, 0, 0); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __PYX_ERR(1, 278, __pyx_L1_error) + break; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":279 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f # <<<<<<<<<<<<<< + * return + * else: + */ + __pyx_v_info->format = __pyx_v_f; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":280 + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f + * return # <<<<<<<<<<<<<< + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + */ + __pyx_r = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":255 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":282 + * return + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) # <<<<<<<<<<<<<< + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 + */ + /*else*/ { + __pyx_v_info->format = ((char *)malloc(0xFF)); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":283 + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = c'^' # Native data types, manual alignment # <<<<<<<<<<<<<< + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, + */ + (__pyx_v_info->format[0]) = '^'; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":284 + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 # <<<<<<<<<<<<<< + * f = _util_dtypestring(descr, info.format + 1, + * info.format + _buffer_format_string_len, + */ + __pyx_v_offset = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":285 + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, # <<<<<<<<<<<<<< + * info.format + _buffer_format_string_len, + * &offset) + */ + __pyx_t_7 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_7 == NULL)) __PYX_ERR(1, 285, __pyx_L1_error) + __pyx_v_f = __pyx_t_7; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":288 + * info.format + _buffer_format_string_len, + * &offset) + * f[0] = c'\0' # 
Terminate format string # <<<<<<<<<<<<<< + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + */ + (__pyx_v_f[0]) = '\x00'; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":197 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_AddTraceback("numpy.ndarray.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + if (__pyx_v_info != NULL && __pyx_v_info->obj != NULL) { + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL; + } + goto __pyx_L2; + __pyx_L0:; + if (__pyx_v_info != NULL && __pyx_v_info->obj == Py_None) { + __Pyx_GOTREF(Py_None); + __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL; + } + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_descr); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":290 + * f[0] = c'\0' # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + +/* Python wrapper */ +static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/ +static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__releasebuffer__ (wrapper)", 0); + __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info)); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) { + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("__releasebuffer__", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":291 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":292 + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) # <<<<<<<<<<<<<< + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) + */ + free(__pyx_v_info->format); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":291 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":293 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # 
<<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":294 + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) # <<<<<<<<<<<<<< + * # info.shape was stored after info.strides in the same block + * + */ + free(__pyx_v_info->strides); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":293 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":290 + * f[0] = c'\0' # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":770 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew1", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":771 + * + * cdef inline object PyArray_MultiIterNew1(a): + * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew2(a, b): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 771, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":770 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew1", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":773 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew2", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":774 + * + * cdef inline object PyArray_MultiIterNew2(a, b): + * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< + * + * cdef inline object 
PyArray_MultiIterNew3(a, b, c): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 774, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":773 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":776 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew3", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":777 + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 777, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":776 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew3", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":779 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew4", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":780 + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 780, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":779 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew4", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":782 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew5", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":783 + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 783, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":782 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew5", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":785 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. The new location in the format string is returned. 
+ */ + +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) { + PyArray_Descr *__pyx_v_child = 0; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + PyObject *__pyx_v_fields = 0; + PyObject *__pyx_v_childname = NULL; + PyObject *__pyx_v_new_offset = NULL; + PyObject *__pyx_v_t = NULL; + char *__pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + Py_ssize_t __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + int __pyx_t_5; + int __pyx_t_6; + int __pyx_t_7; + long __pyx_t_8; + char *__pyx_t_9; + __Pyx_RefNannySetupContext("_util_dtypestring", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":790 + * + * cdef dtype child + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * cdef tuple fields + */ + __pyx_v_endian_detector = 1; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":791 + * cdef dtype child + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * cdef tuple fields + * + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":794 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + if (unlikely(__pyx_v_descr->names == Py_None)) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); + __PYX_ERR(1, 794, __pyx_L1_error) + } + __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; + for (;;) { + if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; + #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS + __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 794, __pyx_L1_error) + #else + __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 794, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + #endif + __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3); + __pyx_t_3 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":795 + * + * for childname in descr.names: + * fields = descr.fields[childname] # <<<<<<<<<<<<<< + * child, new_offset = fields + * + */ + if (unlikely(__pyx_v_descr->fields == Py_None)) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); + __PYX_ERR(1, 795, __pyx_L1_error) + } + __pyx_t_3 = __Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 795, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(1, 795, __pyx_L1_error) + __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3)); + __pyx_t_3 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":796 + * for childname in descr.names: + * fields = descr.fields[childname] + * child, new_offset = fields # <<<<<<<<<<<<<< + * + * if (end - f) - (new_offset - offset[0]) < 15: + */ + if (likely(__pyx_v_fields != Py_None)) { + PyObject* sequence = 
__pyx_v_fields; + #if !CYTHON_COMPILING_IN_PYPY + Py_ssize_t size = Py_SIZE(sequence); + #else + Py_ssize_t size = PySequence_Size(sequence); + #endif + if (unlikely(size != 2)) { + if (size > 2) __Pyx_RaiseTooManyValuesError(2); + else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); + __PYX_ERR(1, 796, __pyx_L1_error) + } + #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS + __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); + __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); + __Pyx_INCREF(__pyx_t_3); + __Pyx_INCREF(__pyx_t_4); + #else + __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + #endif + } else { + __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 796, __pyx_L1_error) + } + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_XDECREF_SET(__pyx_v_child, ((PyArray_Descr *)__pyx_t_3)); + __pyx_t_3 = 0; + __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4); + __pyx_t_4 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":798 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0); + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":799 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + * + * if ((child.byteorder == c'>' and little_endian) or + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 799, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 799, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":798 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0); + if (!__pyx_t_7) { + goto __pyx_L8_next_or; + } else { + } + __pyx_t_7 = 
(__pyx_v_little_endian != 0); + if (!__pyx_t_7) { + } else { + __pyx_t_6 = __pyx_t_7; + goto __pyx_L7_bool_binop_done; + } + __pyx_L8_next_or:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":802 + * + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * # One could encode it in the format string and have Cython + */ + __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0); + if (__pyx_t_7) { + } else { + __pyx_t_6 = __pyx_t_7; + goto __pyx_L7_bool_binop_done; + } + __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0); + __pyx_t_6 = __pyx_t_7; + __pyx_L7_bool_binop_done:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":803 + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 803, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 803, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":813 + * + * # Output padding bytes + * while offset[0] < new_offset: # <<<<<<<<<<<<<< + * f[0] = 120 # "x"; pad byte + * f += 1 + */ + while (1) { + __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!__pyx_t_6) break; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":814 + * # Output padding bytes + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<< + * f += 1 + * offset[0] += 1 + */ + (__pyx_v_f[0]) = 0x78; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":815 + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte + * f += 1 # <<<<<<<<<<<<<< + * offset[0] += 1 + * + */ + 
__pyx_v_f = (__pyx_v_f + 1); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":816 + * f[0] = 120 # "x"; pad byte + * f += 1 + * offset[0] += 1 # <<<<<<<<<<<<<< + * + * offset[0] += child.itemsize + */ + __pyx_t_8 = 0; + (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1); + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":818 + * offset[0] += 1 + * + * offset[0] += child.itemsize # <<<<<<<<<<<<<< + * + * if not PyDataType_HASFIELDS(child): + */ + __pyx_t_8 = 0; + (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":820 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + __pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0); + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":821 + * + * if not PyDataType_HASFIELDS(child): + * t = child.type_num # <<<<<<<<<<<<<< + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") + */ + __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 821, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4); + __pyx_t_4 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":822 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0); + if (__pyx_t_6) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":823 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 823, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_Raise(__pyx_t_4, 0, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __PYX_ERR(1, 823, __pyx_L1_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":822 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":826 + * + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 98; + goto 
__pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":827 + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 66; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":828 + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x68; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":829 + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 72; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":830 + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x69; + 
goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":831 + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 73; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":832 + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x6C; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":833 + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 76; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":834 + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x71; + 
goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":835 + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 81; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":836 + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x66; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":837 + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x64; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":838 + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 838, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 838, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 
838, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x67; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":839 + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x66; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":840 + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x64; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":841 + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x67; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":842 + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + __pyx_t_4 = 
__Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 79; + goto __pyx_L15; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":844 + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * f += 1 + * else: + */ + /*else*/ { + __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 844, __pyx_L1_error) + } + __pyx_L15:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":845 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * f += 1 # <<<<<<<<<<<<<< + * else: + * # Cython ignores struct boundary information ("T{...}"), + */ + __pyx_v_f = (__pyx_v_f + 1); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":820 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + goto __pyx_L13; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":849 + * # Cython ignores struct boundary information ("T{...}"), + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<< + * return f + * + */ + /*else*/ { + __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == NULL)) __PYX_ERR(1, 849, __pyx_L1_error) + __pyx_v_f = __pyx_t_9; + } + __pyx_L13:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":794 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":850 + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) + * return f # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = __pyx_v_f; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":785 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. 
The new location in the format string is returned. + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("numpy._util_dtypestring", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XDECREF((PyObject *)__pyx_v_child); + __Pyx_XDECREF(__pyx_v_fields); + __Pyx_XDECREF(__pyx_v_childname); + __Pyx_XDECREF(__pyx_v_new_offset); + __Pyx_XDECREF(__pyx_v_t); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":966 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) { + PyObject *__pyx_v_baseptr; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + int __pyx_t_2; + __Pyx_RefNannySetupContext("set_array_base", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":968 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + __pyx_t_1 = (__pyx_v_base == Py_None); + __pyx_t_2 = (__pyx_t_1 != 0); + if (__pyx_t_2) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":969 + * cdef PyObject* baseptr + * if base is None: + * baseptr = NULL # <<<<<<<<<<<<<< + * else: + * Py_INCREF(base) # important to do this before decref below! + */ + __pyx_v_baseptr = NULL; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":968 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + goto __pyx_L3; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":971 + * baseptr = NULL + * else: + * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<< + * baseptr = base + * Py_XDECREF(arr.base) + */ + /*else*/ { + Py_INCREF(__pyx_v_base); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":972 + * else: + * Py_INCREF(base) # important to do this before decref below! + * baseptr = base # <<<<<<<<<<<<<< + * Py_XDECREF(arr.base) + * arr.base = baseptr + */ + __pyx_v_baseptr = ((PyObject *)__pyx_v_base); + } + __pyx_L3:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":973 + * Py_INCREF(base) # important to do this before decref below! 
+ * baseptr = base + * Py_XDECREF(arr.base) # <<<<<<<<<<<<<< + * arr.base = baseptr + * + */ + Py_XDECREF(__pyx_v_arr->base); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":974 + * baseptr = base + * Py_XDECREF(arr.base) + * arr.base = baseptr # <<<<<<<<<<<<<< + * + * cdef inline object get_array_base(ndarray arr): + */ + __pyx_v_arr->base = __pyx_v_baseptr; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":966 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":976 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("get_array_base", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":977 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + __pyx_t_1 = ((__pyx_v_arr->base == NULL) != 0); + if (__pyx_t_1) { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":978 + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: + * return None # <<<<<<<<<<<<<< + * else: + * return arr.base + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(Py_None); + __pyx_r = Py_None; + goto __pyx_L0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":977 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":980 + * return None + * else: + * return arr.base # <<<<<<<<<<<<<< + * + * + */ + /*else*/ { + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_arr->base)); + __pyx_r = ((PyObject *)__pyx_v_arr->base); + goto __pyx_L0; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":976 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + + /* function exit code */ + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":985 + * # Versions of the import_* functions which are more suitable for + * # Cython code. 
+ * cdef inline int import_array() except -1: # <<<<<<<<<<<<<< + * try: + * _import_array() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_array(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_array", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":986 + * # Cython code. + * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * _import_array() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":987 + * cdef inline int import_array() except -1: + * try: + * _import_array() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") + */ + __pyx_t_4 = _import_array(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 987, __pyx_L3_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":986 + * # Cython code. + * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * _import_array() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto __pyx_L10_try_end; + __pyx_L3_error:; + __Pyx_PyThreadState_assign + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":988 + * try: + * _import_array() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.multiarray failed to import") + * + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_array", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 988, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":989 + * _import_array() + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_umath() except -1: + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 989, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 989, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":986 + * # Cython code. 
+ * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * _import_array() + * except Exception: + */ + __Pyx_PyThreadState_assign + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L10_try_end:; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":985 + * # Versions of the import_* functions which are more suitable for + * # Cython code. + * cdef inline int import_array() except -1: # <<<<<<<<<<<<<< + * try: + * _import_array() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_array", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":991 + * raise ImportError("numpy.core.multiarray failed to import") + * + * cdef inline int import_umath() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_umath", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":992 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":993 + * cdef inline int import_umath() except -1: + * try: + * _import_umath() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 993, __pyx_L3_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":992 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto __pyx_L10_try_end; + __pyx_L3_error:; + __Pyx_PyThreadState_assign + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":994 + * try: + * _import_umath() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.umath failed to import") + * + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_umath", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 994, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":995 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_ufunc() except -1: + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 995, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 995, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":992 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + __Pyx_PyThreadState_assign + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L10_try_end:; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":991 + * raise ImportError("numpy.core.multiarray failed to import") + * + * cdef inline int import_umath() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_umath", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":997 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_ufunc", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":998 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":999 + * cdef inline int import_ufunc() except -1: + * try: + * _import_umath() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 999, __pyx_L3_error) + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":998 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto 
__pyx_L10_try_end; + __pyx_L3_error:; + __Pyx_PyThreadState_assign + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":1000 + * try: + * _import_umath() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_ufunc", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 1000, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":1001 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 1001, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 1001, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":998 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + __Pyx_PyThreadState_assign + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L10_try_end:; + } + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":997 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_ufunc", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyMethodDef __pyx_methods[] = { + {0, 0, 0, 0} +}; + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + #if PY_VERSION_HEX < 0x03020000 + { PyObject_HEAD_INIT(NULL) NULL, 0, NULL }, + #else + PyModuleDef_HEAD_INIT, + #endif + "poly_overlaps", + 0, /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_u_Format_string_allocated_too_shor, __pyx_k_Format_string_allocated_too_shor, sizeof(__pyx_k_Format_string_allocated_too_shor), 0, 1, 0, 0}, + {&__pyx_kp_u_Format_string_allocated_too_shor_2, __pyx_k_Format_string_allocated_too_shor_2, sizeof(__pyx_k_Format_string_allocated_too_shor_2), 0, 1, 0, 0}, + {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, + {&__pyx_n_s_K, __pyx_k_K, sizeof(__pyx_k_K), 0, 0, 1, 1}, + {&__pyx_n_s_N, __pyx_k_N, sizeof(__pyx_k_N), 0, 0, 1, 1}, + {&__pyx_kp_u_Non_native_byte_order_not_suppor, __pyx_k_Non_native_byte_order_not_suppor, sizeof(__pyx_k_Non_native_byte_order_not_suppor), 0, 1, 0, 0}, + 
{&__pyx_n_s_RuntimeError, __pyx_k_RuntimeError, sizeof(__pyx_k_RuntimeError), 0, 0, 1, 1}, + {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, + {&__pyx_n_s_boxes, __pyx_k_boxes, sizeof(__pyx_k_boxes), 0, 0, 1, 1}, + {&__pyx_n_s_device_id, __pyx_k_device_id, sizeof(__pyx_k_device_id), 0, 0, 1, 1}, + {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1}, + {&__pyx_n_s_float32, __pyx_k_float32, sizeof(__pyx_k_float32), 0, 0, 1, 1}, + {&__pyx_kp_s_home_dingjian_code_DOTA_devkit, __pyx_k_home_dingjian_code_DOTA_devkit, sizeof(__pyx_k_home_dingjian_code_DOTA_devkit), 0, 0, 1, 0}, + {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, + {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, + {&__pyx_kp_u_ndarray_is_not_C_contiguous, __pyx_k_ndarray_is_not_C_contiguous, sizeof(__pyx_k_ndarray_is_not_C_contiguous), 0, 1, 0, 0}, + {&__pyx_kp_u_ndarray_is_not_Fortran_contiguou, __pyx_k_ndarray_is_not_Fortran_contiguou, sizeof(__pyx_k_ndarray_is_not_Fortran_contiguou), 0, 1, 0, 0}, + {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, + {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1}, + {&__pyx_kp_s_numpy_core_multiarray_failed_to, __pyx_k_numpy_core_multiarray_failed_to, sizeof(__pyx_k_numpy_core_multiarray_failed_to), 0, 0, 1, 0}, + {&__pyx_kp_s_numpy_core_umath_failed_to_impor, __pyx_k_numpy_core_umath_failed_to_impor, sizeof(__pyx_k_numpy_core_umath_failed_to_impor), 0, 0, 1, 0}, + {&__pyx_n_s_overlaps, __pyx_k_overlaps, sizeof(__pyx_k_overlaps), 0, 0, 1, 1}, + {&__pyx_n_s_poly_overlaps, __pyx_k_poly_overlaps, sizeof(__pyx_k_poly_overlaps), 0, 0, 1, 1}, + {&__pyx_n_s_query_boxes, __pyx_k_query_boxes, sizeof(__pyx_k_query_boxes), 0, 0, 1, 1}, + {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, + {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, + {&__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_k_unknown_dtype_code_in_numpy_pxd, sizeof(__pyx_k_unknown_dtype_code_in_numpy_pxd), 0, 1, 0, 0}, + {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 218, __pyx_L1_error) + __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(1, 231, __pyx_L1_error) + __pyx_builtin_RuntimeError = __Pyx_GetBuiltinName(__pyx_n_s_RuntimeError); if (!__pyx_builtin_RuntimeError) __PYX_ERR(1, 799, __pyx_L1_error) + __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(1, 989, __pyx_L1_error) + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitCachedConstants(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":218 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_C_contiguous); if (unlikely(!__pyx_tuple_)) __PYX_ERR(1, 218, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple_); + __Pyx_GIVEREF(__pyx_tuple_); + + /* 
"../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":222 + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_Fortran_contiguou); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 222, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__2); + __Pyx_GIVEREF(__pyx_tuple__2); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":259 + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 259, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__3); + __Pyx_GIVEREF(__pyx_tuple__3); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":799 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + * + * if ((child.byteorder == c'>' and little_endian) or + */ + __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 799, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__4); + __Pyx_GIVEREF(__pyx_tuple__4); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":803 + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 803, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__5); + __Pyx_GIVEREF(__pyx_tuple__5); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":823 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor_2); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 823, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__6); + __Pyx_GIVEREF(__pyx_tuple__6); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":989 + * _import_array() + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_umath() except -1: + */ + __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_multiarray_failed_to); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 989, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__7); + __Pyx_GIVEREF(__pyx_tuple__7); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":995 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int 
import_ufunc() except -1: + */ + __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 995, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__8); + __Pyx_GIVEREF(__pyx_tuple__8); + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":1001 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + */ + __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 1001, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__9); + __Pyx_GIVEREF(__pyx_tuple__9); + + /* "poly_overlaps.pyx":7 + * void _overlaps(np.float32_t*, np.float32_t*, np.float32_t*, int, int, int) + * + * def poly_overlaps (np.ndarray[np.float32_t, ndim=2] boxes, np.ndarray[np.float32_t, ndim=2] query_boxes, np.int32_t device_id=0): # <<<<<<<<<<<<<< + * cdef int N = boxes.shape[0] + * cdef int K = query_boxes.shape[0] + */ + __pyx_tuple__10 = PyTuple_Pack(6, __pyx_n_s_boxes, __pyx_n_s_query_boxes, __pyx_n_s_device_id, __pyx_n_s_N, __pyx_n_s_K, __pyx_n_s_overlaps); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 7, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__10); + __Pyx_GIVEREF(__pyx_tuple__10); + __pyx_codeobj__11 = (PyObject*)__Pyx_PyCode_New(3, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__10, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_dingjian_code_DOTA_devkit, __pyx_n_s_poly_overlaps, 7, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__11)) __PYX_ERR(0, 7, __pyx_L1_error) + __Pyx_RefNannyFinishContext(); + return 0; + __pyx_L1_error:; + __Pyx_RefNannyFinishContext(); + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initpoly_overlaps(void); /*proto*/ +PyMODINIT_FUNC initpoly_overlaps(void) +#else +PyMODINIT_FUNC PyInit_poly_overlaps(void); /*proto*/ +PyMODINIT_FUNC PyInit_poly_overlaps(void) +#endif +{ + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannyDeclarations + #if CYTHON_REFNANNY + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + #endif + __Pyx_RefNannySetupContext("PyMODINIT_FUNC PyInit_poly_overlaps(void)", 0); + if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) + #ifdef __Pyx_CyFunction_USED + if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_FusedFunction_USED + if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Coroutine_USED + if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Generator_USED + if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_StopAsyncIteration_USED + if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + /*--- 
Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? */ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4("poly_overlaps", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) + Py_INCREF(__pyx_d); + __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) + #if CYTHON_COMPILING_IN_PYPY + Py_INCREF(__pyx_b); + #endif + if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + /*--- Initialize various global constants etc. ---*/ + if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) + if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + if (__pyx_module_is_main_poly_overlaps) { + if (PyObject_SetAttrString(__pyx_m, "__name__", __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + } + #if PY_MAJOR_VERSION >= 3 + { + PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) + if (!PyDict_GetItemString(modules, "poly_overlaps")) { + if (unlikely(PyDict_SetItemString(modules, "poly_overlaps", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) + } + } + #endif + /*--- Builtin init code ---*/ + if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + /*--- Constants init code ---*/ + if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + /*--- Global init code ---*/ + /*--- Variable export code ---*/ + /*--- Function export code ---*/ + /*--- Type init code ---*/ + /*--- Type import code ---*/ + __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, "type", + #if CYTHON_COMPILING_IN_PYPY + sizeof(PyTypeObject), + #else + sizeof(PyHeapTypeObject), + #endif + 0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error) + __pyx_ptype_5numpy_dtype = __Pyx_ImportType("numpy", "dtype", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) __PYX_ERR(1, 155, __pyx_L1_error) + __pyx_ptype_5numpy_flatiter = __Pyx_ImportType("numpy", "flatiter", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) __PYX_ERR(1, 168, __pyx_L1_error) + __pyx_ptype_5numpy_broadcast = __Pyx_ImportType("numpy", "broadcast", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) __PYX_ERR(1, 172, __pyx_L1_error) + __pyx_ptype_5numpy_ndarray = __Pyx_ImportType("numpy", "ndarray", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) __PYX_ERR(1, 181, __pyx_L1_error) + __pyx_ptype_5numpy_ufunc = __Pyx_ImportType("numpy", "ufunc", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) __PYX_ERR(1, 861, __pyx_L1_error) + /*--- Variable import code ---*/ + /*--- Function import code ---*/ + /*--- Execution code ---*/ + #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) + if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + + /* "poly_overlaps.pyx":1 + * import numpy as np # <<<<<<<<<<<<<< 
+ * cimport numpy as np + * + */ + __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "poly_overlaps.pyx":7 + * void _overlaps(np.float32_t*, np.float32_t*, np.float32_t*, int, int, int) + * + * def poly_overlaps (np.ndarray[np.float32_t, ndim=2] boxes, np.ndarray[np.float32_t, ndim=2] query_boxes, np.int32_t device_id=0): # <<<<<<<<<<<<<< + * cdef int N = boxes.shape[0] + * cdef int K = query_boxes.shape[0] + */ + __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_13poly_overlaps_1poly_overlaps, NULL, __pyx_n_s_poly_overlaps); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 7, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_poly_overlaps, __pyx_t_1) < 0) __PYX_ERR(0, 7, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "poly_overlaps.pyx":1 + * import numpy as np # <<<<<<<<<<<<<< + * cimport numpy as np + * + */ + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "../../../anaconda3/envs/py35/lib/python3.5/site-packages/Cython/Includes/numpy/__init__.pxd":997 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /*--- Wrapped vars code ---*/ + + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + if (__pyx_m) { + if (__pyx_d) { + __Pyx_AddTraceback("init poly_overlaps", __pyx_clineno, __pyx_lineno, __pyx_filename); + } + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init poly_overlaps"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +/* --- Runtime support code --- */ +/* Refnanny */ +#if CYTHON_REFNANNY +static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); +end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; +} +#endif + +/* RaiseArgTupleInvalid */ +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *more_or_less; + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + PyErr_Format(PyExc_TypeError, + "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", + func_name, more_or_less, num_expected, + (num_expected == 1) ? 
"" : "s", num_found); +} + +/* RaiseDoubleKeywords */ +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword argument '%U'", func_name, kw_name); + #else + "%s() got multiple values for keyword argument '%s'", func_name, + PyString_AsString(kw_name)); + #endif +} + +/* ParseKeywords */ +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + continue; + } + name = first_kw_arg; + #if PY_MAJOR_VERSION < 3 + if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) { + while (*name) { + if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) + && _PyString_Eq(**name, key)) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + if ((**argname == key) || ( + (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) + && _PyString_Eq(**argname, key))) { + goto arg_passed_twice; + } + argname++; + } + } + } else + #endif + if (likely(PyUnicode_Check(key))) { + while (*name) { + int cmp = (**name == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : + #endif + PyUnicode_Compare(**name, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + int cmp = (**argname == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 : + #endif + PyUnicode_Compare(**argname, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) goto arg_passed_twice; + argname++; + } + } + } else + goto invalid_keyword_type; + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, key); + goto bad; +invalid_keyword_type: + PyErr_Format(PyExc_TypeError, + "%.200s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%.200s() got an unexpected keyword argument '%.200s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +/* ArgTypeTest */ +static void __Pyx_RaiseArgumentTypeInvalid(const char* name, PyObject *obj, PyTypeObject *type) { + PyErr_Format(PyExc_TypeError, + "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", + name, type->tp_name, Py_TYPE(obj)->tp_name); +} +static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact) +{ + if (unlikely(!type)) { + PyErr_SetString(PyExc_SystemError, "Missing type object"); + return 0; + } + if (none_allowed && obj == Py_None) return 1; + else if (exact) { + if (likely(Py_TYPE(obj) == type)) return 1; + #if PY_MAJOR_VERSION == 2 + else if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; + #endif + } + else { + if (likely(PyObject_TypeCheck(obj, type))) return 1; + } + __Pyx_RaiseArgumentTypeInvalid(name, obj, type); + return 0; +} + +/* BufferFormatCheck */ +static CYTHON_INLINE int __Pyx_IsLittleEndian(void) { + unsigned int n = 1; + return *(unsigned char*)(&n) != 0; +} +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type) { + stack[0].field = &ctx->root; + stack[0].parent_offset = 0; + ctx->root.type = type; + ctx->root.name = "buffer dtype"; + ctx->root.offset = 0; + ctx->head = stack; + ctx->head->field = &ctx->root; + ctx->fmt_offset = 0; + ctx->head->parent_offset = 0; + ctx->new_packmode = '@'; + ctx->enc_packmode = '@'; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->is_complex = 0; + ctx->is_valid_array = 0; + ctx->struct_alignment = 0; + while (type->typegroup == 'S') { + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = 0; + type = type->fields->type; + } +} +static int __Pyx_BufFmt_ParseNumber(const char** ts) { + int count; + const char* t = *ts; + if (*t < '0' || *t > '9') { + return -1; + } else { + count = *t++ - '0'; + while (*t >= '0' && *t < '9') { + count *= 10; + count += *t++ - '0'; + } + } + *ts = t; + return count; +} +static int __Pyx_BufFmt_ExpectNumber(const char **ts) { + int number = __Pyx_BufFmt_ParseNumber(ts); + if (number == -1) + PyErr_Format(PyExc_ValueError,\ + "Does not understand character buffer dtype format string ('%c')", **ts); + return number; +} +static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { + PyErr_Format(PyExc_ValueError, + "Unexpected format string character: '%c'", ch); +} +static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { + switch (ch) { + case 'c': return "'char'"; + case 'b': return "'signed char'"; + case 'B': return "'unsigned char'"; + case 'h': return "'short'"; + case 'H': return "'unsigned short'"; + case 'i': return 
"'int'"; + case 'I': return "'unsigned int'"; + case 'l': return "'long'"; + case 'L': return "'unsigned long'"; + case 'q': return "'long long'"; + case 'Q': return "'unsigned long long'"; + case 'f': return (is_complex ? "'complex float'" : "'float'"); + case 'd': return (is_complex ? "'complex double'" : "'double'"); + case 'g': return (is_complex ? "'complex long double'" : "'long double'"); + case 'T': return "a struct"; + case 'O': return "Python object"; + case 'P': return "a pointer"; + case 's': case 'p': return "a string"; + case 0: return "end"; + default: return "unparseable format string"; + } +} +static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return 2; + case 'i': case 'I': case 'l': case 'L': return 4; + case 'q': case 'Q': return 8; + case 'f': return (is_complex ? 8 : 4); + case 'd': return (is_complex ? 16 : 8); + case 'g': { + PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); + return 0; + } + case 'O': case 'P': return sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { + switch (ch) { + case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(short); + case 'i': case 'I': return sizeof(int); + case 'l': case 'L': return sizeof(long); + #ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(PY_LONG_LONG); + #endif + case 'f': return sizeof(float) * (is_complex ? 2 : 1); + case 'd': return sizeof(double) * (is_complex ? 2 : 1); + case 'g': return sizeof(long double) * (is_complex ? 2 : 1); + case 'O': case 'P': return sizeof(void*); + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +typedef struct { char c; short x; } __Pyx_st_short; +typedef struct { char c; int x; } __Pyx_st_int; +typedef struct { char c; long x; } __Pyx_st_long; +typedef struct { char c; float x; } __Pyx_st_float; +typedef struct { char c; double x; } __Pyx_st_double; +typedef struct { char c; long double x; } __Pyx_st_longdouble; +typedef struct { char c; void *x; } __Pyx_st_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_st_float) - sizeof(float); + case 'd': return sizeof(__Pyx_st_double) - sizeof(double); + case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +/* These are for computing the padding at the end of the struct to align + on the first member of the struct. This will probably the same as above, + but we don't have any guarantees. 
+ */ +typedef struct { short x; char c; } __Pyx_pad_short; +typedef struct { int x; char c; } __Pyx_pad_int; +typedef struct { long x; char c; } __Pyx_pad_long; +typedef struct { float x; char c; } __Pyx_pad_float; +typedef struct { double x; char c; } __Pyx_pad_double; +typedef struct { long double x; char c; } __Pyx_pad_longdouble; +typedef struct { void *x; char c; } __Pyx_pad_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); + case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); + case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { + switch (ch) { + case 'c': + return 'H'; + case 'b': case 'h': case 'i': + case 'l': case 'q': case 's': case 'p': + return 'I'; + case 'B': case 'H': case 'I': case 'L': case 'Q': + return 'U'; + case 'f': case 'd': case 'g': + return (is_complex ? 'C' : 'R'); + case 'O': + return 'O'; + case 'P': + return 'P'; + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { + if (ctx->head == NULL || ctx->head->field == &ctx->root) { + const char* expected; + const char* quote; + if (ctx->head == NULL) { + expected = "end"; + quote = ""; + } else { + expected = ctx->head->field->type->name; + quote = "'"; + } + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected %s%s%s but got %s", + quote, expected, quote, + __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); + } else { + __Pyx_StructField* field = ctx->head->field; + __Pyx_StructField* parent = (ctx->head - 1)->field; + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", + field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), + parent->type->name, field->name); + } +} +static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { + char group; + size_t size, offset, arraysize = 1; + if (ctx->enc_type == 0) return 0; + if (ctx->head->field->type->arraysize[0]) { + int i, ndim = 0; + if (ctx->enc_type == 's' || ctx->enc_type == 'p') { + ctx->is_valid_array = ctx->head->field->type->ndim == 1; + ndim = 1; + if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { + PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %zu", + ctx->head->field->type->arraysize[0], ctx->enc_count); + return -1; + } + } + if (!ctx->is_valid_array) { + PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", + ctx->head->field->type->ndim, ndim); + return -1; + } + for (i = 0; i < ctx->head->field->type->ndim; i++) { + arraysize *= ctx->head->field->type->arraysize[i]; + } + ctx->is_valid_array = 0; + ctx->enc_count = 1; + } + group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, 
ctx->is_complex); + do { + __Pyx_StructField* field = ctx->head->field; + __Pyx_TypeInfo* type = field->type; + if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { + size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); + } else { + size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); + } + if (ctx->enc_packmode == '@') { + size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); + size_t align_mod_offset; + if (align_at == 0) return -1; + align_mod_offset = ctx->fmt_offset % align_at; + if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; + if (ctx->struct_alignment == 0) + ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, + ctx->is_complex); + } + if (type->size != size || type->typegroup != group) { + if (type->typegroup == 'C' && type->fields != NULL) { + size_t parent_offset = ctx->head->parent_offset + field->offset; + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = parent_offset; + continue; + } + if ((type->typegroup == 'H' || group == 'H') && type->size == size) { + } else { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + } + offset = ctx->head->parent_offset + field->offset; + if (ctx->fmt_offset != offset) { + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", + (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); + return -1; + } + ctx->fmt_offset += size; + if (arraysize) + ctx->fmt_offset += (arraysize - 1) * size; + --ctx->enc_count; + while (1) { + if (field == &ctx->root) { + ctx->head = NULL; + if (ctx->enc_count != 0) { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + break; + } + ctx->head->field = ++field; + if (field->type == NULL) { + --ctx->head; + field = ctx->head->field; + continue; + } else if (field->type->typegroup == 'S') { + size_t parent_offset = ctx->head->parent_offset + field->offset; + if (field->type->fields->type == NULL) continue; + field = field->type->fields; + ++ctx->head; + ctx->head->field = field; + ctx->head->parent_offset = parent_offset; + break; + } else { + break; + } + } + } while (ctx->enc_count); + ctx->enc_type = 0; + ctx->is_complex = 0; + return 0; +} +static CYTHON_INLINE PyObject * +__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) +{ + const char *ts = *tsp; + int i = 0, number; + int ndim = ctx->head->field->type->ndim; +; + ++ts; + if (ctx->new_count != 1) { + PyErr_SetString(PyExc_ValueError, + "Cannot handle repeated arrays in format string"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + while (*ts && *ts != ')') { + switch (*ts) { + case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; + default: break; + } + number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) + return PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %d", + ctx->head->field->type->arraysize[i], number); + if (*ts != ',' && *ts != ')') + return PyErr_Format(PyExc_ValueError, + "Expected a comma in format string, got '%c'", *ts); + if (*ts == ',') ts++; + i++; + } + if (i != ndim) + return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", + ctx->head->field->type->ndim, i); + if (!*ts) { + PyErr_SetString(PyExc_ValueError, + "Unexpected end of format string, expected ')'"); + return NULL; + } + 
ctx->is_valid_array = 1; + ctx->new_count = 1; + *tsp = ++ts; + return Py_None; +} +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { + int got_Z = 0; + while (1) { + switch(*ts) { + case 0: + if (ctx->enc_type != 0 && ctx->head == NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + if (ctx->head != NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + return ts; + case ' ': + case '\r': + case '\n': + ++ts; + break; + case '<': + if (!__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '>': + case '!': + if (__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '=': + case '@': + case '^': + ctx->new_packmode = *ts++; + break; + case 'T': + { + const char* ts_after_sub; + size_t i, struct_count = ctx->new_count; + size_t struct_alignment = ctx->struct_alignment; + ctx->new_count = 1; + ++ts; + if (*ts != '{') { + PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + ctx->enc_count = 0; + ctx->struct_alignment = 0; + ++ts; + ts_after_sub = ts; + for (i = 0; i != struct_count; ++i) { + ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); + if (!ts_after_sub) return NULL; + } + ts = ts_after_sub; + if (struct_alignment) ctx->struct_alignment = struct_alignment; + } + break; + case '}': + { + size_t alignment = ctx->struct_alignment; + ++ts; + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + if (alignment && ctx->fmt_offset % alignment) { + ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); + } + } + return ts; + case 'x': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->fmt_offset += ctx->new_count; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->enc_packmode = ctx->new_packmode; + ++ts; + break; + case 'Z': + got_Z = 1; + ++ts; + if (*ts != 'f' && *ts != 'd' && *ts != 'g') { + __Pyx_BufFmt_RaiseUnexpectedChar('Z'); + return NULL; + } + case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': + case 'l': case 'L': case 'q': case 'Q': + case 'f': case 'd': case 'g': + case 'O': case 'p': + if (ctx->enc_type == *ts && got_Z == ctx->is_complex && + ctx->enc_packmode == ctx->new_packmode) { + ctx->enc_count += ctx->new_count; + ctx->new_count = 1; + got_Z = 0; + ++ts; + break; + } + case 's': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_count = ctx->new_count; + ctx->enc_packmode = ctx->new_packmode; + ctx->enc_type = *ts; + ctx->is_complex = got_Z; + ++ts; + ctx->new_count = 1; + got_Z = 0; + break; + case ':': + ++ts; + while(*ts != ':') ++ts; + ++ts; + break; + case '(': + if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; + break; + default: + { + int number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + ctx->new_count = (size_t)number; + } + } + } +} +static CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) { + buf->buf = NULL; + buf->obj = NULL; + buf->strides = __Pyx_zeros; + buf->shape = __Pyx_zeros; + buf->suboffsets = __Pyx_minusones; +} +static CYTHON_INLINE int __Pyx_GetBufferAndValidate( + Py_buffer* buf, 
PyObject* obj, __Pyx_TypeInfo* dtype, int flags, + int nd, int cast, __Pyx_BufFmt_StackElem* stack) +{ + if (obj == Py_None || obj == NULL) { + __Pyx_ZeroBuffer(buf); + return 0; + } + buf->buf = NULL; + if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail; + if (buf->ndim != nd) { + PyErr_Format(PyExc_ValueError, + "Buffer has wrong number of dimensions (expected %d, got %d)", + nd, buf->ndim); + goto fail; + } + if (!cast) { + __Pyx_BufFmt_Context ctx; + __Pyx_BufFmt_Init(&ctx, stack, dtype); + if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail; + } + if ((unsigned)buf->itemsize != dtype->size) { + PyErr_Format(PyExc_ValueError, + "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "d byte%s) does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "d byte%s)", + buf->itemsize, (buf->itemsize > 1) ? "s" : "", + dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? "s" : ""); + goto fail; + } + if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones; + return 0; +fail:; + __Pyx_ZeroBuffer(buf); + return -1; +} +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) { + if (info->buf == NULL) return; + if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL; + __Pyx_ReleaseBuffer(info); +} + +/* GetBuiltinName */ + static PyObject *__Pyx_GetBuiltinName(PyObject *name) { + PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); + if (unlikely(!result)) { + PyErr_Format(PyExc_NameError, +#if PY_MAJOR_VERSION >= 3 + "name '%U' is not defined", name); +#else + "name '%.200s' is not defined", PyString_AS_STRING(name)); +#endif + } + return result; +} + +/* GetModuleGlobalName */ + static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) { + PyObject *result; +#if !CYTHON_AVOID_BORROWED_REFS + result = PyDict_GetItem(__pyx_d, name); + if (likely(result)) { + Py_INCREF(result); + } else { +#else + result = PyObject_GetItem(__pyx_d, name); + if (!result) { + PyErr_Clear(); +#endif + result = __Pyx_GetBuiltinName(name); + } + return result; +} + +/* PyObjectCall */ + #if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { + PyObject *result; + ternaryfunc call = func->ob_type->tp_call; + if (unlikely(!call)) + return PyObject_Call(func, arg, kw); + if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) + return NULL; + result = (*call)(func, arg, kw); + Py_LeaveRecursiveCall(); + if (unlikely(!result) && unlikely(!PyErr_Occurred())) { + PyErr_SetString( + PyExc_SystemError, + "NULL result without error in PyObject_Call"); + } + return result; +} +#endif + +/* ExtTypeTest */ + static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { + if (unlikely(!type)) { + PyErr_SetString(PyExc_SystemError, "Missing type object"); + return 0; + } + if (likely(PyObject_TypeCheck(obj, type))) + return 1; + PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", + Py_TYPE(obj)->tp_name, type->tp_name); + return 0; +} + +/* BufferIndexError */ + static void __Pyx_RaiseBufferIndexError(int axis) { + PyErr_Format(PyExc_IndexError, + "Out of bounds on buffer access (axis %d)", axis); +} + +/* PyErrFetchRestore */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; 
+ tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} +static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} +#endif + +/* RaiseException */ + #if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, + CYTHON_UNUSED PyObject *cause) { + __Pyx_PyThreadState_declare + Py_XINCREF(type); + if (!value || value == Py_None) + value = NULL; + else + Py_INCREF(value); + if (!tb || tb == Py_None) + tb = NULL; + else { + Py_INCREF(tb); + if (!PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + } + if (PyType_Check(type)) { +#if CYTHON_COMPILING_IN_PYPY + if (!value) { + Py_INCREF(Py_None); + value = Py_None; + } +#endif + PyErr_NormalizeException(&type, &value, &tb); + } else { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + value = type; + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + } + __Pyx_PyThreadState_assign + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} +#else +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { + PyObject* owned_instance = NULL; + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = type; + type = (PyObject*) Py_TYPE(value); + } else if (PyExceptionClass_Check(type)) { + PyObject *instance_class = NULL; + if (value && PyExceptionInstance_Check(value)) { + instance_class = (PyObject*) Py_TYPE(value); + if (instance_class != type) { + int is_subclass = PyObject_IsSubclass(instance_class, type); + if (!is_subclass) { + instance_class = NULL; + } else if (unlikely(is_subclass == -1)) { + goto bad; + } else { + type = instance_class; + } + } + } + if (!instance_class) { + PyObject *args; + if (!value) + args = PyTuple_New(0); + else if (PyTuple_Check(value)) { + Py_INCREF(value); + args = value; + } else + args = PyTuple_Pack(1, value); + if (!args) + goto bad; + owned_instance = PyObject_Call(type, args, NULL); + Py_DECREF(args); + if (!owned_instance) + goto bad; + value = owned_instance; + if (!PyExceptionInstance_Check(value)) { + PyErr_Format(PyExc_TypeError, + "calling %R should have returned an instance of " + "BaseException, not %R", + type, Py_TYPE(value)); + goto bad; + } + } + } else { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } +#if PY_VERSION_HEX >= 0x03030000 + if (cause) { +#else + if (cause && cause != Py_None) { +#endif + PyObject *fixed_cause; + if (cause == Py_None) { + fixed_cause = NULL; + } 
else if (PyExceptionClass_Check(cause)) { + fixed_cause = PyObject_CallObject(cause, NULL); + if (fixed_cause == NULL) + goto bad; + } else if (PyExceptionInstance_Check(cause)) { + fixed_cause = cause; + Py_INCREF(fixed_cause); + } else { + PyErr_SetString(PyExc_TypeError, + "exception causes must derive from " + "BaseException"); + goto bad; + } + PyException_SetCause(value, fixed_cause); + } + PyErr_SetObject(type, value); + if (tb) { +#if CYTHON_COMPILING_IN_PYPY + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); + Py_INCREF(tb); + PyErr_Restore(tmp_type, tmp_value, tb); + Py_XDECREF(tmp_tb); +#else + PyThreadState *tstate = PyThreadState_GET(); + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } +#endif + } +bad: + Py_XDECREF(owned_instance); + return; +} +#endif + +/* RaiseTooManyValuesToUnpack */ + static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { + PyErr_Format(PyExc_ValueError, + "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); +} + +/* RaiseNeedMoreValuesToUnpack */ + static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { + PyErr_Format(PyExc_ValueError, + "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", + index, (index == 1) ? "" : "s"); +} + +/* RaiseNoneIterError */ + static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); +} + +/* SaveResetException */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { + *type = tstate->exc_type; + *value = tstate->exc_value; + *tb = tstate->exc_traceback; + Py_XINCREF(*type); + Py_XINCREF(*value); + Py_XINCREF(*tb); +} +static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + tmp_type = tstate->exc_type; + tmp_value = tstate->exc_value; + tmp_tb = tstate->exc_traceback; + tstate->exc_type = type; + tstate->exc_value = value; + tstate->exc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} +#endif + +/* PyErrExceptionMatches */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { + PyObject *exc_type = tstate->curexc_type; + if (exc_type == err) return 1; + if (unlikely(!exc_type)) return 0; + return PyErr_GivenExceptionMatches(exc_type, err); +} +#endif + +/* GetException */ + #if CYTHON_FAST_THREAD_STATE +static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { +#else +static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) { +#endif + PyObject *local_type, *local_value, *local_tb; +#if CYTHON_FAST_THREAD_STATE + PyObject *tmp_type, *tmp_value, *tmp_tb; + local_type = tstate->curexc_type; + local_value = tstate->curexc_value; + local_tb = tstate->curexc_traceback; + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +#else + PyErr_Fetch(&local_type, &local_value, &local_tb); +#endif + PyErr_NormalizeException(&local_type, &local_value, &local_tb); +#if CYTHON_FAST_THREAD_STATE + if (unlikely(tstate->curexc_type)) +#else + if (unlikely(PyErr_Occurred())) +#endif + goto bad; + #if PY_MAJOR_VERSION >= 3 + if 
(local_tb) { + if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) + goto bad; + } + #endif + Py_XINCREF(local_tb); + Py_XINCREF(local_type); + Py_XINCREF(local_value); + *type = local_type; + *value = local_value; + *tb = local_tb; +#if CYTHON_FAST_THREAD_STATE + tmp_type = tstate->exc_type; + tmp_value = tstate->exc_value; + tmp_tb = tstate->exc_traceback; + tstate->exc_type = local_type; + tstate->exc_value = local_value; + tstate->exc_traceback = local_tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +#else + PyErr_SetExcInfo(local_type, local_value, local_tb); +#endif + return 0; +bad: + *type = 0; + *value = 0; + *tb = 0; + Py_XDECREF(local_type); + Py_XDECREF(local_value); + Py_XDECREF(local_tb); + return -1; +} + +/* Import */ + static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_import; + py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); + if (!py_import) + goto bad; + #endif + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if (!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + { + #if PY_MAJOR_VERSION >= 3 + if (level == -1) { + if (strchr(__Pyx_MODULE_NAME, '.')) { + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_level = PyInt_FromLong(1); + if (!py_level) + goto bad; + module = PyObject_CallFunctionObjArgs(py_import, + name, global_dict, empty_dict, list, py_level, NULL); + Py_DECREF(py_level); + #else + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, 1); + #endif + if (!module) { + if (!PyErr_ExceptionMatches(PyExc_ImportError)) + goto bad; + PyErr_Clear(); + } + } + level = 0; + } + #endif + if (!module) { + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_level = PyInt_FromLong(level); + if (!py_level) + goto bad; + module = PyObject_CallFunctionObjArgs(py_import, + name, global_dict, empty_dict, list, py_level, NULL); + Py_DECREF(py_level); + #else + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, level); + #endif + } + } +bad: + #if PY_VERSION_HEX < 0x03030000 + Py_XDECREF(py_import); + #endif + Py_XDECREF(empty_list); + Py_XDECREF(empty_dict); + return module; +} + +/* CodeObjectCache */ + static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { + int start = 0, mid = 0, end = count - 1; + if (end >= 0 && code_line > entries[end].code_line) { + return count; + } + while (start < end) { + mid = start + (end - start) / 2; + if (code_line < entries[mid].code_line) { + end = mid; + } else if (code_line > entries[mid].code_line) { + start = mid + 1; + } else { + return mid; + } + } + if (code_line <= entries[mid].code_line) { + return mid; + } else { + return mid + 1; + } +} +static PyCodeObject *__pyx_find_code_object(int code_line) { + PyCodeObject* code_object; + int pos; + if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { + return NULL; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { + return NULL; + } + code_object = __pyx_code_cache.entries[pos].code_object; + 
Py_INCREF(code_object); + return code_object; +} +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { + int pos, i; + __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; + if (unlikely(!code_line)) { + return; + } + if (unlikely(!entries)) { + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); + if (likely(entries)) { + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = 64; + __pyx_code_cache.count = 1; + entries[0].code_line = code_line; + entries[0].code_object = code_object; + Py_INCREF(code_object); + } + return; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { + PyCodeObject* tmp = entries[pos].code_object; + entries[pos].code_object = code_object; + Py_DECREF(tmp); + return; + } + if (__pyx_code_cache.count == __pyx_code_cache.max_count) { + int new_max = __pyx_code_cache.max_count + 64; + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( + __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry)); + if (unlikely(!entries)) { + return; + } + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = new_max; + } + for (i=__pyx_code_cache.count; i>pos; i--) { + entries[i] = entries[i-1]; + } + entries[pos].code_line = code_line; + entries[pos].code_object = code_object; + __pyx_code_cache.count++; + Py_INCREF(code_object); +} + +/* AddTraceback */ + #include "compile.h" +#include "frameobject.h" +#include "traceback.h" +static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( + const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(filename); + #else + py_srcfile = PyUnicode_FromString(filename); + #endif + if (!py_srcfile) goto bad; + if (c_line) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_code = __Pyx_PyCode_New( + 0, + 0, + 0, + 0, + 0, + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + py_line, + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + Py_DECREF(py_srcfile); + Py_DECREF(py_funcname); + return py_code; +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + return NULL; +} +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + py_code = __pyx_find_code_object(c_line ? c_line : py_line); + if (!py_code) { + py_code = __Pyx_CreateCodeObjectForTraceback( + funcname, c_line, py_line, filename); + if (!py_code) goto bad; + __pyx_insert_code_object(c_line ? 
c_line : py_line, py_code); + } + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + __pyx_d, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + __Pyx_PyFrame_SetLineNumber(py_frame, py_line); + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +#if PY_MAJOR_VERSION < 3 +static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { + if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); + if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pw_5numpy_7ndarray_1__getbuffer__(obj, view, flags); + PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); + return -1; +} +static void __Pyx_ReleaseBuffer(Py_buffer *view) { + PyObject *obj = view->obj; + if (!obj) return; + if (PyObject_CheckBuffer(obj)) { + PyBuffer_Release(view); + return; + } + if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) { __pyx_pw_5numpy_7ndarray_3__releasebuffer__(obj, view); return; } + Py_DECREF(obj); + view->obj = NULL; +} +#endif + + + /* CIntFromPyVerify */ + #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) +#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) +#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ + {\ + func_type value = func_value;\ + if (sizeof(target_type) < sizeof(func_type)) {\ + if (unlikely(value != (func_type) (target_type) value)) {\ + func_type zero = 0;\ + if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ + return (target_type) -1;\ + if (is_unsigned && unlikely(value < zero))\ + goto raise_neg_overflow;\ + else\ + goto raise_overflow;\ + }\ + }\ + return (target_type) value;\ + } + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { + const int neg_one = (int) -1, const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(int) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(int) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(int) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(int), + little, !is_unsigned); + } +} + +/* Declarations */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return ::std::complex< float >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return x + y*(__pyx_t_float_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + __pyx_t_float_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* Arithmetic */ + #if CYTHON_CCOMPLEX +#else + static 
CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + #if 1 + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + if (b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real); + } else if (fabsf(b.real) >= fabsf(b.imag)) { + if (b.real == 0 && b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.imag); + } else { + float r = b.imag / b.real; + float s = 1.0 / (b.real + b.imag * r); + return __pyx_t_float_complex_from_parts( + (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); + } + } else { + float r = b.real / b.imag; + float s = 1.0 / (b.imag + b.real * r); + return __pyx_t_float_complex_from_parts( + (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); + } + } + #else + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + if (b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real); + } else { + float denom = b.real * b.real + b.imag * b.imag; + return __pyx_t_float_complex_from_parts( + (a.real * b.real + a.imag * b.imag) / denom, + (a.imag * b.real - a.real * b.imag) / denom); + } + } + #endif + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } + #if 1 + static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrtf(z.real*z.real + z.imag*z.imag); + #else + return hypotf(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + float r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + float denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(a, a); + case 3: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(z, a); + case 4: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } else if (b.imag == 0) { + z.real = powf(a.real, b.real); + z.imag = 0; 
+ return z; + } else if (a.real > 0) { + r = a.real; + theta = 0; + } else { + r = -a.real; + theta = atan2f(0, -1); + } + } else { + r = __Pyx_c_abs_float(a); + theta = atan2f(a.imag, a.real); + } + lnr = logf(r); + z_r = expf(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cosf(z_theta); + z.imag = z_r * sinf(z_theta); + return z; + } + #endif +#endif + +/* Declarations */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return ::std::complex< double >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return x + y*(__pyx_t_double_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + __pyx_t_double_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* Arithmetic */ + #if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + #if 1 + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + if (b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); + } else if (fabs(b.real) >= fabs(b.imag)) { + if (b.real == 0 && b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag); + } else { + double r = b.imag / b.real; + double s = 1.0 / (b.real + b.imag * r); + return __pyx_t_double_complex_from_parts( + (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); + } + } else { + double r = b.real / b.imag; + double s = 1.0 / (b.imag + b.real * r); + return __pyx_t_double_complex_from_parts( + (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); + } + } + #else + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + if (b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); + } else { + double denom = b.real * b.real + b.imag * b.imag; + return __pyx_t_double_complex_from_parts( + (a.real * b.real + a.imag * b.imag) / denom, + (a.imag * b.real - a.real * b.imag) / denom); + } + } + #endif + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = a.real; + z.imag = -a.imag; + return 
z; + } + #if 1 + static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrt(z.real*z.real + z.imag*z.imag); + #else + return hypot(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + double denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(a, a); + case 3: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(z, a); + case 4: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } else if (b.imag == 0) { + z.real = pow(a.real, b.real); + z.imag = 0; + return z; + } else if (a.real > 0) { + r = a.real; + theta = 0; + } else { + r = -a.real; + theta = atan2(0, -1); + } + } else { + r = __Pyx_c_abs_double(a); + theta = atan2(a.imag, a.real); + } + lnr = log(r); + z_r = exp(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cos(z_theta); + z.imag = z_r * sin(z_theta); + return z; + } + #endif +#endif + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value) { + const enum NPY_TYPES neg_one = (enum NPY_TYPES) -1, const_zero = (enum NPY_TYPES) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(enum NPY_TYPES) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(enum NPY_TYPES) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(enum NPY_TYPES) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(enum NPY_TYPES), + little, !is_unsigned); + } +} + +/* CIntFromPy */ + static CYTHON_INLINE npy_int32 __Pyx_PyInt_As_npy_int32(PyObject *x) { + const npy_int32 neg_one = (npy_int32) -1, const_zero = (npy_int32) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(npy_int32) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (npy_int32) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (npy_int32) 0; + case 1: __PYX_VERIFY_RETURN_INT(npy_int32, digit, digits[0]) + case 2: + if (8 * sizeof(npy_int32) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, 
(((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 2 * PyLong_SHIFT) { + return (npy_int32) (((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(npy_int32) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 3 * PyLong_SHIFT) { + return (npy_int32) (((((((npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(npy_int32) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 4 * PyLong_SHIFT) { + return (npy_int32) (((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (npy_int32) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(npy_int32) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(npy_int32) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (npy_int32) 0; + case -1: __PYX_VERIFY_RETURN_INT(npy_int32, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(npy_int32, digit, +digits[0]) + case -2: + if (8 * sizeof(npy_int32) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(npy_int32) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + return (npy_int32) ((((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((((npy_int32)digits[2]) << PyLong_SHIFT) | 
(npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(npy_int32) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + return (npy_int32) ((((((((npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 4 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(npy_int32) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 4 * PyLong_SHIFT) { + return (npy_int32) ((((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + } +#endif + if (sizeof(npy_int32) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(npy_int32) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + npy_int32 val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (npy_int32) -1; + } + } else { + npy_int32 val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (npy_int32) -1; + val = __Pyx_PyInt_As_npy_int32(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to npy_int32"); + return (npy_int32) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to npy_int32"); + return (npy_int32) -1; +} + +/* CIntFromPy */ + static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { + const int neg_one = (int) -1, const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(int) < sizeof(long)) { + 
__PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (int) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (int) 0; + case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { + return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { + return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { + return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (int) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(int) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (int) 0; + case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) + case -2: + if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + if (8 * 
sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + } +#endif + if (sizeof(int) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + int val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (int) -1; + } + } else { + int val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (int) -1; + val = __Pyx_PyInt_As_int(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to int"); + return (int) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to int"); + return (int) -1; +} + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { + const long neg_one = (long) -1, const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; + if 
(is_unsigned) { + if (sizeof(long) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(long) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(long) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(long), + little, !is_unsigned); + } +} + +/* CIntFromPy */ + static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { + const long neg_one = (long) -1, const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(long) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (long) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { + return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { + return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { + return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (long) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(long) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if 
CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) + case -2: + if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + } +#endif + if (sizeof(long) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, 
cannot convert large numbers"); +#else + long val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (long) -1; + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (long) -1; + val = __Pyx_PyInt_As_long(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to long"); + return (long) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long) -1; +} + +/* CheckBinaryVersion */ + static int __Pyx_check_binary_version(void) { + char ctversion[4], rtversion[4]; + PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); + PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); + if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { + char message[200]; + PyOS_snprintf(message, sizeof(message), + "compiletime version %s of module '%.100s' " + "does not match runtime version %s", + ctversion, __Pyx_MODULE_NAME, rtversion); + return PyErr_WarnEx(NULL, message, 1); + } + return 0; +} + +/* ModuleImport */ + #ifndef __PYX_HAVE_RT_ImportModule +#define __PYX_HAVE_RT_ImportModule +static PyObject *__Pyx_ImportModule(const char *name) { + PyObject *py_name = 0; + PyObject *py_module = 0; + py_name = __Pyx_PyIdentifier_FromString(name); + if (!py_name) + goto bad; + py_module = PyImport_Import(py_name); + Py_DECREF(py_name); + return py_module; +bad: + Py_XDECREF(py_name); + return 0; +} +#endif + +/* TypeImport */ + #ifndef __PYX_HAVE_RT_ImportType +#define __PYX_HAVE_RT_ImportType +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, + size_t size, int strict) +{ + PyObject *py_module = 0; + PyObject *result = 0; + PyObject *py_name = 0; + char warning[200]; + Py_ssize_t basicsize; +#ifdef Py_LIMITED_API + PyObject *py_basicsize; +#endif + py_module = __Pyx_ImportModule(module_name); + if (!py_module) + goto bad; + py_name = __Pyx_PyIdentifier_FromString(class_name); + if (!py_name) + goto bad; + result = PyObject_GetAttr(py_module, py_name); + Py_DECREF(py_name); + py_name = 0; + Py_DECREF(py_module); + py_module = 0; + if (!result) + goto bad; + if (!PyType_Check(result)) { + PyErr_Format(PyExc_TypeError, + "%.200s.%.200s is not a type object", + module_name, class_name); + goto bad; + } +#ifndef Py_LIMITED_API + basicsize = ((PyTypeObject *)result)->tp_basicsize; +#else + py_basicsize = PyObject_GetAttrString(result, "__basicsize__"); + if (!py_basicsize) + goto bad; + basicsize = PyLong_AsSsize_t(py_basicsize); + Py_DECREF(py_basicsize); + py_basicsize = 0; + if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred()) + goto bad; +#endif + if (!strict && (size_t)basicsize > size) { + PyOS_snprintf(warning, sizeof(warning), + "%s.%s size changed, may indicate binary incompatibility. Expected %zd, got %zd", + module_name, class_name, basicsize, size); + if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad; + } + else if ((size_t)basicsize != size) { + PyErr_Format(PyExc_ValueError, + "%.200s.%.200s has the wrong size, try recompiling. 
Expected %zd, got %zd", + module_name, class_name, basicsize, size); + goto bad; + } + return (PyTypeObject *)result; +bad: + Py_XDECREF(py_module); + Py_XDECREF(result); + return NULL; +} +#endif + +/* InitStrings */ + static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { + return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); +} +static CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject* o) { + Py_ssize_t ignore; + return __Pyx_PyObject_AsStringAndSize(o, &ignore); +} +static CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { +#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) + if ( +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + __Pyx_sys_getdefaultencoding_not_ascii && +#endif + PyUnicode_Check(o)) { +#if PY_VERSION_HEX < 0x03030000 + char* defenc_c; + PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); + if (!defenc) return NULL; + defenc_c = PyBytes_AS_STRING(defenc); +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + { + char* end = defenc_c + PyBytes_GET_SIZE(defenc); + char* c; + for (c = defenc_c; c < end; c++) { + if ((unsigned char) (*c) >= 128) { + PyUnicode_AsASCIIString(o); + return NULL; + } + } + } +#endif + *length = PyBytes_GET_SIZE(defenc); + return defenc_c; +#else + if (__Pyx_PyUnicode_READY(o) == -1) return NULL; +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + if (PyUnicode_IS_ASCII(o)) { + *length = PyUnicode_GET_LENGTH(o); + return PyUnicode_AsUTF8(o); + } else { + PyUnicode_AsASCIIString(o); + return NULL; + } +#else + return PyUnicode_AsUTF8AndSize(o, length); +#endif +#endif + } else +#endif +#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) + if (PyByteArray_Check(o)) { + *length = PyByteArray_GET_SIZE(o); + return PyByteArray_AS_STRING(o); + } else +#endif + { + char* result; + int r = PyBytes_AsStringAndSize(o, &result, length); + if (unlikely(r < 0)) { + return NULL; + } else { + return result; + } + } +} +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + int is_true = x == Py_True; + if (is_true | (x == Py_False) | (x == Py_None)) return is_true; + else return PyObject_IsTrue(x); +} +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { +#if CYTHON_USE_TYPE_SLOTS + PyNumberMethods *m; +#endif + const char *name = NULL; + PyObject *res = NULL; +#if PY_MAJOR_VERSION < 3 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if (PyLong_Check(x)) +#endif + return __Pyx_NewRef(x); +#if CYTHON_USE_TYPE_SLOTS + m = Py_TYPE(x)->tp_as_number; + #if PY_MAJOR_VERSION < 3 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } 
+ #else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } + #endif +#else + res = PyNumber_Int(x); +#endif + if (res) { +#if PY_MAJOR_VERSION < 3 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%.4s__ returned non-%.4s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject *x; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_CheckExact(b))) { + if (sizeof(Py_ssize_t) >= sizeof(long)) + return PyInt_AS_LONG(b); + else + return PyInt_AsSsize_t(x); + } +#endif + if (likely(PyLong_CheckExact(b))) { + #if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)b)->ob_digit; + const Py_ssize_t size = Py_SIZE(b); + if (likely(__Pyx_sst_abs(size) <= 1)) { + ival = likely(size) ? digits[0] : 0; + if (size == -1) ival = -ival; + return ival; + } else { + switch (size) { + case 2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + } + } + #endif + return PyLong_AsSsize_t(b); + } + x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { + return PyInt_FromSize_t(ival); +} + + +#endif /* Py_PYTHON_H */ diff --git a/dota_kit/poly_nms_gpu/poly_overlaps.hpp b/dota_kit/poly_nms_gpu/poly_overlaps.hpp new file mode 100644 index 0000000..1be3e9b --- /dev/null +++ b/dota_kit/poly_nms_gpu/poly_overlaps.hpp @@ -0,0 +1 @@ +void _overlaps(float* overlaps,const float* boxes,const float* query_boxes, int n, int k, int device_id); diff --git a/dota_kit/poly_nms_gpu/poly_overlaps.pyx b/dota_kit/poly_nms_gpu/poly_overlaps.pyx new file mode 100644 index 0000000..14b08cd --- /dev/null +++ b/dota_kit/poly_nms_gpu/poly_overlaps.pyx @@ -0,0 +1,14 @@ +import numpy as np +cimport numpy as np + +cdef extern from "poly_overlaps.hpp": + void _overlaps(np.float32_t*, np.float32_t*, np.float32_t*, int, int, int) + +def poly_overlaps (np.ndarray[np.float32_t, ndim=2] boxes, np.ndarray[np.float32_t, ndim=2] query_boxes, np.int32_t device_id=0): + cdef int N = boxes.shape[0] + cdef int K = query_boxes.shape[0] + cdef 
np.ndarray[np.float32_t, ndim=2] overlaps = np.zeros((N, K), dtype = np.float32)
+    _overlaps(&overlaps[0, 0], &boxes[0, 0], &query_boxes[0, 0], N, K, device_id)
+    return overlaps
+
+
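For reference, a minimal sketch of how the compiled extension above might be called from Python. The import path and the per-row box layout (x_ctr, y_ctr, w, h, angle) are assumptions inferred from poly_overlaps.pyx and from RotBox2Poly in the kernel below; the diff itself does not state them explicitly.

    import numpy as np

    # Assumed import path; the real module/package name depends on how the
    # extension built from poly_overlaps.pyx is compiled and installed.
    from poly_overlaps import poly_overlaps

    # Assumed layout per row: (x_ctr, y_ctr, w, h, angle), matching RotBox2Poly below.
    boxes = np.array([[50., 50., 20., 10., 0.0]], dtype=np.float32)
    query_boxes = np.array([[50., 50., 20., 10., 0.3],
                            [200., 200., 30., 15., 0.0]], dtype=np.float32)

    overlaps = poly_overlaps(boxes, query_boxes, device_id=0)
    print(overlaps.shape)  # (1, 2): one row per box, one column per query box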
swap\n"); + // printf("a.x %f, a.y %f\n", a.x, a.y); + // printf("b.x %f, b.y %f\n", b.x, b.y); + if(s1 == -1) point_swap(&a, &b); + // printf("a.x %f, a.y %f\n", a.x, a.y); + // printf("b.x %f, b.y %f\n", b.x, b.y); + // printf("after swap\n"); + if(s2 == -1) point_swap(&c, &d); + float2 p[10]={o,a,b}; + int n=3; + + // // manually implement polygon_cut(p, n, a, b) + // float2 pp[maxn]; + // // polygon_cut(p, n, o, c) + // int m=0;p[n]=p[0]; + // for(int i=0;i0) pp[m++]=p[i]; + // if(sig(cross(o,c,p[i]))!=sig(cross(o,c,p[i+1]))) + // lineCross(o,c,p[i],p[i+1],pp[m++]); + // } + // n=0; + + // for(int i=0;i1&&point_eq(p[n-1], p[0]))n--; + + // // polygon_cut(p, n, c, d) + // m=0;p[n]=p[0]; + // for(int i=0;i0) pp[m++]=p[i]; + // if(sig(cross(c,d,p[i]))!=sig(cross(c,d,p[i+1]))) + // lineCross(c,d,p[i],p[i+1],pp[m++]); + // } + // n=0; + + // for(int i=0;i1&&point_eq(p[n-1], p[0]))n--; + + // // polygon_cut(p, n, d, o) + // m=0;p[n]=p[0]; + // for(int i=0;i0) pp[m++]=p[i]; + // if(sig(cross(d,o,p[i]))!=sig(cross(d,o,p[i+1]))) + // lineCross(d,o,p[i],p[i+1],pp[m++]); + // } + // n=0; + + // for(int i=0;i1&&point_eq(p[n-1], p[0]))n--; + float2 pp[maxn]; + polygon_cut(p,n,o,c,pp); + polygon_cut(p,n,c,d,pp); + polygon_cut(p,n,d,o,pp); + float res=fabs(area(p,n)); + int x = blockIdx.x * blockDim.x + threadIdx.x; + // corresponding to k + int y = blockIdx.y * blockDim.y + threadIdx.y; + int offset = x * 1 + y; + // printf("intersectArea2, offset: %d, %f, %f, %f, %f, %f, %f, %f, %f, res: %f\n", offset, a.x, a.y, b.x, b.y, c.x, c.y, d.x, d.y, res); + if(s1*s2==-1) res=-res;return res; + +} +//求两多边形的交面积 +// TODO: here changed the input, this need to be debug +__device__ inline float intersectArea(float2*ps1,int n1,float2*ps2,int n2){ + int x = blockIdx.x * blockDim.x + threadIdx.x; + // corresponding to k + int y = blockIdx.y * blockDim.y + threadIdx.y; + int offset = x * 1 + y; + if(area(ps1,n1)<0) point_reverse(ps1,ps1+n1); + if(area(ps2,n2)<0) point_reverse(ps2,ps2+n2); + ps1[n1]=ps1[0]; + ps2[n2]=ps2[0]; + float res=0; + for(int i=0;i p, vector q) { +// Point ps1[maxn],ps2[maxn]; +// int n1 = 4; +// int n2 = 4; +// for (int i = 0; i < 4; i++) { +// ps1[i].x = p[i * 2]; +// ps1[i].y = p[i * 2 + 1]; +// +// ps2[i].x = q[i * 2]; +// ps2[i].y = q[i * 2 + 1]; +// } +// double inter_area = intersectArea(ps1, n1, ps2, n2); +// double union_area = fabs(area(ps1, n1)) + fabs(area(ps2, n2)) - inter_area; +// double iou = inter_area / union_area; +// +//// cout << "inter_area:" << inter_area << endl; +//// cout << "union_area:" << union_area << endl; +//// cout << "iou:" << iou << endl; +// +// return iou; +//} + +__device__ inline void RotBox2Poly(float const * const dbox, float2 * ps) { + float cs = cos(dbox[4]); + float ss = sin(dbox[4]); + float w = dbox[2]; + float h = dbox[3]; + + float x_ctr = dbox[0]; + float y_ctr = dbox[1]; + ps[0].x = x_ctr + cs * (w / 2.0) - ss * (-h / 2.0); + ps[1].x = x_ctr + cs * (w / 2.0) - ss * (h / 2.0); + ps[2].x = x_ctr + cs * (-w / 2.0) - ss * (h / 2.0); + ps[3].x = x_ctr + cs * (-w / 2.0) - ss * (-h / 2.0); + + ps[0].y = y_ctr + ss * (w / 2.0) + cs * (-h / 2.0); + ps[1].y = y_ctr + ss * (w / 2.0) + cs * (h / 2.0); + ps[2].y = y_ctr + ss * (-w / 2.0) + cs * (h / 2.0); + ps[3].y = y_ctr + ss * (-w / 2.0) + cs * (-h / 2.0); +} + + +__device__ inline float devPolyIoU(float const * const dbbox1, float const * const dbbox2) { + + + float2 ps1[maxn], ps2[maxn]; + int n1 = 4; + int n2 = 4; + + + + + RotBox2Poly(dbbox1, ps1); + RotBox2Poly(dbbox2, ps2); + + // printf("ps1: %f, 
%f, %f, %f, %f, %f, %f, %f\n", ps1[0].x, ps1[0].y, ps1[1].x, ps1[1].y, ps1[2].x, ps1[2].y, ps1[3].x, ps1[3].y); + // printf("ps2: %f, %f, %f, %f, %f, %f, %f, %f\n", ps2[0].x, ps2[0].y, ps2[1].x, ps2[1].y, ps2[2].x, ps2[2].y, ps2[3].x, ps2[3].y); + float inter_area = intersectArea(ps1, n1, ps2, n2); + //printf("inter_area: %f \n", inter_area); + float union_area = fabs(area(ps1, n1)) + fabs(area(ps2, n2)) - inter_area; + //printf("before union_area\n"); + //printf("union_area: %f \n", union_area); + float iou = 0; + if (union_area == 0) { + iou = (inter_area + 1) / (union_area + 1); + } else { + iou = inter_area / union_area; + } + // printf("iou: %f \n", iou); + return iou; +} + +__global__ void overlaps_kernel(const int N, const int K, const float* dev_boxes, + const float * dev_query_boxes, float* dev_overlaps) { + +// const int col_start = blockIdx.y; +// const int row_start = blockIdx.x; + + // corresponding to n + int x = blockIdx.x * blockDim.x + threadIdx.x; + // corresponding to k + int y = blockIdx.y * blockDim.y + threadIdx.y; + if ((x < N) && (y < K)) { + int offset = x * K + y; + + //printf + // printf("offset: %d dbbox: %f %f %f %f %f\n", offset, (dev_boxes + x*5)[0], + // (dev_boxes + x*5)[1], (dev_boxes + x*5)[2], (dev_boxes + x*5)[3], + // (dev_boxes + x*5)[4] ); + // printf("offset: %d dbbox: %f %f %f %f %f\n", offset, (dev_query_boxes + y*5)[0], + // (dev_query_boxes + y*5)[1], (dev_query_boxes + y*5)[2], (dev_query_boxes + y*5)[3], + // (dev_query_boxes + y*5)[4] ); + + dev_overlaps[offset] = devPolyIoU(dev_boxes + x * 5, dev_query_boxes + y * 5); + } +} + + +void _set_device(int device_id) { + int current_device; + CUDA_CHECK(cudaGetDevice(¤t_device)); + if (current_device == device_id) { + return; + } + // The call to cudaSetDevice must come before any calls to Get, which + // may perform initialization using the GPU. 
+ CUDA_CHECK(cudaSetDevice(device_id)); +} + + +void _overlaps(float* overlaps,const float* boxes,const float* query_boxes, int n, int k, int device_id) { + + _set_device(device_id); + + float* overlaps_dev = NULL; + float* boxes_dev = NULL; + float* query_boxes_dev = NULL; + + + CUDA_CHECK(cudaMalloc(&boxes_dev, + n * 5 * sizeof(float))); + + + + CUDA_CHECK(cudaMemcpy(boxes_dev, + boxes, + n * 5 * sizeof(float), + cudaMemcpyHostToDevice)); + + + + CUDA_CHECK(cudaMalloc(&query_boxes_dev, + k * 5 * sizeof(float))); + + + + CUDA_CHECK(cudaMemcpy(query_boxes_dev, + query_boxes, + k * 5 * sizeof(float), + cudaMemcpyHostToDevice)); + + CUDA_CHECK(cudaMalloc(&overlaps_dev, + n * k * sizeof(float))); + + + if (true){} + + + dim3 blocks(DIVUP(n, 32), + DIVUP(k, 32)); + + dim3 threads(32, 32); + + + overlaps_kernel<<>>(n, k, + boxes_dev, + query_boxes_dev, + overlaps_dev); + + CUDA_CHECK(cudaMemcpy(overlaps, + overlaps_dev, + n * k * sizeof(float), + cudaMemcpyDeviceToHost)); + + + CUDA_CHECK(cudaFree(overlaps_dev)); + CUDA_CHECK(cudaFree(boxes_dev)); + CUDA_CHECK(cudaFree(query_boxes_dev)); + +} diff --git a/dota_kit/poly_nms_gpu/setup.py b/dota_kit/poly_nms_gpu/setup.py new file mode 100644 index 0000000..7a0b250 --- /dev/null +++ b/dota_kit/poly_nms_gpu/setup.py @@ -0,0 +1,152 @@ +""" + setup.py file for SWIG example +""" +import os +from os.path import join as pjoin +from setuptools import setup +from distutils.extension import Extension +from Cython.Distutils import build_ext +import subprocess +import numpy as np + +def find_in_path(name, path): + "Find a file in a search path" + # Adapted fom + # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/ + for dir in path.split(os.pathsep): + binpath = pjoin(dir, name) + if os.path.exists(binpath): + return os.path.abspath(binpath) + return None + + +def locate_cuda(): + """Locate the CUDA environment on the system + + Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64' + and values giving the absolute path to each directory. + + Starts by looking for the CUDAHOME env variable. If not found, everything + is based on finding 'nvcc' in the PATH. + """ + + # first check if the CUDAHOME env variable is in use + if 'CUDAHOME' in os.environ: + home = os.environ['CUDAHOME'] + nvcc = pjoin(home, 'bin', 'nvcc') + else: + # otherwise, search the PATH for NVCC + default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin') + nvcc = find_in_path('nvcc', os.environ['PATH'] + os.pathsep + default_path) + if nvcc is None: + raise EnvironmentError('The nvcc binary could not be ' + 'located in your $PATH. Either add it to your path, or set $CUDAHOME') + home = os.path.dirname(os.path.dirname(nvcc)) + + cudaconfig = {'home':home, 'nvcc':nvcc, + 'include': pjoin(home, 'include'), + 'lib64': pjoin(home, 'lib64')} + try: + for k, v in cudaconfig.iteritems(): + if not os.path.exists(v): + raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v)) + except: + for k, v in cudaconfig.items(): + if not os.path.exists(v): + raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v)) + return cudaconfig +CUDA = locate_cuda() + + +# Obtain the numpy include directory. This logic works across numpy versions. +try: + numpy_include = np.get_include() +except AttributeError: + numpy_include = np.get_numpy_include() + +def customize_compiler_for_nvcc(self): + """inject deep into distutils to customize how the dispatch + to gcc/nvcc works. 
+ + If you subclass UnixCCompiler, it's not trivial to get your subclass + injected in, and still have the right customizations (i.e. + distutils.sysconfig.customize_compiler) run on it. So instead of going + the OO route, I have this. Note, it's kindof like a wierd functional + subclassing going on.""" + + # tell the compiler it can processes .cu + self.src_extensions.append('.cu') + + # save references to the default compiler_so and _comple methods + default_compiler_so = self.compiler_so + super = self._compile + + # now redefine the _compile method. This gets executed for each + # object but distutils doesn't have the ability to change compilers + # based on source extension: we add it. + def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts): + if os.path.splitext(src)[1] == '.cu': + # use the cuda for .cu files + self.set_executable('compiler_so', CUDA['nvcc']) + # use only a subset of the extra_postargs, which are 1-1 translated + # from the extra_compile_args in the Extension class + postargs = extra_postargs['nvcc'] + else: + postargs = extra_postargs['gcc'] + + super(obj, src, ext, cc_args, postargs, pp_opts) + # reset the default compiler_so, which we might have changed for cuda + self.compiler_so = default_compiler_so + + # inject our redefined _compile method into the class + self._compile = _compile + + +# run the customize_compiler +class custom_build_ext(build_ext): + def build_extensions(self): + customize_compiler_for_nvcc(self.compiler) + build_ext.build_extensions(self) + +ext_modules = [ + Extension('poly_nms', + ['poly_nms_kernel.cu', 'poly_nms.pyx'], + library_dirs=[CUDA['lib64']], + libraries=['cudart'], + language='c++', + runtime_library_dirs=[CUDA['lib64']], + # this syntax is specific to this build system + # we're only going to use certain compiler args with nvcc and not with + # gcc the implementation of this trick is in customize_compiler() below + extra_compile_args={'gcc': ["-Wno-unused-function"], + 'nvcc': ['-arch=sm_35', + '--ptxas-options=-v', + '-c', + '--compiler-options', + "'-fPIC'"]}, + include_dirs=[numpy_include, CUDA['include']] + ), + Extension('poly_overlaps', + ['poly_overlaps_kernel.cu', 'poly_overlaps.pyx'], + library_dirs=[CUDA['lib64']], + libraries=['cudart'], + language='c++', + runtime_library_dirs=[CUDA['lib64']], + # this syntax is specific to this build system + # we're only going to use certain compiler args with nvcc and not with + # gcc the implementation of this trick is in customize_compiler() below + extra_compile_args={'gcc': ["-Wno-unused-function"], + 'nvcc': ['-arch=sm_35', + '--ptxas-options=-v', + '-c', + '--compiler-options', + "'-fPIC'"]}, + include_dirs=[numpy_include, CUDA['include']] + ), +] +setup( + name='rotation', + ext_modules=ext_modules, + # inject our custom trigger + cmdclass={'build_ext': custom_build_ext}, +) diff --git a/dota_kit/polyiou.cpp b/dota_kit/polyiou.cpp new file mode 100644 index 0000000..0a28d1a --- /dev/null +++ b/dota_kit/polyiou.cpp @@ -0,0 +1,137 @@ + +#include +#include +#include +#include +#include +using namespace std; +#define maxn 510 +const double eps=1E-8; +int sig(double d){ + return(d>eps)-(d<-eps); +} +struct Point{ + double x,y; Point(){} + Point(double x,double y):x(x),y(y){} + bool operator==(const Point&p)const{ + return sig(x-p.x)==0&&sig(y-p.y)==0; + } +}; +double cross(Point o,Point a,Point b){ //叉积 + return(a.x-o.x)*(b.y-o.y)-(b.x-o.x)*(a.y-o.y); +} +double area(Point* ps,int n){ + ps[n]=ps[0]; + double res=0; + for(int i=0;i0) pp[m++]=p[i]; + 
if(sig(cross(a,b,p[i]))!=sig(cross(a,b,p[i+1]))) + lineCross(a,b,p[i],p[i+1],pp[m++]); + } + n=0; + for(int i=0;i1&&p[n-1]==p[0])n--; +} +//---------------华丽的分隔线-----------------// +//返回三角形oab和三角形ocd的有向交面积,o是原点// +double intersectArea(Point a,Point b,Point c,Point d){ + Point o(0,0); + int s1=sig(cross(o,a,b)); + int s2=sig(cross(o,c,d)); + if(s1==0||s2==0)return 0.0;//退化,面积为0 + if(s1==-1) swap(a,b); + if(s2==-1) swap(c,d); + Point p[10]={o,a,b}; + int n=3; + polygon_cut(p,n,o,c); + polygon_cut(p,n,c,d); + polygon_cut(p,n,d,o); + double res=fabs(area(p,n)); + if(s1*s2==-1) res=-res;return res; +} +//求两多边形的交面积 +double intersectArea(Point*ps1,int n1,Point*ps2,int n2){ + if(area(ps1,n1)<0) reverse(ps1,ps1+n1); + if(area(ps2,n2)<0) reverse(ps2,ps2+n2); + ps1[n1]=ps1[0]; + ps2[n2]=ps2[0]; + double res=0; + for(int i=0;i p, vector q) { + Point ps1[maxn],ps2[maxn]; + int n1 = 4; + int n2 = 4; + for (int i = 0; i < 4; i++) { + ps1[i].x = p[i * 2]; + ps1[i].y = p[i * 2 + 1]; + + ps2[i].x = q[i * 2]; + ps2[i].y = q[i * 2 + 1]; + } + double inter_area = intersectArea(ps1, n1, ps2, n2); +// printf("inter_area: %f \n", inter_area); + double union_area = fabs(area(ps1, n1)) + fabs(area(ps2, n2)) - inter_area; +// printf("union_area: %f \n", union_area); + double iou = 0; + if (union_area == 0) { + iou = (inter_area + 1) / (union_area + 1); + } else { + iou = inter_area / union_area; + } + +// cout << "inter_area:" << inter_area << endl; +// cout << "union_area:" << union_area << endl; +// cout << "iou:" << iou << endl; + + return iou; +} +// +//int main(){ +// double p[8] = {0, 0, 1, 0, 1, 1, 0, 1}; +// double q[8] = {0.5, 0.5, 1.5, 0.5, 1.5, 1.5, 0.5, 1.5}; +// vector P(p, p + 8); +// vector Q(q, q + 8); +// iou_poly(P, Q); +// return 0; +//} + +//int main(){ +// double p[8] = {0, 0, 1, 0, 1, 1, 0, 1}; +// double q[8] = {0.5, 0.5, 1.5, 0.5, 1.5, 1.5, 0.5, 1.5}; +// iou_poly(p, q); +// return 0; +//} \ No newline at end of file diff --git a/dota_kit/polyiou.h b/dota_kit/polyiou.h new file mode 100644 index 0000000..dca2679 --- /dev/null +++ b/dota_kit/polyiou.h @@ -0,0 +1,10 @@ +// +// Created by dingjian on 18-2-3. +// + +#ifndef POLYIOU_POLYIOU_H +#define POLYIOU_POLYIOU_H + +#include +double iou_poly(std::vector p, std::vector q); +#endif //POLYIOU_POLYIOU_H diff --git a/dota_kit/polyiou.i b/dota_kit/polyiou.i new file mode 100644 index 0000000..3bf8252 --- /dev/null +++ b/dota_kit/polyiou.i @@ -0,0 +1,19 @@ +%module polyiou +%include "std_vector.i" + +namespace std { + %template(VectorDouble) vector; +}; + +%{ +#define SWIG_FILE_WITH_INIT +#include +#include +#include +#include + +#include "polyiou.h" +%} + +%include "polyiou.h" + diff --git a/dota_kit/polyiou.py b/dota_kit/polyiou.py new file mode 100644 index 0000000..00ff746 --- /dev/null +++ b/dota_kit/polyiou.py @@ -0,0 +1,276 @@ +# This file was automatically generated by SWIG (http://www.swig.org). +# Version 3.0.8 +# +# Do not make changes to this file unless you know what you are doing--modify +# the SWIG interface file instead. 
+ + + + + +from sys import version_info +if version_info >= (2, 6, 0): + def swig_import_helper(): + from os.path import dirname + import imp + fp = None + try: + fp, pathname, description = imp.find_module('_polyiou', [dirname(__file__)]) + except ImportError: + import _polyiou + return _polyiou + if fp is not None: + try: + _mod = imp.load_module('_polyiou', fp, pathname, description) + finally: + fp.close() + return _mod + _polyiou = swig_import_helper() + del swig_import_helper +else: + import _polyiou +del version_info +try: + _swig_property = property +except NameError: + pass # Python < 2.2 doesn't have 'property'. + + +def _swig_setattr_nondynamic(self, class_type, name, value, static=1): + if (name == "thisown"): + return self.this.own(value) + if (name == "this"): + if type(value).__name__ == 'SwigPyObject': + self.__dict__[name] = value + return + method = class_type.__swig_setmethods__.get(name, None) + if method: + return method(self, value) + if (not static): + if _newclass: + object.__setattr__(self, name, value) + else: + self.__dict__[name] = value + else: + raise AttributeError("You cannot add attributes to %s" % self) + + +def _swig_setattr(self, class_type, name, value): + return _swig_setattr_nondynamic(self, class_type, name, value, 0) + + +def _swig_getattr_nondynamic(self, class_type, name, static=1): + if (name == "thisown"): + return self.this.own() + method = class_type.__swig_getmethods__.get(name, None) + if method: + return method(self) + if (not static): + return object.__getattr__(self, name) + else: + raise AttributeError(name) + +def _swig_getattr(self, class_type, name): + return _swig_getattr_nondynamic(self, class_type, name, 0) + + +def _swig_repr(self): + try: + strthis = "proxy of " + self.this.__repr__() + except Exception: + strthis = "" + return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) + +try: + _object = object + _newclass = 1 +except AttributeError: + class _object: + pass + _newclass = 0 + + +class SwigPyIterator(_object): + __swig_setmethods__ = {} + __setattr__ = lambda self, name, value: _swig_setattr(self, SwigPyIterator, name, value) + __swig_getmethods__ = {} + __getattr__ = lambda self, name: _swig_getattr(self, SwigPyIterator, name) + + def __init__(self, *args, **kwargs): + raise AttributeError("No constructor defined - class is abstract") + __repr__ = _swig_repr + __swig_destroy__ = _polyiou.delete_SwigPyIterator + __del__ = lambda self: None + + def value(self): + return _polyiou.SwigPyIterator_value(self) + + def incr(self, n=1): + return _polyiou.SwigPyIterator_incr(self, n) + + def decr(self, n=1): + return _polyiou.SwigPyIterator_decr(self, n) + + def distance(self, x): + return _polyiou.SwigPyIterator_distance(self, x) + + def equal(self, x): + return _polyiou.SwigPyIterator_equal(self, x) + + def copy(self): + return _polyiou.SwigPyIterator_copy(self) + + def next(self): + return _polyiou.SwigPyIterator_next(self) + + def __next__(self): + return _polyiou.SwigPyIterator___next__(self) + + def previous(self): + return _polyiou.SwigPyIterator_previous(self) + + def advance(self, n): + return _polyiou.SwigPyIterator_advance(self, n) + + def __eq__(self, x): + return _polyiou.SwigPyIterator___eq__(self, x) + + def __ne__(self, x): + return _polyiou.SwigPyIterator___ne__(self, x) + + def __iadd__(self, n): + return _polyiou.SwigPyIterator___iadd__(self, n) + + def __isub__(self, n): + return _polyiou.SwigPyIterator___isub__(self, n) + + def __add__(self, n): + return 
_polyiou.SwigPyIterator___add__(self, n) + + def __sub__(self, *args): + return _polyiou.SwigPyIterator___sub__(self, *args) + def __iter__(self): + return self +SwigPyIterator_swigregister = _polyiou.SwigPyIterator_swigregister +SwigPyIterator_swigregister(SwigPyIterator) + +class VectorDouble(_object): + __swig_setmethods__ = {} + __setattr__ = lambda self, name, value: _swig_setattr(self, VectorDouble, name, value) + __swig_getmethods__ = {} + __getattr__ = lambda self, name: _swig_getattr(self, VectorDouble, name) + __repr__ = _swig_repr + + def iterator(self): + return _polyiou.VectorDouble_iterator(self) + def __iter__(self): + return self.iterator() + + def __nonzero__(self): + return _polyiou.VectorDouble___nonzero__(self) + + def __bool__(self): + return _polyiou.VectorDouble___bool__(self) + + def __len__(self): + return _polyiou.VectorDouble___len__(self) + + def __getslice__(self, i, j): + return _polyiou.VectorDouble___getslice__(self, i, j) + + def __setslice__(self, *args): + return _polyiou.VectorDouble___setslice__(self, *args) + + def __delslice__(self, i, j): + return _polyiou.VectorDouble___delslice__(self, i, j) + + def __delitem__(self, *args): + return _polyiou.VectorDouble___delitem__(self, *args) + + def __getitem__(self, *args): + return _polyiou.VectorDouble___getitem__(self, *args) + + def __setitem__(self, *args): + return _polyiou.VectorDouble___setitem__(self, *args) + + def pop(self): + return _polyiou.VectorDouble_pop(self) + + def append(self, x): + return _polyiou.VectorDouble_append(self, x) + + def empty(self): + return _polyiou.VectorDouble_empty(self) + + def size(self): + return _polyiou.VectorDouble_size(self) + + def swap(self, v): + return _polyiou.VectorDouble_swap(self, v) + + def begin(self): + return _polyiou.VectorDouble_begin(self) + + def end(self): + return _polyiou.VectorDouble_end(self) + + def rbegin(self): + return _polyiou.VectorDouble_rbegin(self) + + def rend(self): + return _polyiou.VectorDouble_rend(self) + + def clear(self): + return _polyiou.VectorDouble_clear(self) + + def get_allocator(self): + return _polyiou.VectorDouble_get_allocator(self) + + def pop_back(self): + return _polyiou.VectorDouble_pop_back(self) + + def erase(self, *args): + return _polyiou.VectorDouble_erase(self, *args) + + def __init__(self, *args): + this = _polyiou.new_VectorDouble(*args) + try: + self.this.append(this) + except Exception: + self.this = this + + def push_back(self, x): + return _polyiou.VectorDouble_push_back(self, x) + + def front(self): + return _polyiou.VectorDouble_front(self) + + def back(self): + return _polyiou.VectorDouble_back(self) + + def assign(self, n, x): + return _polyiou.VectorDouble_assign(self, n, x) + + def resize(self, *args): + return _polyiou.VectorDouble_resize(self, *args) + + def insert(self, *args): + return _polyiou.VectorDouble_insert(self, *args) + + def reserve(self, n): + return _polyiou.VectorDouble_reserve(self, n) + + def capacity(self): + return _polyiou.VectorDouble_capacity(self) + __swig_destroy__ = _polyiou.delete_VectorDouble + __del__ = lambda self: None +VectorDouble_swigregister = _polyiou.VectorDouble_swigregister +VectorDouble_swigregister(VectorDouble) + + +def iou_poly(p, q): + return _polyiou.iou_poly(p, q) +iou_poly = _polyiou.iou_poly +# This file is compatible with both classic and new-style classes. 
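The generated wrapper above exposes `iou_poly` and the `VectorDouble` proxy to Python. As a quick sanity check, the sketch below reuses the test quadrilaterals from the commented-out `main()` in `polyiou.cpp`; it assumes the extension has been built as described in `dota_kit/readme.md` (swig + `setup.py build_ext --inplace`) and is importable from the working directory.

```python
# Sanity check for the polyiou SWIG wrapper (a sketch; assumes the extension
# was built with `swig -c++ -python polyiou.i` followed by setup.py build_ext).
import polyiou

# Quadrilaterals given as flat [x1, y1, x2, y2, x3, y3, x4, y4] coordinate lists.
p = polyiou.VectorDouble([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
q = polyiou.VectorDouble([0.5, 0.5, 1.5, 0.5, 1.5, 1.5, 0.5, 1.5])

iou = polyiou.iou_poly(p, q)
# intersection = 0.25, union = 1 + 1 - 0.25 = 1.75, so IoU should be about 0.1429
print('IoU: %.4f' % iou)
```

For batched rotated-box overlaps on the GPU, the `poly_overlaps(boxes, query_boxes, device_id)` wrapper in `dota_kit/poly_nms_gpu` takes `(N, 5)` and `(K, 5)` float32 arrays in `(x_ctr, y_ctr, w, h, angle)` encoding (see `RotBox2Poly` in the CUDA kernel) and returns the N-by-K IoU matrix.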
+ + diff --git a/dota_kit/readme.md b/dota_kit/readme.md new file mode 100644 index 0000000..90090fd --- /dev/null +++ b/dota_kit/readme.md @@ -0,0 +1,54 @@ +The code is useful for DOTA or +ODAI. The code provides the following functions: +
- Load and visualize the data (see the usage sketch below). +- Evaluate the results. +- Split and merge the images and labels.
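A minimal sketch of the first point (loading and visualizing annotations). The class and method names below (`DOTA`, `getImgIds`, `loadAnns`, `showAnns`) are assumed from the public DOTA devkit that this folder is based on, so treat them as illustrative and check `DOTA.py` in this directory for the exact signatures; `basepath` must follow the `images`/`labelTxt` layout described under Usage.

```python
# Illustrative only: API names are assumed from the public DOTA devkit.
from DOTA import DOTA

basepath = 'path/to/dota/train'           # must contain images/ and labelTxt/
db = DOTA(basepath)

img_ids = db.getImgIds(catNms=['plane'])  # images that contain planes
anns = db.loadAnns(imgId=img_ids[0])      # oriented (quadrilateral) annotations
db.showAnns(anns, img_ids[0], 2)          # draw the polygons on the image
```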
+ +### What is DOTA? +
+DOTA is a large-scale dataset for object detection in aerial images. +It can be used to develop and evaluate object detectors in aerial images. +We will continue to update DOTA so that it grows in size and scope and reflects evolving real-world conditions. +Unlike general object detection datasets, each instance in DOTA is labeled with an arbitrary (8 d.o.f.) quadrilateral. +For details of DOTA-v1.0, please refer to our +paper. +
+ +### What is ODAI? +
+ODAI is a contest of object detection in aerial images held at ICPR'2018. It is based on DOTA-v1. The contest is ongoing now. +
+ +### Installation +1. install swig +``` + sudo apt-get install swig +``` +2. create the c++ extension for python +``` + swig -c++ -python polyiou.i + python setup.py build_ext --inplace +``` + +### Usage +1. For read and visualize data, you can use DOTA.py +2. For evaluation the result, you can refer to the "dota_evaluation_task1.py" and "dota_evaluation_task2.py" +3. For split the large image, you can refer to the "ImgSplit" +4. For merge the results detected on the patches, you can refer to the ResultMerge.py + +An example is shown in the demo. +The subdirectory of "basepath"(which is used in "DOTA.py", "ImgSplit.py") is in the structure of +``` +. +├── images +└── labelTxt +``` + diff --git a/dota_kit/setup.py b/dota_kit/setup.py new file mode 100644 index 0000000..329d11d --- /dev/null +++ b/dota_kit/setup.py @@ -0,0 +1,16 @@ +""" + setup.py file for SWIG example +""" +from distutils.core import setup, Extension +import numpy + +polyiou_module = Extension('_polyiou', + sources=['polyiou_wrap.cxx', 'polyiou.cpp'], + ) +setup(name = 'polyiou', + version = '0.1', + author = "SWIG Docs", + description = """Simple swig example from docs""", + ext_modules = [polyiou_module], + py_modules = ["polyiou"], +) diff --git a/dota_kit/split.sh b/dota_kit/split.sh new file mode 100644 index 0000000..a90c02f --- /dev/null +++ b/dota_kit/split.sh @@ -0,0 +1,7 @@ +#!/bin/bash +set - x +set - e +export PYTHONUNBUFFERED="True" + +python ImgSplit.py --dataset /data/dota_new/dota/val --dest /data/dota_new/dota/split/val --scale 1.0 + diff --git a/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_RoITransformer_trainval_rcnn_end2end.yaml b/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_RoITransformer_trainval_rcnn_end2end.yaml new file mode 100644 index 0000000..1b5c9fe --- /dev/null +++ b/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_RoITransformer_trainval_rcnn_end2end.yaml @@ -0,0 +1,215 @@ +--- +MXNET_VERSION: "mxnet" +output_path: "./output/rcnn/DOTA" +symbol: resnet_v1_101_rcnn_light_head_RoITransformer +gpus: '0,1,2,3' +CLASS_AGNOSTIC: true +SCALES: +- 1024 +- 1024 +#- 600 +#- 1000 +#TEST_SCALES: [[1024, 1024], [512, 512]] +default: + frequent: 100 + kvstore: device +network: + pretrained: "./model/pretrained_model/resnet_v1_101" + pretrained_epoch: 0 + PIXEL_MEANS: + - 103.06 + - 115.90 + - 123.15 + IMAGE_STRIDE: 0 + RCNN_FEAT_STRIDE: 16 + RPN_FEAT_STRIDE: 16 + FIXED_PARAMS: + - conv1 + - bn_conv1 + - res2 + - bn2 + - gamma + - beta + FIXED_PARAMS_SHARED: + - conv1 + - bn_conv1 + - res2 + - bn2 + - res3 + - bn3 + - res4 + - bn4 + - gamma + - beta + ANCHOR_RATIOS: + - 0.5 + - 1 + - 2 + ANCHOR_SCALES: + - 4 + - 8 + - 16 + - 32 + - 64 +# NUM_ANCHORS: 12 + NUM_ANCHORS: 15 + + RRoI_REGRESSION: true + RRoI_CLASS_AGNOSTIC: false + RCNN_FG_AGNOSTIC: true +dataset: + NUM_CLASSES: 16 + dataset: DOTA_oriented_v2 +# dataset_path: "./data/coco" + dataset_path: "data/dota_1024" + image_set: train + root_path: "data/dota_1024" + test_image_set: test + proposal: rpn +TRAIN: +# lr: 0.000125 + lr: 0.0005 +# lr: 0.001 + lr_step: '30,40' + warmup: true + warmup_lr: 0.00005 + # typically we will use 8000 warmup step for single GPU for COCO + warmup_step: 1000 + begin_epoch: 0 + end_epoch: 40 + # for debug + model_prefix: 'rcnn_dota' + # whether resume training + # set false for debug + RESUME: false + # whether flip image + FLIP: false + # whether shuffle image + # set false for debug + SHUFFLE: true + # whether use OHEM + ENABLE_OHEM: false + # size of images for each device, 2 for rcnn, 1 for rpn and e2e + 
BATCH_IMAGES: 1 + # e2e changes behavior of anchor loader and metric + END2END: true + # group images with similar aspect ratio + ASPECT_GROUPING: true + # R-CNN + # rcnn rois batch size + BATCH_ROIS: 512 + BATCH_ROIS_OHEM: 128 +# BATCH_ROIS: -1 +# BATCH_ROIS_OHEM: 512 + # rcnn rois sampling params + FG_FRACTION: 0.5 + FG_THRESH: 0.5 + BG_THRESH_HI: 0.5 + BG_THRESH_LO: 0.0 + # rcnn bounding box regression params + BBOX_REGRESSION_THRESH: 0.5 + BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + - 1.0 + + # RPN anchor loader + # rpn anchors batch size +# RPN_BATCH_SIZE: 256 + RPN_BATCH_SIZE: 512 + + # rpn anchors sampling params + RPN_FG_FRACTION: 0.5 + RPN_POSITIVE_OVERLAP: 0.7 + RPN_NEGATIVE_OVERLAP: 0.3 + RPN_CLOBBER_POSITIVES: false + # rpn bounding box regression params + RPN_BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + RPN_POSITIVE_WEIGHT: -1.0 + # used for end2end training + # RPN proposal + CXX_PROPOSAL: true + RPN_NMS_THRESH: 0.7 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 800 + RPN_MIN_SIZE: 10 + # approximate bounding box regression + BBOX_NORMALIZATION_PRECOMPUTED: true + BBOX_MEANS: + - 0.0 + - 0.0 + - 0.0 + - 0.0 + - 0.0 + BBOX_STDS: + - 0.1 + - 0.1 + - 0.2 + - 0.2 + - 0.05 + + # param for Rroi + # set n = 1 for debug + RRoI_PRE_NMS_TOP_N: 800 + RRoI_POST_NMS_TOP_N: 800 + RRoI_NMS_THRESH: 0.5 + RRoI_MIN_SIZE: 10 + RRoI_FG_THRESH: 0.5 + RRoI_SCORE_THRESH: 0.0 + + RRoI_ENABLE_OHEM: false + RRoI_BATCH_ROIS: 512 + RRoI_FG_FRACTION: 0.5 + RRoI_BATCH_ROIS_OHEM: 256 + + RRoI_BBOX_MEANS: + - 0.0 + - 0.0 + - 0.0 + - 0.0 + - 0.0 + + RRoI_BBOX_STDS: + - 0.05 + - 0.05 + - 0.1 + - 0.1 + - 0.025 + +TEST: + # use rpn to generate proposal + HAS_RPN: true + # size of images for each device + BATCH_IMAGES: 1 + # RPN proposal + CXX_PROPOSAL: true + RPN_NMS_THRESH: 0.7 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 1000 + RPN_MIN_SIZE: 10 + # RPN generate proposal + PROPOSAL_NMS_THRESH: 0.7 + PROPOSAL_PRE_NMS_TOP_N: 20000 + PROPOSAL_POST_NMS_TOP_N: 1000 + PROPOSAL_MIN_SIZE: 10 + + # param for Rroi + # set n = 1 fo debug + RRoI_PRE_NMS_TOP_N: 1000 + RRoI_POST_NMS_TOP_N: 1000 + RRoI_NMS_THRESH: 0.5 + RRoI_MIN_SIZE: 10 + + # RCNN nms + NMS: 0.3 + test_epoch: 40 +# test_epoch: 12 + max_per_image: 1000 + + save_img_path: '/home/dj/data/vis_debug_RoITransformer' diff --git a/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_deformpsroi_trainval_rcnn_end2end.yaml b/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_deformpsroi_trainval_rcnn_end2end.yaml new file mode 100644 index 0000000..f4e859f --- /dev/null +++ b/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_deformpsroi_trainval_rcnn_end2end.yaml @@ -0,0 +1,174 @@ +--- +MXNET_VERSION: "mxnet" +output_path: "./output/rcnn/DOTA" +symbol: resnet_v1_101_rcnn_light_head_deformpsroi +gpus: '0,1,2,3' +CLASS_AGNOSTIC: false +SCALES: +- 1024 +- 1024 +#- 600 +#- 1000 +#TEST_SCALES: [[1024, 1024], [512, 512]] +default: + frequent: 100 + kvstore: device +network: + pretrained: "./model/pretrained_model/resnet_v1_101" + pretrained_epoch: 0 + PIXEL_MEANS: + - 103.06 + - 115.90 + - 123.15 + IMAGE_STRIDE: 0 + RCNN_FEAT_STRIDE: 16 + RPN_FEAT_STRIDE: 16 + FIXED_PARAMS: + - conv1 + - bn_conv1 + - res2 + - bn2 + - gamma + - beta + FIXED_PARAMS_SHARED: + - conv1 + - bn_conv1 + - res2 + - bn2 + - res3 + - bn3 + - res4 + - bn4 + - gamma + - beta + ANCHOR_RATIOS: + - 0.5 + - 1 + - 2 + ANCHOR_SCALES: + - 4 + - 8 + - 16 + - 32 + - 64 +# NUM_ANCHORS: 12 + NUM_ANCHORS: 15 + RRoI_REGRESSION: false +dataset: + NUM_CLASSES: 16 + dataset: 
DOTA_oriented_v2 +# dataset_path: "./data/coco" + dataset_path: "data/dota_1024" + image_set: train + root_path: "data/dota_1024" + test_image_set: test + proposal: rpn +TRAIN: + lr: 0.0005 + lr_step: '30,40' + warmup: true + warmup_lr: 0.00005 + # typically we will use 8000 warmup step for single GPU for COCO + warmup_step: 1000 + begin_epoch: 0 + end_epoch: 40 + # for debug + model_prefix: 'rcnn_dota' + # whether resume training + RESUME: false + # whether flip image + FLIP: false + # whether shuffle image + SHUFFLE: true + # whether use OHEM + ENABLE_OHEM: false + # size of images for each device, 2 for rcnn, 1 for rpn and e2e + BATCH_IMAGES: 1 + # e2e changes behavior of anchor loader and metric + END2END: true + # group images with similar aspect ratio + ASPECT_GROUPING: true + # R-CNN + # rcnn rois batch size +# BATCH_ROIS: 256 + BATCH_ROIS: 512 + BATCH_ROIS_OHEM: 128 +# BATCH_ROIS: -1 +# BATCH_ROIS_OHEM: 512 + # rcnn rois sampling params +# FG_FRACTION: 0.25 + FG_FRACTION: 0.5 + FG_THRESH: 0.5 + BG_THRESH_HI: 0.5 + BG_THRESH_LO: 0.0 + # rcnn bounding box regression params + BBOX_REGRESSION_THRESH: 0.5 + BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + - 1.0 + + # RPN anchor loader + # rpn anchors batch size +# RPN_BATCH_SIZE: 256 + RPN_BATCH_SIZE: 512 + + # rpn anchors sampling params + RPN_FG_FRACTION: 0.5 + RPN_POSITIVE_OVERLAP: 0.7 + RPN_NEGATIVE_OVERLAP: 0.3 + RPN_CLOBBER_POSITIVES: false + # rpn bounding box regression params + RPN_BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + RPN_POSITIVE_WEIGHT: -1.0 + # used for end2end training + # RPN proposal + CXX_PROPOSAL: true + RPN_NMS_THRESH: 0.7 +# RPN_PRE_NMS_TOP_N: 12000 +# RPN_POST_NMS_TOP_N: 2000 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 800 + RPN_MIN_SIZE: 10 + # approximate bounding box regression + BBOX_NORMALIZATION_PRECOMPUTED: true + BBOX_MEANS: + - 0.0 + - 0.0 + - 0.0 + - 0.0 + - 0.0 + BBOX_STDS: + - 0.1 + - 0.1 + - 0.2 + - 0.2 + - 0.05 +TEST: + # use rpn to generate proposal + HAS_RPN: true + # size of images for each device + BATCH_IMAGES: 1 + # RPN proposal + CXX_PROPOSAL: true + RPN_NMS_THRESH: 0.7 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 1000 + RPN_MIN_SIZE: 10 + # RPN generate proposal + PROPOSAL_NMS_THRESH: 0.7 + PROPOSAL_PRE_NMS_TOP_N: 6000 + PROPOSAL_POST_NMS_TOP_N: 1000 + PROPOSAL_MIN_SIZE: 10 + # RCNN nms + NMS: 0.3 + test_epoch: 40 + max_per_image: 1000 + + save_img_path: '/home/dj/data/vis_debug_light_faster_deform_psroi' diff --git a/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_trainval_rcnn_end2end.yaml b/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_trainval_rcnn_end2end.yaml new file mode 100644 index 0000000..160709c --- /dev/null +++ b/experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_trainval_rcnn_end2end.yaml @@ -0,0 +1,175 @@ +--- +MXNET_VERSION: "mxnet" +output_path: "./output/rcnn/DOTA" +symbol: resnet_v1_101_rcnn_light_head +gpus: '0,1,2,3' +CLASS_AGNOSTIC: false +SCALES: +- 1024 +- 1024 +#- 600 +#- 1000 +#TEST_SCALES: [[1024, 1024], [512, 512]] +default: + frequent: 100 + kvstore: device +network: + pretrained: "./model/pretrained_model/resnet_v1_101" + pretrained_epoch: 0 + PIXEL_MEANS: + - 103.06 + - 115.90 + - 123.15 + IMAGE_STRIDE: 0 + RCNN_FEAT_STRIDE: 16 + RPN_FEAT_STRIDE: 16 + FIXED_PARAMS: + - conv1 + - bn_conv1 + - res2 + - bn2 + - gamma + - beta + FIXED_PARAMS_SHARED: + - conv1 + - bn_conv1 + - res2 + - bn2 + - res3 + - bn3 + - res4 + - bn4 + - gamma + - beta + ANCHOR_RATIOS: + - 0.5 + - 1 + - 2 + ANCHOR_SCALES: + - 4 + - 8 + - 16 + - 32 
+ - 64 +# NUM_ANCHORS: 12 + NUM_ANCHORS: 15 + RRoI_REGRESSION: false +dataset: + NUM_CLASSES: 16 + dataset: DOTA_oriented_v2 +# dataset_path: "./data/coco" + dataset_path: "data/dota_1024" + image_set: train + root_path: "data/dota_1024" + test_image_set: test + proposal: rpn +TRAIN: +# lr: 0.000125 + lr: 0.0005 + lr_step: '30,40' + warmup: true + warmup_lr: 0.00005 + # typically we will use 8000 warmup step for single GPU for COCO + warmup_step: 1000 + begin_epoch: 0 + end_epoch: 40 + # for debug + model_prefix: 'rcnn_dota' + # whether resume training + RESUME: false + # whether flip image + FLIP: false + # whether shuffle image + SHUFFLE: true + # whether use OHEM + ENABLE_OHEM: false + # size of images for each device, 2 for rcnn, 1 for rpn and e2e + BATCH_IMAGES: 1 + # e2e changes behavior of anchor loader and metric + END2END: true + # group images with similar aspect ratio + ASPECT_GROUPING: true + # R-CNN + # rcnn rois batch size +# BATCH_ROIS: 256 + BATCH_ROIS: 512 + BATCH_ROIS_OHEM: 128 +# BATCH_ROIS: -1 +# BATCH_ROIS_OHEM: 512 + # rcnn rois sampling params +# FG_FRACTION: 0.25 + FG_FRACTION: 0.5 + FG_THRESH: 0.5 + BG_THRESH_HI: 0.5 + BG_THRESH_LO: 0.0 + # rcnn bounding box regression params + BBOX_REGRESSION_THRESH: 0.5 + BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + - 1.0 + + # RPN anchor loader + # rpn anchors batch size +# RPN_BATCH_SIZE: 256 + RPN_BATCH_SIZE: 512 + + # rpn anchors sampling params + RPN_FG_FRACTION: 0.5 + RPN_POSITIVE_OVERLAP: 0.7 + RPN_NEGATIVE_OVERLAP: 0.3 + RPN_CLOBBER_POSITIVES: false + # rpn bounding box regression params + RPN_BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + RPN_POSITIVE_WEIGHT: -1.0 + # used for end2end training + # RPN proposal + CXX_PROPOSAL: true + RPN_NMS_THRESH: 0.7 +# RPN_PRE_NMS_TOP_N: 12000 +# RPN_POST_NMS_TOP_N: 2000 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 800 + RPN_MIN_SIZE: 10 + # approximate bounding box regression + BBOX_NORMALIZATION_PRECOMPUTED: true + BBOX_MEANS: + - 0.0 + - 0.0 + - 0.0 + - 0.0 + - 0.0 + BBOX_STDS: + - 0.1 + - 0.1 + - 0.2 + - 0.2 + - 0.05 +TEST: + # use rpn to generate proposal + HAS_RPN: true + # size of images for each device + BATCH_IMAGES: 1 + # RPN proposal + CXX_PROPOSAL: true + RPN_NMS_THRESH: 0.7 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 1000 + RPN_MIN_SIZE: 10 + # RPN generate proposal + PROPOSAL_NMS_THRESH: 0.7 + PROPOSAL_PRE_NMS_TOP_N: 6000 + PROPOSAL_POST_NMS_TOP_N: 1000 + PROPOSAL_MIN_SIZE: 10 + # RCNN nms + NMS: 0.3 + test_epoch: 40 + max_per_image: 1000 + + save_img_path: '/home/dj/data/vis_debug_light_faster_12' diff --git a/experiments/faster_rcnn/rcnn_end2end_train_test.py b/experiments/faster_rcnn/rcnn_end2end_train_test.py new file mode 100644 index 0000000..5070f77 --- /dev/null +++ b/experiments/faster_rcnn/rcnn_end2end_train_test.py @@ -0,0 +1,26 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +#os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'faster_rcnn')) + +import train_end2end +import test + +if __name__ == "__main__": + train_end2end.main() + test.main() + + + + diff --git 
a/experiments/faster_rcnn/rcnn_end2end_train_test_RoITransformer.py b/experiments/faster_rcnn/rcnn_end2end_train_test_RoITransformer.py new file mode 100644 index 0000000..671dabf --- /dev/null +++ b/experiments/faster_rcnn/rcnn_end2end_train_test_RoITransformer.py @@ -0,0 +1,28 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +# os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'faster_rcnn')) + +import train_end2end_poly_RoITransformer +import test_poly + +if __name__ == "__main__": + train_end2end_poly_RoITransformer.main() + + test_poly.main() + + + + + diff --git a/experiments/faster_rcnn/rcnn_end2end_train_test_poly.py b/experiments/faster_rcnn/rcnn_end2end_train_test_poly.py new file mode 100644 index 0000000..eecaca9 --- /dev/null +++ b/experiments/faster_rcnn/rcnn_end2end_train_test_poly.py @@ -0,0 +1,28 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +# os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'faster_rcnn')) + +import train_end2end_poly +import test_poly + +if __name__ == "__main__": + train_end2end_poly.main() + + test_poly.main() + + + + + diff --git a/experiments/faster_rcnn/rcnn_test.py b/experiments/faster_rcnn/rcnn_test.py new file mode 100644 index 0000000..1df4715 --- /dev/null +++ b/experiments/faster_rcnn/rcnn_test.py @@ -0,0 +1,19 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'faster_rcnn')) + +import test + +if __name__ == "__main__": + test.main() diff --git a/experiments/faster_rcnn/rcnn_test_poly.py b/experiments/faster_rcnn/rcnn_test_poly.py new file mode 100644 index 0000000..b0997c9 --- /dev/null +++ b/experiments/faster_rcnn/rcnn_test_poly.py @@ -0,0 +1,19 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, 
os.path.join(this_dir, '..', '..', 'faster_rcnn')) + +import test_poly + +if __name__ == "__main__": + test_poly.main() diff --git a/experiments/faster_rcnn/rcnn_train_test.py b/experiments/faster_rcnn/rcnn_train_test.py new file mode 100644 index 0000000..5d5d7eb --- /dev/null +++ b/experiments/faster_rcnn/rcnn_train_test.py @@ -0,0 +1,25 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'faster_rcnn')) + +import train_rcnn +import test + +if __name__ == "__main__": + train_rcnn.main() + test.main() + + + + diff --git a/experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_RoITransformer_trainval_fpn_end2end.yaml b/experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_RoITransformer_trainval_fpn_end2end.yaml new file mode 100644 index 0000000..639d245 --- /dev/null +++ b/experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_RoITransformer_trainval_fpn_end2end.yaml @@ -0,0 +1,224 @@ +--- +MXNET_VERSION: "mxnet" +output_path: "./output/fpn/DOTA" +#symbol: resnet_v1_101_fpn_dcn_rcnn +symbol: resnet_v1_101_fpn_rcnn_rotbox_light_head_RoITransformer +gpus: '1,3' +CLASS_AGNOSTIC: true +SCALES: +- 1024 +- 1024 +#SCALES: [[1024, 1024], [659,1280]] + +TEST_SCALES: [[1024, 1024]] # single scale testing,[512,512] +#TEST_SCALES: [[480, 800], [576, 900], [688, 1100], [800, 1200], [1200, 1600], [1400, 2000]] # multi-scale testing +default: + frequent: 100 + kvstore: device +network: + pretrained: "./model/pretrained_model/resnet_v1_101" + pretrained_epoch: 0 + PIXEL_MEANS: + - 103.06 + - 115.90 + - 123.15 + IMAGE_STRIDE: 32 + RCNN_FEAT_STRIDE: 16 + RPN_FEAT_STRIDE: + - 4 + - 8 + - 16 + - 32 + - 64 + FIXED_PARAMS: + - conv1 + - bn_conv1 + - res2 + - bn2 + - gamma + - beta + FIXED_PARAMS_SHARED: + - conv1 + - bn_conv1 + - res2 + - bn2 + - res3 + - bn3 + - res4 + - bn4 + - gamma + - beta + ANCHOR_RATIOS: + - 0.5 + - 1 + - 2 + ANCHOR_SCALES: + - 8 + NUM_ANCHORS: 3 + BOXENCODING: rotbox + # TODO: set it false for debug + RRoI_REGRESSION: true + RRoI_CLASS_AGNOSTIC: false + RCNN_FG_AGNOSTIC: true +dataset: + NUM_CLASSES: 16 + dataset: DOTA_oriented_v2 +# dataset_path: "./data/coco" + dataset_path: "data/dota_1024" + image_set: train + root_path: "data/dota_1024" + + test_image_set: test + proposal: rpn +TRAIN: + lr: 0.005 + warmup_lr: 0.001 + warmup_step: 250 + warmup: true + # lr: 0.000001 + lr_step: '5,8' + wd: 0.0001 + begin_epoch: 0 + end_epoch: 10 + model_prefix: 'fpn_DOTA_oriented' + # whether resume training + RESUME: false + # whether flip image + FLIP: false + # whether shuffle image + SHUFFLE: true + # whether use OHEM + ENABLE_OHEM: false +# ENABLE_OHEM: true + # size of images for each device, 2 for rcnn, 1 for rpn and e2e + BATCH_IMAGES: 1 + # e2e changes behavior of anchor loader and metric + END2END: true + # group images with similar aspect ratio + ASPECT_GROUPING: true + # R-CNN + # rcnn rois batch size +# BATCH_ROIS: -1 + BATCH_ROIS: 512 +# BATCH_ROIS_OHEM: 512 + # rcnn rois sampling params + FG_FRACTION: 0.5 + FG_THRESH: 0.5 + BG_THRESH_HI: 0.5 + BG_THRESH_LO: 0.0 + # rcnn bounding box regression params + 
BBOX_REGRESSION_THRESH: 0.5 + BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + - 1.0 + + # RPN anchor loader + # rpn anchors batch size +# RPN_BATCH_SIZE: 256 + RPN_BATCH_SIZE: 512 + + # rpn anchors sampling params + RPN_FG_FRACTION: 0.5 + RPN_POSITIVE_OVERLAP: 0.7 + RPN_NEGATIVE_OVERLAP: 0.3 + RPN_CLOBBER_POSITIVES: false + # rpn bounding box regression params + RPN_BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + RPN_POSITIVE_WEIGHT: -1.0 + # used for end2end training + # RPN proposal + CXX_PROPOSAL: false + RPN_NMS_THRESH: 0.7 +# RPN_PRE_NMS_TOP_N: 12000 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 800 + RPN_MIN_SIZE: 10 + # approximate bounding box regression + BBOX_NORMALIZATION_PRECOMPUTED: true + BBOX_MEANS: + - 0.0 + - 0.0 + - 0.0 + - 0.0 + - 0.0 + + BBOX_STDS: + - 0.1 + - 0.1 + - 0.2 + - 0.2 + - 0.05 + + # param for Rroi + RRoI_PRE_NMS_TOP_N: 800 + RRoI_POST_NMS_TOP_N: 800 + RRoI_NMS_THRESH: 0.5 + RRoI_MIN_SIZE: 10 + RRoI_FG_THRESH: 0.5 + RRoI_SCORE_THRESH: 0.0 + + RRoI_ENABLE_OHEM: false + RRoI_BATCH_ROIS: 512 + RRoI_FG_FRACTION: 0.5 + RRoI_BATCH_ROIS_OHEM: 256 + + RRoI_BBOX_MEANS: + - 0.0 + - 0.0 + - 0.0 + - 0.0 + - 0.0 + + RRoI_BBOX_STDS: + - 0.05 + - 0.05 + - 0.1 + - 0.1 + - 0.025 + + +TEST: + # use rpn to generate proposal + HAS_RPN: true + # size of images for each device + BATCH_IMAGES: 1 + # RPN proposal + CXX_PROPOSAL: false + RPN_NMS_THRESH: 0.7 +# RPN_PRE_NMS_TOP_N: 12000 +# RPN_POST_NMS_TOP_N: 2000 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 1000 + RPN_MIN_SIZE: 10 + # RPN generate proposal + PROPOSAL_NMS_THRESH: 0.7 +# PROPOSAL_PRE_NMS_TOP_N: 20000 +# PROPOSAL_POST_NMS_TOP_N: 2000 + PROPOSAL_PRE_NMS_TOP_N: 20000 + PROPOSAL_POST_NMS_TOP_N: 1000 + PROPOSAL_MIN_SIZE: 10 + + # param for Rroi + RRoI_PRE_NMS_TOP_N: 1000 + RRoI_POST_NMS_TOP_N: 1000 + RRoI_NMS_THRESH: 0.7 + RRoI_MIN_SIZE: 10 + + # RCNN nms + NMS: 0.3 + USE_SOFTNMS: true + SOFTNMS_THRESH: 0.6 + test_epoch: 8 + max_per_image: 2000 + # soft nms + USE_SOFTNMS: true + SOFTNMS_THRESH: 0.6 + + save_img_path: '/home/dj/data/vis_debug_Rroi_fpn' \ No newline at end of file diff --git a/experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_trainval_fpn_end2end.yaml b/experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_trainval_fpn_end2end.yaml new file mode 100644 index 0000000..8113ab3 --- /dev/null +++ b/experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_trainval_fpn_end2end.yaml @@ -0,0 +1,185 @@ +--- +MXNET_VERSION: "mxnet" +output_path: "./output/fpn/DOTA" +#symbol: resnet_v1_101_fpn_dcn_rcnn +symbol: resnet_v1_101_fpn_rcnn_rotbox_light_head +gpus: '0,1,2,3' +CLASS_AGNOSTIC: true +SCALES: +- 1024 +- 1024 +#SCALES: [[1024, 1024], [659,1280]] + +TEST_SCALES: [[1024, 1024]] # single scale testing,[512,512] +#TEST_SCALES: [[480, 800], [576, 900], [688, 1100], [800, 1200], [1200, 1600], [1400, 2000]] # multi-scale testing +default: + frequent: 100 + kvstore: device +network: + pretrained: "./model/pretrained_model/resnet_v1_101" + pretrained_epoch: 0 + PIXEL_MEANS: + - 103.06 + - 115.90 + - 123.15 + IMAGE_STRIDE: 32 + RCNN_FEAT_STRIDE: 16 + RPN_FEAT_STRIDE: + - 4 + - 8 + - 16 + - 32 + - 64 + FIXED_PARAMS: + - conv1 + - bn_conv1 + - res2 + - bn2 + - gamma + - beta + FIXED_PARAMS_SHARED: + - conv1 + - bn_conv1 + - res2 + - bn2 + - res3 + - bn3 + - res4 + - bn4 + - gamma + - beta + ANCHOR_RATIOS: + - 0.5 + - 1 + - 2 + ANCHOR_SCALES: + - 8 + NUM_ANCHORS: 3 + BOXENCODING: rotbox + RPN_THIN_FEATRUE: true + RRoI_REGRESSION: false + POOLING_MODE: alignave +dataset: + NUM_CLASSES: 16 + dataset: 
DOTA_oriented_v2 + dataset_path: "data/dota_1024" + + image_set: train + root_path: "data/dota_1024" + test_image_set: test + proposal: rpn +TRAIN: + lr: 0.005 + warmup_lr: 0.001 + warmup_step: 250 + warmup: true + # lr: 0.000001 + lr_step: '5,8' + wd: 0.0001 + begin_epoch: 0 + end_epoch: 10 + model_prefix: 'fpn_DOTA_oriented' + # whether resume training + RESUME: false + # whether flip image + FLIP: false + # whether shuffle image + SHUFFLE: true + # whether use OHEM +# ENABLE_OHEM: true + ENABLE_OHEM: false + # size of images for each device, 2 for rcnn, 1 for rpn and e2e + BATCH_IMAGES: 1 + # e2e changes behavior of anchor loader and metric + END2END: true + # group images with similar aspect ratio + ASPECT_GROUPING: true + # R-CNN + # rcnn rois batch size +# BATCH_ROIS: -1 + BATCH_ROIS: 512 + BATCH_ROIS_OHEM: 512 + # rcnn rois sampling params + FG_FRACTION: 0.35 + FG_THRESH: 0.5 + BG_THRESH_HI: 0.5 + BG_THRESH_LO: 0.0 + # rcnn bounding box regression params + BBOX_REGRESSION_THRESH: 0.5 + BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + - 1.0 + # RPN anchor loader + # rpn anchors batch size + RPN_BATCH_SIZE: 512 + # rpn anchors sampling params + RPN_FG_FRACTION: 0.5 + RPN_POSITIVE_OVERLAP: 0.7 + RPN_NEGATIVE_OVERLAP: 0.3 + RPN_CLOBBER_POSITIVES: false + # rpn bounding box regression params + RPN_BBOX_WEIGHTS: + - 1.0 + - 1.0 + - 1.0 + - 1.0 + RPN_POSITIVE_WEIGHT: -1.0 + # used for end2end training + # RPN proposal + CXX_PROPOSAL: false + RPN_NMS_THRESH: 0.7 +# RPN_PRE_NMS_TOP_N: 12000 + RPN_PRE_NMS_TOP_N: 6000 +# RPN_POST_NMS_TOP_N: 2000 + RPN_POST_NMS_TOP_N: 600 + RPN_MIN_SIZE: 10 + # approximate bounding box regression +# BBOX_NORMALIZATION_PRECOMPUTED: false + BBOX_NORMALIZATION_PRECOMPUTED: true + BBOX_MEANS: + - 0.0 + - 0.0 + - 0.0 + - 0.0 + - 0.0 + + BBOX_STDS: + - 0.1 + - 0.1 + - 0.2 + - 0.2 + - 0.05 +TEST: + # use rpn to generate proposal + HAS_RPN: true + # size of images for each device + BATCH_IMAGES: 1 + # RPN proposal + CXX_PROPOSAL: false + RPN_NMS_THRESH: 0.7 +# RPN_PRE_NMS_TOP_N: 12000 +# RPN_POST_NMS_TOP_N: 2000 + RPN_PRE_NMS_TOP_N: 6000 + RPN_POST_NMS_TOP_N: 1000 + RPN_MIN_SIZE: 10 + # RPN generate proposal + PROPOSAL_NMS_THRESH: 0.7 +# PROPOSAL_PRE_NMS_TOP_N: 20000 +# PROPOSAL_POST_NMS_TOP_N: 2000 + PROPOSAL_PRE_NMS_TOP_N: 6000 + PROPOSAL_POST_NMS_TOP_N: 1000 + PROPOSAL_MIN_SIZE: 10 + # RCNN nms + NMS: 0.3 + USE_SOFTNMS: true + SOFTNMS_THRESH: 0.6 + test_epoch: 8 + max_per_image: 2000 + # soft nms + USE_SOFTNMS: true + SOFTNMS_THRESH: 0.6 + + save_img_path: '/home/dj/data/vis_debug3' \ No newline at end of file diff --git a/experiments/fpn/fpn_end2end_train_test.py b/experiments/fpn/fpn_end2end_train_test.py new file mode 100644 index 0000000..a4cc70b --- /dev/null +++ b/experiments/fpn/fpn_end2end_train_test.py @@ -0,0 +1,28 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +# os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'fpn')) + +import train_end2end +import test + +if __name__ == "__main__": + train_end2end.main() + + + test.main() + + + + + diff --git 
a/experiments/fpn/fpn_end2end_train_test_RoITransformer.py b/experiments/fpn/fpn_end2end_train_test_RoITransformer.py new file mode 100644 index 0000000..19f6400 --- /dev/null +++ b/experiments/fpn/fpn_end2end_train_test_RoITransformer.py @@ -0,0 +1,27 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +# os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'fpn')) + +import train_end2end_rotbox_RoITransformer +import test_poly + +if __name__ == "__main__": + # train_end2end_poly.main() + train_end2end_rotbox_RoITransformer.main() + test_poly.main() + + + + + diff --git a/experiments/fpn/fpn_end2end_train_test_poly.py b/experiments/fpn/fpn_end2end_train_test_poly.py new file mode 100644 index 0000000..80f19e5 --- /dev/null +++ b/experiments/fpn/fpn_end2end_train_test_poly.py @@ -0,0 +1,28 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +# os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'fpn')) + +import train_end2end_poly + +import test_poly + +if __name__ == "__main__": + train_end2end_poly.main() + + test_poly.main() + + + + + diff --git a/experiments/fpn/fpn_test.py b/experiments/fpn/fpn_test.py new file mode 100644 index 0000000..ab4a737 --- /dev/null +++ b/experiments/fpn/fpn_test.py @@ -0,0 +1,20 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +# os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, os.path.join(this_dir, '..', '..', 'fpn')) + +import test + +if __name__ == "__main__": + test.main() diff --git a/experiments/fpn/fpn_test_poly.py b/experiments/fpn/fpn_test_poly.py new file mode 100644 index 0000000..28de8d6 --- /dev/null +++ b/experiments/fpn/fpn_test_poly.py @@ -0,0 +1,19 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import os +import sys +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +# os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine' +this_dir = os.path.dirname(__file__) +sys.path.insert(0, 
os.path.join(this_dir, '..', '..', 'fpn')) + +import test_poly +if __name__ == "__main__": + test_poly.main() diff --git a/faster_rcnn/__init__.py b/faster_rcnn/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/faster_rcnn/_init_paths.py b/faster_rcnn/_init_paths.py new file mode 100644 index 0000000..5bbe057 --- /dev/null +++ b/faster_rcnn/_init_paths.py @@ -0,0 +1,11 @@ +import os.path as osp +import sys + +def add_path(path): + if path not in sys.path: + sys.path.insert(0, path) + +this_dir = osp.dirname(__file__) + +lib_path = osp.join(this_dir, '..', 'lib') +add_path(lib_path) diff --git a/faster_rcnn/config/__init__.py b/faster_rcnn/config/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/faster_rcnn/config/config.py b/faster_rcnn/config/config.py new file mode 100644 index 0000000..7da1789 --- /dev/null +++ b/faster_rcnn/config/config.py @@ -0,0 +1,193 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong, Bin Xiao +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import yaml +import numpy as np +from easydict import EasyDict as edict + +config = edict() + +config.MXNET_VERSION = '' +config.output_path = '' +config.symbol = '' +config.gpus = '' +config.CLASS_AGNOSTIC = True +# config.SCALES = [[(659, 1280)] ,[(1024,1024)]] # first is scale (the shorter side); second is max size +config.SCALES = [(659, 1280)] +# default training +config.default = edict() +config.default.frequent = 20 +config.default.kvstore = 'device' + +# network related params +config.network = edict() +config.network.pretrained = '' +config.network.pretrained_epoch = 0 +config.network.PIXEL_MEANS = np.array([0, 0, 0]) +config.network.IMAGE_STRIDE = 0 +config.network.RPN_FEAT_STRIDE = 16 +config.network.RCNN_FEAT_STRIDE = 16 +config.network.FIXED_PARAMS = ['gamma', 'beta'] +config.network.FIXED_PARAMS_SHARED = ['gamma', 'beta'] +config.network.ANCHOR_SCALES = (8, 16, 32) +config.network.ANCHOR_RATIOS = (0.5, 1, 2) +config.network.NUM_ANCHORS = len(config.network.ANCHOR_SCALES) * len(config.network.ANCHOR_RATIOS) + +# dataset related params +config.dataset = edict() +config.dataset.dataset = 'PascalVOC' +config.dataset.image_set = '2007_trainval' +config.dataset.test_image_set = '2007_test' +config.dataset.root_path = './data' +config.dataset.dataset_path = './data/VOCdevkit' +config.dataset.NUM_CLASSES = 21 + + +config.TRAIN = edict() + +config.TRAIN.lr = 0 +config.TRAIN.lr_step = '' +config.TRAIN.lr_factor = 0.1 +config.TRAIN.warmup = False +config.TRAIN.warmup_lr = 0 +config.TRAIN.warmup_step = 0 +config.TRAIN.momentum = 0.9 +config.TRAIN.wd = 0.0005 +config.TRAIN.begin_epoch = 0 +config.TRAIN.end_epoch = 0 +config.TRAIN.model_prefix = '' + +config.TRAIN.ALTERNATE = edict() +config.TRAIN.ALTERNATE.RPN_BATCH_IMAGES = 0 +config.TRAIN.ALTERNATE.RCNN_BATCH_IMAGES = 0 +config.TRAIN.ALTERNATE.rpn1_lr = 0 +config.TRAIN.ALTERNATE.rpn1_lr_step = '' # recommend '2' +config.TRAIN.ALTERNATE.rpn1_epoch = 0 # recommend 3 +config.TRAIN.ALTERNATE.rfcn1_lr = 0 +config.TRAIN.ALTERNATE.rfcn1_lr_step = '' # recommend '5' +config.TRAIN.ALTERNATE.rfcn1_epoch = 0 # recommend 8 +config.TRAIN.ALTERNATE.rpn2_lr = 0 
+config.TRAIN.ALTERNATE.rpn2_lr_step = '' # recommend '2' +config.TRAIN.ALTERNATE.rpn2_epoch = 0 # recommend 3 +config.TRAIN.ALTERNATE.rfcn2_lr = 0 +config.TRAIN.ALTERNATE.rfcn2_lr_step = '' # recommend '5' +config.TRAIN.ALTERNATE.rfcn2_epoch = 0 # recommend 8 +# optional +config.TRAIN.ALTERNATE.rpn3_lr = 0 +config.TRAIN.ALTERNATE.rpn3_lr_step = '' # recommend '2' +config.TRAIN.ALTERNATE.rpn3_epoch = 0 # recommend 3 + +# whether resume training +config.TRAIN.RESUME = False +# whether flip image +config.TRAIN.FLIP = True +# whether shuffle image +config.TRAIN.SHUFFLE = True +# whether use OHEM +config.TRAIN.ENABLE_OHEM = False +# size of images for each device, 2 for rcnn, 1 for rpn and e2e +config.TRAIN.BATCH_IMAGES = 2 +# e2e changes behavior of anchor loader and metric +config.TRAIN.END2END = False +# group images with similar aspect ratio +config.TRAIN.ASPECT_GROUPING = True + +# R-CNN +# rcnn rois batch size +config.TRAIN.BATCH_ROIS = 128 +config.TRAIN.BATCH_ROIS_OHEM = 128 +# rcnn rois sampling params +config.TRAIN.FG_FRACTION = 0.25 +config.TRAIN.FG_THRESH = 0.5 +config.TRAIN.BG_THRESH_HI = 0.5 +config.TRAIN.BG_THRESH_LO = 0.0 +# rcnn bounding box regression params +config.TRAIN.BBOX_REGRESSION_THRESH = 0.5 +config.TRAIN.BBOX_WEIGHTS = np.array([1.0, 1.0, 1.0, 1.0]) + +# RPN anchor loader +# rpn anchors batch size +config.TRAIN.RPN_BATCH_SIZE = 256 +# rpn anchors sampling params +config.TRAIN.RPN_FG_FRACTION = 0.5 +config.TRAIN.RPN_POSITIVE_OVERLAP = 0.7 +config.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3 +config.TRAIN.RPN_CLOBBER_POSITIVES = False +# rpn bounding box regression params +config.TRAIN.RPN_BBOX_WEIGHTS = (1.0, 1.0, 1.0, 1.0) +config.TRAIN.RPN_POSITIVE_WEIGHT = -1.0 + +# used for end2end training +# RPN proposal +config.TRAIN.CXX_PROPOSAL = True +config.TRAIN.RPN_NMS_THRESH = 0.7 +config.TRAIN.RPN_PRE_NMS_TOP_N = 12000 +config.TRAIN.RPN_POST_NMS_TOP_N = 2000 +config.TRAIN.RPN_MIN_SIZE = config.network.RPN_FEAT_STRIDE +# approximate bounding box regression +config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED = False +config.TRAIN.BBOX_MEANS = (0.0, 0.0, 0.0, 0.0) +config.TRAIN.BBOX_STDS = (0.1, 0.1, 0.2, 0.2) + +config.TEST = edict() + +# R-CNN testing +# use rpn to generate proposal +config.TEST.HAS_RPN = False +# size of images for each device +config.TEST.BATCH_IMAGES = 1 + +# RPN proposal +config.TEST.CXX_PROPOSAL = True +config.TEST.RPN_NMS_THRESH = 0.7 +config.TEST.RPN_PRE_NMS_TOP_N = 6000 +config.TEST.RPN_POST_NMS_TOP_N = 300 +config.TEST.RPN_MIN_SIZE = config.network.RPN_FEAT_STRIDE + +# RPN generate proposal +config.TEST.PROPOSAL_NMS_THRESH = 0.7 +config.TEST.PROPOSAL_PRE_NMS_TOP_N = 20000 +config.TEST.PROPOSAL_POST_NMS_TOP_N = 2000 +config.TEST.PROPOSAL_MIN_SIZE = config.network.RPN_FEAT_STRIDE + +# RCNN nms +config.TEST.NMS = 0.3 + +config.TEST.max_per_image = 300 + +# Test Model Epoch +config.TEST.test_epoch = 0 + + +def update_config(config_file): + exp_config = None + with open(config_file) as f: + exp_config = edict(yaml.load(f)) + for k, v in exp_config.items(): + if k in config: + if isinstance(v, dict): + if k == 'TRAIN': + if 'BBOX_WEIGHTS' in v: + v['BBOX_WEIGHTS'] = np.array(v['BBOX_WEIGHTS']) + elif k == 'network': + if 'PIXEL_MEANS' in v: + v['PIXEL_MEANS'] = np.array(v['PIXEL_MEANS']) + for vk, vv in v.items(): + config[k][vk] = vv + else: + if k == 'SCALES': + config[k][0] = (tuple(v)) + else: + config[k] = v + else: + raise ValueError("key must exist in config.py") diff --git a/faster_rcnn/core/DataParallelExecutorGroup.py 
b/faster_rcnn/core/DataParallelExecutorGroup.py new file mode 100644 index 0000000..69fdd5c --- /dev/null +++ b/faster_rcnn/core/DataParallelExecutorGroup.py @@ -0,0 +1,596 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import logging +import numpy as np + +from mxnet import context as ctx +from mxnet import ndarray as nd +from mxnet.io import DataDesc +from mxnet.executor_manager import _split_input_slice + + + +def _load_general(data, targets, major_axis): + """Load a list of arrays into a list of arrays specified by slices""" + for d_src, d_targets in zip(data, targets): + if isinstance(d_targets, nd.NDArray): + d_src.copyto(d_targets) + elif isinstance(d_src, (list, tuple)): + for src, dst in zip(d_src, d_targets): + src.copyto(dst) + else: + raise NotImplementedError + + + +def _load_data(batch, targets, major_axis): + """Load data into sliced arrays""" + _load_general(batch.data, targets, major_axis) + + +def _load_label(batch, targets, major_axis): + """Load label into sliced arrays""" + _load_general(batch.label, targets, major_axis) + + +def _merge_multi_context(outputs, major_axis): + """Merge outputs that lives on multiple context into one, so that they look + like living on one context. + """ + rets = [] + for tensors, axis in zip(outputs, major_axis): + if axis >= 0: + rets.append(nd.concatenate(tensors, axis=axis, always_copy=False)) + else: + # negative axis means the there is no batch_size axis, and all the + # results should be the same on each device. We simply take the + # first one, without checking they are actually the same + rets.append(tensors[0]) + return rets + + + +class DataParallelExecutorGroup(object): + """DataParallelExecutorGroup is a group of executors that lives on a group of devices. + This is a helper class used to implement data parallelization. Each mini-batch will + be split and run on the devices. + + Parameters + ---------- + symbol : Symbol + The common symbolic computation graph for all executors. + contexts : list + A list of contexts. + workload : list + If not `None`, could be a list of numbers that specify the workload to be assigned + to different context. Larger number indicate heavier workload. + data_shapes : list + Should be a list of (name, shape) tuples, for the shapes of data. Note the order is + important and should be the same as the order that the `DataIter` provide the data. + label_shapes : list + Should be a list of (name, shape) tuples, for the shapes of label. Note the order is + important and should be the same as the order that the `DataIter` provide the label. + param_names : list + A list of strings, indicating the names of parameters (e.g. weights, filters, etc.) + in the computation graph. + for_training : bool + Indicate whether the executors should be bind for training. When not doing training, + the memory for gradients will not be allocated. + inputs_need_grad : bool + Indicate whether the gradients for the input data should be computed. This is currently + not used. It will be useful for implementing composition of modules. + shared_group : DataParallelExecutorGroup + Default is `None`. 
This is used in bucketing. When not `None`, it should be a executor + group corresponding to a different bucket. In other words, it will correspond to a different + symbol but with the same set of parameters (e.g. unrolled RNNs with different lengths). + In this case, many memory will be shared. + logger : Logger + Default is `logging`. + fixed_param_names: list of str + Indicate parameters to be fixed during training. Parameters in this list will not allocate + space for gradient, nor do gradient calculation. + grad_req : str, list of str, dict of str to str + Requirement for gradient accumulation. Can be 'write', 'add', or 'null' + (default to 'write'). + Can be specified globally (str) or for each argument (list, dict). + """ + def __init__(self, symbol, contexts, workload, data_shapes, label_shapes, param_names, + for_training, inputs_need_grad, shared_group=None, logger=logging, + fixed_param_names=None, grad_req='write', state_names=None): + self.param_names = param_names + self.arg_names = symbol.list_arguments() + self.aux_names = symbol.list_auxiliary_states() + + self.symbol = symbol + self.contexts = contexts + self.workload = workload + + self.for_training = for_training + self.inputs_need_grad = inputs_need_grad + + self.logger = logger + #In the future we should have a better way to profile memory per device (haibin) + # self._total_exec_bytes = 0 + self.fixed_param_names = fixed_param_names + if self.fixed_param_names is None: + self.fixed_param_names = [] + + self.state_names = state_names + if self.state_names is None: + self.state_names = [] + + if not for_training: + grad_req = 'null' + + # data_shapes = [x if isinstance(x, DataDesc) else DataDesc(*x) for x in data_shapes] + # if label_shapes is not None: + # label_shapes = [x if isinstance(x, DataDesc) else DataDesc(*x) for x in label_shapes] + + data_names = [x.name for x in data_shapes[0]] + + if isinstance(grad_req, str): + self.grad_req = {} + for k in self.arg_names: + if k in self.param_names: + self.grad_req[k] = 'null' if k in self.fixed_param_names else grad_req + elif k in data_names: + self.grad_req[k] = grad_req if self.inputs_need_grad else 'null' + else: + self.grad_req[k] = 'null' + elif isinstance(grad_req, (list, tuple)): + assert len(grad_req) == len(self.arg_names) + self.grad_req = dict(zip(self.arg_names, grad_req)) + elif isinstance(grad_req, dict): + self.grad_req = {} + for k in self.arg_names: + if k in self.param_names: + self.grad_req[k] = 'null' if k in self.fixed_param_names else 'write' + elif k in data_names: + self.grad_req[k] = 'write' if self.inputs_need_grad else 'null' + else: + self.grad_req[k] = 'null' + self.grad_req.update(grad_req) + else: + raise ValueError("grad_req must be one of str, list, tuple, or dict.") + + if shared_group is not None: + self.shared_data_arrays = shared_group.shared_data_arrays + else: + self.shared_data_arrays = [{} for _ in contexts] + + # initialize some instance variables + self.batch_size = len(data_shapes) + self.slices = None + self.execs = [] + self._default_execs = None + self.data_arrays = None + self.label_arrays = None + self.param_arrays = None + self.state_arrays = None + self.grad_arrays = None + self.aux_arrays = None + self.input_grad_arrays = None + + self.data_shapes = None + self.label_shapes = None + self.data_layouts = None + self.label_layouts = None + self.output_layouts = [DataDesc.get_batch_axis(self.symbol[name].attr('__layout__')) + for name in self.symbol.list_outputs()] + self.bind_exec(data_shapes, label_shapes, 
shared_group) + + def decide_slices(self, data_shapes): + """Decide the slices for each context according to the workload. + + Parameters + ---------- + data_shapes : list + list of (name, shape) specifying the shapes for the input data or label. + """ + assert len(data_shapes) > 0 + major_axis = [DataDesc.get_batch_axis(x.layout) for x in data_shapes] + + for (name, shape), axis in zip(data_shapes, major_axis): + if axis == -1: + continue + + batch_size = shape[axis] + if self.batch_size is not None: + assert batch_size == self.batch_size, ("all data must have the same batch size: " + + ("batch_size = %d, but " % self.batch_size) + + ("%s has shape %s" % (name, shape))) + else: + self.batch_size = batch_size + self.slices = _split_input_slice(self.batch_size, self.workload) + + return major_axis + + def _collect_arrays(self): + """Collect internal arrays from executors.""" + # convenient data structures + self.data_arrays = [[e.arg_dict[name] for name, _ in self.data_shapes[0]] for e in self.execs] + + self.state_arrays = [[e.arg_dict[name] for e in self.execs] + for name in self.state_names] + + if self.label_shapes is not None: + self.label_arrays = [[e.arg_dict[name] for name, _ in self.label_shapes[0]] for e in self.execs] + else: + self.label_arrays = None + + self.param_arrays = [[exec_.arg_arrays[i] for exec_ in self.execs] + for i, name in enumerate(self.arg_names) + if name in self.param_names] + if self.for_training: + self.grad_arrays = [[exec_.grad_arrays[i] for exec_ in self.execs] + for i, name in enumerate(self.arg_names) + if name in self.param_names] + else: + self.grad_arrays = None + + data_names = [x[0] for x in self.data_shapes] + if self.inputs_need_grad: + self.input_grad_arrays = [[exec_.grad_arrays[i] for exec_ in self.execs] + for i, name in enumerate(self.arg_names) + if name in data_names] + else: + self.input_grad_arrays = None + + self.aux_arrays = [[exec_.aux_arrays[i] for exec_ in self.execs] + for i in range(len(self.aux_names))] + + def bind_exec(self, data_shapes, label_shapes, shared_group=None, reshape=False): + """Bind executors on their respective devices. + + Parameters + ---------- + data_shapes : list + label_shapes : list + shared_group : DataParallelExecutorGroup + reshape : bool + """ + assert reshape or not self.execs + + for i in range(len(self.contexts)): + data_shapes_i = data_shapes[i] + if label_shapes is not None: + label_shapes_i = label_shapes[i] + else: + label_shapes_i = [] + + if reshape: + self.execs[i] = self._default_execs[i].reshape( + allow_up_sizing=True, **dict(data_shapes_i + label_shapes_i)) + else: + self.execs.append(self._bind_ith_exec(i, data_shapes_i, label_shapes_i, + shared_group)) + + self.data_shapes = data_shapes + self.label_shapes = label_shapes + self._collect_arrays() + + def reshape(self, data_shapes, label_shapes): + """Reshape executors. + + Parameters + ---------- + data_shapes : list + label_shapes : list + """ + if self._default_execs is None: + self._default_execs = [i for i in self.execs] + for i in range(len(self.contexts)): + self.execs[i] = self._default_execs[i].reshape( + allow_up_sizing=True, **dict(data_shapes[i] + (label_shapes[i] if label_shapes is not None else [])) + ) + self.data_shapes = data_shapes + self.label_shapes = label_shapes + self._collect_arrays() + + + def set_params(self, arg_params, aux_params): + """Assign, i.e. copy parameters to all the executors. + + Parameters + ---------- + arg_params : dict + A dictionary of name to `NDArray` parameter mapping. 
+ aux_params : dict + A dictionary of name to `NDArray` auxiliary variable mapping. + """ + for exec_ in self.execs: + exec_.copy_params_from(arg_params, aux_params) + + def get_params(self, arg_params, aux_params): + """ Copy data from each executor to `arg_params` and `aux_params`. + + Parameters + ---------- + arg_params : list of NDArray + target parameter arrays + aux_params : list of NDArray + target aux arrays + + Notes + ----- + - This function will inplace update the NDArrays in arg_params and aux_params. + """ + for name, block in zip(self.param_names, self.param_arrays): + weight = sum(w.copyto(ctx.cpu()) for w in block) / len(block) + weight.astype(arg_params[name].dtype).copyto(arg_params[name]) + for name, block in zip(self.aux_names, self.aux_arrays): + weight = sum(w.copyto(ctx.cpu()) for w in block) / len(block) + weight.astype(aux_params[name].dtype).copyto(aux_params[name]) + + def forward(self, data_batch, is_train=None): + """Split `data_batch` according to workload and run forward on each devices. + + Parameters + ---------- + data_batch : DataBatch + Or could be any object implementing similar interface. + is_train : bool + The hint for the backend, indicating whether we are during training phase. + Default is `None`, then the value `self.for_training` will be used. + Returns + ------- + + """ + _load_data(data_batch, self.data_arrays, self.data_layouts) + if is_train is None: + is_train = self.for_training + + if self.label_arrays is not None: + assert not is_train or data_batch.label + if data_batch.label: + _load_label(data_batch, self.label_arrays, self.label_layouts) + + for exec_ in self.execs: + exec_.forward(is_train=is_train) + + + def get_outputs(self, merge_multi_context=True): + """Get outputs of the previous forward computation. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + outputs = [[exec_.outputs[i] for exec_ in self.execs] + for i in range(len(self.execs[0].outputs))] + if merge_multi_context: + outputs = _merge_multi_context(outputs, self.output_layouts) + return outputs + + def get_states(self, merge_multi_context=True): + """Get states from all devices + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the states + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + assert not merge_multi_context, \ + "merge_multi_context=True is not supported for get_states yet." + return self.state_arrays + + def set_states(self, states=None, value=None): + """Set value for states. Only one of states & value can be specified. + + Parameters + ---------- + states : list of list of NDArrays + source states arrays formatted like [[state1_dev1, state1_dev2], + [state2_dev1, state2_dev2]]. 
+ value : number + a single scalar value for all state arrays. + """ + if states is not None: + assert value is None, "Only one of states & value can be specified." + _load_general(states, self.state_arrays, (0,)*len(states)) + else: + assert value is not None, "At least one of states & value must be specified." + assert states is None, "Only one of states & value can be specified." + for d_dst in self.state_arrays: + for dst in d_dst: + dst[:] = value + + def get_input_grads(self, merge_multi_context=True): + """Get the gradients with respect to the inputs of the module. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[grad1, grad2]`. Otherwise, it + is like `[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.inputs_need_grad + if merge_multi_context: + return _merge_multi_context(self.input_grad_arrays, self.data_layouts) + return self.input_grad_arrays + + def backward(self, out_grads=None): + """Run backward on all devices. A backward should be called after + a call to the forward function. Backward cannot be called unless + `self.for_training` is `True`. + + Parameters + ---------- + out_grads : NDArray or list of NDArray, optional + Gradient on the outputs to be propagated back. + This parameter is only needed when bind is called + on outputs that are not a loss function. + """ + assert self.for_training, 're-bind with for_training=True to run backward' + if out_grads is None: + out_grads = [] + + for i, exec_ in enumerate(self.execs): + out_grads_slice = [] + exec_.backward(out_grads=out_grads_slice) + + def update_metric(self, eval_metric, labels): + """Accumulate the performance according to `eval_metric` on all devices. + + Parameters + ---------- + eval_metric : EvalMetric + The metric used for evaluation. + labels : list of NDArray + Typically comes from `label` of a `DataBatch`. + """ + for texec, labels in zip(self.execs, labels): + eval_metric.update(labels, texec.outputs) + + def _bind_ith_exec(self, i, data_shapes, label_shapes, shared_group): + """Internal utility function to bind the i-th executor. 
+ """ + shared_exec = None if shared_group is None else shared_group.execs[i] + context = self.contexts[i] + shared_data_arrays = self.shared_data_arrays[i] + + input_shapes = dict(data_shapes) + if label_shapes is not None: + input_shapes.update(dict(label_shapes)) + + arg_shapes, _, aux_shapes = self.symbol.infer_shape(**input_shapes) + assert arg_shapes is not None, "shape inference failed" + + input_types = {x.name: x.dtype for x in data_shapes} + if label_shapes is not None: + input_types.update({x.name: x.dtype for x in label_shapes}) + arg_types, _, aux_types = self.symbol.infer_type(**input_types) + assert arg_types is not None, "type inference failed" + + arg_arrays = [] + grad_arrays = {} if self.for_training else None + + def _get_or_reshape(name, shared_data_arrays, arg_shape, arg_type, context, logger): + """Internal helper to get a memory block or re-use by re-shaping""" + if name in shared_data_arrays: + arg_arr = shared_data_arrays[name] + + if np.prod(arg_arr.shape) >= np.prod(arg_shape): + # nice, we can directly re-use this data blob + assert arg_arr.dtype == arg_type + arg_arr = arg_arr.reshape(arg_shape) + else: + logger.warning(('bucketing: data "%s" has a shape %s' % (name, arg_shape)) + + (', which is larger than already allocated ') + + ('shape %s' % (arg_arr.shape,)) + + ('. Need to re-allocate. Consider putting ') + + ('default_bucket_key to') + + (' be the bucket taking the largest input for better ') + + ('memory sharing.')) + arg_arr = nd.zeros(arg_shape, context, dtype=arg_type) + + # replace existing shared array because the new one is bigger + shared_data_arrays[name] = arg_arr + else: + arg_arr = nd.zeros(arg_shape, context, dtype=arg_type) + shared_data_arrays[name] = arg_arr + + return arg_arr + + # create or borrow arguments and gradients + for j in range(len(self.arg_names)): + name = self.arg_names[j] + if name in self.param_names: # model parameters + if shared_exec is None: + arg_arr = nd.zeros(arg_shapes[j], context, dtype=arg_types[j]) + if self.grad_req[name] != 'null': + grad_arr = nd.zeros(arg_shapes[j], context, dtype=arg_types[j]) + grad_arrays[name] = grad_arr + else: + arg_arr = shared_exec.arg_dict[name] + assert arg_arr.shape == arg_shapes[j] + assert arg_arr.dtype == arg_types[j] + if self.grad_req[name] != 'null': + grad_arrays[name] = shared_exec.grad_dict[name] + else: # data, label, or states + arg_arr = _get_or_reshape(name, shared_data_arrays, arg_shapes[j], arg_types[j], + context, self.logger) + + # data might also need grad if inputs_need_grad is True + if self.grad_req[name] != 'null': + grad_arrays[name] = _get_or_reshape('grad of ' + name, shared_data_arrays, + arg_shapes[j], arg_types[j], context, + self.logger) + + arg_arrays.append(arg_arr) + + # create or borrow aux variables + if shared_exec is None: + aux_arrays = [nd.zeros(s, context, dtype=t) for s, t in zip(aux_shapes, aux_types)] + else: + for j, arr in enumerate(shared_exec.aux_arrays): + assert aux_shapes[j] == arr.shape + assert aux_types[j] == arr.dtype + aux_arrays = shared_exec.aux_arrays[:] + + executor = self.symbol.bind(ctx=context, args=arg_arrays, + args_grad=grad_arrays, aux_states=aux_arrays, + grad_req=self.grad_req, shared_exec=shared_exec) + # Get the total bytes allocated for this executor + return executor + + def _sliced_shape(self, shapes, i, major_axis): + """Get the sliced shapes for the i-th executor. + + Parameters + ---------- + shapes : list of (str, tuple) + The original (name, shape) pairs. + i : int + Which executor we are dealing with. 
+ """ + sliced_shapes = [] + for desc, axis in zip(shapes, major_axis): + shape = list(desc.shape) + if axis >= 0: + shape[axis] = self.slices[i].stop - self.slices[i].start + sliced_shapes.append(DataDesc(desc.name, tuple(shape), desc.dtype, desc.layout)) + return sliced_shapes + + def install_monitor(self, mon): + """Install monitor on all executors""" + for exe in self.execs: + mon.install(exe) diff --git a/faster_rcnn/core/__init__.py b/faster_rcnn/core/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/faster_rcnn/core/callback.py b/faster_rcnn/core/callback.py new file mode 100644 index 0000000..24460eb --- /dev/null +++ b/faster_rcnn/core/callback.py @@ -0,0 +1,77 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import time +import logging +import mxnet as mx + + +class Speedometer(object): + def __init__(self, batch_size, frequent=50): + self.batch_size = batch_size + self.frequent = frequent + self.init = False + self.tic = 0 + self.last_count = 0 + + def __call__(self, param): + """Callback to Show speed.""" + count = param.nbatch + if self.last_count > count: + self.init = False + self.last_count = count + + if self.init: + if count % self.frequent == 0: + speed = self.frequent * self.batch_size / (time.time() - self.tic) + s = '' + if param.eval_metric is not None: + name, value = param.eval_metric.get() + s = "Epoch[%d] Batch [%d]\tSpeed: %.2f samples/sec\tTrain-" % (param.epoch, count, speed) + for n, v in zip(name, value): + s += "%s=%f,\t" % (n, v) + else: + s = "Iter[%d] Batch [%d]\tSpeed: %.2f samples/sec" % (param.epoch, count, speed) + + logging.info(s) + print(s) + self.tic = time.time() + else: + self.init = True + self.tic = time.time() + + +def do_checkpoint(prefix, means, stds): + def _callback(iter_no, sym, arg, aux): + arg['bbox_pred_weight_test'] = (arg['bbox_pred_weight'].T * mx.nd.array(stds)).T + arg['bbox_pred_bias_test'] = arg['bbox_pred_bias'] * mx.nd.array(stds) + mx.nd.array(means) + mx.model.save_checkpoint(prefix, iter_no + 1, sym, arg, aux) + arg.pop('bbox_pred_weight_test') + arg.pop('bbox_pred_bias_test') + return _callback + +def do_checkpoint_Rroi(prefix, means, stds, Rroi_means, Rroi_stds): + def _callback(iter_no, sym, arg, aux): + # pdb.set_trace() + arg['bbox_pred_weight_test'] = (arg['bbox_pred_weight'].T * mx.nd.array(stds)).T + arg['bbox_pred_bias_test'] = arg['bbox_pred_bias'] * mx.nd.array(stds) + mx.nd.array(means) + # params for Rroi regression + arg['Rroi_bbox_pred_weight_test'] = (arg['Rroi_bbox_pred_weight'].T * mx.nd.array(Rroi_stds)).T + arg['Rroi_bbox_pred_bias_test'] = arg['Rroi_bbox_pred_bias'] * mx.nd.array(Rroi_stds) + mx.nd.array(Rroi_means) + + mx.model.save_checkpoint(prefix, iter_no + 1, sym, arg, aux) + arg.pop('bbox_pred_weight_test') + arg.pop('bbox_pred_bias_test') + arg.pop('Rroi_bbox_pred_weight_test') + arg.pop('Rroi_bbox_pred_bias_test') + return _callback \ No newline at end of file diff --git a/faster_rcnn/core/loader.py b/faster_rcnn/core/loader.py new file mode 100644 index 0000000..681b300 --- /dev/null +++ b/faster_rcnn/core/loader.py @@ -0,0 +1,737 @@ +# 
-------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import numpy as np +import mxnet as mx +from mxnet.executor_manager import _split_input_slice + +from config.config import config +from utils.image import tensor_vstack +from rpn.rpn import get_rpn_batch_poly, assign_anchor_poly +from rpn.rpn import get_rpn_testbatch, get_rpn_batch, assign_anchor +from rcnn import get_rcnn_testbatch, get_rcnn_batch + + +class TestLoader(mx.io.DataIter): + def __init__(self, roidb, config, batch_size=1, shuffle=False, + has_rpn=False): + super(TestLoader, self).__init__() + + # save parameters as properties + self.cfg = config + self.roidb = roidb + self.batch_size = batch_size + self.shuffle = shuffle + self.has_rpn = has_rpn + + # infer properties from roidb + self.size = len(self.roidb) + self.index = np.arange(self.size) + + # decide data and label names (only for training) + if has_rpn: + self.data_name = ['data', 'im_info'] + else: + self.data_name = ['data', 'rois'] + self.label_name = None + + # status variable for synchronization between get_data and get_label + self.cur = 0 + self.data = None + self.label = [] + self.im_info = None + + # get first batch to fill in provide_data and provide_label + self.reset() + self.get_batch() + + @property + def provide_data(self): + return [[(k, v.shape) for k, v in zip(self.data_name, idata)] for idata in self.data] + + @property + def provide_label(self): + return [None for _ in range(len(self.data))] + + @property + def provide_data_single(self): + return [(k, v.shape) for k, v in zip(self.data_name, self.data[0])] + + @property + def provide_label_single(self): + return None + + def reset(self): + self.cur = 0 + if self.shuffle: + np.random.shuffle(self.index) + + def iter_next(self): + return self.cur < self.size + + def next(self): + if self.iter_next(): + self.get_batch() + self.cur += self.batch_size + return self.im_info, mx.io.DataBatch(data=self.data, label=self.label, + pad=self.getpad(), index=self.getindex(), + provide_data=self.provide_data, provide_label=self.provide_label) + else: + raise StopIteration + + def getindex(self): + return self.cur / self.batch_size + + def getpad(self): + if self.cur + self.batch_size > self.size: + return self.cur + self.batch_size - self.size + else: + return 0 + + def get_batch(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + if self.has_rpn: + data, label, im_info = get_rpn_testbatch(roidb, self.cfg) + else: + data, label, im_info = get_rcnn_testbatch(roidb, self.cfg) + self.data = [[mx.nd.array(idata[name]) for name in self.data_name] for idata in data] + self.im_info = im_info + + def get_batch_individual(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + if self.has_rpn: + data, label, im_info = get_rpn_testbatch(roidb, self.cfg) + else: + data, label, im_info = get_rcnn_testbatch(roidb, self.cfg) + self.data = [mx.nd.array(data[name]) for name in self.data_name] + self.im_info = 
im_info + + +class ROIIter(mx.io.DataIter): + def __init__(self, roidb, config, batch_size=2, shuffle=False, ctx=None, work_load_list=None, aspect_grouping=False): + """ + This Iter will provide roi data to Fast R-CNN network + :param roidb: must be preprocessed + :param batch_size: must divide BATCH_SIZE(128) + :param shuffle: bool + :param ctx: list of contexts + :param work_load_list: list of work load + :param aspect_grouping: group images with similar aspects + :return: ROIIter + """ + super(ROIIter, self).__init__() + + # save parameters as properties + self.roidb = roidb + self.cfg = config + self.batch_size = batch_size + self.shuffle = shuffle + self.ctx = ctx + if self.ctx is None: + self.ctx = [mx.cpu()] + self.work_load_list = work_load_list + self.aspect_grouping = aspect_grouping + + # infer properties from roidb + self.size = len(roidb) + self.index = np.arange(self.size) + + # decide data and label names (only for training) + self.data_name = ['data', 'rois'] + self.label_name = ['label', 'bbox_target', 'bbox_weight'] + + # status variable for synchronization between get_data and get_label + self.cur = 0 + self.batch = None + self.data = None + self.label = None + + # get first batch to fill in provide_data and provide_label + self.reset() + self.get_batch_individual() + + @property + def provide_data(self): + return [[(k, v.shape) for k, v in zip(self.data_name, self.data[i])] for i in xrange(len(self.data))] + + @property + def provide_label(self): + return [[(k, v.shape) for k, v in zip(self.label_name, self.label[i])] for i in xrange(len(self.data))] + + @property + def provide_data_single(self): + return [(k, v.shape) for k, v in zip(self.data_name, self.data[0])] + + @property + def provide_label_single(self): + return [(k, v.shape) for k, v in zip(self.label_name, self.label[0])] + + def reset(self): + self.cur = 0 + if self.shuffle: + if self.aspect_grouping: + widths = np.array([r['width'] for r in self.roidb]) + heights = np.array([r['height'] for r in self.roidb]) + horz = (widths >= heights) + vert = np.logical_not(horz) + horz_inds = np.where(horz)[0] + vert_inds = np.where(vert)[0] + inds = np.hstack((np.random.permutation(horz_inds), np.random.permutation(vert_inds))) + extra = inds.shape[0] % self.batch_size + inds_ = np.reshape(inds[:-extra], (-1, self.batch_size)) + row_perm = np.random.permutation(np.arange(inds_.shape[0])) + inds[:-extra] = np.reshape(inds_[row_perm, :], (-1,)) + self.index = inds + else: + np.random.shuffle(self.index) + + def iter_next(self): + return self.cur + self.batch_size <= self.size + + def next(self): + if self.iter_next(): + self.get_batch_individual() + self.cur += self.batch_size + return mx.io.DataBatch(data=self.data, label=self.label, + pad=self.getpad(), index=self.getindex(), + provide_data=self.provide_data, provide_label=self.provide_label) + else: + raise StopIteration + + def getindex(self): + return self.cur / self.batch_size + + def getpad(self): + if self.cur + self.batch_size > self.size: + return self.cur + self.batch_size - self.size + else: + return 0 + + def get_batch(self): + # slice roidb + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + + # decide multi device slices + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. 
" + slices = _split_input_slice(self.batch_size, work_load_list) + + # get each device + data_list = [] + label_list = [] + for islice in slices: + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + data, label = get_rcnn_batch(iroidb, self.cfg) + data_list.append(data) + label_list.append(label) + + all_data = dict() + for key in data_list[0].keys(): + all_data[key] = tensor_vstack([batch[key] for batch in data_list]) + + all_label = dict() + for key in label_list[0].keys(): + all_label[key] = tensor_vstack([batch[key] for batch in label_list]) + + self.data = [mx.nd.array(all_data[name]) for name in self.data_name] + self.label = [mx.nd.array(all_label[name]) for name in self.label_name] + + def get_batch_individual(self): + # slice roidb + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + + # decide multi device slices + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. " + slices = _split_input_slice(self.batch_size, work_load_list) + + rst = [] + for idx, islice in enumerate(slices): + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + rst.append(self.parfetch(iroidb)) + + all_data = [_['data'] for _ in rst] + all_label = [_['label'] for _ in rst] + self.data = [[mx.nd.array(data[key]) for key in self.data_name] for data in all_data] + self.label = [[mx.nd.array(label[key]) for key in self.label_name] for label in all_label] + + def parfetch(self, iroidb): + data, label = get_rcnn_batch(iroidb, self.cfg) + return {'data': data, 'label': label} + + +class AnchorLoader(mx.io.DataIter): + + def __init__(self, feat_sym, roidb, cfg, batch_size=1, shuffle=False, ctx=None, work_load_list=None, + feat_stride=16, anchor_scales=(8, 16, 32), anchor_ratios=(0.5, 1, 2), allowed_border=0, + aspect_grouping=False): + """ + This Iter will provide roi data to Fast R-CNN network + :param feat_sym: to infer shape of assign_output + :param roidb: must be preprocessed + :param batch_size: must divide BATCH_SIZE(128) + :param shuffle: bool + :param ctx: list of contexts + :param work_load_list: list of work load + :param aspect_grouping: group images with similar aspects + :return: AnchorLoader + """ + super(AnchorLoader, self).__init__() + + # save parameters as properties + self.feat_sym = feat_sym + self.roidb = roidb + self.cfg = cfg + self.batch_size = batch_size + self.shuffle = shuffle + self.ctx = ctx + if self.ctx is None: + self.ctx = [mx.cpu()] + self.work_load_list = work_load_list + self.feat_stride = feat_stride + self.anchor_scales = anchor_scales + self.anchor_ratios = anchor_ratios + self.allowed_border = allowed_border + self.aspect_grouping = aspect_grouping + + # infer properties from roidb + self.size = len(roidb) + self.index = np.arange(self.size) + + # decide data and label names + if config.TRAIN.END2END: + self.data_name = ['data', 'im_info', 'gt_boxes'] + else: + self.data_name = ['data'] + self.label_name = ['label', 'bbox_target', 'bbox_weight'] + + # status variable for synchronization between get_data and get_label + self.cur = 0 + self.batch = None + self.data = None + self.label = None + + # get first batch to fill in provide_data and provide_label + self.reset() + self.get_batch_individual() + + @property + def provide_data(self): + return [[(k, v.shape) for k, v in 
zip(self.data_name, self.data[i])] for i in xrange(len(self.data))] + + @property + def provide_label(self): + return [[(k, v.shape) for k, v in zip(self.label_name, self.label[i])] for i in xrange(len(self.data))] + + @property + def provide_data_single(self): + return [(k, v.shape) for k, v in zip(self.data_name, self.data[0])] + + @property + def provide_label_single(self): + return [(k, v.shape) for k, v in zip(self.label_name, self.label[0])] + + def reset(self): + self.cur = 0 + if self.shuffle: + if self.aspect_grouping: + widths = np.array([r['width'] for r in self.roidb]) + heights = np.array([r['height'] for r in self.roidb]) + horz = (widths >= heights) + vert = np.logical_not(horz) + horz_inds = np.where(horz)[0] + vert_inds = np.where(vert)[0] + inds = np.hstack((np.random.permutation(horz_inds), np.random.permutation(vert_inds))) + extra = inds.shape[0] % self.batch_size + inds_ = np.reshape(inds[:-extra], (-1, self.batch_size)) + row_perm = np.random.permutation(np.arange(inds_.shape[0])) + inds[:-extra] = np.reshape(inds_[row_perm, :], (-1,)) + self.index = inds + else: + np.random.shuffle(self.index) + + def iter_next(self): + return self.cur + self.batch_size <= self.size + + def next(self): + if self.iter_next(): + self.get_batch_individual() + self.cur += self.batch_size + return mx.io.DataBatch(data=self.data, label=self.label, + pad=self.getpad(), index=self.getindex(), + provide_data=self.provide_data, provide_label=self.provide_label) + else: + raise StopIteration + + def getindex(self): + return self.cur / self.batch_size + + def getpad(self): + if self.cur + self.batch_size > self.size: + return self.cur + self.batch_size - self.size + else: + return 0 + + def infer_shape(self, max_data_shape=None, max_label_shape=None): + """ Return maximum data and label shape for single gpu """ + if max_data_shape is None: + max_data_shape = [] + if max_label_shape is None: + max_label_shape = [] + max_shapes = dict(max_data_shape + max_label_shape) + input_batch_size = max_shapes['data'][0] + im_info = [[max_shapes['data'][2], max_shapes['data'][3], 1.0]] + _, feat_shape, _ = self.feat_sym.infer_shape(**max_shapes) + label = assign_anchor(feat_shape[0], np.zeros((0, 5)), im_info, self.cfg, + self.feat_stride, self.anchor_scales, self.anchor_ratios, self.allowed_border) + label = [label[k] for k in self.label_name] + label_shape = [(k, tuple([input_batch_size] + list(v.shape[1:]))) for k, v in zip(self.label_name, label)] + return max_data_shape, label_shape + + def get_batch(self): + # slice roidb + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + + # decide multi device slice + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. 
" + slices = _split_input_slice(self.batch_size, work_load_list) + + # get testing data for multigpu + data_list = [] + label_list = [] + for islice in slices: + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + data, label = get_rpn_batch(iroidb, self.cfg) + data_list.append(data) + label_list.append(label) + + # pad data first and then assign anchor (read label) + data_tensor = tensor_vstack([batch['data'] for batch in data_list]) + for data, data_pad in zip(data_list, data_tensor): + data['data'] = data_pad[np.newaxis, :] + + new_label_list = [] + for data, label in zip(data_list, label_list): + # infer label shape + data_shape = {k: v.shape for k, v in data.items()} + del data_shape['im_info'] + _, feat_shape, _ = self.feat_sym.infer_shape(**data_shape) + feat_shape = [int(i) for i in feat_shape[0]] + + # add gt_boxes to data for e2e + data['gt_boxes'] = label['gt_boxes'][np.newaxis, :, :] + + # assign anchor for label + label = assign_anchor(feat_shape, label['gt_boxes'], data['im_info'], self.cfg, + self.feat_stride, self.anchor_scales, + self.anchor_ratios, self.allowed_border) + new_label_list.append(label) + + all_data = dict() + for key in self.data_name: + all_data[key] = tensor_vstack([batch[key] for batch in data_list]) + + all_label = dict() + for key in self.label_name: + pad = -1 if key == 'label' else 0 + all_label[key] = tensor_vstack([batch[key] for batch in new_label_list], pad=pad) + + self.data = [mx.nd.array(all_data[key]) for key in self.data_name] + self.label = [mx.nd.array(all_label[key]) for key in self.label_name] + + def get_batch_individual(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + # decide multi device slice + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. 
" + slices = _split_input_slice(self.batch_size, work_load_list) + rst = [] + for idx, islice in enumerate(slices): + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + rst.append(self.parfetch(iroidb)) + all_data = [_['data'] for _ in rst] + all_label = [_['label'] for _ in rst] + self.data = [[mx.nd.array(data[key]) for key in self.data_name] for data in all_data] + self.label = [[mx.nd.array(label[key]) for key in self.label_name] for label in all_label] + + def parfetch(self, iroidb): + # get testing data for multigpu + data, label = get_rpn_batch(iroidb, self.cfg) + data_shape = {k: v.shape for k, v in data.items()} + del data_shape['im_info'] + _, feat_shape, _ = self.feat_sym.infer_shape(**data_shape) + feat_shape = [int(i) for i in feat_shape[0]] + + # add gt_boxes to data for e2e + data['gt_boxes'] = label['gt_boxes'][np.newaxis, :, :] + + # assign anchor for label + label = assign_anchor(feat_shape, label['gt_boxes'], data['im_info'], self.cfg, + self.feat_stride, self.anchor_scales, + self.anchor_ratios, self.allowed_border) + return {'data': data, 'label': label} + +class AnchorLoader_poly(mx.io.DataIter): + def __init__(self, feat_sym, roidb, cfg, batch_size=1, shuffle=False, ctx=None, work_load_list=None, + feat_stride=16, anchor_scales=(8, 16, 32), anchor_ratios=(0.5, 1, 2), allowed_border=0, + aspect_grouping=False): + """ + This Iter will provide roi data to Fast R-CNN network + :param feat_sym: to infer shape of assign_output + :param roidb: must be preprocessed + :param batch_size: must divide BATCH_SIZE(128) + :param shuffle: bool + :param ctx: list of contexts + :param work_load_list: list of work load + :param aspect_grouping: group images with similar aspects + :return: AnchorLoader + """ + super(AnchorLoader_poly, self).__init__() + + # save parameters as properties + self.feat_sym = feat_sym + self.roidb = roidb + self.cfg = cfg + self.batch_size = batch_size + self.shuffle = shuffle + self.ctx = ctx + if self.ctx is None: + self.ctx = [mx.cpu()] + self.work_load_list = work_load_list + self.feat_stride = feat_stride + self.anchor_scales = anchor_scales + self.anchor_ratios = anchor_ratios + self.allowed_border = allowed_border + self.aspect_grouping = aspect_grouping + + # infer properties from roidb + self.size = len(roidb) + self.index = np.arange(self.size) + + # decide data and label names + if config.TRAIN.END2END: + self.data_name = ['data', 'im_info', 'gt_boxes'] + else: + self.data_name = ['data'] + self.label_name = ['label', 'bbox_target', 'bbox_weight'] + + # status variable for synchronization between get_data and get_label + self.cur = 0 + self.batch = None + self.data = None + self.label = None + + # get first batch to fill in provide_data and provide_label + self.reset() + self.get_batch_individual() + + @property + def provide_data(self): + return [[(k, v.shape) for k, v in zip(self.data_name, self.data[i])] for i in xrange(len(self.data))] + + @property + def provide_label(self): + return [[(k, v.shape) for k, v in zip(self.label_name, self.label[i])] for i in xrange(len(self.data))] + + @property + def provide_data_single(self): + return [(k, v.shape) for k, v in zip(self.data_name, self.data[0])] + + @property + def provide_label_single(self): + return [(k, v.shape) for k, v in zip(self.label_name, self.label[0])] + + def reset(self): + self.cur = 0 + if self.shuffle: + if self.aspect_grouping: + widths = np.array([r['width'] for r in self.roidb]) + heights = np.array([r['height'] for r in self.roidb]) + horz = (widths >= heights) + 
vert = np.logical_not(horz) + horz_inds = np.where(horz)[0] + vert_inds = np.where(vert)[0] + inds = np.hstack((np.random.permutation(horz_inds), np.random.permutation(vert_inds))) + extra = inds.shape[0] % self.batch_size + inds_ = np.reshape(inds[:-extra], (-1, self.batch_size)) + row_perm = np.random.permutation(np.arange(inds_.shape[0])) + inds[:-extra] = np.reshape(inds_[row_perm, :], (-1,)) + self.index = inds + else: + np.random.shuffle(self.index) + + def iter_next(self): + return self.cur + self.batch_size <= self.size + + def next(self): + if self.iter_next(): + self.get_batch_individual() + self.cur += self.batch_size + return mx.io.DataBatch(data=self.data, label=self.label, + pad=self.getpad(), index=self.getindex(), + provide_data=self.provide_data, provide_label=self.provide_label) + else: + raise StopIteration + + def getindex(self): + return self.cur / self.batch_size + + def getpad(self): + if self.cur + self.batch_size > self.size: + return self.cur + self.batch_size - self.size + else: + return 0 + + def infer_shape(self, max_data_shape=None, max_label_shape=None): + """ Return maximum data and label shape for single gpu """ + if max_data_shape is None: + max_data_shape = [] + if max_label_shape is None: + max_label_shape = [] + max_shapes = dict(max_data_shape + max_label_shape) + input_batch_size = max_shapes['data'][0] + # change the shape of im_info + im_info = [[max_shapes['data'][2], max_shapes['data'][3], 1.0]] + _, feat_shape, _ = self.feat_sym.infer_shape(**max_shapes) + label = assign_anchor_poly(feat_shape[0], np.zeros((0, 9)), im_info, self.cfg, + self.feat_stride, self.anchor_scales, self.anchor_ratios, self.allowed_border) + label = [label[k] for k in self.label_name] + label_shape = [(k, tuple([input_batch_size] + list(v.shape[1:]))) for k, v in zip(self.label_name, label)] + return max_data_shape, label_shape + + def get_batch(self): + # slice roidb + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + + # decide multi device slice + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. 
" + slices = _split_input_slice(self.batch_size, work_load_list) + + # get testing data for multigpu + data_list = [] + label_list = [] + for islice in slices: + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + data, label = get_rpn_batch_poly(iroidb, self.cfg) + data_list.append(data) + label_list.append(label) + + # pad data first and then assign anchor (read label) + data_tensor = tensor_vstack([batch['data'] for batch in data_list]) + for data, data_pad in zip(data_list, data_tensor): + data['data'] = data_pad[np.newaxis, :] + + new_label_list = [] + for data, label in zip(data_list, label_list): + # infer label shape + data_shape = {k: v.shape for k, v in data.items()} + del data_shape['im_info'] + _, feat_shape, _ = self.feat_sym.infer_shape(**data_shape) + feat_shape = [int(i) for i in feat_shape[0]] + + # add gt_boxes to data for e2e + data['gt_boxes'] = label['gt_boxes'][np.newaxis, :, :] + + # assign quadrangle anchor for label + label = assign_anchor_poly(feat_shape, label['gt_boxes'], data['im_info'], self.cfg, + self.feat_stride, self.anchor_scales, + self.anchor_ratios, self.allowed_border) + new_label_list.append(label) + + all_data = dict() + for key in self.data_name: + all_data[key] = tensor_vstack([batch[key] for batch in data_list]) + + all_label = dict() + for key in self.label_name: + pad = -1 if key == 'label' else 0 + all_label[key] = tensor_vstack([batch[key] for batch in new_label_list], pad=pad) + + self.data = [mx.nd.array(all_data[key]) for key in self.data_name] + self.label = [mx.nd.array(all_label[key]) for key in self.label_name] + + def get_batch_individual(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + # decide multi device slice + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. 
" + slices = _split_input_slice(self.batch_size, work_load_list) + rst = [] + for idx, islice in enumerate(slices): + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + rst.append(self.parfetch(iroidb)) + all_data = [_['data'] for _ in rst] + all_label = [_['label'] for _ in rst] + self.data = [[mx.nd.array(data[key]) for key in self.data_name] for data in all_data] + self.label = [[mx.nd.array(label[key]) for key in self.label_name] for label in all_label] + + def parfetch(self, iroidb): + # get testing data for multigpu + data, label = get_rpn_batch_poly(iroidb, self.cfg) + # print data + # print label + data_shape = {k: v.shape for k, v in data.items()} + del data_shape['im_info'] + _, feat_shape, _ = self.feat_sym.infer_shape(**data_shape) + feat_shape = [int(i) for i in feat_shape[0]] + + # add gt_boxes to data for e2e + data['gt_boxes'] = label['gt_boxes'][np.newaxis, :, :] + + # assign anchor for label + label = assign_anchor_poly(feat_shape, label['gt_boxes'], data['im_info'], self.cfg, + self.feat_stride, self.anchor_scales, + self.anchor_ratios, self.allowed_border) + return {'data': data, 'label': label} diff --git a/faster_rcnn/core/metric.py b/faster_rcnn/core/metric.py new file mode 100644 index 0000000..8894916 --- /dev/null +++ b/faster_rcnn/core/metric.py @@ -0,0 +1,408 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np + + +def get_rpn_names(): + pred = ['rpn_cls_prob', 'rpn_bbox_loss'] + label = ['rpn_label', 'rpn_bbox_target', 'rpn_bbox_weight'] + return pred, label + + +def get_rcnn_names(cfg): + pred = ['rcnn_cls_prob', 'rcnn_bbox_loss'] + label = ['rcnn_label', 'rcnn_bbox_target', 'rcnn_bbox_weight'] + if cfg.TRAIN.ENABLE_OHEM or cfg.TRAIN.END2END: + pred.append('rcnn_label') + if cfg.TRAIN.END2END: + rpn_pred, rpn_label = get_rpn_names() + pred = rpn_pred + pred + label = rpn_label + return pred, label + +def get_Rroi_names(cfg): + pred, label = get_rcnn_names(cfg) + pred.append('Rroi_rcnn_cls_prob') + pred.append('Rroi_rcnn_bbox_loss') + + if cfg.TRAIN.ENABLE_OHEM or cfg.TRAIN.END2END: + pred.append('Rroi_rcnn_label') + + return pred, label + +class RPNAccMetric(mx.metric.EvalMetric): + def __init__(self): + super(RPNAccMetric, self).__init__('RPNAcc') + self.pred, self.label = get_rpn_names() + + def update(self, labels, preds): + pred = preds[self.pred.index('rpn_cls_prob')] + label = labels[self.label.index('rpn_label')] + + # pred (b, c, p) or (b, c, h, w) + pred_label = mx.ndarray.argmax_channel(pred).asnumpy().astype('int32') + pred_label = pred_label.reshape((pred_label.shape[0], -1)) + # label (b, p) + label = label.asnumpy().astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1) + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(pred_label.flat == label.flat) + self.num_inst += len(pred_label.flat) + +class RPNLogLossMetric(mx.metric.EvalMetric): + def __init__(self): + super(RPNLogLossMetric, self).__init__('RPNLogLoss') + self.pred, self.label = get_rpn_names() + + def update(self, labels, preds): + pred = 
preds[self.pred.index('rpn_cls_prob')] + label = labels[self.label.index('rpn_label')] + + # label (b, p) + label = label.asnumpy().astype('int32').reshape((-1)) + # pred (b, c, p) or (b, c, h, w) --> (b, p, c) --> (b*p, c) + pred = pred.asnumpy().reshape((pred.shape[0], pred.shape[1], -1)).transpose((0, 2, 1)) + pred = pred.reshape((label.shape[0], -1)) + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + +class RPNL1LossMetric(mx.metric.EvalMetric): + def __init__(self): + super(RPNL1LossMetric, self).__init__('RPNL1Loss') + self.pred, self.label = get_rpn_names() + + def update(self, labels, preds): + bbox_loss = preds[self.pred.index('rpn_bbox_loss')].asnumpy() + + # calculate num_inst (average on those kept anchors) + label = labels[self.label.index('rpn_label')].asnumpy() + num_inst = np.sum(label != -1) + + self.sum_metric += np.sum(bbox_loss) + self.num_inst += num_inst + +class RPNFGFraction(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RPNFGFraction, self).__init__('Proposal FG Fraction') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + num_classes = pred.shape[-1] + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + fg_inds = np.where(label > 0)[0] + bg_inds = np.where(label == 0)[0] + self.sum_metric += fg_inds.shape[0] + self.num_inst += (fg_inds.shape[0] + bg_inds.shape[0]) + +class RCNNFGAccuracy(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNFGAccuracy, self).__init__('R-CNN FG Accuracy') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + num_classes = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, num_classes).argmax(axis=1).astype('int32') + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + keep_inds = np.where(label > 0) + # filter out -1 label because of OHEM or invalid samples + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(np.equal(pred_label.flat, label.flat)) + self.num_inst += pred_label.shape[0] + +class RCNNAccMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNAccMetric, self).__init__('RCNNAcc') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + + last_dim = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, last_dim).argmax(axis=1).astype('int32') + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + 
keep_inds = np.where(label != -1) + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(pred_label.flat == label.flat) + self.num_inst += len(pred_label.flat) + +class RCNNLogLossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNLogLossMetric, self).__init__('RCNNLogLoss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + + last_dim = pred.shape[-1] + pred = pred.asnumpy().reshape(-1, last_dim) + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + +class RCNNL1LossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNL1LossMetric, self).__init__('RCNNL1Loss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + bbox_loss = preds[self.pred.index('rcnn_bbox_loss')].asnumpy() + if self.ohem: + label = preds[self.pred.index('rcnn_label')].asnumpy() + else: + if self.e2e: + label = preds[self.pred.index('rcnn_label')].asnumpy() + else: + label = labels[self.label.index('rcnn_label')].asnumpy() + + # calculate num_inst (average on those kept anchors) + num_inst = np.sum(label != -1) + + self.sum_metric += np.sum(bbox_loss) + self.num_inst += num_inst + +class RCNNFGFraction(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNFGFraction, self).__init__('RRoI Proposal FG Fraction') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label = labels[self.label.index('Rroi_rcnn_label')] + num_classes = pred.shape[-1] + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + fg_inds = np.where(label > 0)[0] + bg_inds = np.where(label == 0)[0] + self.sum_metric += fg_inds.shape[0] + self.num_inst += (fg_inds.shape[0] + bg_inds.shape[0]) + +class RRoIRCNNFGAccuracy(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIRCNNFGAccuracy, self).__init__('RRoIR-CNN FG Accuracy') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label = labels[self.label.index('Rroi_rcnn_label')] + num_classes = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, num_classes).argmax(axis=1).astype('int32') + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + keep_inds = np.where(label > 0) + # filter out -1 label because of OHEM or invalid samples + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += 
np.sum(np.equal(pred_label.flat, label.flat)) + self.num_inst += pred_label.shape[0] + +class RRoIAccMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIAccMetric, self).__init__('RRoIAcc') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label = labels[self.label.index('Rroi_rcnn_label')] + + last_dim = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, last_dim).argmax(axis=1).astype('int32') + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1) + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(pred_label.flat == label.flat) + self.num_inst += len(pred_label.flat) + +class RRoIRCNNLogLossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIRCNNLogLossMetric, self).__init__('RRoIRCNNLogLoss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label = labels[self.label.index('Rroi_rcnn_label')] + + last_dim = pred.shape[-1] + pred = pred.asnumpy().reshape(-1, last_dim) + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + +class RRoIRCNNL1LossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIRCNNL1LossMetric, self).__init__('RRoIRCNNL1Loss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + # pdb.set_trace() + bbox_loss = preds[self.pred.index('Rroi_rcnn_bbox_loss')].asnumpy() + if self.ohem: + label = preds[self.pred.index('Rroi_rcnn_label')].asnumpy() + else: + if self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')].asnumpy() + else: + label = labels[self.label.index('Rroi_rcnn_label')].asnumpy() + # pdb.set_trace() + # calculate num_inst (average on those kept anchors) + num_inst = np.sum(label != -1) + + self.sum_metric += np.sum(bbox_loss) + self.num_inst += num_inst + +class STLogLossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(STLogLossMetric, self).__init__('STLogLoss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + # pred = preds[self.pred.index('rcnn_cls_prob')] + # if self.ohem or self.e2e: + # label = preds[self.pred.index('rcnn_label')] + # else: + # label = labels[self.label.index('rcnn_label')] + + pred = preds[-1] + label = preds[-2] + + last_dim = pred.shape[-1] + pred = pred.asnumpy().reshape(-1, last_dim) + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + + +class 
STL1LossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(STL1LossMetric, self).__init__('STL1Loss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + # add st loss here + self.pred.append('st_loss') + + def update(self, labels, preds): + st_loss = preds[self.pred.index('st_loss')].asnumpy() + + label = preds[-2].asnumpy() + # pdb.set_trace() + num_inst = np.sum(label != 0) + + self.sum_metric += np.sum(st_loss) + self.num_inst += num_inst \ No newline at end of file diff --git a/faster_rcnn/core/module.py b/faster_rcnn/core/module.py new file mode 100644 index 0000000..0aae9bd --- /dev/null +++ b/faster_rcnn/core/module.py @@ -0,0 +1,1086 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Zheng Zhang +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +"""A `MutableModule` implement the `BaseModule` API, and allows input shape +varying with training iterations. If shapes vary, executors will rebind, +using shared arrays from the initial module binded with maximum shape. +""" + +import time +import logging +import warnings + +from mxnet import context as ctx +from mxnet.initializer import Uniform, InitDesc +from mxnet.module.base_module import BaseModule, _check_input_names, _parse_data_desc, _as_list +from mxnet.model import _create_kvstore, _initialize_kvstore, _update_params, _update_params_on_kvstore, load_checkpoint, BatchEndParam +from mxnet import metric +# from mxnet.module.executor_group import DataParallelExecutorGroup + +from .DataParallelExecutorGroup import DataParallelExecutorGroup +from mxnet import ndarray as nd +from mxnet import optimizer as opt +import sys + +class Module(BaseModule): + """Module is a basic module that wrap a `Symbol`. It is functionally the same + as the `FeedForward` model, except under the module API. + + Parameters + ---------- + symbol : Symbol + data_names : list of str + Default is `('data')` for a typical model used in image classification. + label_names : list of str + Default is `('softmax_label')` for a typical model used in image + classification. + logger : Logger + Default is `logging`. + context : Context or list of Context + Default is `cpu()`. + work_load_list : list of number + Default `None`, indicating uniform workload. + fixed_param_names: list of str + Default `None`, indicating no network parameters are fixed. + state_names : list of str + states are similar to data and label, but not provided by data iterator. 
+ Instead they are initialized to 0 and can be set by set_states() + """ + def __init__(self, symbol, data_names=('data',), label_names=('softmax_label',), + logger=logging, context=ctx.cpu(), work_load_list=None, + fixed_param_names=None, state_names=None): + super(Module, self).__init__(logger=logger) + + if isinstance(context, ctx.Context): + context = [context] + self._context = context + if work_load_list is None: + work_load_list = [1] * len(self._context) + assert len(work_load_list) == len(self._context) + self._work_load_list = work_load_list + + self._symbol = symbol + + data_names = list(data_names) if data_names is not None else [] + label_names = list(label_names) if label_names is not None else [] + state_names = list(state_names) if state_names is not None else [] + fixed_param_names = list(fixed_param_names) if fixed_param_names is not None else [] + + _check_input_names(symbol, data_names, "data", True) + _check_input_names(symbol, label_names, "label", False) + _check_input_names(symbol, state_names, "state", True) + _check_input_names(symbol, fixed_param_names, "fixed_param", True) + + arg_names = symbol.list_arguments() + input_names = data_names + label_names + state_names + self._param_names = [x for x in arg_names if x not in input_names] + self._fixed_param_names = fixed_param_names + self._aux_names = symbol.list_auxiliary_states() + self._data_names = data_names + self._label_names = label_names + self._state_names = state_names + self._output_names = symbol.list_outputs() + + self._arg_params = None + self._aux_params = None + self._params_dirty = False + + self._optimizer = None + self._kvstore = None + self._update_on_kvstore = None + self._updater = None + self._preload_opt_states = None + self._grad_req = None + + self._exec_group = None + self._data_shapes = None + self._label_shapes = None + + @staticmethod + def load(prefix, epoch, load_optimizer_states=False, **kwargs): + """Create a model from previously saved checkpoint. + + Parameters + ---------- + prefix : str + path prefix of saved model files. You should have + "prefix-symbol.json", "prefix-xxxx.params", and + optionally "prefix-xxxx.states", where xxxx is the + epoch number. + epoch : int + epoch to load. + load_optimizer_states : bool + whether to load optimizer states. Checkpoint needs + to have been made with save_optimizer_states=True. + data_names : list of str + Default is `('data')` for a typical model used in image classification. + label_names : list of str + Default is `('softmax_label')` for a typical model used in image + classification. + logger : Logger + Default is `logging`. + context : Context or list of Context + Default is `cpu()`. + work_load_list : list of number + Default `None`, indicating uniform workload. + fixed_param_names: list of str + Default `None`, indicating no network parameters are fixed. + """ + sym, args, auxs = load_checkpoint(prefix, epoch) + mod = Module(symbol=sym, **kwargs) + mod._arg_params = args + mod._aux_params = auxs + mod.params_initialized = True + if load_optimizer_states: + mod._preload_opt_states = '%s-%04d.states'%(prefix, epoch) + return mod + + def save_checkpoint(self, prefix, epoch, save_optimizer_states=False): + """Save current progress to checkpoint. + Use mx.callback.module_checkpoint as epoch_end_callback to save during training. 
+ + Parameters + ---------- + prefix : str + The file prefix to checkpoint to + epoch : int + The current epoch number + save_optimizer_states : bool + Whether to save optimizer states for continue training + """ + self._symbol.save('%s-symbol.json'%prefix) + param_name = '%s-%04d.params' % (prefix, epoch) + self.save_params(param_name) + logging.info('Saved checkpoint to \"%s\"', param_name) + if save_optimizer_states: + state_name = '%s-%04d.states' % (prefix, epoch) + self.save_optimizer_states(state_name) + logging.info('Saved optimizer state to \"%s\"', state_name) + + def _reset_bind(self): + """Internal function to reset binded state.""" + self.binded = False + self._exec_group = None + self._data_shapes = None + self._label_shapes = None + + @property + def data_names(self): + """A list of names for data required by this module.""" + return self._data_names + + @property + def label_names(self): + """A list of names for labels required by this module.""" + return self._label_names + + @property + def output_names(self): + """A list of names for the outputs of this module.""" + return self._output_names + + @property + def data_shapes(self): + """Get data shapes. + Returns + ------- + A list of `(name, shape)` pairs. + """ + assert self.binded + return self._data_shapes + + @property + def label_shapes(self): + """Get label shapes. + Returns + ------- + A list of `(name, shape)` pairs. The return value could be `None` if + the module does not need labels, or if the module is not binded for + training (in this case, label information is not available). + """ + assert self.binded + return self._label_shapes + + @property + def output_shapes(self): + """Get output shapes. + Returns + ------- + A list of `(name, shape)` pairs. + """ + assert self.binded + return self._exec_group.get_output_shapes() + + def get_params(self): + """Get current parameters. + Returns + ------- + `(arg_params, aux_params)`, each a dictionary of name to parameters (in + `NDArray`) mapping. + """ + assert self.binded and self.params_initialized + + if self._params_dirty: + self._sync_params_from_devices() + return (self._arg_params, self._aux_params) + + def init_params(self, initializer=Uniform(0.01), arg_params=None, aux_params=None, + allow_missing=False, force_init=False, allow_extra=False): + """Initialize the parameters and auxiliary states. + + Parameters + ---------- + initializer : Initializer + Called to initialize parameters if needed. + arg_params : dict + If not None, should be a dictionary of existing arg_params. Initialization + will be copied from that. + aux_params : dict + If not None, should be a dictionary of existing aux_params. Initialization + will be copied from that. + allow_missing : bool + If true, params could contain missing values, and the initializer will be + called to fill those missing params. + force_init : bool + If true, will force re-initialize even if already initialized. + """ + if self.params_initialized and not force_init: + warnings.warn("Parameters already initialized and force_init=False. 
" + "init_params call ignored.", stacklevel=2) + return + assert self.binded, 'call bind before initializing the parameters' + + def _impl(name, arr, cache): + """Internal helper for parameter initialization""" + if cache is not None: + if name in cache: + cache_arr = cache[name] + + # just in case the cached array is just the target itself + if cache_arr is not arr: + cache_arr.copyto(arr) + else: + if not allow_missing: + raise RuntimeError("%s is not presented" % name) + if initializer != None: + initializer(name, arr) + else: + initializer(name, arr) + + attrs = self._symbol.attr_dict() + for name, arr in self._arg_params.items(): + desc = InitDesc(name, attrs.get(name, None)) + _impl(desc, arr, arg_params) + + for name, arr in self._aux_params.items(): + desc = InitDesc(name, attrs.get(name, None)) + _impl(desc, arr, aux_params) + + self.params_initialized = True + self._params_dirty = False + + # copy the initialized parameters to devices + self._exec_group.set_params(self._arg_params, self._aux_params) + + def set_params(self, arg_params, aux_params, allow_missing=False, force_init=True): + """Assign parameter and aux state values. + + Parameters + ---------- + arg_params : dict + Dictionary of name to value (`NDArray`) mapping. + aux_params : dict + Dictionary of name to value (`NDArray`) mapping. + allow_missing : bool + If true, params could contain missing values, and the initializer will be + called to fill those missing params. + force_init : bool + If true, will force re-initialize even if already initialized. + + Examples + -------- + An example of setting module parameters:: + >>> sym, arg_params, aux_params = \ + >>> mx.model.load_checkpoint(model_prefix, n_epoch_load) + >>> mod.set_params(arg_params=arg_params, aux_params=aux_params) + """ + if not allow_missing: + self.init_params(initializer=None, arg_params=arg_params, aux_params=aux_params, + allow_missing=allow_missing, force_init=force_init) + return + + if self.params_initialized and not force_init: + warnings.warn("Parameters already initialized and force_init=False. " + "set_params call ignored.", stacklevel=2) + return + + self._exec_group.set_params(arg_params, aux_params) + + # because we didn't update self._arg_params, they are dirty now. + self._params_dirty = True + self.params_initialized = True + + def bind(self, data_shapes, label_shapes=None, for_training=True, + inputs_need_grad=False, force_rebind=False, shared_module=None, + grad_req='write'): + """Bind the symbols to construct executors. This is necessary before one + can perform computation with the module. + + Parameters + ---------- + data_shapes : list of (str, tuple) + Typically is `data_iter.provide_data`. + label_shapes : list of (str, tuple) + Typically is `data_iter.provide_label`. + for_training : bool + Default is `True`. Whether the executors should be bind for training. + inputs_need_grad : bool + Default is `False`. Whether the gradients to the input data need to be computed. + Typically this is not needed. But this might be needed when implementing composition + of modules. + force_rebind : bool + Default is `False`. This function does nothing if the executors are already + binded. But with this `True`, the executors will be forced to rebind. + shared_module : Module + Default is `None`. This is used in bucketing. When not `None`, the shared module + essentially corresponds to a different bucket -- a module with different symbol + but with the same sets of parameters (e.g. unrolled RNNs with different lengths). 
+ """ + # force rebinding is typically used when one want to switch from + # training to prediction phase. + if force_rebind: + self._reset_bind() + + if self.binded: + self.logger.warning('Already binded, ignoring bind()') + return + + self.for_training = for_training + self.inputs_need_grad = inputs_need_grad + self.binded = True + self._grad_req = grad_req + + if not for_training: + assert not inputs_need_grad + else: + pass + # this is not True, as some module might not contains a loss function + # that consumes the labels + # assert label_shapes is not None + + # self._data_shapes, self._label_shapes = _parse_data_desc( + # self.data_names, self.label_names, data_shapes, label_shapes) + self._data_shapes, self._label_shapes = zip(*[_parse_data_desc(self.data_names, self.label_names, data_shape, label_shape) + for data_shape, label_shape in zip(data_shapes, label_shapes)]) + if self._label_shapes.count(None) == len(self._label_shapes): + self._label_shapes = None + + if shared_module is not None: + assert isinstance(shared_module, Module) and \ + shared_module.binded and shared_module.params_initialized + shared_group = shared_module._exec_group + else: + shared_group = None + self._exec_group = DataParallelExecutorGroup(self._symbol, self._context, + self._work_load_list, self._data_shapes, + self._label_shapes, self._param_names, + for_training, inputs_need_grad, + shared_group, logger=self.logger, + fixed_param_names=self._fixed_param_names, + grad_req=grad_req, + state_names=self._state_names) + # self._total_exec_bytes = self._exec_group._total_exec_bytes + if shared_module is not None: + self.params_initialized = True + self._arg_params = shared_module._arg_params + self._aux_params = shared_module._aux_params + elif self.params_initialized: + # if the parameters are already initialized, we are re-binding + # so automatically copy the already initialized params + self._exec_group.set_params(self._arg_params, self._aux_params) + else: + assert self._arg_params is None and self._aux_params is None + param_arrays = [ + nd.zeros(x[0].shape, dtype=x[0].dtype) + for x in self._exec_group.param_arrays + ] + self._arg_params = {name:arr for name, arr in zip(self._param_names, param_arrays)} + + aux_arrays = [ + nd.zeros(x[0].shape, dtype=x[0].dtype) + for x in self._exec_group.aux_arrays + ] + self._aux_params = {name:arr for name, arr in zip(self._aux_names, aux_arrays)} + + if shared_module is not None and shared_module.optimizer_initialized: + self.borrow_optimizer(shared_module) + + + def reshape(self, data_shapes, label_shapes=None): + """Reshape the module for new input shapes. + + Parameters + ---------- + data_shapes : list of (str, tuple) + Typically is `data_iter.provide_data`. + label_shapes : list of (str, tuple) + Typically is `data_iter.provide_label`. + """ + assert self.binded + # self._data_shapes, self._label_shapes = _parse_data_desc( + # self.data_names, self.label_names, data_shapes, label_shapes) + self._data_shapes, self._label_shapes = zip(*[_parse_data_desc(self.data_names, self.label_names, data_shape, label_shape) + for data_shape, label_shape in zip(data_shapes, label_shapes)]) + + self._exec_group.reshape(self._data_shapes, self._label_shapes) + + + def init_optimizer(self, kvstore='local', optimizer='sgd', + optimizer_params=(('learning_rate', 0.01),), force_init=False): + """Install and initialize optimizers. + + Parameters + ---------- + kvstore : str or KVStore + Default `'local'`. 
+ optimizer : str or Optimizer + Default `'sgd'` + optimizer_params : dict + Default `(('learning_rate', 0.01),)`. The default value is not a dictionary, + just to avoid pylint warning of dangerous default values. + force_init : bool + Default `False`, indicating whether we should force re-initializing the + optimizer in the case an optimizer is already installed. + """ + assert self.binded and self.params_initialized + + if self.optimizer_initialized and not force_init: + self.logger.warning('optimizer already initialized, ignoring...') + return + + (kvstore, update_on_kvstore) = \ + _create_kvstore(kvstore, len(self._context), self._arg_params) + + batch_size = self._exec_group.batch_size + if kvstore and 'dist' in kvstore.type and '_sync' in kvstore.type: + batch_size *= kvstore.num_workers + rescale_grad = 1.0/batch_size + + if isinstance(optimizer, str): + idx2name = {} + if update_on_kvstore: + idx2name.update(enumerate(self._exec_group.param_names)) + else: + for k in range(len(self._context)): + idx2name.update({i*len(self._context)+k: n + for i, n in enumerate(self._exec_group.param_names)}) + optimizer_params = dict(optimizer_params) + if 'rescale_grad' not in optimizer_params: + optimizer_params['rescale_grad'] = rescale_grad + optimizer = opt.create(optimizer, + sym=self.symbol, param_idx2name=idx2name, + **optimizer_params) + else: + assert isinstance(optimizer, opt.Optimizer) + if optimizer.rescale_grad != rescale_grad: + #pylint: disable=no-member + warnings.warn( + "Optimizer created manually outside Module but rescale_grad " + + "is not normalized to 1.0/batch_size/num_workers (%s vs. %s). "%( + optimizer.rescale_grad, rescale_grad) + + "Is this intended?", stacklevel=2) + + self._optimizer = optimizer + self._kvstore = kvstore + self._update_on_kvstore = update_on_kvstore + self._updater = None + + if kvstore: + # copy initialized local parameters to kvstore + _initialize_kvstore(kvstore=kvstore, + param_arrays=self._exec_group.param_arrays, + arg_params=self._arg_params, + param_names=self._param_names, + update_on_kvstore=update_on_kvstore) + if update_on_kvstore: + kvstore.set_optimizer(self._optimizer) + else: + self._updater = opt.get_updater(optimizer) + + self.optimizer_initialized = True + + if self._preload_opt_states is not None: + self.load_optimizer_states(self._preload_opt_states) + self._preload_opt_states = None + + def borrow_optimizer(self, shared_module): + """Borrow optimizer from a shared module. Used in bucketing, where exactly the same + optimizer (esp. kvstore) is used. + + Parameters + ---------- + shared_module : Module + """ + assert shared_module.optimizer_initialized + self._optimizer = shared_module._optimizer + self._kvstore = shared_module._kvstore + self._update_on_kvstore = shared_module._update_on_kvstore + self._updater = shared_module._updater + self.optimizer_initialized = True + + def forward(self, data_batch, is_train=None): + """Forward computation. + + Parameters + ---------- + data_batch : DataBatch + Could be anything with similar API implemented. + is_train : bool + Default is `None`, which means `is_train` takes the value of `self.for_training`. + """ + assert self.binded and self.params_initialized + self._exec_group.forward(data_batch, is_train) + + def backward(self, out_grads=None): + """Backward computation. + + Parameters + ---------- + out_grads : NDArray or list of NDArray, optional + Gradient on the outputs to be propagated back. 
+ This parameter is only needed when bind is called + on outputs that are not a loss function. + """ + assert self.binded and self.params_initialized + self._exec_group.backward(out_grads=out_grads) + + def update(self): + """Update parameters according to the installed optimizer and the gradients computed + in the previous forward-backward batch. + """ + assert self.binded and self.params_initialized and self.optimizer_initialized + + self._params_dirty = True + if self._update_on_kvstore: + try: + _update_params_on_kvstore(self._exec_group.param_arrays, + self._exec_group.grad_arrays, + self._kvstore) + except: + _update_params_on_kvstore(self._exec_group.param_arrays, + self._exec_group.grad_arrays, + self._kvstore, param_names=self._exec_group.param_names) + else: + _update_params(self._exec_group.param_arrays, + self._exec_group.grad_arrays, + updater=self._updater, + num_device=len(self._context), + kvstore=self._kvstore) + + def get_outputs(self, merge_multi_context=True): + """Get outputs of the previous forward computation. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.binded and self.params_initialized + return self._exec_group.get_outputs(merge_multi_context=merge_multi_context) + + def get_input_grads(self, merge_multi_context=True): + """Get the gradients with respect to the inputs of the module. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[grad1, grad2]`. Otherwise, it + is like `[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.binded and self.params_initialized and self.inputs_need_grad + return self._exec_group.get_input_grads(merge_multi_context=merge_multi_context) + + def get_states(self, merge_multi_context=True): + """Get states from all devices + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the states + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.binded and self.params_initialized + return self._exec_group.get_states(merge_multi_context=merge_multi_context) + + def set_states(self, states=None, value=None): + """Set value for states. Only one of states & value can be specified. + + Parameters + ---------- + states : list of list of NDArrays + source states arrays formatted like [[state1_dev1, state1_dev2], + [state2_dev1, state2_dev2]]. + value : number + a single scalar value for all state arrays. 
+ """ + assert self.binded and self.params_initialized + self._exec_group.set_states(states, value) + + def update_metric(self, eval_metric, labels): + """Evaluate and accumulate evaluation metric on outputs of the last forward computation. + + Parameters + ---------- + eval_metric : EvalMetric + labels : list of NDArray + Typically `data_batch.label`. + """ + self._exec_group.update_metric(eval_metric, labels) + + def _sync_params_from_devices(self): + """Synchronize parameters from devices to CPU. This function should be called after + calling `update` that updates the parameters on the devices, before one can read the + latest parameters from `self._arg_params` and `self._aux_params`. + """ + self._exec_group.get_params(self._arg_params, self._aux_params) + self._params_dirty = False + + def save_optimizer_states(self, fname): + """Save optimizer (updater) state to file + + Parameters + ---------- + fname : str + Path to output states file. + """ + assert self.optimizer_initialized + + if self._update_on_kvstore: + self._kvstore.save_optimizer_states(fname) + else: + with open(fname, 'wb') as fout: + fout.write(self._updater.get_states()) + + def load_optimizer_states(self, fname): + """Load optimizer (updater) state from file + + Parameters + ---------- + fname : str + Path to input states file. + """ + assert self.optimizer_initialized + + if self._update_on_kvstore: + self._kvstore.load_optimizer_states(fname) + else: + self._updater.set_states(open(fname, 'rb').read()) + + def install_monitor(self, mon): + """ Install monitor on all executors """ + assert self.binded + self._exec_group.install_monitor(mon) + + + + +class MutableModule(BaseModule): + """A mutable module is a module that supports variable input data. + + Parameters + ---------- + symbol : Symbol + data_names : list of str + label_names : list of str + logger : Logger + context : Context or list of Context + work_load_list : list of number + max_data_shapes : list of (name, shape) tuple, designating inputs whose shape vary + max_label_shapes : list of (name, shape) tuple, designating inputs whose shape vary + fixed_param_prefix : list of str, indicating fixed parameters + """ + def __init__(self, symbol, data_names, label_names, + logger=logging, context=ctx.cpu(), work_load_list=None, + max_data_shapes=None, max_label_shapes=None, fixed_param_prefix=None): + super(MutableModule, self).__init__(logger=logger) + self._symbol = symbol + self._data_names = data_names + self._label_names = label_names + self._context = context + self._work_load_list = work_load_list + + self._curr_module = None + self._max_data_shapes = max_data_shapes + self._max_label_shapes = max_label_shapes + self._fixed_param_prefix = fixed_param_prefix + + fixed_param_names = list() + if fixed_param_prefix is not None: + for name in self._symbol.list_arguments(): + for prefix in self._fixed_param_prefix: + if prefix in name: + fixed_param_names.append(name) + self._fixed_param_names = fixed_param_names + self._preload_opt_states = None + + def _reset_bind(self): + self.binded = False + self._curr_module = None + + @property + def data_names(self): + return self._data_names + + @property + def output_names(self): + return self._symbol.list_outputs() + + @property + def data_shapes(self): + assert self.binded + return self._curr_module.data_shapes + + @property + def label_shapes(self): + assert self.binded + return self._curr_module.label_shapes + + @property + def output_shapes(self): + assert self.binded + return self._curr_module.output_shapes + + 
def get_params(self): + assert self.binded and self.params_initialized + return self._curr_module.get_params() + + def init_params(self, initializer=Uniform(0.01), arg_params=None, aux_params=None, + allow_missing=False, force_init=False, allow_extra=False): + if self.params_initialized and not force_init: + return + assert self.binded, 'call bind before initializing the parameters' + self._curr_module.init_params(initializer=initializer, arg_params=arg_params, + aux_params=aux_params, allow_missing=allow_missing, + force_init=force_init) + self.params_initialized = True + + def bind(self, data_shapes, label_shapes=None, for_training=True, + inputs_need_grad=False, force_rebind=False, shared_module=None, grad_req='write'): + # in case we already initialized params, keep it + if self.params_initialized: + arg_params, aux_params = self.get_params() + + # force rebinding is typically used when one want to switch from + # training to prediction phase. + if force_rebind: + self._reset_bind() + + if self.binded: + self.logger.warning('Already binded, ignoring bind()') + return + + assert shared_module is None, 'shared_module for MutableModule is not supported' + + self.for_training = for_training + self.inputs_need_grad = inputs_need_grad + self.binded = True + + max_shapes_dict = dict() + if self._max_data_shapes is not None: + max_shapes_dict.update(dict(self._max_data_shapes[0])) + if self._max_label_shapes is not None: + max_shapes_dict.update(dict(self._max_label_shapes[0])) + + max_data_shapes = list() + for name, shape in data_shapes[0]: + if name in max_shapes_dict: + max_data_shapes.append((name, max_shapes_dict[name])) + else: + max_data_shapes.append((name, shape)) + + max_label_shapes = list() + if not label_shapes.count(None) == len(label_shapes): + for name, shape in label_shapes[0]: + if name in max_shapes_dict: + max_label_shapes.append((name, max_shapes_dict[name])) + else: + max_label_shapes.append((name, shape)) + + if len(max_label_shapes) == 0: + max_label_shapes = None + + module = Module(self._symbol, self._data_names, self._label_names, logger=self.logger, + context=self._context, work_load_list=self._work_load_list, + fixed_param_names=self._fixed_param_names) + module.bind([max_data_shapes for _ in range(len(self._context))], [max_label_shapes for _ in range(len(self._context))], + for_training, inputs_need_grad, force_rebind=False, shared_module=None) + self._curr_module = module + + # copy back saved params, if already initialized + if self.params_initialized: + self.set_params(arg_params, aux_params) + + def save_checkpoint(self, prefix, epoch, save_optimizer_states=False): + """Save current progress to checkpoint. + Use mx.callback.module_checkpoint as epoch_end_callback to save during training. 
+ + Parameters + ---------- + prefix : str + The file prefix to checkpoint to + epoch : int + The current epoch number + save_optimizer_states : bool + Whether to save optimizer states for continue training + """ + self._curr_module.save_checkpoint(prefix, epoch, save_optimizer_states) + + def init_optimizer(self, kvstore='local', optimizer='sgd', + optimizer_params=(('learning_rate', 0.01),), force_init=False): + assert self.binded and self.params_initialized + if self.optimizer_initialized and not force_init: + self.logger.warning('optimizer already initialized, ignoring.') + return + + self._curr_module._preload_opt_states = self._preload_opt_states + self._curr_module.init_optimizer(kvstore, optimizer, optimizer_params, + force_init=force_init) + self.optimizer_initialized = True + + def fit(self, train_data, eval_data=None, eval_metric='acc', + epoch_end_callback=None, batch_end_callback=None, kvstore='local', + optimizer='sgd', optimizer_params=(('learning_rate', 0.01),), + eval_end_callback=None, + eval_batch_end_callback=None, initializer=Uniform(0.01), + arg_params=None, aux_params=None, allow_missing=False, + force_rebind=False, force_init=False, begin_epoch=0, num_epoch=None, + validation_metric=None, monitor=None, prefix=None, state=None): + """Train the module parameters. + + Parameters + ---------- + train_data : DataIter + eval_data : DataIter + If not `None`, will be used as validation set and evaluate the performance + after each epoch. + eval_metric : str or EvalMetric + Default `'acc'`. The performance measure used to display during training. + epoch_end_callback : function or list of function + Each callback will be called with the current `epoch`, `symbol`, `arg_params` + and `aux_params`. + batch_end_callback : function or list of function + Each callback will be called with a `BatchEndParam`. + kvstore : str or KVStore + Default `'local'`. + optimizer : str or Optimizer + Default `'sgd'` + optimizer_params : dict + Default `(('learning_rate', 0.01),)`. The parameters for the optimizer constructor. + The default value is not a `dict`, just to avoid pylint warning on dangerous + default values. + eval_end_callback : function or list of function + These will be called at the end of each full evaluation, with the metrics over + the entire evaluation set. + eval_batch_end_callback : function or list of function + These will be called at the end of each minibatch during evaluation + initializer : Initializer + Will be called to initialize the module parameters if not already initialized. + arg_params : dict + Default `None`, if not `None`, should be existing parameters from a trained + model or loaded from a checkpoint (previously saved model). In this case, + the value here will be used to initialize the module parameters, unless they + are already initialized by the user via a call to `init_params` or `fit`. + `arg_params` has higher priority to `initializer`. + aux_params : dict + Default `None`. Similar to `arg_params`, except for auxiliary states. + allow_missing : bool + Default `False`. Indicate whether we allow missing parameters when `arg_params` + and `aux_params` are not `None`. If this is `True`, then the missing parameters + will be initialized via the `initializer`. + force_rebind : bool + Default `False`. Whether to force rebinding the executors if already binded. + force_init : bool + Default `False`. Indicate whether we should force initialization even if the + parameters are already initialized. + begin_epoch : int + Default `0`. 
Indicate the starting epoch. Usually, if we are resuming from a + checkpoint saved at a previous training phase at epoch N, then we should specify + this value as N+1. + num_epoch : int + Number of epochs to run training. + + Examples + -------- + An example of using fit for training:: + >>> #Assume training dataIter and validation dataIter are ready + >>> mod.fit(train_data=train_dataiter, eval_data=val_dataiter, + optimizer_params={'learning_rate':0.01, 'momentum': 0.9}, + num_epoch=10) + """ + assert num_epoch is not None, 'please specify number of epochs' + + self.bind(data_shapes=train_data.provide_data, label_shapes=train_data.provide_label, + for_training=True, force_rebind=force_rebind) + if monitor is not None: + self.install_monitor(monitor) + self.init_params(initializer=initializer, arg_params=arg_params, aux_params=aux_params, + allow_missing=allow_missing, force_init=force_init) + self.init_optimizer(kvstore=kvstore, optimizer=optimizer, + optimizer_params=optimizer_params) + if state is not None: + self._curr_module.load_optimizer_states(state) + + if validation_metric is None: + validation_metric = eval_metric + if not isinstance(eval_metric, metric.EvalMetric): + eval_metric = metric.create(eval_metric) + + ################################################################################ + # training loop + ################################################################################ + for epoch in range(begin_epoch, num_epoch): + tic = time.time() + eval_metric.reset() + ct = 0 + for nbatch, data_batch in enumerate(train_data): + if monitor is not None: + monitor.tic() + self.forward_backward(data_batch) + self.update() + ct = ct + 1 + # print 'ct: ', ct + # pdb.set_trace() + if ct % 50 == 0: + ct = 0 + self.update_metric(eval_metric, data_batch.label) + sys.stdout.flush() + if monitor is not None: + monitor.toc_print() + + if batch_end_callback is not None: + batch_end_params = BatchEndParam(epoch=epoch, nbatch=nbatch, + eval_metric=eval_metric, + locals=locals()) + for callback in _as_list(batch_end_callback): + callback(batch_end_params) + + # one epoch of training is finished + for name, val in eval_metric.get_name_value(): + self.logger.info('Epoch[%d] Train-%s=%f', epoch, name, val) + toc = time.time() + self.logger.info('Epoch[%d] Time cost=%.3f', epoch, (toc-tic)) + + # sync aux params across devices + arg_params, aux_params = self.get_params() + self.set_params(arg_params, aux_params) + + if epoch_end_callback is not None: + for callback in _as_list(epoch_end_callback): + callback(epoch, self.symbol, arg_params, aux_params) + + #---------------------------------------- + # evaluation on validation set + if eval_data: + res = self.score(eval_data, validation_metric, + score_end_callback=eval_end_callback, + batch_end_callback=eval_batch_end_callback, epoch=epoch) + #TODO: pull this into default + for name, val in res: + self.logger.info('Epoch[%d] Validation-%s=%f', epoch, name, val) + + # end of 1 epoch, reset the data-iter for another epoch + train_data.reset() + + + def forward(self, data_batch, is_train=None): + assert self.binded and self.params_initialized + + # get current_shapes + if self._curr_module.label_shapes is not None: + current_shapes = [dict(self._curr_module.data_shapes[i] + self._curr_module.label_shapes[i]) for i in range(len(self._context))] + else: + current_shapes = [dict(self._curr_module.data_shapes[i]) for i in range(len(self._context))] + + # get input_shapes + if is_train: + input_shapes = [dict(data_batch.provide_data[i] + 
data_batch.provide_label[i]) for i in range(len(self._context))] + else: + input_shapes = [dict(data_batch.provide_data[i]) for i in range(len(data_batch.provide_data))] + + # decide if shape changed + shape_changed = len(current_shapes) != len(input_shapes) + for pre, cur in zip(current_shapes, input_shapes): + for k, v in pre.items(): + if v != cur[k]: + shape_changed = True + + if shape_changed: + # self._curr_module.reshape(data_batch.provide_data, data_batch.provide_label) + module = Module(self._symbol, self._data_names, self._label_names, + logger=self.logger, context=[self._context[i] for i in range(len(data_batch.provide_data))], + work_load_list=self._work_load_list, + fixed_param_names=self._fixed_param_names) + module.bind(data_batch.provide_data, data_batch.provide_label, self._curr_module.for_training, + self._curr_module.inputs_need_grad, force_rebind=False, + shared_module=self._curr_module) + self._curr_module = module + + self._curr_module.forward(data_batch, is_train=is_train) + + def backward(self, out_grads=None): + assert self.binded and self.params_initialized + self._curr_module.backward(out_grads=out_grads) + + def update(self): + assert self.binded and self.params_initialized and self.optimizer_initialized + self._curr_module.update() + + def get_outputs(self, merge_multi_context=True): + assert self.binded and self.params_initialized + return self._curr_module.get_outputs(merge_multi_context=merge_multi_context) + def get_input_grads(self, merge_multi_context=True): + assert self.binded and self.params_initialized and self.inputs_need_grad + return self._curr_module.get_input_grads(merge_multi_context=merge_multi_context) + + def update_metric(self, eval_metric, labels): + assert self.binded and self.params_initialized + self._curr_module.update_metric(eval_metric, labels) + + def install_monitor(self, mon): + """ Install monitor on all executors """ + assert self.binded + self._curr_module.install_monitor(mon) diff --git a/faster_rcnn/core/rcnn.py b/faster_rcnn/core/rcnn.py new file mode 100644 index 0000000..10db6e6 --- /dev/null +++ b/faster_rcnn/core/rcnn.py @@ -0,0 +1,342 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +""" +Fast R-CNN: +data = + {'data': [num_images, c, h, w], + 'rois': [num_rois, 5]} +label = + {'label': [num_rois], + 'bbox_target': [num_rois, 4 * num_classes], + 'bbox_weight': [num_rois, 4 * num_classes]} +roidb extended format [image_index] + ['image', 'height', 'width', 'flipped', + 'boxes', 'gt_classes', 'gt_overlaps', 'max_classes', 'max_overlaps', 'bbox_targets'] +""" + +import numpy as np +import numpy.random as npr + +from utils.image import get_image, tensor_vstack +from bbox.bbox_transform import bbox_overlaps, bbox_transform +from bbox.bbox_regression import expand_bbox_regression_targets +from bbox.bbox_regression import expand_bbox_regression_targets, expand_bbox_regression_targets_base, expand_bbox_regression_targets_base_new +from bbox.bbox_transform import * +from dota_kit.poly_nms_gpu.poly_overlaps import poly_overlaps +from dota_kit.poly_nms_gpu.nms import poly_overlaps_nms_wrapper + +def 
get_rcnn_testbatch(roidb, cfg): + """ + return a dict of testbatch + :param roidb: ['image', 'flipped'] + ['boxes'] + :return: data, label, im_info + """ + # assert len(roidb) == 1, 'Single batch only' + imgs, roidb = get_image(roidb, cfg) + im_array = imgs + im_info = [np.array([roidb[i]['im_info']], dtype=np.float32) for i in range(len(roidb))] + + im_rois = [roidb[i]['boxes'] for i in range(len(roidb))] + rois = im_rois + rois_array = [np.hstack((0 * np.ones((rois[i].shape[0], 1)), rois[i])) for i in range(len(rois))] + + data = [{'data': im_array[i], + 'rois': rois_array[i]} for i in range(len(roidb))] + label = {} + + return data, label, im_info + + +def get_rcnn_batch(roidb, cfg): + """ + return a dict of multiple images + :param roidb: a list of dict, whose length controls batch size + ['images', 'flipped'] + ['gt_boxes', 'boxes', 'gt_overlap'] => ['bbox_targets'] + :return: data, label + """ + num_images = len(roidb) + imgs, roidb = get_image(roidb, cfg) + im_array = tensor_vstack(imgs) + + assert cfg.TRAIN.BATCH_ROIS == -1 or cfg.TRAIN.BATCH_ROIS % cfg.TRAIN.BATCH_IMAGES == 0, \ + 'BATCHIMAGES {} must divide BATCH_ROIS {}'.format(cfg.TRAIN.BATCH_IMAGES, cfg.TRAIN.BATCH_ROIS) + + if cfg.TRAIN.BATCH_ROIS == -1: + rois_per_image = np.sum([iroidb['boxes'].shape[0] for iroidb in roidb]) + fg_rois_per_image = rois_per_image + else: + rois_per_image = cfg.TRAIN.BATCH_ROIS / cfg.TRAIN.BATCH_IMAGES + fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image).astype(int) + + rois_array = list() + labels_array = list() + bbox_targets_array = list() + bbox_weights_array = list() + + for im_i in range(num_images): + roi_rec = roidb[im_i] + + # infer num_classes from gt_overlaps + num_classes = roi_rec['gt_overlaps'].shape[1] + + # label = class RoI has max overlap with + rois = roi_rec['boxes'] + labels = roi_rec['max_classes'] + overlaps = roi_rec['max_overlaps'] + bbox_targets = roi_rec['bbox_targets'] + + im_rois, labels, bbox_targets, bbox_weights = \ + sample_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels, overlaps, bbox_targets) + + # project im_rois + # do not round roi + rois = im_rois + batch_index = im_i * np.ones((rois.shape[0], 1)) + rois_array_this_image = np.hstack((batch_index, rois)) + rois_array.append(rois_array_this_image) + + # add labels + labels_array.append(labels) + bbox_targets_array.append(bbox_targets) + bbox_weights_array.append(bbox_weights) + + rois_array = np.array(rois_array) + labels_array = np.array(labels_array) + bbox_targets_array = np.array(bbox_targets_array) + bbox_weights_array = np.array(bbox_weights_array) + + data = {'data': im_array, + 'rois': rois_array} + label = {'label': labels_array, + 'bbox_target': bbox_targets_array, + 'bbox_weight': bbox_weights_array} + + return data, label + + +def sample_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, bbox_targets=None, gt_boxes=None): + """ + generate random sample of ROIs comprising foreground and background examples + :param rois: all_rois [n, 4]; e2e: [n, 5] with batch_index + :param fg_rois_per_image: foreground roi number + :param rois_per_image: total roi number + :param num_classes: number of classes + :param labels: maybe precomputed + :param overlaps: maybe precomputed (max_overlaps) + :param bbox_targets: maybe precomputed + :param gt_boxes: optional for e2e [n, 5] (x1, y1, x2, y2, cls) + :return: (labels, rois, bbox_targets, bbox_weights) + """ + if labels is None: + overlaps = bbox_overlaps(rois[:, 
1:].astype(np.float), gt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = gt_boxes[gt_assignment, 4] + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + + # load or compute bbox_target + if bbox_targets is not None: + bbox_target_data = bbox_targets[keep_indexes, :] + else: + targets = bbox_transform(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :4]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets(bbox_target_data, num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_rotbox_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None): + """ + + :param rois: al_rois [n, 4]; e2e [n, 5] with batch_index + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 9] (x1, y1, ..., x4, y4, cls) + :return: + """ + if labels is None: + # hgt_boxes = np.hstack((bbox_poly2hbb(gt_boxes[:, :-1]), gt_boxes[:, -1])) + hgt_boxes = bbox_poly2hbb(gt_boxes) + ## rois: (xmin, ymin, xmax, ymax) + overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = hgt_boxes[gt_assignment, 4] + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, 
size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + # pdb.set_trace() + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + # pdb.set_trace() + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + targets = dbbox_transform2_warp(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + # pdb.set_trace() + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base(bbox_target_data, num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_Rrois(Rrois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None, device_id=0): + """ + + :param Rrois: all_rois [n, 5]; e2e [n, 6] with batch_index (x_ctr, y_ctr, w, h, theta) + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 6] (x_ctr, y_ctr, w, h, theta, cls) + :return: + """ + if labels is None: + ## rois: (xmin, ymin, xmax, ymax) + # poly_overlaps = poly_overlaps_nms_wrapper(Rrois.context.device_id) + # TODO: optimize it by 1. first use h_overlaps 2. 
for h_overlaps > 0, we use the poly_overlaps + overlaps = poly_overlaps(Rrois[:, 1:].astype(np.float32), gt_boxes[:, :5].astype(np.float32), device_id) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = gt_boxes[gt_assignment, 5] + + # pdb.set_trace() + # foreground RoI with FG_THRESH overlap + # fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + fg_indexes = np.where(overlaps >= cfg.TRAIN.RRoI_FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(Rrois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(Rrois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + # pdb.set_trace() + labels[fg_rois_per_this_image:] = 0 + Rrois = Rrois[keep_indexes] + # pdb.set_trace() + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + # targets = dbbox_transform2(Rrois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :5]) + targets = dbbox_transform2_best_match_warp(Rrois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :5]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + # targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + # / np.array(cfg.TRAIN.BBOX_STDS)) + targets = ((targets - np.array(cfg.TRAIN.RRoI_BBOX_STDS)) + / np.array(cfg.TRAIN.RRoI_BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + # pdb.set_trace() + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base_new(bbox_target_data, num_classes, cfg.network.RRoI_CLASS_AGNOSTIC) + + return Rrois, labels, bbox_targets, bbox_weights \ No newline at end of file diff --git a/faster_rcnn/core/tester.py b/faster_rcnn/core/tester.py new file mode 100644 index 0000000..0cd39ea --- /dev/null +++ b/faster_rcnn/core/tester.py @@ -0,0 +1,652 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import cPickle +import os +import time +import mxnet as mx 
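The three samplers above (`sample_rois`-style, `sample_rotbox_rois`, `sample_Rrois`) share one selection scheme: RoIs whose best ground-truth overlap reaches `FG_THRESH` become foreground, RoIs falling in `[BG_THRESH_LO, BG_THRESH_HI)` become background, both pools are subsampled without replacement, and the minibatch is padded back to a fixed `rois_per_image` (the padded tail is later relabelled as background). Note that `sample_Rrois` normalizes its regression targets as `(targets - RRoI_BBOX_STDS) / RRoI_BBOX_STDS`; judging from the commented-out lines just above it, the subtracted term was presumably meant to be a means array. Below is a minimal numpy sketch of the shared sampling step only; the thresholds, fractions, and minibatch size are hypothetical stand-ins for the `cfg.TRAIN.*` values, not the project's configuration.
```python
import numpy as np
import numpy.random as npr

def sample_fg_bg(overlaps, fg_thresh=0.5, bg_thresh_hi=0.5, bg_thresh_lo=0.0,
                 rois_per_image=128, fg_fraction=0.25):
    """Sketch of the shared FG/BG sampling scheme (hypothetical thresholds)."""
    fg_rois_per_image = int(round(fg_fraction * rois_per_image))

    # foreground: best ground-truth overlap above the FG threshold
    fg_inds = np.where(overlaps >= fg_thresh)[0]
    fg_this = min(fg_rois_per_image, fg_inds.size)
    if fg_inds.size > fg_this:
        fg_inds = npr.choice(fg_inds, size=fg_this, replace=False)

    # background: overlap inside [bg_thresh_lo, bg_thresh_hi)
    bg_inds = np.where((overlaps < bg_thresh_hi) & (overlaps >= bg_thresh_lo))[0]
    bg_this = min(rois_per_image - fg_this, bg_inds.size)
    if bg_inds.size > bg_this:
        bg_inds = npr.choice(bg_inds, size=bg_this, replace=False)

    keep = np.append(fg_inds, bg_inds)
    # pad with arbitrary RoIs so every minibatch has exactly rois_per_image entries;
    # labels past fg_this are overwritten with 0 (background) by the caller
    while keep.size < rois_per_image:
        gap = min(overlaps.size, rois_per_image - keep.size)
        keep = np.append(keep, npr.choice(np.arange(overlaps.size), size=gap, replace=False))
    return keep, fg_this

overlaps = npr.rand(300)   # stand-in for each RoI's max IoU with the ground truth
keep, n_fg = sample_fg_bg(overlaps)
print(keep.shape)
```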
+import numpy as np + +from module import MutableModule +from utils import image +from bbox.bbox_transform import bbox_pred, clip_boxes +from nms.nms import py_nms_wrapper, cpu_nms_wrapper, gpu_nms_wrapper +from utils.PrefetchingIter import PrefetchingIter +from bbox.bbox_transform import bbox_pred, dbbox_pred, clip_boxes, clip_polys, dbbox_transform2_inv_warp, dbbox_transform2_warp +from bbox.bbox_transform import polygonToRotRectangle_batch, RotBox2Polys_multi_class +from bbox.bbox_transform import dbboxtransform3_inv_warp, xyhs2polys, xyhs2polys_muli_class +from bbox.bbox_transform import dbbox_transform2_inv, dbbox_transform2_inv_new, RotBox2Polys +from dota_kit.ResultMerge import py_cpu_nms_poly, py_cpu_nms_poly_fast +from poly_nms_gpu.nms import poly_gpu_nms_wrapper +import cv2 +import pdb + +DEBUG = False + +class Predictor(object): + def __init__(self, symbol, data_names, label_names, + context=mx.cpu(), max_data_shapes=None, + provide_data=None, provide_label=None, + arg_params=None, aux_params=None): + self._mod = MutableModule(symbol, data_names, label_names, + context=context, max_data_shapes=max_data_shapes) + self._mod.bind(provide_data, provide_label, for_training=False) + self._mod.init_params(arg_params=arg_params, aux_params=aux_params) + + def predict(self, data_batch): + self._mod.forward(data_batch) + # [dict(zip(self._mod.output_names, _)) for _ in zip(*self._mod.get_outputs(merge_multi_context=False))] + return [dict(zip(self._mod.output_names, _)) for _ in zip(*self._mod.get_outputs(merge_multi_context=False))] + + +def im_proposal(predictor, data_batch, data_names, scales): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, data_batch.data[i])) for i in xrange(len(data_batch.data))] + scores_all = [] + boxes_all = [] + + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + # drop the batch index + boxes = output['rois_output'].asnumpy()[:, 1:] + scores = output['rois_score'].asnumpy() + + # transform to original scale + boxes = boxes / scale + scores_all.append(scores) + boxes_all.append(boxes) + + return scores_all, boxes_all, data_dict_all + + +def generate_proposals(predictor, test_data, imdb, cfg, vis=False, thresh=0.): + """ + Generate detections results using RPN. 
+ :param predictor: Predictor + :param test_data: data iterator, must be non-shuffled + :param imdb: image database + :param vis: controls visualization + :param thresh: thresh for valid detections + :return: list of detected boxes + """ + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + idx = 0 + t = time.time() + imdb_boxes = list() + original_boxes = list() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + scores_all, boxes_all, data_dict_all = im_proposal(predictor, data_batch, data_names, scales) + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict, scale) in enumerate(zip(scores_all, boxes_all, data_dict_all, scales)): + # assemble proposals + dets = np.hstack((boxes, scores)) + original_boxes.append(dets) + + # filter proposals + keep = np.where(dets[:, 4:] > thresh)[0] + dets = dets[keep, :] + imdb_boxes.append(dets) + + if vis: + vis_all_detection(data_dict['data'].asnumpy(), [dets], ['obj'], scale, cfg) + + print 'generating %d/%d' % (idx + 1, imdb.num_images), 'proposal %d' % (dets.shape[0]), \ + 'data %.4fs net %.4fs' % (t1, t2 / test_data.batch_size) + idx += 1 + + + assert len(imdb_boxes) == imdb.num_images, 'calculations not complete' + + # save results + rpn_folder = os.path.join(imdb.result_path, 'rpn_data') + if not os.path.exists(rpn_folder): + os.mkdir(rpn_folder) + + rpn_file = os.path.join(rpn_folder, imdb.name + '_rpn.pkl') + with open(rpn_file, 'wb') as f: + cPickle.dump(imdb_boxes, f, cPickle.HIGHEST_PROTOCOL) + + if thresh > 0: + full_rpn_file = os.path.join(rpn_folder, imdb.name + '_full_rpn.pkl') + with open(full_rpn_file, 'wb') as f: + cPickle.dump(original_boxes, f, cPickle.HIGHEST_PROTOCOL) + + print 'wrote rpn proposals to {}'.format(rpn_file) + return imdb_boxes + + +def im_detect(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + rois = output['rois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0] + + # post processing + pred_boxes = bbox_pred(rois, bbox_deltas) + pred_boxes = clip_boxes(pred_boxes, im_shape[-2:]) + + # we used scaled image & roi to train, so it is necessary to transform them back + pred_boxes = pred_boxes / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_boxes) + return scores_all, pred_boxes_all, data_dict_all + + +def pred_eval(predictor, test_data, imdb, cfg, vis=False, thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 
'rb') as fid: + all_boxes = cPickle.load(fid) + info_str = imdb.evaluate_detections(all_boxes) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected into: + # all_boxes[cls][image] = N x 5 array of detections in + # (x1, y1, x2, y2, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + scores_all, boxes_all, data_dict_all = im_detect(predictor, data_batch, data_names, scales, cfg) + + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 4:8] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 4:(j + 1) * 4] + cls_dets = np.hstack((cls_boxes, cls_scores)) + keep = nms(cls_dets) + all_boxes[j][idx+delta] = cls_dets[keep, :] + + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + if vis: + boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + info_str = imdb.evaluate_detections(all_boxes) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + + +def vis_all_detection(im_array, detections, class_names, scale, cfg, threshold=1e-3): + """ + visualize all detections in one image + :param im_array: [b=1 c h w] in rgb + :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ] + :param class_names: list of names in imdb + :param scale: visualize the scaled image + :return: + """ + import matplotlib.pyplot as plt + import random + im = image.transform_inverse(im_array, cfg.network.PIXEL_MEANS) + plt.imshow(im) + for j, name in enumerate(class_names): + if name == '__background__': + continue + color = (random.random(), random.random(), 
random.random()) # generate a random color + dets = detections[j] + for det in dets: + bbox = det[:4] * scale + score = det[-1] + if score < threshold: + continue + rect = plt.Rectangle((bbox[0], bbox[1]), + bbox[2] - bbox[0], + bbox[3] - bbox[1], fill=False, + edgecolor=color, linewidth=3.5) + plt.gca().add_patch(rect) + plt.gca().text(bbox[0], bbox[1] - 2, + '{:s} {:.3f}'.format(name, score), + bbox=dict(facecolor=color, alpha=0.5), fontsize=12, color='white') + plt.show() + + +def draw_all_detection(im_array, detections, class_names, scale, cfg, threshold=1e-1): + """ + visualize all detections in one image + :param im_array: [b=1 c h w] in rgb + :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ] + :param class_names: list of names in imdb + :param scale: visualize the scaled image + :return: + """ + import cv2 + import random + color_white = (255, 255, 255) + im = image.transform_inverse(im_array, cfg.network.PIXEL_MEANS) + # change to bgr + im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) + for j, name in enumerate(class_names): + if name == '__background__': + continue + color = (random.randint(0, 256), random.randint(0, 256), random.randint(0, 256)) # generate a random color + dets = detections[j] + for det in dets: + bbox = det[:4] * scale + score = det[-1] + if score < threshold: + continue + bbox = map(int, bbox) + cv2.rectangle(im, (bbox[0], bbox[1]), (bbox[2], bbox[3]), color=color, thickness=2) + cv2.putText(im, '%s %.3f' % (class_names[j], score), (bbox[0], bbox[1] + 10), + color=color_white, fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + return im + +def im_detect_rotbox(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + rois = output['rois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0] + + pred_boxes = dbbox_transform2_inv_warp(rois, bbox_deltas) + pred_polys = RotBox2Polys_multi_class(pred_boxes) + pred_polys = clip_polys(pred_polys, im_shape[-2:]) + + pred_polys = pred_polys / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_polys) + + return scores_all, pred_boxes_all, data_dict_all + +def pred_eval_dota_rotbox(predictor, test_data, imdb, cfg, vis=False, draw=False, + thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + # ignore_cache = True + # pdb.set_trace() + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + # imdb.count_ar() + #imdb.check_transform() + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not 
test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + #nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected into: + # all_boxes[cls][image] = N x 9 array of detections in + # (x1, y1, x2, y2, x3, y3, x4, y4, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + # scores_all, boxes_all, data_dict_all= im_detect_poly(predictor, data_batch, data_names, scales, cfg) + scores_all, boxes_all, data_dict_all = im_detect_rotbox(predictor, data_batch, data_names, scales, cfg) + # pdb.set_trace() + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + # idx = int(data_dict['im_index'])-1 + if DEBUG: + imdb.num_classes = 2 + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + # keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + # TODO: test the accuracy of gpu nms + # gpu_nms = poly_gpu_nms_wrapper(0.3, 0) + # if len(cls_quadrangle_dets) > 0: + # keep = gpu_nms(cls_quadrangle_dets.astype(np.float32)) + # else: + # # keep = [] + # keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + # all_boxes[j][idx+delta] = cls_quadrangle_dets[keep, :] + all_boxes[j][idx+delta] = cls_quadrangle_dets # for debug + # pdb.set_trace() + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + if vis: + boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + + if draw: + if not os.path.isdir(cfg.TEST.save_img_path): + os.mkdir(cfg.TEST.save_img_path) + path = os.path.join(cfg.TEST.save_img_path, str(idx) + '.jpg') + boxes_this_image = [[]] + [all_boxes[j][idx + delta] for j in range(1, imdb.num_classes)] + im = draw_all_poly_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg, threshold=0.2) + print path + cv2.imwrite(path, im) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + 
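Before the detections are cached, the evaluation loops above cap each image at `cfg.TEST.max_per_image` boxes by keeping only detections whose score reaches the `max_per_image`-th best score across all foreground classes. A standalone sketch of that cap follows; `cap_per_image` and the toy 9-column polygon detections are illustrative only, not part of the repository.
```python
import numpy as np

def cap_per_image(boxes_per_class, max_per_image):
    """boxes_per_class: list indexed by class id; each entry is an (N, K) array whose
    last column is the detection score. Entry 0 (__background__) is left untouched."""
    score_list = [dets[:, -1] for dets in boxes_per_class[1:] if len(dets)]
    if not score_list:
        return boxes_per_class
    scores = np.hstack(score_list)
    if scores.size <= max_per_image:
        return boxes_per_class
    # score of the max_per_image-th best detection across all classes
    thresh = np.sort(scores)[-max_per_image]
    capped = [boxes_per_class[0]]
    for dets in boxes_per_class[1:]:
        keep = np.where(dets[:, -1] >= thresh)[0]
        capped.append(dets[keep, :])
    return capped

# toy usage: two foreground classes, 9-column rows (8 polygon coords + score)
dets = [np.zeros((0, 9)),
        np.hstack([np.random.rand(200, 8), np.random.rand(200, 1)]),
        np.hstack([np.random.rand(150, 8), np.random.rand(150, 1)])]
capped = cap_per_image(dets, max_per_image=100)
print(sum(len(d) for d in capped[1:]))   # at most ~100 (ties at the threshold are kept)
```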
+ with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + +def im_detect_rotbox_Rroi(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + # pdb.set_trace() + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + # rois = output['rois_output'].asnumpy()[:, 1:] + rois = output['Rrois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['Rrois_rois'].asnumpy().reshape((-1, 6))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['Rroi_cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['Rroi_bbox_pred_reshape_output'].asnumpy()[0] + + # DEBUG = True + if DEBUG: + bbox_deltas = np.zeros_like(bbox_deltas) + # pdb.set_trace() + + pred_boxes = dbbox_transform2_inv_new(rois, bbox_deltas, np.pi/2.) + + pred_polys = RotBox2Polys_multi_class(pred_boxes) + + + pred_polys = clip_polys(pred_polys, im_shape[-2:]) + + pred_polys = pred_polys / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_polys) + + return scores_all, pred_boxes_all, data_dict_all + +def pred_eval_dota_rotbox_Rroi(predictor, test_data, imdb, cfg, vis=False, draw=False, + thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + # ignore_cache = True + # pdb.set_trace() + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + # imdb.count_ar() + #imdb.check_transform() + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + #nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected into: + # all_boxes[cls][image] = N x 9 array of detections in + # (x1, y1, x2, y2, x3, y3, x4, y4, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + # scores_all, boxes_all, data_dict_all= im_detect_poly(predictor, data_batch, data_names, scales, cfg) + scores_all, boxes_all, data_dict_all = im_detect_rotbox_Rroi(predictor, data_batch, data_names, scales, cfg) + # pdb.set_trace() + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, 
boxes_all, data_dict_all)): + # idx = int(data_dict['im_index'])-1 + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.network.RRoI_CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + # TODO: check the thresh + keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + # pdb.set_trace() + all_boxes[j][idx+delta] = cls_quadrangle_dets[keep, :] + # all_boxes[j][idx+delta]=cls_quadrangle_dets + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + if vis: + boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + + if draw: + if not os.path.isdir(cfg.TEST.save_img_path): + os.mkdir(cfg.TEST.save_img_path) + path = os.path.join(cfg.TEST.save_img_path, str(idx) + '.jpg') + boxes_this_image = [[]] + [all_boxes[j][idx + delta] for j in range(1, imdb.num_classes)] + im = draw_all_poly_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg, threshold=0.2) + print path + cv2.imwrite(path, im) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + +def draw_all_poly_detection(im_array, detections, class_names, scale, cfg, threshold=0.2): + """ + visualize all detections in one image + :param im_array: [b=1 c h w] in rgb + :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ] + :param class_names: list of names in imdb + :param scale: visualize the scaled image + :return: + """ + import cv2 + import random + color_white = (255, 255, 255) + im = image.transform_inverse(im_array, cfg.network.PIXEL_MEANS) + # change to bgr + im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) + if DEBUG: + class_names = ['__background__', 'fg'] + for j, name in enumerate(class_names): + if name == '__background__': + continue + color = (random.randint(0, 256), random.randint(0, 256), random.randint(0, 256)) # generate a random color + dets = detections[j] + for det in dets: + bbox = det[:8] * scale + score = det[-1] + if score < threshold: + continue + bbox = map(int, bbox) + # draw first point + cv2.circle(im, (bbox[0], bbox[1]), 3, (0, 0, 255), -1) + for i in range(3): + cv2.line(im, (bbox[i * 2], bbox[i * 2 + 1]), 
(bbox[(i+1) * 2], bbox[(i+1) * 2 + 1]), color=color, thickness=2) + cv2.line(im, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=2) + cv2.putText(im, '%s %.3f' % (class_names[j], score), (bbox[0], bbox[1] + 10), + color=color_white, fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + return im \ No newline at end of file diff --git a/faster_rcnn/function/__init__.py b/faster_rcnn/function/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/faster_rcnn/function/test_rcnn.py b/faster_rcnn/function/test_rcnn.py new file mode 100644 index 0000000..515b6dd --- /dev/null +++ b/faster_rcnn/function/test_rcnn.py @@ -0,0 +1,78 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Guodong Zhang +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import argparse +import pprint +import logging +import time +import os +import mxnet as mx + +from symbols import * +from dataset import * +from core.loader import TestLoader +from core.tester import Predictor, pred_eval +from utils.load_model import load_param + + +def test_rcnn(cfg, dataset, image_set, root_path, dataset_path, + ctx, prefix, epoch, + vis, ignore_cache, shuffle, has_rpn, proposal, thresh, logger=None, output_path=None): + if not logger: + assert False, 'require a logger' + + # print cfg + pprint.pprint(cfg) + logger.info('testing cfg:{}\n'.format(pprint.pformat(cfg))) + + # load symbol and testing data + if has_rpn: + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol(cfg, is_train=False) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + roidb = imdb.gt_roidb() + else: + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol_rcnn(cfg, is_train=False) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + gt_roidb = imdb.gt_roidb() + roidb = eval('imdb.' 
+ proposal + '_roidb')(gt_roidb) + + # get test data iter + test_data = TestLoader(roidb, cfg, batch_size=len(ctx), shuffle=shuffle, has_rpn=has_rpn) + + # load model + arg_params, aux_params = load_param(prefix, epoch, process=True) + + # infer shape + data_shape_dict = dict(test_data.provide_data_single) + sym_instance.infer_shape(data_shape_dict) + + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict, is_train=False) + + # decide maximum shape + data_names = [k[0] for k in test_data.provide_data_single] + label_names = None + max_data_shape = [[('data', (1, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))]] + if not has_rpn: + max_data_shape.append(('rois', (cfg.TEST.PROPOSAL_POST_NMS_TOP_N + 30, 5))) + + # create predictor + predictor = Predictor(sym, data_names, label_names, + context=ctx, max_data_shapes=max_data_shape, + provide_data=test_data.provide_data, provide_label=test_data.provide_label, + arg_params=arg_params, aux_params=aux_params) + + # start detection + pred_eval(predictor, test_data, imdb, cfg, vis=vis, ignore_cache=ignore_cache, thresh=thresh, logger=logger) + diff --git a/faster_rcnn/function/test_rcnn_poly.py b/faster_rcnn/function/test_rcnn_poly.py new file mode 100644 index 0000000..20470ca --- /dev/null +++ b/faster_rcnn/function/test_rcnn_poly.py @@ -0,0 +1,84 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Guodong Zhang +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import argparse +import pprint +import logging +import time +import os +import mxnet as mx + +from symbols import * +from dataset import * +from core.loader import TestLoader +from core.tester import Predictor, pred_eval, pred_eval_dota_rotbox, pred_eval_dota_rotbox_Rroi +from utils.load_model import load_param + + +def test_rcnn_poly(cfg, dataset, image_set, root_path, dataset_path, + ctx, prefix, epoch, + vis, draw, ignore_cache, shuffle, has_rpn, proposal, thresh, logger=None, output_path=None): + if not logger: + assert False, 'require a logger' + + # print cfg + pprint.pprint(cfg) + logger.info('testing cfg:{}\n'.format(pprint.pformat(cfg))) + + # load symbol and testing data + if has_rpn: + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol(cfg, is_train=False) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + roidb = imdb.gt_roidb() + else: + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol_rcnn(cfg, is_train=False) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + gt_roidb = imdb.gt_roidb() + roidb = eval('imdb.' 
+ proposal + '_roidb')(gt_roidb) + + # get test data iter + test_data = TestLoader(roidb, cfg, batch_size=len(ctx), shuffle=shuffle, has_rpn=has_rpn) + + # load model + arg_params, aux_params = load_param(prefix, epoch, process=True) + + # infer shape + data_shape_dict = dict(test_data.provide_data_single) + sym_instance.infer_shape(data_shape_dict) + + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict, is_train=False) + + # decide maximum shape + data_names = [k[0] for k in test_data.provide_data_single] + label_names = None + max_data_shape = [[('data', (1, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))]] + if not has_rpn: + max_data_shape.append(('rois', (cfg.TEST.PROPOSAL_POST_NMS_TOP_N + 30, 5))) + + # create predictor + predictor = Predictor(sym, data_names, label_names, + context=ctx, max_data_shapes=max_data_shape, + provide_data=test_data.provide_data, provide_label=test_data.provide_label, + arg_params=arg_params, aux_params=aux_params) + + # ignore_cache = True + # start detection + if cfg.network.RRoI_REGRESSION: + pred_eval_dota_rotbox_Rroi(predictor, test_data, imdb, cfg, vis=False, draw=True, ignore_cache=ignore_cache, + thresh=thresh, logger=logger) + else: + # start detection + pred_eval_dota_rotbox(predictor, test_data, imdb, cfg, vis=vis, draw=draw, ignore_cache=ignore_cache, thresh=thresh, logger=logger) + diff --git a/faster_rcnn/function/test_rpn.py b/faster_rcnn/function/test_rpn.py new file mode 100644 index 0000000..e323725 --- /dev/null +++ b/faster_rcnn/function/test_rpn.py @@ -0,0 +1,76 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import argparse +import pprint +import logging +import mxnet as mx + +from symbols import * +from dataset import * +from core.loader import TestLoader +from core.tester import Predictor, generate_proposals +from utils.load_model import load_param + + +def test_rpn(cfg, dataset, image_set, root_path, dataset_path, + ctx, prefix, epoch, + vis, shuffle, thresh, logger=None, output_path=None): + # set up logger + if not logger: + logging.basicConfig() + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + # rpn generate proposal cfg + cfg.TEST.HAS_RPN = True + + # print cfg + pprint.pprint(cfg) + logger.info('testing rpn cfg:{}\n'.format(pprint.pformat(cfg))) + + # load symbol + sym_instance = eval(cfg.symbol + '.' 
+ cfg.symbol)() + sym = sym_instance.get_symbol_rpn(cfg, is_train=False) + + # load dataset and prepare imdb for training + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + roidb = imdb.gt_roidb() + test_data = TestLoader(roidb, cfg, batch_size=len(ctx), shuffle=shuffle, has_rpn=True) + + # load model + arg_params, aux_params = load_param(prefix, epoch) + + # infer shape + data_shape_dict = dict(test_data.provide_data_single) + sym_instance.infer_shape(data_shape_dict) + + # check parameters + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict, is_train=False) + + # decide maximum shape + data_names = [k[0] for k in test_data.provide_data[0]] + label_names = None if test_data.provide_label[0] is None else [k[0] for k in test_data.provide_label[0]] + max_data_shape = [[('data', (1, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))]] + + # create predictor + predictor = Predictor(sym, data_names, label_names, + context=ctx, max_data_shapes=max_data_shape, + provide_data=test_data.provide_data, provide_label=test_data.provide_label, + arg_params=arg_params, aux_params=aux_params) + + # start testing + imdb_boxes = generate_proposals(predictor, test_data, imdb, cfg, vis=vis, thresh=thresh) + + all_log_info = imdb.evaluate_recall(roidb, candidate_boxes=imdb_boxes) + logger.info(all_log_info) diff --git a/faster_rcnn/function/train_rcnn.py b/faster_rcnn/function/train_rcnn.py new file mode 100644 index 0000000..5ae98d4 --- /dev/null +++ b/faster_rcnn/function/train_rcnn.py @@ -0,0 +1,139 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Guodong Zhang +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import logging +import pprint + +import mxnet as mx +import numpy as np + +from bbox.bbox_regression import add_bbox_regression_targets +from core import metric, callback +from core.loader import ROIIter +from core.module import MutableModule +from utils.PrefetchingIter import PrefetchingIter +from utils.load_data import load_proposal_roidb, merge_roidb, filter_roidb +from utils.load_model import load_param +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_rcnn(cfg, dataset, image_set, root_path, dataset_path, + frequent, kvstore, flip, shuffle, resume, + ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, + train_shared, lr, lr_step, proposal, logger=None, output_path=None): + mx.random.seed(3) + np.random.seed(3) + # set up logger + if not logger: + logging.basicConfig() + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + # load symbol + sym_instance = eval(cfg.symbol + '.' 
+ cfg.symbol)() + sym = sym_instance.get_symbol_rcnn(cfg, is_train=True) + + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = cfg.TRAIN.BATCH_IMAGES * batch_size + + # print cfg + pprint.pprint(cfg) + logger.info('training rcnn cfg:{}\n'.format(pprint.pformat(cfg))) + + # load dataset and prepare imdb for training + image_sets = [iset for iset in image_set.split('+')] + roidbs = [load_proposal_roidb(dataset, image_set, root_path, dataset_path, + proposal=proposal, append_gt=True, flip=flip, result_path=output_path) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, cfg) + means, stds = add_bbox_regression_targets(roidb, cfg) + + # load training data + train_data = ROIIter(roidb, cfg, batch_size=input_batch_size, shuffle=shuffle, + ctx=ctx, aspect_grouping=cfg.TRAIN.ASPECT_GROUPING) + + # infer max shape + max_data_shape = [('data', (cfg.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))] + + # infer shape + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if resume: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight_rcnn(cfg, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # prepare training + # create solver + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + if train_shared: + fixed_param_prefix = cfg.network.FIXED_PARAMS_SHARED + else: + fixed_param_prefix = cfg.network.FIXED_PARAMS + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, + max_data_shapes=[max_data_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if cfg.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + + # decide training params + # metric + eval_metric = metric.RCNNAccMetric(cfg) + cls_metric = metric.RCNNLogLossMetric(cfg) + bbox_metric = metric.RCNNL1LossMetric(cfg) + eval_metrics = mx.metric.CompositeEvalMetric() + for child_metric in [eval_metric, cls_metric, bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=frequent) + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), + callback.do_checkpoint(prefix, means, stds)] + # decide learning rate + base_lr = lr + lr_factor = cfg.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, cfg.TRAIN.warmup, cfg.TRAIN.warmup_lr, cfg.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': cfg.TRAIN.momentum, + 'wd': cfg.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'rescale_grad': 1.0, + 'clip_gradient': None} + + # train + + if not isinstance(train_data, PrefetchingIter): + 
train_data = PrefetchingIter(train_data) + + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + diff --git a/faster_rcnn/function/train_rpn.py b/faster_rcnn/function/train_rpn.py new file mode 100644 index 0000000..bba5a52 --- /dev/null +++ b/faster_rcnn/function/train_rpn.py @@ -0,0 +1,135 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import logging +import pprint + +import mxnet as mx + +from core import metric, callback +from core.loader import AnchorLoader +from core.module import MutableModule +from utils.PrefetchingIter import PrefetchingIter +from utils.load_data import load_gt_roidb, merge_roidb, filter_roidb +from utils.load_model import load_param +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_rpn(cfg, dataset, image_set, root_path, dataset_path, + frequent, kvstore, flip, shuffle, resume, + ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, + train_shared, lr, lr_step, logger=None, output_path=None): + # set up logger + if not logger: + logging.basicConfig() + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + # set up config + cfg.TRAIN.BATCH_IMAGES = cfg.TRAIN.ALTERNATE.RPN_BATCH_IMAGES + + # load symbol + sym_instance = eval(cfg.symbol + '.' 
+ cfg.symbol)() + sym = sym_instance.get_symbol_rpn(cfg, is_train=True) + feat_sym = sym.get_internals()['rpn_cls_score_output'] + + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = cfg.TRAIN.BATCH_IMAGES * batch_size + + # print cfg + pprint.pprint(cfg) + logger.info('training rpn cfg:{}\n'.format(pprint.pformat(cfg))) + + # load dataset and prepare imdb for training + image_sets = [iset for iset in image_set.split('+')] + roidbs = [load_gt_roidb(dataset, image_set, root_path, dataset_path, result_path=output_path, + flip=flip) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, cfg) + + # load training data + train_data = AnchorLoader(feat_sym, roidb, cfg, batch_size=input_batch_size, shuffle=shuffle, + ctx=ctx, feat_stride=cfg.network.RPN_FEAT_STRIDE, anchor_scales=cfg.network.ANCHOR_SCALES, + anchor_ratios=cfg.network.ANCHOR_RATIOS, aspect_grouping=cfg.TRAIN.ASPECT_GROUPING) + + # infer max shape + max_data_shape = [('data', (cfg.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))] + max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape) + print('providing maximum shape', max_data_shape, max_label_shape) + + # infer shape + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if resume: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight_rpn(cfg, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # create solver + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + if train_shared: + fixed_param_prefix = cfg.network.FIXED_PARAMS_SHARED + else: + fixed_param_prefix = cfg.network.FIXED_PARAMS + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, max_data_shapes=[max_data_shape for _ in xrange(batch_size)], + max_label_shapes=[max_label_shape for _ in xrange(batch_size)], fixed_param_prefix=fixed_param_prefix) + + # decide training params + # metric + eval_metric = metric.RPNAccMetric() + cls_metric = metric.RPNLogLossMetric() + bbox_metric = metric.RPNL1LossMetric() + eval_metrics = mx.metric.CompositeEvalMetric() + for child_metric in [eval_metric, cls_metric, bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=frequent) + # epoch_end_callback = mx.callback.do_checkpoint(prefix) + epoch_end_callback = mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True) + # decide learning rate + base_lr = lr + lr_factor = cfg.TRAIN.lr_factor + lr_epoch = [int(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, cfg.TRAIN.warmup, cfg.TRAIN.warmup_lr, cfg.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': cfg.TRAIN.momentum, + 
'wd': cfg.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'rescale_grad': 1.0, + 'clip_gradient': None} + + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + # train + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + diff --git a/faster_rcnn/operator_cxx/deformable_convolution-inl.h b/faster_rcnn/operator_cxx/deformable_convolution-inl.h new file mode 100644 index 0000000..5948865 --- /dev/null +++ b/faster_rcnn/operator_cxx/deformable_convolution-inl.h @@ -0,0 +1,489 @@ +/*! + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_convolution-inl.h + * \brief + * \ref: https://github.com/Yangqing/caffe/wiki/Convolution-in-Caffe:-a-memo + * \ref: https://arxiv.org/abs/1703.06211 + * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai +*/ +#ifndef MXNET_OPERATOR_DEFORMABLE_CONVOLUTION_INL_H_ +#define MXNET_OPERATOR_DEFORMABLE_CONVOLUTION_INL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "../operator_common.h" +#include "../nn/im2col.h" +#include "./nn/deformable_im2col.h" + + +namespace mxnet { + namespace op { + + namespace conv { + enum DeformableConvolutionOpInputs { kData, kOffset, kWeight, kBias }; + enum DeformableConvolutionOpOutputs { kOut }; + enum DeformableConvolutionOpResource { kTempSpace }; + } + + struct DeformableConvolutionParam : public dmlc::Parameter { + TShape kernel; + TShape stride; + TShape dilate; + TShape pad; + uint32_t num_filter; + uint32_t num_group; + uint32_t num_deformable_group; + uint64_t workspace; + bool no_bias; + dmlc::optional layout; + DMLC_DECLARE_PARAMETER(DeformableConvolutionParam) { + DMLC_DECLARE_FIELD(kernel).describe("convolution kernel size: (h, w) or (d, h, w)"); + DMLC_DECLARE_FIELD(stride).set_default(TShape()) + .describe("convolution stride: (h, w) or (d, h, w)"); + DMLC_DECLARE_FIELD(dilate).set_default(TShape()) + .describe("convolution dilate: (h, w) or (d, h, w)"); + DMLC_DECLARE_FIELD(pad).set_default(TShape()) + .describe("pad for convolution: (h, w) or (d, h, w)"); + DMLC_DECLARE_FIELD(num_filter).set_range(1, 100000) + .describe("convolution filter(channel) number"); + DMLC_DECLARE_FIELD(num_group).set_default(1) + .describe("Number of group partitions."); + DMLC_DECLARE_FIELD(num_deformable_group).set_default(1) + .describe("Number of deformable group partitions."); + DMLC_DECLARE_FIELD(workspace).set_default(1024).set_range(0, 8192) + .describe("Maximum temperal workspace allowed for convolution (MB)."); + DMLC_DECLARE_FIELD(no_bias).set_default(false) + .describe("Whether to disable bias parameter."); + DMLC_DECLARE_FIELD(layout) + .add_enum("NCW", mshadow::kNCW) + .add_enum("NCHW", mshadow::kNCHW) + .add_enum("NCDHW", mshadow::kNCDHW) + .set_default(dmlc::optional()) + .describe("Set layout for input, output and weight. Empty for\n " + "default layout: NCW for 1d, NCHW for 2d and NCDHW for 3d."); + } + }; + + template + class DeformableConvolutionOp : public Operator { + public: + explicit DeformableConvolutionOp(DeformableConvolutionParam p) { + this->param_ = p; + // convert MBytes first to Bytes and then to elements. 
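The constructor comment above describes how the user-facing `workspace` limit (given in megabytes) becomes an element count of the layer's `DType`; the shift-and-divide amounts to the arithmetic below. The 1024 MB limit, float32 element size, and the example layer used to gauge the column-buffer size are assumptions for illustration only.
```python
# workspace is specified in MB: (workspace << 20) turns MB into bytes,
# and dividing by the element size turns bytes into a count of DType elements.
workspace_mb = 1024                                  # assumed limit (the parameter's default)
elem_size = 4                                        # sizeof(float) for a float32 DType
workspace_elems = (workspace_mb << 20) // elem_size
print(workspace_elems)                               # 268435456 float32 elements (1 GiB)

# For scale: the im2col column buffer holds kernel_dim * group * out_spatial elements.
# Hypothetical layer: 3x3 kernel, 256 input channels, one group, 64x64 output map.
col_buffer_elems = (256 * 3 * 3) * 1 * (64 * 64)
print(col_buffer_elems)                              # 9437184, well inside the limit above
```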
+ param_.workspace = (param_.workspace << 20) / sizeof(DType); + CHECK(param_.layout.value() == mshadow::kNCW || + param_.layout.value() == mshadow::kNCHW || + param_.layout.value() == mshadow::kNCDHW) + << "Only support NCW, NCHW and NCDHW layout"; + } + + virtual void Forward(const OpContext &ctx, + const std::vector &in_data, + const std::vector &req, + const std::vector &out_data, + const std::vector &aux_args) { + using namespace mshadow; + using namespace mshadow::expr; + CHECK_EQ(req[conv::kOut], kWriteTo); + size_t expected = param_.no_bias ? 3 : 4; + CHECK_EQ(in_data.size(), expected); + CHECK_EQ(out_data.size(), 1U); + LayerSetUp(in_data[conv::kData].shape_, in_data[conv::kOffset].shape_, out_data[conv::kOut].shape_); + Stream* s = ctx.get_stream(); + // allocate workspace for col_buffer + Tensor workspace = ctx.requested[conv::kTempSpace] + .get_space_typed(Shape1(col_buffer_size_), s); + // calculate the shape of col_buffer + TShape col_buffer_shape(num_spatial_axes_ + 1); + col_buffer_shape[0] = conv_in_channels_ * param_.kernel.Size(); + for (index_t i = 1; i < col_buffer_shape.ndim(); ++i) { + col_buffer_shape[i] = out_data[0].shape_[i + 1]; + } + // create a column buffer using workspace and col_buffer_shape + TBlob col_buffer(workspace.dptr_, col_buffer_shape, xpu::kDevMask, DataType::kFlag); + + // initialize weight and col_buffer 3D tensors for using gemm + index_t M = conv_out_channels_ / group_; + index_t N = conv_out_spatial_dim_; + index_t K = kernel_dim_; + Tensor weight_3d = in_data[conv::kWeight].get_with_shape( + Shape3(group_, M, K), s); + Tensor col_buffer_3d = col_buffer.get_with_shape( + Shape3(group_, K, N), s); + Tensor output_4d = out_data[conv::kOut].get_with_shape( + Shape4(num_, group_, M, N), s); + for (index_t n = 0; n < num_; ++n) { + // transform image to col_buffer in order to use gemm + deformable_im2col(s, in_data[conv::kData].dptr() + n*input_dim_, + in_data[conv::kOffset].dptr() + n*input_offset_dim_, in_data[conv::kData].shape_, + col_buffer.shape_, param_.kernel, param_.pad, param_.stride, param_.dilate, param_.num_deformable_group, + col_buffer.dptr()); + Tensor output_3d = output_4d[n]; + for (index_t g = 0; g < group_; ++g) { + ASSIGN_DISPATCH(output_3d[g], req[conv::kOut], dot(weight_3d[g], col_buffer_3d[g])); + } + } + if (bias_term_) { + Tensor bias = in_data[conv::kBias].get(s); + Tensor output_3d = out_data[conv::kOut].get_with_shape( + Shape3(num_, conv_out_channels_, conv_out_spatial_dim_), s); + // has bias term, broadcast it to the same shape of output_3d in channel dim + output_3d += mshadow::expr::broadcast<1>(bias, output_3d.shape_); + } + } + + virtual void Backward(const OpContext &ctx, + const std::vector& out_grad, + const std::vector& in_data, + const std::vector& out_data, + const std::vector& req, + const std::vector& in_grad, + const std::vector& aux_args) { + using namespace mshadow; + using namespace mshadow::expr; + CHECK_EQ(out_grad.size(), 1U); + size_t expected = param_.no_bias == 0 ? 
4 : 3; + CHECK(in_data.size() == expected && in_grad.size() == expected); + CHECK_EQ(req.size(), expected); + CHECK_EQ(in_data[conv::kWeight].CheckContiguous(), true); + LayerSetUp(in_grad[conv::kData].shape_, in_grad[conv::kOffset].shape_, out_grad[conv::kOut].shape_); + Stream *s = ctx.get_stream(); + // allocate workspace for col_buffer + Tensor workspace = ctx.requested[conv::kTempSpace] + .get_space_typed(Shape1(col_buffer_size_), s); + // calculate the shape of col_buffer + TShape col_buffer_shape(num_spatial_axes_ + 1); + col_buffer_shape[0] = conv_in_channels_ * param_.kernel.Size(); + for (index_t i = 1; i < col_buffer_shape.ndim(); ++i) { + col_buffer_shape[i] = out_grad[conv::kData].shape_[i + 1]; + } + // create a column buffer using workspace and col_buffer_shape + TBlob col_buffer(workspace.dptr_, col_buffer_shape, xpu::kDevMask, DataType::kFlag); + + // initialize weight and col_buffer 3D tensors for using gemm + // For computing dLoss/d(in_data[kData]) + index_t M = kernel_dim_; + index_t N = conv_out_spatial_dim_; + index_t K = conv_out_channels_ / group_; + Tensor weight_3d = in_data[conv::kWeight].get_with_shape( + Shape3(group_, K, M), s); + Tensor out_grad_4d = out_grad[conv::kOut].get_with_shape( + Shape4(num_, group_, K, N), s); + Tensor col_buffer_3d = col_buffer.get_with_shape( + Shape3(group_, M, N), s); + // For computing dLoss/dWeight + Tensor dweight_3d = in_grad[conv::kWeight].get_with_shape( + Shape3(group_, K, M), s); + + Tensor data_grad = in_grad[conv::kData].FlatTo1D(s); + data_grad = 0; + + + for (index_t n = 0; n < num_; ++n) { + Tensor out_grad_3d = out_grad_4d[n]; + for (index_t g = 0; g < group_; ++g) { + col_buffer_3d[g] = dot(weight_3d[g].T(), out_grad_3d[g]); + } + + // gradient w.r.t. input coordinate data + deformable_col2im_coord(s, col_buffer.dptr(), + in_data[conv::kData].dptr() + n*input_dim_, in_data[conv::kOffset].dptr() + n*input_offset_dim_, + in_grad[conv::kData].shape_, col_buffer.shape_, + param_.kernel, param_.pad, param_.stride, param_.dilate, param_.num_deformable_group, + in_grad[conv::kOffset].dptr() + n*input_offset_dim_, + req[conv::kData]); + + // gradient w.r.t. input data + deformable_col2im(s, col_buffer.dptr(), + in_data[conv::kOffset].dptr() + n*input_offset_dim_, in_grad[conv::kData].shape_, col_buffer.shape_, + param_.kernel, param_.pad, param_.stride, param_.dilate, param_.num_deformable_group, + in_grad[conv::kData].dptr() + n*input_dim_, + req[conv::kData]); + + // gradient w.r.t. 
weight, dWeight should accumulate across the batch and group + deformable_im2col(s, in_data[conv::kData].dptr() + n*input_dim_, + in_data[conv::kOffset].dptr() + n*input_offset_dim_, in_data[conv::kData].shape_, + col_buffer.shape_, param_.kernel, param_.pad, param_.stride, param_.dilate, param_.num_deformable_group, + col_buffer.dptr()); + + for (index_t g = 0; g < group_; ++g) { + if (0 == n) { + ASSIGN_DISPATCH(dweight_3d[g], req[conv::kWeight], + dot(out_grad_3d[g], col_buffer_3d[g].T())); + } + else { + dweight_3d[g] += dot(out_grad_3d[g], col_buffer_3d[g].T()); + } + } + } + + // gradient w.r.t bias + if (bias_term_) { + Tensor dbias = in_grad[conv::kBias].get(s); + Tensor dout = out_grad[conv::kOut].get_with_shape( + Shape3(num_, conv_out_channels_, conv_out_spatial_dim_), s); + ASSIGN_DISPATCH(dbias, req[conv::kBias], sumall_except_dim<1>(dout)); + } + + } + + private: + void LayerSetUp(const TShape& ishape, const TShape& offset_shape, const TShape& oshape) { + channel_axis_ = 1; // hard code channel axis + const index_t first_spatial_axis = channel_axis_ + 1; + const index_t num_axes = param_.kernel.ndim() + 2; + num_spatial_axes_ = num_axes - first_spatial_axis; + is_1x1_ = true; + for (index_t i = 0; i < param_.kernel.ndim(); ++i) { + is_1x1_ &= param_.kernel[i] == 1 && param_.stride[i] == 1 && param_.pad[i] == 0; + if (!is_1x1_) break; + } + + // batch size + num_ = ishape[0]; + // number of input channels + channels_ = ishape[1]; + group_ = param_.num_group; + conv_out_channels_ = param_.num_filter; + conv_in_channels_ = channels_; + bias_term_ = !param_.no_bias; + kernel_dim_ = conv_in_channels_ / group_ * param_.kernel.Size(); + weight_offset_ = conv_out_channels_ * kernel_dim_ / group_; + conv_out_spatial_dim_ = oshape.ProdShape(2, oshape.ndim()); + col_offset_ = kernel_dim_ * conv_out_spatial_dim_; + output_offset_ = conv_out_channels_ * conv_out_spatial_dim_ / group_; + // size of the column buffer used for storing im2col-ed pixels + col_buffer_size_ = kernel_dim_ * group_ * conv_out_spatial_dim_; + // input/output image size (#channels * height * width) + input_dim_ = ishape.ProdShape(1, ishape.ndim()); + input_offset_dim_ = offset_shape.ProdShape(1, offset_shape.ndim()); + output_dim_ = oshape.ProdShape(1, oshape.ndim()); + num_kernels_im2col_ = conv_in_channels_ * conv_out_spatial_dim_; + num_kernels_col2im_ = input_dim_; + } + + private: + DeformableConvolutionParam param_; + index_t channel_axis_; // channel axis of the input + index_t channels_; // number of channels of input image + index_t num_spatial_axes_; // number of spatial axes + index_t num_; // batch size + index_t group_; // number of groups + index_t conv_out_channels_; // number of output channels (num_filter) + index_t conv_out_spatial_dim_; // number of pixels of output images per channel + index_t conv_in_channels_; // number of input channels + index_t kernel_dim_; // number of input channels per group * kernel size + index_t weight_offset_; // number of output channels per group * kernel_dim_ + index_t col_offset_; + index_t output_offset_; + index_t col_buffer_size_; + index_t input_dim_; + index_t input_offset_dim_; + index_t output_dim_; + index_t num_kernels_im2col_; + index_t num_kernels_col2im_; + bool bias_term_; // has bias term? 
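`Forward` above lowers each image with `deformable_im2col` and then performs one GEMM per group, `output_3d[g] = dot(weight_3d[g], col_buffer_3d[g])`, before broadcasting the bias over the spatial dimension. The numpy sketch below reproduces only that grouped-GEMM step; the column buffer is assumed to be already materialized (random data stands in for the deformably sampled patches), and all shapes are hypothetical.
```python
import numpy as np

def grouped_gemm_forward(weight_3d, col_buffer_3d, bias=None):
    """weight_3d:     (group, M, K) with M = out_channels / group, K = kernel_dim
    col_buffer_3d:    (group, K, N) with N = out_height * out_width
    Returns the (group * M, N) output map for one image."""
    group, M, K = weight_3d.shape
    _, _, N = col_buffer_3d.shape
    out = np.empty((group, M, N), dtype=weight_3d.dtype)
    for g in range(group):                   # one GEMM per group, as in Forward()
        out[g] = weight_3d[g].dot(col_buffer_3d[g])
    out = out.reshape(group * M, N)
    if bias is not None:                     # broadcast bias over the spatial dimension
        out += bias[:, None]
    return out

# hypothetical shapes: 2 groups, 64 output channels, 3x3 kernel on 128 input channels, 32x32 output
g, M, K, N = 2, 32, (128 // 2) * 9, 32 * 32
w = np.random.randn(g, M, K).astype(np.float32)
col = np.random.randn(g, K, N).astype(np.float32)    # stand-in for deformable_im2col output
b = np.zeros(g * M, dtype=np.float32)
y = grouped_gemm_forward(w, col, b)
print(y.shape)                                        # (64, 1024)
```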
+ bool is_1x1_; + }; // class ConvolutionOp + + template + Operator* CreateOp(DeformableConvolutionParam param, int dtype, + std::vector *in_shape, + std::vector *out_shape, + Context ctx); + +#if DMLC_USE_CXX11 + class DeformableConvolutionProp : public OperatorProperty { + public: + std::vector ListArguments() const override { + if (!param_.no_bias) { + return{ "data", "offset", "weight", "bias" }; + } + else { + return{ "data", "offset", "weight" }; + } + } + + void Init(const std::vector >& kwargs) override { + using namespace mshadow; + param_.Init(kwargs); + if (param_.kernel.ndim() == 2) { + param_.layout = param_.layout ? param_.layout.value() : mshadow::kNCHW; + if (param_.stride.ndim() == 0) param_.stride = Shape2(1, 1); + if (param_.dilate.ndim() == 0) param_.dilate = Shape2(1, 1); + if (param_.pad.ndim() == 0) param_.pad = Shape2(0, 0); + } + else { + LOG(FATAL) << "not implemented"; + } + } + + std::map GetParams() const override { + return param_.__DICT__(); + } + + bool InferShape(std::vector *in_shape, + std::vector *out_shape, + std::vector *aux_shape) const override { + using namespace mshadow; + if (!param_.no_bias) { + CHECK_EQ(in_shape->size(), 4U) << "Input:[data, offset, weight, bias]"; + } + else { + CHECK_EQ(in_shape->size(), 3U) << "Input:[data, offset, weight]"; + } + out_shape->resize(1, TShape()); + const TShape &dshp = (*in_shape)[conv::kData]; + const TShape &oshp = (*in_shape)[conv::kOffset]; + if (dshp.ndim() == 0) return false; + if (param_.kernel.ndim() == 2) { + // 2d conv + CHECK_EQ(dshp.ndim(), 4U) \ + << "Input data should be 4D in batch-num_filter-y-x"; + CHECK_EQ(oshp.ndim(), 4U) \ + << "Input offset should be 4D in batch-num_filter-y-x"; + Shape<4> dshape = ConvertLayout(dshp.get<4>(), param_.layout.value(), kNCHW); + Shape<4> offsetshape = ConvertLayout(oshp.get<4>(), param_.layout.value(), kNCHW); + Shape<4> wshape = Shape4(param_.num_filter / param_.num_group, dshape[1] / param_.num_group, + param_.kernel[0], param_.kernel[1]); + wshape = ConvertLayout(wshape, kNCHW, param_.layout.value()); + wshape[0] *= param_.num_group; + SHAPE_ASSIGN_CHECK(*in_shape, conv::kWeight, wshape); + if (!param_.no_bias) { + SHAPE_ASSIGN_CHECK(*in_shape, conv::kBias, Shape1(param_.num_filter)); + } + + const index_t ksize_y = static_cast(param_.kernel[0]); + const index_t ksize_x = static_cast(param_.kernel[1]); + CHECK_EQ(dshape[1] % param_.num_group, 0U) \ + << "input num_filter must divide group size"; + CHECK_EQ(dshape[1] % param_.num_deformable_group, 0U) \ + << "input num_filter must divide deformable group size"; + CHECK_EQ(param_.num_filter % param_.num_group, 0U) \ + << "output num_filter must divide group size"; + CHECK_GT(param_.kernel.Size(), 0U) \ + << "incorrect kernel size: " << param_.kernel; + CHECK_GT(param_.stride.Size(), 0U) \ + << "incorrect stride size: " << param_.stride; + CHECK_GT(param_.dilate.Size(), 0U) \ + << "incorrect dilate size: " << param_.dilate; + Shape<4> oshape; + oshape[0] = dshape[0]; + oshape[1] = param_.num_filter; + oshape[2] = (dshape[2] + 2 * param_.pad[0] - + (param_.dilate[0] * (ksize_y - 1) + 1)) / param_.stride[0] + 1; + oshape[3] = (dshape[3] + 2 * param_.pad[1] - + (param_.dilate[1] * (ksize_x - 1) + 1)) / param_.stride[1] + 1; + SHAPE_ASSIGN_CHECK(*out_shape, 0, ConvertLayout(oshape, kNCHW, param_.layout.value())); + CHECK_EQ(oshape[1] % param_.num_deformable_group, 0U) \ + << "output num_filter must divide deformable group size"; + CHECK_EQ(oshape[2], offsetshape[2]) \ + << "output height must equal to offset map 
height"; + CHECK_EQ(oshape[3], offsetshape[3]) \ + << "output width must equal to offset map width"; + CHECK_EQ(offsetshape[1] % (param_.kernel[0] * param_.kernel[1]), 0U) \ + << "offset filter must divide deformable group size"; + CHECK_EQ(offsetshape[1] / (2 * param_.kernel[0] * param_.kernel[1]), param_.num_deformable_group) \ + << "offset filter must divide deformable group size"; + // Perform incomplete shape inference. Fill in the missing values in data shape. + // 1) We can always fill in the batch_size. + // 2) We can back-calculate the input height/width if the corresponding stride is 1. + oshape = ConvertLayout((*out_shape)[0].get<4>(), param_.layout.value(), kNCHW); + dshape[0] = oshape[0]; + if (param_.stride[0] == 1) { + dshape[2] = oshape[2] + param_.dilate[0] * (ksize_y - 1) - 2 * param_.pad[0]; + } + if (param_.stride[1] == 1) { + dshape[3] = oshape[3] + param_.dilate[1] * (ksize_x - 1) - 2 * param_.pad[1]; + } + SHAPE_ASSIGN_CHECK(*in_shape, conv::kData, + ConvertLayout(dshape, kNCHW, param_.layout.value())); + // Check whether the kernel sizes are valid + if (dshape[2] != 0) { + CHECK_LE(ksize_y, dshape[2] + 2 * param_.pad[0]) << "kernel size exceed input"; + } + if (dshape[3] != 0) { + CHECK_LE(ksize_x, dshape[3] + 2 * param_.pad[1]) << "kernel size exceed input"; + } + return true; + } + else { + LOG(FATAL) << "not implemented"; + return false; + } + } + + bool InferType(std::vector *in_type, + std::vector *out_type, + std::vector *aux_type) const override { + CHECK_GE(in_type->size(), 1U); + int dtype = (*in_type)[0]; + CHECK_NE(dtype, -1) << "First input must have specified type"; + for (index_t i = 0; i < in_type->size(); ++i) { + if ((*in_type)[i] == -1) { + (*in_type)[i] = dtype; + } + else { + CHECK_EQ((*in_type)[i], dtype) << "This layer requires uniform type. " + << "Expected " << dtype << " v.s. given " + << (*in_type)[i] << " at " << ListArguments()[i]; + } + } + out_type->clear(); + out_type->push_back(dtype); + return true; + } + + OperatorProperty* Copy() const override { + auto ptr = new DeformableConvolutionProp(); + ptr->param_ = param_; + return ptr; + } + + std::string TypeString() const override { + return "_contrib_DeformableConvolution"; + } + + std::vector DeclareBackwardDependency( + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data) const override { + return{ out_grad[conv::kOut], in_data[conv::kData], in_data[conv::kOffset], in_data[conv::kWeight] }; + } + + std::vector ForwardResource( + const std::vector &in_shape) const override { + return{ ResourceRequest::kTempSpace }; + } + + std::vector BackwardResource( + const std::vector &in_shape) const override { + return{ ResourceRequest::kTempSpace }; + } + + Operator* CreateOperator(Context ctx) const override { + LOG(FATAL) << "Not Implemented."; + return NULL; + } + + Operator* CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const override; + + private: + DeformableConvolutionParam param_; + }; // class ConvolutionProp +#endif // DMLC_USE_CXX11 + } // namespace op +} // namespace mxnet +#endif // MXNET_OPERATOR_CONVOLUTION_INL_H_ diff --git a/faster_rcnn/operator_cxx/deformable_convolution.cc b/faster_rcnn/operator_cxx/deformable_convolution.cc new file mode 100644 index 0000000..49e6c28 --- /dev/null +++ b/faster_rcnn/operator_cxx/deformable_convolution.cc @@ -0,0 +1,89 @@ +/*! 
+ * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_convolution.cc + * \brief + * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai +*/ + +#include "./deformable_convolution-inl.h" + +namespace mxnet { +namespace op { +DMLC_REGISTER_PARAMETER(DeformableConvolutionParam); + +template<> +Operator* CreateOp(DeformableConvolutionParam param, int dtype, + std::vector *in_shape, + std::vector *out_shape, + Context ctx) { + Operator *op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new DeformableConvolutionOp(param); + }) + return op; +} + +// DO_BIND_DISPATCH comes from operator_common.h +Operator *DeformableConvolutionProp::CreateOperatorEx(Context ctx, + std::vector *in_shape, + std::vector *in_type) const { + std::vector out_shape, aux_shape; + std::vector out_type, aux_type; + CHECK(InferType(in_type, &out_type, &aux_type)); + CHECK(InferShape(in_shape, &out_shape, &aux_shape)); + DO_BIND_DISPATCH(CreateOp, param_, (*in_type)[0], in_shape, &out_shape, ctx); +} + +MXNET_REGISTER_OP_PROPERTY(_contrib_DeformableConvolution, DeformableConvolutionProp) +.describe(R"code(Compute *N*-D convolution on *(N+2)*-D input. + +In the 2-D convolution, given input data with shape *(batch_size, +channel, height, width)*, the output is computed by + +.. math:: + + out[n,i,:,:] = bias[i] + \sum_{j=0}^{num\_filter} data[n,j,:,:] \star + weight[i,j,:,:] + +where :math:`\star` is the 2-D cross-correlation operator. + +For general 2-D convolution, the shapes are + +- **data**: *(batch_size, channel, height, width)* +- **weight**: *(num_filter, channel, kernel[0], kernel[1])* +- **bias**: *(num_filter,)* +- **out**: *(batch_size, num_filter, out_height, out_width)*. + +Define:: + + f(x,k,p,s,d) = floor((x+2*p-d*(k-1)-1)/s)+1 + +then we have:: + + out_height=f(height, kernel[0], pad[0], stride[0], dilate[0]) + out_width=f(width, kernel[1], pad[1], stride[1], dilate[1]) + +If ``no_bias`` is set to be true, then the ``bias`` term is ignored. + +The default data ``layout`` is *NCHW*, namely *(batch_size, channle, height, +width)*. We can choose other layouts such as *NHWC*. + +If ``num_group`` is larger than 1, denoted by *g*, then split the input ``data`` +evenly into *g* parts along the channel axis, and also evenly split ``weight`` +along the first dimension. Next compute the convolution on the *i*-th part of +the data with the *i*-th weight part. The output is obtained by concating all +the *g* results. + +Both ``weight`` and ``bias`` are learnable parameters. + + +)code" ADD_FILELINE) +.add_argument("data", "NDArray-or-Symbol", "Input data to the DeformableConvolutionOp.") +.add_argument("offset", "NDArray-or-Symbol", "Input offset to the DeformableConvolutionOp.") +.add_argument("weight", "NDArray-or-Symbol", "Weight matrix.") +.add_argument("bias", "NDArray-or-Symbol", "Bias parameter.") +.add_arguments(DeformableConvolutionParam::__FIELDS__()); + +} // namespace op +} // namespace mxnet diff --git a/faster_rcnn/operator_cxx/deformable_convolution.cu b/faster_rcnn/operator_cxx/deformable_convolution.cu new file mode 100644 index 0000000..8d11ca7 --- /dev/null +++ b/faster_rcnn/operator_cxx/deformable_convolution.cu @@ -0,0 +1,29 @@ +/*! 
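The `f(x,k,p,s,d)` definition in the operator description above is exactly what `InferShape` applies, and the offset input is then required to match that output spatially while carrying `2 * kernel_h * kernel_w * num_deformable_group` channels. A standalone check with illustrative numbers (not part of the patch):

```
// f(x, k, p, s, d) = floor((x + 2*p - d*(k-1) - 1) / s) + 1, as in the docstring above.
#include <cstdio>

static int conv_out_size(int x, int k, int p, int s, int d) {
  return (x + 2 * p - d * (k - 1) - 1) / s + 1;  // integer division == floor for positive sizes
}

int main() {
  const int h = 224, w = 224;                        // input spatial size (example only)
  const int k = 3, pad = 1, stride = 2, dilate = 1;  // kernel hyper-parameters
  const int num_deformable_group = 4;

  const int out_h = conv_out_size(h, k, pad, stride, dilate);    // 112
  const int out_w = conv_out_size(w, k, pad, stride, dilate);    // 112
  const int offset_channels = 2 * k * k * num_deformable_group;  // 72

  std::printf("output: %d x %d, expected offset shape: (N, %d, %d, %d)\n",
              out_h, out_w, offset_channels, out_h, out_w);
  return 0;
}
```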
+ * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_convolution.cu + * \brief + * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai +*/ + +#include "./deformable_convolution-inl.h" +#include + +namespace mxnet { + namespace op { + + template<> + Operator* CreateOp(DeformableConvolutionParam param, int dtype, + std::vector *in_shape, + std::vector *out_shape, + Context ctx) { + Operator *op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {-*+ + op = new DeformableConvolutionOp(param); + }) + return op; + } + + } // namespace op +} // namespace mxnet + diff --git a/faster_rcnn/operator_cxx/deformable_psroi_pooling-inl.h b/faster_rcnn/operator_cxx/deformable_psroi_pooling-inl.h new file mode 100644 index 0000000..c4f06a9 --- /dev/null +++ b/faster_rcnn/operator_cxx/deformable_psroi_pooling-inl.h @@ -0,0 +1,282 @@ +/*! + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_convolution-inl.h + * \brief + * \ref: https://github.com/Yangqing/caffe/wiki/Convolution-in-Caffe:-a-memo + * \ref: https://arxiv.org/abs/1703.06211 + * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai +*/ +#ifndef MXNET_OPERATOR_DEFORMABLE_PSROI_POOLING_INL_H_ +#define MXNET_OPERATOR_DEFORMABLE_PSROI_POOLING_INL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include "../mshadow_op.h" +#include "../operator_common.h" + + +namespace mxnet { + namespace op { + + // Declare enumeration of input order to make code more intuitive. + // These enums are only visible within this header + namespace deformablepsroipool { + enum DeformablePSROIPoolingOpInputs { kData, kBox, kTrans }; + enum DeformablePSROIPoolingOpOutputs { kOut, kTopCount }; + } // deformablepsroipool + + struct DeformablePSROIPoolingParam : public dmlc::Parameter { + // TShape pooled_size; + float spatial_scale; + int output_dim; + int group_size; + int pooled_size; + int part_size; + int sample_per_part; + float trans_std; + bool no_trans; + DMLC_DECLARE_PARAMETER(DeformablePSROIPoolingParam) { + DMLC_DECLARE_FIELD(spatial_scale).set_range(0.0, 1.0) + .describe("Ratio of input feature map height (or w) to raw image height (or w). " + "Equals the reciprocal of total stride in convolutional layers"); + DMLC_DECLARE_FIELD(output_dim).describe("fix output dim"); + DMLC_DECLARE_FIELD(group_size).describe("fix group size"); + DMLC_DECLARE_FIELD(pooled_size).describe("fix pooled size"); + DMLC_DECLARE_FIELD(part_size).set_default(0).describe("fix part size"); + DMLC_DECLARE_FIELD(sample_per_part).set_default(1).describe("fix samples per part"); + DMLC_DECLARE_FIELD(trans_std).set_default(0.0).set_range(0.0, 1.0).describe("fix transition std"); + DMLC_DECLARE_FIELD(no_trans).set_default(false) + .describe("Whether to disable trans parameter."); + } + }; + + template + class DeformablePSROIPoolingOp : public Operator { + public: + explicit DeformablePSROIPoolingOp(DeformablePSROIPoolingParam p) { + this->param_ = p; + } + + virtual void Forward(const OpContext &ctx, + const std::vector &in_data, + const std::vector &req, + const std::vector &out_data, + const std::vector &aux_args) { + using namespace mshadow; + size_t in_expected = param_.no_trans? 
2 : 3; + size_t out_expected = 2; + CHECK_EQ(in_data.size(), in_expected); + CHECK_EQ(out_data.size(), out_expected); + CHECK_EQ(out_data[deformablepsroipool::kOut].shape_[0], in_data[deformablepsroipool::kBox].shape_[0]); + CHECK_EQ(out_data[deformablepsroipool::kTopCount].shape_[0], in_data[deformablepsroipool::kBox].shape_[0]); + Stream *s = ctx.get_stream(); + + Tensor data = in_data[deformablepsroipool::kData].get(s); + Tensor bbox = in_data[deformablepsroipool::kBox].get(s); + Tensor out = out_data[deformablepsroipool::kOut].get(s); + Tensor top_count = out_data[deformablepsroipool::kTopCount].get(s); + CHECK_EQ(data.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(out.CheckContiguous(), true); + CHECK_EQ(top_count.CheckContiguous(), true); + out = -FLT_MAX; + top_count = 0.0f; + + Tensor trans; + if (!param_.no_trans) { + trans = in_data[deformablepsroipool::kTrans].get(s); + } + DeformablePSROIPoolForward(out, data, bbox, trans, top_count, param_.no_trans, param_.spatial_scale, + param_.output_dim, param_.group_size, param_.pooled_size, param_.part_size, param_.sample_per_part, param_.trans_std); + } + + virtual void Backward(const OpContext &ctx, + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data, + const std::vector &req, + const std::vector &in_grad, + const std::vector &aux_args) { + using namespace mshadow; + size_t in_expected = param_.no_trans ? 2 : 3; + size_t out_expected = 2; + CHECK_EQ(in_data.size(), in_expected); + CHECK_EQ(out_data.size(), out_expected); + CHECK_EQ(out_grad[deformablepsroipool::kOut].shape_[0], in_data[deformablepsroipool::kBox].shape_[0]); + CHECK_EQ(out_data[deformablepsroipool::kTopCount].shape_[0], in_data[deformablepsroipool::kBox].shape_[0]); + CHECK_NE(req[deformablepsroipool::kData], kWriteInplace) << + "DeformablePSROIPooling: Backward doesn't support kWriteInplace."; + CHECK_NE(req[deformablepsroipool::kBox], kWriteInplace) << + "DeformablePSROIPooling: Backward doesn't support kWriteInplace."; + // CHECK_NE(req[deformablepsroipool::kTrans], kWriteInplace) << + // "DeformablePSROIPooling: Backward doesn't support kWriteInplace."; + Stream *s = ctx.get_stream(); + + Tensor grad_out = out_grad[deformablepsroipool::kOut].get(s); + Tensor data = in_data[deformablepsroipool::kData].get(s); + Tensor bbox = in_data[deformablepsroipool::kBox].get(s); + Tensor top_count = out_data[deformablepsroipool::kTopCount].get(s); + Tensor grad_in = in_grad[deformablepsroipool::kData].get(s); + Tensor grad_roi = in_grad[deformablepsroipool::kBox].get(s); + Tensor grad_trans; + Tensor trans; + if (!param_.no_trans) { + CHECK_EQ(in_grad.size(), 3); + trans = in_data[deformablepsroipool::kTrans].get(s); + grad_trans = in_grad[deformablepsroipool::kTrans].get(s); + } + + CHECK_EQ(grad_out.CheckContiguous(), true); + CHECK_EQ(data.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(top_count.CheckContiguous(), true); + CHECK_EQ(grad_in.CheckContiguous(), true); + + Assign(grad_in, req[deformablepsroipool::kData], 0); + if (!param_.no_trans) { + Assign(grad_trans, req[deformablepsroipool::kTrans], 0); + } + DeformablePSROIPoolBackwardAcc(grad_in, grad_trans, grad_out, data, bbox, trans, top_count, param_.no_trans, + param_.spatial_scale, param_.output_dim, param_.group_size, param_.pooled_size, param_.part_size, + param_.sample_per_part, param_.trans_std); + Assign(grad_roi, req[deformablepsroipool::kBox], 0); + } + + private: + DeformablePSROIPoolingParam param_; + }; // 
class DeformablePSROIPoolingOp + + // Decalre Factory function, used for dispatch specialization + template + Operator* CreateOp(DeformablePSROIPoolingParam param, int dtype); + +#if DMLC_USE_CXX11 + class DeformablePSROIPoolingProp : public OperatorProperty { + public: + std::vector ListArguments() const override { + if (param_.no_trans) { + return{ "data", "rois" }; + } + else { + return{ "data", "rois", "trans" }; + } + } + + std::vector ListOutputs() const override { + return{ "output", "top_count" }; + } + + int NumOutputs() const override { + return 2; + } + + int NumVisibleOutputs() const override { + return 1; + } + + void Init(const std::vector >& kwargs) override { + param_.Init(kwargs); + if (param_.part_size == 0) { + param_.part_size = param_.pooled_size; + } + } + + std::map GetParams() const override { + return param_.__DICT__(); + } + + bool InferShape(std::vector *in_shape, + std::vector *out_shape, + std::vector *aux_shape) const override { + using namespace mshadow; + if (param_.no_trans) { + CHECK_EQ(in_shape->size(), 2) << "Input:[data, rois]"; + } + else { + CHECK_EQ(in_shape->size(), 3) << "Input:[data, rois, trans]"; + // trans: [num_rois, 2, pooled_h, pooled_w] + TShape tshape = in_shape->at(deformablepsroipool::kTrans); + CHECK_EQ(tshape.ndim(), 4) << "trans should be a 4D tensor of shape"; + } + + // data: [batch_size, c, h, w] + TShape dshape = in_shape->at(deformablepsroipool::kData); + CHECK_EQ(dshape.ndim(), 4) << "data should be a 4D tensor"; + + // bbox: [num_rois, 5] + TShape bshape = in_shape->at(deformablepsroipool::kBox); + CHECK_EQ(bshape.ndim(), 2) << "bbox should be a 2D tensor of shape [batch, 5]"; + CHECK_EQ(bshape[1], 5) << "bbox should be a 2D tensor of shape [batch, 5]"; + + // out: [num_rois, c, pooled_h, pooled_w] + // top_count: [num_rois, c, pooled_h, pooled_w] + out_shape->clear(); + out_shape->push_back( + Shape4(bshape[0], param_.output_dim, param_.pooled_size, param_.pooled_size)); + out_shape->push_back( + Shape4(bshape[0], param_.output_dim, param_.pooled_size, param_.pooled_size)); + return true; + } + + bool InferType(std::vector *in_type, + std::vector *out_type, + std::vector *aux_type) const override { + CHECK_GE(in_type->size(), 2); + int dtype = (*in_type)[0]; + CHECK_EQ(dtype, (*in_type)[1]); + CHECK_NE(dtype, -1) << "Input must have specified type"; + + out_type->clear(); + out_type->push_back(dtype); + out_type->push_back(dtype); + return true; + } + + OperatorProperty* Copy() const override { + DeformablePSROIPoolingProp* deformable_psroi_pooling_sym = new DeformablePSROIPoolingProp(); + deformable_psroi_pooling_sym->param_ = this->param_; + return deformable_psroi_pooling_sym; + } + + std::string TypeString() const override { + return "_contrib_DeformablePSROIPooling"; + } + + // decalre dependency and inplace optimization options + std::vector DeclareBackwardDependency( + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data) const override { + if (param_.no_trans) { + return{ out_grad[deformablepsroipool::kOut], in_data[deformablepsroipool::kData], in_data[deformablepsroipool::kBox], + out_data[deformablepsroipool::kTopCount] }; + } + else { + return{ out_grad[deformablepsroipool::kOut], in_data[deformablepsroipool::kData], in_data[deformablepsroipool::kBox], + in_data[deformablepsroipool::kTrans], out_data[deformablepsroipool::kTopCount] }; + } + } + + + Operator* CreateOperator(Context ctx) const override { + LOG(FATAL) << "Not Implemented."; + return NULL; + } + + Operator* 
CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const override; + + + private: + DeformablePSROIPoolingParam param_; + }; // class DeformablePSROIPoolingProp +#endif + } // namespace op +} // namespace mxnet +#endif // MXNET_OPERATOR_DEFORMABLE_PSROI_POOLING_INL_H_ \ No newline at end of file diff --git a/faster_rcnn/operator_cxx/deformable_psroi_pooling.cc b/faster_rcnn/operator_cxx/deformable_psroi_pooling.cc new file mode 100644 index 0000000..83e40c0 --- /dev/null +++ b/faster_rcnn/operator_cxx/deformable_psroi_pooling.cc @@ -0,0 +1,96 @@ +/*! + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_psroi_pooling.cc + * \brief + * \author Yi Li, Guodong Zhang, Jifeng Dai +*/ +#include "./deformable_psroi_pooling-inl.h" +#include +#include +#include +#include +#include + +using std::max; +using std::min; +using std::floor; +using std::ceil; + +namespace mshadow { + template + inline void DeformablePSROIPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const Tensor &trans, + const Tensor &top_count, + const bool no_trans, + const float spatial_scale, + const int output_dim, + const int group_size, + const int pooled_size, + const int part_size, + const int sample_per_part, + const float trans_std) { + // NOT_IMPLEMENTED; + return; + } + + template + inline void DeformablePSROIPoolBackwardAcc(const Tensor &in_grad, + const Tensor &trans_grad, + const Tensor &out_grad, + const Tensor &data, + const Tensor &bbox, + const Tensor &trans, + const Tensor &top_count, + const bool no_trans, + const float spatial_scale, + const int output_dim, + const int group_size, + const int pooled_size, + const int part_size, + const int sample_per_part, + const float trans_std) { + // NOT_IMPLEMENTED; + return; + } +} // namespace mshadow + +namespace mxnet { + namespace op { + + template<> + Operator *CreateOp(DeformablePSROIPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new DeformablePSROIPoolingOp(param); + }); + return op; + } + + Operator *DeformablePSROIPoolingProp::CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const { + std::vector out_shape, aux_shape; + std::vector out_type, aux_type; + CHECK(InferType(in_type, &out_type, &aux_type)); + CHECK(InferShape(in_shape, &out_shape, &aux_shape)); + DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0)); + } + + DMLC_REGISTER_PARAMETER(DeformablePSROIPoolingParam); + + MXNET_REGISTER_OP_PROPERTY(_contrib_DeformablePSROIPooling, DeformablePSROIPoolingProp) + .describe("Performs region-of-interest pooling on inputs. Resize bounding box coordinates by " + "spatial_scale and crop input feature maps accordingly. The cropped feature maps are pooled " + "by max pooling to a fixed size output indicated by pooled_size. batch_size will change to " + "the number of region bounding boxes after DeformablePSROIPooling") + .add_argument("data", "Symbol", "Input data to the pooling operator, a 4D Feature maps") + .add_argument("rois", "Symbol", "Bounding box coordinates, a 2D array of " + "[[batch_index, x1, y1, x2, y2]]. (x1, y1) and (x2, y2) are top left and down right corners " + "of designated region of interest. 
batch_index indicates the index of corresponding image " + "in the input data") + .add_argument("trans", "Symbol", "transition parameter") + .add_arguments(DeformablePSROIPoolingParam::__FIELDS__()); + } // namespace op +} // namespace mxnet diff --git a/faster_rcnn/operator_cxx/deformable_psroi_pooling.cu b/faster_rcnn/operator_cxx/deformable_psroi_pooling.cu new file mode 100644 index 0000000..3b17056 --- /dev/null +++ b/faster_rcnn/operator_cxx/deformable_psroi_pooling.cu @@ -0,0 +1,402 @@ +/*! + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_psroi_pooling.cu + * \brief + * \author Yi Li, Guodong Zhang, Jifeng Dai +*/ +#include "./deformable_psroi_pooling-inl.h" +#include +#include +#include +#include +#include "../../common/cuda_utils.h" +#include "../mxnet_op.h" + +#define DeformablePSROIPOOLING_CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \ + } while (0) +#define CUDA_KERNEL_LOOP(i, n) \ +for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ + i < (n); \ + i += blockDim.x * gridDim.x) + +namespace mshadow { + namespace cuda { + template + __device__ DType bilinear_interp( + const DType* data, + const DType x, + const DType y, + const int width, + const int height) { + int x1 = floor(x); + int x2 = ceil(x); + int y1 = floor(y); + int y2 = ceil(y); + DType dist_x = static_cast(x - x1); + DType dist_y = static_cast(y - y1); + DType value11 = data[y1*width + x1]; + DType value12 = data[y2*width + x1]; + DType value21 = data[y1*width + x2]; + DType value22 = data[y2*width + x2]; + DType value = (1 - dist_x)*(1 - dist_y)*value11 + (1 - dist_x)*dist_y*value12 + + dist_x*(1 - dist_y)*value21 + dist_x*dist_y*value22; + return value; + } + + template + __global__ void DeformablePSROIPoolForwardKernel( + const int count, + const DType* bottom_data, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const DType* bottom_rois, const DType* bottom_trans, + const bool no_trans, + const DType trans_std, + const int sample_per_part, + const int output_dim, + const int group_size, + const int part_size, + const int num_classes, + const int channels_each_class, + DType* top_data, + DType* top_count) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 5; + int roi_batch_ind = offset_bottom_rois[0]; + DType roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale - 0.5; + DType roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale - 0.5; + DType roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale - 0.5; + DType roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) 
* spatial_scale - 0.5; + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_end_w - roi_start_w, 0.1); //avoid 0 + DType roi_height = max(roi_end_h - roi_start_h, 0.1); + + // Compute w and h at bottom + DType bin_size_h = roi_height / static_cast(pooled_height); + DType bin_size_w = roi_width / static_cast(pooled_width); + + DType sub_bin_size_h = bin_size_h / static_cast(sample_per_part); + DType sub_bin_size_w = bin_size_w / static_cast(sample_per_part); + + int part_h = floor(static_cast(ph) / pooled_height*part_size); + int part_w = floor(static_cast(pw) / pooled_width*part_size); + int class_id = ctop / channels_each_class; + DType trans_x = no_trans ? static_cast(0) : + bottom_trans[(((n * num_classes + class_id) * 2) * part_size + part_h)*part_size + part_w] * trans_std; + DType trans_y = no_trans ? static_cast(0) : + bottom_trans[(((n * num_classes + class_id) * 2 + 1) * part_size + part_h)*part_size + part_w] * trans_std; + + DType wstart = static_cast(pw)* bin_size_w + + roi_start_w; + wstart += trans_x * roi_width; + DType hstart = static_cast(ph) * bin_size_h + + roi_start_h; + hstart += trans_y * roi_height; + + DType sum = 0; + int count = 0; + int gw = floor(static_cast(pw) * group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + + const DType* offset_bottom_data = bottom_data + (roi_batch_ind * channels) * height * width; + for (int ih = 0; ih < sample_per_part; ih++) { + for (int iw = 0; iw < sample_per_part; iw++) { + DType w = wstart + iw*sub_bin_size_w; + DType h = hstart + ih*sub_bin_size_h; + // bilinear interpolation + if (w<-0.5 || w>width - 0.5 || h<-0.5 || h>height - 0.5) { + continue; + } + w = min(max(w, 0.), width - 1.); + h = min(max(h, 0.), height - 1.); + int c = (ctop*group_size + gh)*group_size + gw; + DType val = bilinear_interp(offset_bottom_data + c*height*width, w, h, width, height); + sum += val; + count++; + } + } + top_data[index] = count == 0 ? static_cast(0) : sum / count; + top_count[index] = count; + } + } + + template + inline void DeformablePSROIPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const Tensor &trans, + const Tensor &top_count, + const bool no_trans, + const float spatial_scale, + const int output_dim, + const int group_size, + const int pooled_size, + const int part_size, + const int sample_per_part, + const float trans_std) { + // LOG(INFO) << "DeformablePSROIPoolForward"; + const DType *bottom_data = data.dptr_; + const DType *bottom_rois = bbox.dptr_; + const DType *bottom_trans = no_trans ? NULL : trans.dptr_; + DType *top_data = out.dptr_; + DType *top_count_data = top_count.dptr_; + const int count = out.shape_.Size(); + const int channels = data.size(1); + const int height = data.size(2); + const int width = data.size(3); + const int pooled_height = pooled_size; + const int pooled_width = pooled_size; + const int num_classes = no_trans ? 1 : trans.size(1) / 2; + const int channels_each_class = no_trans ? 
output_dim : output_dim / num_classes; + + cudaStream_t stream = Stream::GetStream(out.stream_); + DeformablePSROIPoolForwardKernel << > >( + count, bottom_data, spatial_scale, channels, height, width, pooled_height, pooled_width, + bottom_rois, bottom_trans, no_trans, trans_std, sample_per_part, output_dim, + group_size, part_size, num_classes, channels_each_class, top_data, top_count_data); + DeformablePSROIPOOLING_CUDA_CHECK(cudaPeekAtLastError()); + } + + + template + __global__ void DeformablePSROIPoolBackwardAccKernel( + const int count, + const DType* top_diff, + const DType* top_count, + const int num_rois, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const int output_dim, + DType* bottom_data_diff, DType* bottom_trans_diff, + const DType* bottom_data, + const DType* bottom_rois, + const DType* bottom_trans, + const bool no_trans, + const DType trans_std, + const int sample_per_part, + const int group_size, + const int part_size, + const int num_classes, + const int channels_each_class) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 5; + int roi_batch_ind = offset_bottom_rois[0]; + DType roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale - 0.5; + DType roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale - 0.5; + DType roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale - 0.5; + DType roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) * spatial_scale - 0.5; + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_end_w - roi_start_w, 0.1); //avoid 0 + DType roi_height = max(roi_end_h - roi_start_h, 0.1); + + // Compute w and h at bottom + DType bin_size_h = roi_height / static_cast(pooled_height); + DType bin_size_w = roi_width / static_cast(pooled_width); + + DType sub_bin_size_h = bin_size_h / static_cast(sample_per_part); + DType sub_bin_size_w = bin_size_w / static_cast(sample_per_part); + + int part_h = floor(static_cast(ph) / pooled_height*part_size); + int part_w = floor(static_cast(pw) / pooled_width*part_size); + int class_id = ctop / channels_each_class; + DType trans_x = no_trans ? static_cast(0) : + bottom_trans[(((n * num_classes + class_id) * 2) * part_size + part_h)*part_size + part_w] * trans_std; + DType trans_y = no_trans ? 
static_cast(0) : + bottom_trans[(((n * num_classes + class_id) * 2 + 1) * part_size + part_h)*part_size + part_w] * trans_std; + + DType wstart = static_cast(pw)* bin_size_w + + roi_start_w; + wstart += trans_x * roi_width; + DType hstart = static_cast(ph) * bin_size_h + + roi_start_h; + hstart += trans_y * roi_height; + + if (top_count[index] <= 0) { + continue; + } + DType diff_val = top_diff[index] / top_count[index]; + const DType* offset_bottom_data = bottom_data + roi_batch_ind * channels * height * width; + DType* offset_bottom_data_diff = bottom_data_diff + roi_batch_ind * channels * height * width; + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + + for (int ih = 0; ih < sample_per_part; ih++) { + for (int iw = 0; iw < sample_per_part; iw++) { + DType w = wstart + iw*sub_bin_size_w; + DType h = hstart + ih*sub_bin_size_h; + // bilinear interpolation + if (w<-0.5 || w>width - 0.5 || h<-0.5 || h>height - 0.5) { + continue; + } + w = min(max(w, 0.), width - 1.); + h = min(max(h, 0.), height - 1.); + int c = (ctop*group_size + gh)*group_size + gw; + // backward on feature + int x0 = floor(w); + int x1 = ceil(w); + int y0 = floor(h); + int y1 = ceil(h); + DType dist_x = w - x0, dist_y = h - y0; + DType q00 = (1 - dist_x)*(1 - dist_y); + DType q01 = (1 - dist_x)*dist_y; + DType q10 = dist_x*(1 - dist_y); + DType q11 = dist_x*dist_y; + int bottom_index_base = c * height *width; + atomicAdd(offset_bottom_data_diff + bottom_index_base + y0*width + x0, q00*diff_val); + atomicAdd(offset_bottom_data_diff + bottom_index_base + y1*width + x0, q01*diff_val); + atomicAdd(offset_bottom_data_diff + bottom_index_base + y0*width + x1, q10*diff_val); + atomicAdd(offset_bottom_data_diff + bottom_index_base + y1*width + x1, q11*diff_val); + + if (no_trans) { + continue; + } + DType U00 = offset_bottom_data[bottom_index_base + y0*width + x0]; + DType U01 = offset_bottom_data[bottom_index_base + y1*width + x0]; + DType U10 = offset_bottom_data[bottom_index_base + y0*width + x1]; + DType U11 = offset_bottom_data[bottom_index_base + y1*width + x1]; + DType diff_x = (U11*dist_y + U10*(1 - dist_y) - U01*dist_y - U00*(1 - dist_y)) + *trans_std*diff_val; + diff_x *= roi_width; + DType diff_y = (U11*dist_x + U01*(1 - dist_x) - U10*dist_x - U00*(1 - dist_x)) + *trans_std*diff_val; + diff_y *= roi_height; + + atomicAdd(bottom_trans_diff + (((n * num_classes + class_id) * 2) * part_size + part_h)*part_size + part_w, diff_x); + atomicAdd(bottom_trans_diff + (((n * num_classes + class_id) * 2 + 1)*part_size + part_h)*part_size + part_w, diff_y); + } + } + } + } + + + template + inline void DeformablePSROIPoolBackwardAcc(const Tensor &in_grad, + const Tensor &trans_grad, + const Tensor &out_grad, + const Tensor &data, + const Tensor &bbox, + const Tensor &trans, + const Tensor &top_count, + const bool no_trans, + const float spatial_scale, + const int output_dim, + const int group_size, + const int pooled_size, + const int part_size, + const int sample_per_part, + const float trans_std) { + // LOG(INFO) << "DeformablePSROIPoolBackward"; + const DType *top_diff = out_grad.dptr_; + const DType *bottom_data = data.dptr_; + const DType *bottom_rois = bbox.dptr_; + const DType *bottom_trans = no_trans ? NULL : trans.dptr_; + DType *bottom_data_diff = in_grad.dptr_; + DType *bottom_trans_diff = no_trans ? 
NULL : trans_grad.dptr_; + const DType *top_count_data = top_count.dptr_; + const int count = out_grad.shape_.Size(); + const int num_rois = bbox.size(0); + const int channels = in_grad.size(1); + const int height = in_grad.size(2); + const int width = in_grad.size(3); + const int pooled_height = pooled_size; + const int pooled_width = pooled_size; + const int num_classes = no_trans ? 1 : trans_grad.size(1) / 2; + const int channels_each_class = no_trans ? output_dim : output_dim / num_classes; + + cudaStream_t stream = Stream::GetStream(in_grad.stream_); + DeformablePSROIPoolBackwardAccKernel << > >( + count, top_diff, top_count_data, num_rois, spatial_scale, channels, height, width, + pooled_height, pooled_width, output_dim, bottom_data_diff, bottom_trans_diff, + bottom_data, bottom_rois, bottom_trans, no_trans, trans_std, sample_per_part, + group_size, part_size, num_classes, channels_each_class); + DeformablePSROIPOOLING_CUDA_CHECK(cudaPeekAtLastError()); + } + + } // namespace cuda + + template + inline void DeformablePSROIPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const Tensor &trans, + const Tensor &top_count, + const bool no_trans, + const float spatial_scale, + const int output_dim, + const int group_size, + const int pooled_size, + const int part_size, + const int sample_per_part, + const float trans_std) { + cuda::DeformablePSROIPoolForward(out, data, bbox, trans, top_count, no_trans, spatial_scale, + output_dim, group_size, pooled_size, part_size, sample_per_part, trans_std); + } + + template + inline void DeformablePSROIPoolBackwardAcc(const Tensor &in_grad, + const Tensor &trans_grad, + const Tensor &out_grad, + const Tensor &data, + const Tensor &bbox, + const Tensor &trans, + const Tensor &top_count, + const bool no_trans, + const float spatial_scale, + const int output_dim, + const int group_size, + const int pooled_size, + const int part_size, + const int sample_per_part, + const float trans_std) { + cuda::DeformablePSROIPoolBackwardAcc(in_grad, trans_grad, out_grad, data, bbox, trans, top_count, no_trans, + spatial_scale, output_dim, group_size, pooled_size, part_size, sample_per_part, trans_std); + } + +} // namespace mshadow + + +namespace mxnet { + namespace op { + + template<> + Operator* CreateOp(DeformablePSROIPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new DeformablePSROIPoolingOp(param); + }); + return op; + } + + } // namespace op +} // namespace mxnet diff --git a/faster_rcnn/operator_cxx/nn/deformable_im2col.cuh b/faster_rcnn/operator_cxx/nn/deformable_im2col.cuh new file mode 100644 index 0000000..f0dc2e5 --- /dev/null +++ b/faster_rcnn/operator_cxx/nn/deformable_im2col.cuh @@ -0,0 +1,525 @@ +/*! + ******************* BEGIN Caffe Copyright Notice and Disclaimer **************** + * + * COPYRIGHT + * + * All contributions by the University of California: + * Copyright (c) 2014-2017 The Regents of the University of California (Regents) + * All rights reserved. + * + * All other contributions: + * Copyright (c) 2014-2017, the respective contributors + * All rights reserved. + * + * Caffe uses a shared copyright model: each contributor holds copyright over + * their contributions to Caffe. The project versioning records all such + * contribution and copyright details. 
If a contributor wants to further mark + * their specific copyright on a particular contribution, they should indicate + * their copyright solely in the commit message of the change when it is + * committed. + * + * LICENSE + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are met: + * + * 1. Redistributions of source code must retain the above copyright notice, this + * list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright notice, + * this list of conditions and the following disclaimer in the documentation + * and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE + * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR + * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * CONTRIBUTION AGREEMENT + * + * By contributing to the BVLC/caffe repository through pull-request, comment, + * or otherwise, the contributor releases their content to the + * license and copyright terms herein. + * + ***************** END Caffe Copyright Notice and Disclaimer ******************** + * + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_im2col.cuh + * \brief Function definitions of converting an image to + * column matrix based on kernel, padding, dilation, and offset. + * These functions are mainly used in deformable convolution operators. 
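To make the column-buffer layout concrete before the deformable kernels below, here is a plain CPU im2col for an ordinary convolution, i.e. what `deformable_im2col` reduces to when every offset is zero. This is a simplified stand-in, not the deformable version; sizes are tiny and illustrative, and the buffer layout `(C*kh*kw) x (out_h*out_w)` matches the `col_buffer_shape` set up earlier.

```
// Plain (non-deformable) im2col on the CPU, for illustration only.
#include <cstdio>
#include <vector>

static void im2col_cpu(const float* im, int C, int H, int W,
                       int kh, int kw, int pad, int stride,
                       int out_h, int out_w, float* col) {
  for (int c = 0; c < C * kh * kw; ++c) {
    const int j = c % kw;            // kernel column
    const int i = (c / kw) % kh;     // kernel row
    const int ch = c / kw / kh;      // input channel
    for (int oh = 0; oh < out_h; ++oh) {
      for (int ow = 0; ow < out_w; ++ow) {
        const int h = oh * stride - pad + i;
        const int w = ow * stride - pad + j;
        const bool inside = (h >= 0 && h < H && w >= 0 && w < W);
        col[(c * out_h + oh) * out_w + ow] = inside ? im[(ch * H + h) * W + w] : 0.f;
      }
    }
  }
}

int main() {
  const int C = 1, H = 3, W = 3, kh = 2, kw = 2, pad = 0, stride = 1;
  const int out_h = (H + 2 * pad - kh) / stride + 1;    // 2
  const int out_w = (W + 2 * pad - kw) / stride + 1;    // 2
  std::vector<float> im = {1, 2, 3, 4, 5, 6, 7, 8, 9};  // 3x3 image
  std::vector<float> col(C * kh * kw * out_h * out_w);
  im2col_cpu(im.data(), C, H, W, kh, kw, pad, stride, out_h, out_w, col.data());
  for (int r = 0; r < C * kh * kw; ++r) {               // one row per (channel, i, j)
    for (int s = 0; s < out_h * out_w; ++s) std::printf("%4.0f", col[r * out_h * out_w + s]);
    std::printf("\n");
  }
  return 0;
}
```

The deformable version fills the same buffer, but each sampled location is displaced by a learned offset and read through bilinear interpolation instead of a direct array lookup.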
+ * \ref: https://arxiv.org/abs/1703.06211 + * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai + */ + +#ifndef MXNET_OPERATOR_CONTRIB_NN_DEFORMABLE_IM2COL_CUH_ +#define MXNET_OPERATOR_CONTRIB_NN_DEFORMABLE_IM2COL_CUH_ + +#include +#include +#include +#include +#include +#include "../../mxnet_op.h" +#include "../../../common/cuda_utils.h" + + + +namespace mxnet { +namespace op { + +template +__device__ DType deformable_im2col_bilinear(const DType* bottom_data, const int data_width, + const int height, const int width, DType h, DType w) { + + int h_low = floor(h); + int w_low = floor(w); + int h_high; + int w_high; + if (h_low >= height - 1) { + h_high = h_low = height - 1; + h = (DType)h_low; + } + else { + h_high = h_low + 1; + } + + if (w_low >= width - 1) { + w_high = w_low = width - 1; + w = (DType)w_low; + } + else { + w_high = w_low + 1; + } + + DType lh = h - h_low; + DType lw = w - w_low; + DType hh = 1 - lh, hw = 1 - lw; + + DType v1 = bottom_data[h_low * data_width + w_low]; + DType v2 = bottom_data[h_low * data_width + w_high]; + DType v3 = bottom_data[h_high * data_width + w_low]; + DType v4 = bottom_data[h_high * data_width + w_high]; + DType w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; + + DType val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); + return val; +} + + +template +__device__ DType get_gradient_weight(DType argmax_h, DType argmax_w, + const int h, const int w, const int height, const int width) { + + if (argmax_h < 0 || argmax_h > height || argmax_w < 0 || argmax_w > width) { + //empty + return 0; + } + + argmax_h = max(argmax_h, (DType)0.0f); + argmax_w = max(argmax_w, (DType)0.0f); + + int argmax_h_low = (int)argmax_h; + int argmax_w_low = (int)argmax_w; + int argmax_h_high; + int argmax_w_high; + if (argmax_h_low >= height - 1) { + argmax_h_high = argmax_h_low = height - 1; + argmax_h = (DType)argmax_h_low; + } else { + argmax_h_high = argmax_h_low + 1; + } + if (argmax_w_low >= width - 1) + { + argmax_w_high = argmax_w_low = width - 1; + argmax_w = (DType)argmax_w_low; + } else { + argmax_w_high = argmax_w_low + 1; + } + DType weight = 0; + if (h == argmax_h_low) { + if (w == argmax_w_low) { + weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); + } else if (w == argmax_w_high) { + weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); + } + } else if (h == argmax_h_high) { + if (w == argmax_w_low) { + weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); + } else if (w == argmax_w_high) { + weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); + } + } + return weight; +} + + +template +__device__ DType get_coordinate_weight(DType argmax_h, DType argmax_w, + const int height, const int width, const DType* im_data, + const int data_width, const int bp_dir) { + + if (argmax_h < 0 || argmax_h > height || argmax_w < 0 || argmax_w > width) + { + //empty + return 0; + } + + if (argmax_h < 0) argmax_h = 0; + if (argmax_w < 0) argmax_w = 0; + + int argmax_h_low = (int)argmax_h; + int argmax_w_low = (int)argmax_w; + int argmax_h_high; + int argmax_w_high; + if (argmax_h_low >= height - 1) { + argmax_h_high = argmax_h_low = height - 1; + argmax_h = (DType)argmax_h_low; + } else { + argmax_h_high = argmax_h_low + 1; + } + if (argmax_w_low >= width - 1) { + argmax_w_high = argmax_w_low = width - 1; + argmax_w = (DType)argmax_w_low; + } else { + argmax_w_high = argmax_w_low + 1; + } + DType weight = 0; + + if (bp_dir == 0) { + weight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low]; + weight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low 
* data_width + argmax_w_high]; + weight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low]; + weight += (argmax_w - argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high]; + } else if (bp_dir == 1) { + weight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low]; + weight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_high]; + weight += -1 * (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_low]; + weight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high]; + } + + return weight; +} + + +/*! + * \brief deformable_im2col gpu kernel. + * DO NOT call this directly. Use wrapper function im2col() instead; + */ +template +__global__ void deformable_im2col_gpu_kernel(const int n, const DType* data_im, const DType* data_offset, + const int height, const int width, const int kernel_h, const int kernel_w, + const int pad_h, const int pad_w, + const int stride_h, const int stride_w, + const int dilation_h, const int dilation_w, + const int channel_per_deformable_group, + const int height_col, const int width_col, + DType* data_col) { + CUDA_KERNEL_LOOP(index, n) { + // index index of output matrix + const int w_col = index % width_col; + const int h_col = (index / width_col) % height_col; + const int c_im = (index / width_col) / height_col; + const int c_col = c_im * kernel_h * kernel_w; + + // compute deformable group index + const int deformable_group_index = c_im / channel_per_deformable_group; + + const int h_in = h_col * stride_h - pad_h; + const int w_in = w_col * stride_w - pad_w; + DType* data_col_ptr = data_col + (c_col * height_col + h_col) * width_col + w_col; + const DType* data_im_ptr = data_im + (c_im * height + h_in) * width + w_in; + const DType* data_offset_ptr = data_offset + deformable_group_index * 2 * kernel_h * kernel_w * height_col * width_col; + + + for (int i = 0; i < kernel_h; ++i) { + for (int j = 0; j < kernel_w; ++j) { + const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; + const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + w_col; + const DType offset_h = data_offset_ptr[data_offset_h_ptr]; + const DType offset_w = data_offset_ptr[data_offset_w_ptr]; + DType val = static_cast(0); + const DType h_im = h_in + i * dilation_h + offset_h; + const DType w_im = w_in + j * dilation_w + offset_w; + if (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) { + const DType map_h = i * dilation_h + offset_h; + const DType map_w = j * dilation_w + offset_w; + const int cur_height = height - h_in; + const int cur_width = width - w_in; + val = deformable_im2col_bilinear(data_im_ptr, width, cur_height, cur_width, map_h, map_w); + } + *data_col_ptr = val; + data_col_ptr += height_col * width_col; + } + } + } +} + + + + + + +/*!\brief + * cpu function of deformable_im2col algorithm + * \param s device stream + * \param data_im pointer of an image (C, H, W, ...) in the image batch + * \param data_offset pointer of offset (C, H, W, ...) in the offset batch + * \param im_shape input image shape in dimensions (N, C, H, W,) + * \param col_shape column buffer shape (#channels, output_im_height, output_im_width, ...) 
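The sampling itself is the `deformable_im2col_bilinear` helper above. The following host-side transcription (illustrative inputs, evaluated on a tiny 2x2 patch) makes the interpolation weights and the border clamping easy to verify by hand.

```
// Host-side transcription of deformable_im2col_bilinear() above.
#include <cstdio>
#include <cmath>

static float bilinear(const float* data, int data_width, int height, int width,
                      float h, float w) {
  int h_low = (int)std::floor(h), w_low = (int)std::floor(w);
  int h_high, w_high;
  if (h_low >= height - 1) { h_high = h_low = height - 1; h = (float)h_low; }
  else                     { h_high = h_low + 1; }
  if (w_low >= width - 1)  { w_high = w_low = width - 1;  w = (float)w_low; }
  else                     { w_high = w_low + 1; }
  float lh = h - h_low, lw = w - w_low, hh = 1 - lh, hw = 1 - lw;
  float v1 = data[h_low * data_width + w_low],  v2 = data[h_low * data_width + w_high];
  float v3 = data[h_high * data_width + w_low], v4 = data[h_high * data_width + w_high];
  return hh * hw * v1 + hh * lw * v2 + lh * hw * v3 + lh * lw * v4;
}

int main() {
  const float patch[4] = {0.f, 1.f,   // row 0
                          2.f, 3.f};  // row 1
  // (0.5, 0.5) averages the four corners -> 1.5; (1.0, 0.0) hits a pixel exactly -> 2.0.
  // The kernel above evaluates this at h_in + i*dilation_h + offset_h, etc.
  std::printf("value at (0.5, 0.5) = %.2f\n", bilinear(patch, 2, 2, 2, 0.5f, 0.5f));
  std::printf("value at (1.0, 0.0) = %.2f\n", bilinear(patch, 2, 2, 2, 1.0f, 0.0f));
  return 0;
}
```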
+ * \param kernel_shape kernel filter shape + * \param pad pad shape + * \param stride stride shape + * \param dilation dilation shape + * \param deformable_group #offset group that deformable convolution use + * \param data_col column buffer pointer + */ +template +inline void deformable_im2col(mshadow::Stream* s, + const DType* data_im, const DType* data_offset, + const TShape& im_shape, const TShape& col_shape, const TShape& kernel_shape, + const TShape& pad, const TShape& stride, const TShape& dilation, + const uint32_t deformable_group, DType* data_col) { + // num_axes should be smaller than block size + index_t num_spatial_axes = kernel_shape.ndim(); + CHECK_LT(num_spatial_axes, mshadow::cuda::kBaseThreadNum); + index_t channel_per_deformable_group = im_shape[1] / deformable_group; + index_t num_kernels = im_shape[1] * col_shape.ProdShape(1, col_shape.ndim()); + using namespace mxnet_op; + switch (num_spatial_axes) { + case 2: + deformable_im2col_gpu_kernel // NOLINT_NEXT_LINE(whitespace/operators) + <<::GetStream(s)>>>( + num_kernels, data_im, data_offset, im_shape[2], im_shape[3], kernel_shape[0], kernel_shape[1], + pad[0], pad[1], stride[0], stride[1], dilation[0], dilation[1], channel_per_deformable_group, + col_shape[1], col_shape[2], data_col); + MSHADOW_CUDA_POST_KERNEL_CHECK(deformable_im2col_gpu_kernel); + break; + default: + LOG(FATAL) << "im2col_nd_gpu does not support computation with " + << num_spatial_axes << " spatial axes"; + } +} + + +/*! +* \brief deformable_col2im gpu kernel. +* \brief DO NOT call this directly. Use wrapper function deformable_col2im() instead; +*/ +template +__global__ void deformable_col2im_gpu_kernel(const int n, const DType* data_col, const DType* data_offset, + const int channels, const int height, const int width, + const int kernel_h, const int kernel_w, + const int pad_h, const int pad_w, + const int stride_h, const int stride_w, + const int dilation_h, const int dilation_w, + const int channel_per_deformable_group, + const int height_col, const int width_col, + DType* grad_im, OpReqType req) { + CUDA_KERNEL_LOOP(index, n) { + const int j = (index / width_col / height_col) % kernel_w; + const int i = (index / width_col / height_col / kernel_w) % kernel_h; + const int c = index / width_col / height_col / kernel_w / kernel_h; + // compute the start and end of the output + + const int deformable_group_index = c / channel_per_deformable_group; + + int w_out = index % width_col; + int h_out = (index / width_col) % height_col; + int w_in = w_out * stride_w - pad_w; + int h_in = h_out * stride_h - pad_h; + + const DType* data_offset_ptr = data_offset + deformable_group_index * 2 * kernel_h * kernel_w * height_col * width_col; + const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; + const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; + const DType offset_h = data_offset_ptr[data_offset_h_ptr]; + const DType offset_w = data_offset_ptr[data_offset_w_ptr]; + const DType cur_inv_h_data = h_in + i * dilation_h + offset_h; + const DType cur_inv_w_data = w_in + j * dilation_w + offset_w; + + const DType cur_top_grad = data_col[index]; + const int cur_h = (int)cur_inv_h_data; + const int cur_w = (int)cur_inv_w_data; + for (int dy = -2; dy <= 2; dy++) { + for (int dx = -2; dx <= 2; dx++) { + if (cur_h + dy >= 0 && cur_h + dy < height && + cur_w + dx >= 0 && cur_w + dx < width && + abs(cur_inv_h_data - (cur_h + dy)) < 1 && + abs(cur_inv_w_data - (cur_w + dx)) < 1 
+ ) { + int cur_bottom_grad_pos = (c * height + cur_h + dy) * width + cur_w + dx; + DType weight = get_gradient_weight(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width); + atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad); + } + } + } + } +} + + +/*!\brief + * gpu function of deformable_col2im algorithm + * \param s device stream + * \param data_col start pointer of the column buffer to be filled + * \param data_offset pointer of offset (C, H, W, ...) in the offset batch + * \param im_shape input image shape in dimensions (N, C, H, W,) + * \param col_shape column buffer shape + * \param kernel_shape kernel filter shape + * \param pad pad shape + * \param stride stride shape + * \param dilation dilation shape + * \param deformable_group #offset group that deformable convolution use + * \param grad_im pointer of a image (C, H, W,...) in the image batch + */ +template +inline void deformable_col2im(mshadow::Stream* s, + const DType* data_col, const DType* data_offset, + const TShape& im_shape, const TShape& col_shape, const TShape& kernel_shape, + const TShape& pad, const TShape& stride, + const TShape& dilation, const uint32_t deformable_group, + DType* grad_im, OpReqType req) { + index_t num_spatial_axes = kernel_shape.ndim(); + index_t im_size = im_shape.ProdShape(1, im_shape.ndim()); + index_t channel_per_deformable_group = im_shape[1] / deformable_group; + index_t num_kernels = col_shape.ProdShape(0, col_shape.ndim()); + // num_axes should be smaller than block size + CHECK_LT(num_spatial_axes, mshadow::cuda::kBaseThreadNum); + using namespace mxnet_op; + switch (num_spatial_axes) { + case 2: + // To avoid involving atomic operations, we will launch one kernel per + // bottom dimension, and then in the kernel add up the top dimensions. + // NOLINT_NEXT_LINE(whitespace/operators) + deformable_col2im_gpu_kernel<<::GetStream(s)>>>( + num_kernels, data_col, data_offset, im_shape[1], im_shape[2], im_shape[3], + kernel_shape[0], kernel_shape[1], pad[0], pad[1], stride[0], stride[1], + dilation[0], dilation[1], channel_per_deformable_group, col_shape[1], col_shape[2], grad_im, req); + MSHADOW_CUDA_POST_KERNEL_CHECK(deformable_col2im_gpu_kernel); + break; + default: + LOG(FATAL) << "col2im_nd_gpu does not support computation with " + << num_spatial_axes << " spatial axes"; + } +} + + +/*! + * \brief deformable_col2im_coord gpu kernel. + * \brief DO NOT call this directly. 
Use wrapper function deformable_col2im_coord() instead; + */ +template +__global__ void deformable_col2im_coord_gpu_kernel(const int n, const DType* data_col, + const DType* data_im, const DType* data_offset, + const int channels, const int height, const int width, + const int kernel_h, const int kernel_w, + const int pad_h, const int pad_w, + const int stride_h, const int stride_w, + const int dilation_h, const int dilation_w, + const int channel_per_deformable_group, + const int height_col, const int width_col, + DType* grad_offset, OpReqType req) { + CUDA_KERNEL_LOOP(index, n) { + DType val = 0; + int w = index % width_col; + int h = (index / width_col) % height_col; + int c = index / width_col / height_col; + // compute the start and end of the output + + const int deformable_group_index = c / (2 * kernel_h * kernel_w); + const int col_step = kernel_h * kernel_w; + int cnt = 0; + const DType* data_col_ptr = data_col + deformable_group_index * channel_per_deformable_group * width_col * height_col; + const DType* data_im_ptr = data_im + deformable_group_index * channel_per_deformable_group / kernel_h / kernel_w * height * width; + const DType* data_offset_ptr = data_offset + deformable_group_index * 2 * kernel_h * kernel_w * height_col * width_col; + + const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; + + for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step) { + const int col_pos = ((col_c * height_col) + h) * width_col + w; + const int bp_dir = offset_c % 2; + + int j = (col_pos / width_col / height_col) % kernel_w; + int i = (col_pos / width_col / height_col / kernel_w) % kernel_h; + int w_out = col_pos % width_col; + int h_out = (col_pos / width_col) % height_col; + int w_in = w_out * stride_w - pad_w; + int h_in = h_out * stride_h - pad_h; + const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); + const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out); + const DType offset_h = data_offset_ptr[data_offset_h_ptr]; + const DType offset_w = data_offset_ptr[data_offset_w_ptr]; + DType inv_h = h_in + i * dilation_h + offset_h; + DType inv_w = w_in + j * dilation_w + offset_w; + if (inv_h < 0 || inv_w < 0 || inv_h >= height || inv_w >= width) { + inv_h = inv_w = -1; + } + const DType weight = get_coordinate_weight( + inv_h, inv_w, + height, width, data_im_ptr + cnt * height * width, width, bp_dir); + val += weight * data_col_ptr[col_pos]; + cnt += 1; + } + + grad_offset[index] = val; + } +} + +/*!\brief + * gpu function of deformable_col2im_coord algorithm + * \param s device stream + * \param data_col start pointer of the column buffer to be filled + * \param data_im pointer of an image (C, H, W, ...) in the image batch + * \param data_offset pointer of offset (C, H, W, ...) in the offset batch + * \param im_shape input image shape in dimensions (N, C, H, W,) + * \param col_shape column buffer shape + * \param kernel_shape kernel filter shape + * \param pad pad shape + * \param stride stride shape + * \param dilation dilation shape + * \param deformable_group #offset group that deformable convolution use + * \param grad_offset pointer of the offset (C, H, W,...) 
in the offset batch + */ +template +inline void deformable_col2im_coord(mshadow::Stream* s, + const DType* data_col, const DType* data_im, const DType* data_offset, const TShape& im_shape, + const TShape& col_shape, const TShape& kernel_shape, + const TShape& pad, const TShape& stride, + const TShape& dilation, const uint32_t deformable_group, DType* grad_offset, OpReqType req) { + index_t num_spatial_axes = kernel_shape.ndim(); + index_t num_kernels = col_shape[1] * col_shape[2] * 2 * kernel_shape[0] * kernel_shape[1] * deformable_group; + index_t channel_per_deformable_group = col_shape[0] / deformable_group; + // num_axes should be smaller than block size + CHECK_LT(num_spatial_axes, mshadow::cuda::kBaseThreadNum); + using namespace mxnet_op; + switch (num_spatial_axes) { + case 2: + // To avoid involving atomic operations, we will launch one kernel per + // bottom dimension, and then in the kernel add up the top dimensions. + // NOLINT_NEXT_LINE(whitespace/operators) + + deformable_col2im_coord_gpu_kernel << ::GetStream(s) >> >( + num_kernels, data_col, data_im, data_offset, im_shape[1], im_shape[2], im_shape[3], + kernel_shape[0], kernel_shape[1], pad[0], pad[1], stride[0], stride[1], + dilation[0], dilation[1], channel_per_deformable_group, col_shape[1], col_shape[2], grad_offset, req); + MSHADOW_CUDA_POST_KERNEL_CHECK(deformable_col2im_gpu_kernel); + break; + default: + LOG(FATAL) << "col2im_nd_gpu does not support computation with " + << num_spatial_axes << " spatial axes"; + } +} + + +} // namespace op +} // namespace mxnet + +#endif // MXNET_OPERATOR_CONTRIB_NN_DEFORMABLE_IM2COL_CUH_ diff --git a/faster_rcnn/operator_cxx/nn/deformable_im2col.h b/faster_rcnn/operator_cxx/nn/deformable_im2col.h new file mode 100644 index 0000000..60a2ecd --- /dev/null +++ b/faster_rcnn/operator_cxx/nn/deformable_im2col.h @@ -0,0 +1,157 @@ +/*! + ******************* BEGIN Caffe Copyright Notice and Disclaimer **************** + * + * COPYRIGHT + * + * All contributions by the University of California: + * Copyright (c) 2014-2017 The Regents of the University of California (Regents) + * All rights reserved. + * + * All other contributions: + * Copyright (c) 2014-2017, the respective contributors + * All rights reserved. + * + * Caffe uses a shared copyright model: each contributor holds copyright over + * their contributions to Caffe. The project versioning records all such + * contribution and copyright details. If a contributor wants to further mark + * their specific copyright on a particular contribution, they should indicate + * their copyright solely in the commit message of the change when it is + * committed. + * + * LICENSE + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are met: + * + * 1. Redistributions of source code must retain the above copyright notice, this + * list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright notice, + * this list of conditions and the following disclaimer in the documentation + * and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE + * DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR + * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * CONTRIBUTION AGREEMENT + * + * By contributing to the BVLC/caffe repository through pull-request, comment, + * or otherwise, the contributor releases their content to the + * license and copyright terms herein. + * + ***************** END Caffe Copyright Notice and Disclaimer ******************** + * + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file deformable_im2col.h + * \brief Function definitions of converting an image to + * column matrix based on kernel, padding, dilation, and offset. + * These functions are mainly used in deformable convolution operators. + * \ref: https://arxiv.org/abs/1703.06211 + * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai + */ + +#ifndef MXNET_OPERATOR_CONTRIB_NN_DEFORMABLE_IM2COL_H_ +#define MXNET_OPERATOR_CONTRIB_NN_DEFORMABLE_IM2COL_H_ + +#include +#include +#include +#include +#include "../../mxnet_op.h" + +namespace mxnet { +namespace op { + +/*!\brief + * cpu function of deformable_im2col algorithm + * \param s device stream + * \param data_im pointer of an image (C, H, W, ...) in the image batch + * \param data_offset pointer of offset (C, H, W, ...) in the offset batch + * \param im_shape input image shape in dimensions (N, C, H, W,) + * \param col_shape column buffer shape (#channels, output_im_height, output_im_width, ...) + * \param kernel_shape kernel filter shape + * \param pad pad shape + * \param stride stride shape + * \param dilation dilation shape + * \param deformable_group #offset group that deformable convolution use + * \param data_col column buffer pointer + */ +template +inline void deformable_im2col(mshadow::Stream* s, + const DType* data_im, const DType* data_offset, + const TShape& im_shape, const TShape& col_shape, const TShape& kernel_shape, + const TShape& pad, const TShape& stride, const TShape& dilation, + const uint32_t deformable_group, DType* data_col) { + if (2 == kernel_shape.ndim()) { + LOG(FATAL) << "not implemented"; + } else { + LOG(FATAL) << "not implemented"; + } +} + + +/*!\brief + * cpu function of deformable_col2im algorithm + * \param s device stream + * \param data_col start pointer of the column buffer to be filled + * \param data_offset pointer of offset (C, H, W, ...) in the offset batch + * \param im_shape input image shape in dimensions (N, C, H, W,) + * \param col_shape column buffer shape + * \param kernel_shape kernel filter shape + * \param pad pad shape + * \param stride stride shape + * \param dilation dilation shape + * \param deformable_group #offset group that deformable convolution use + * \param grad_im pointer of a image (C, H, W,...) 
in the image batch + */ +template +inline void deformable_col2im(mshadow::Stream* s, + const DType* data_col, const DType* data_offset, + const TShape& im_shape, const TShape& col_shape, const TShape& kernel_shape, + const TShape& pad, const TShape& stride, + const TShape& dilation, const uint32_t deformable_group, + DType* grad_im, OpReqType req) { + index_t num_spatial_axes = kernel_shape.ndim(); + LOG(FATAL) << "not implemented"; +} + + +/*!\brief + * cpu function of deformable_col2im_coord algorithm + * \param s device stream + * \param data_col start pointer of the column buffer to be filled + * \param data_im pointer of an image (C, H, W, ...) in the image batch + * \param data_offset pointer of offset (C, H, W, ...) in the offset batch + * \param im_shape input image shape in dimensions (N, C, H, W,) + * \param col_shape column buffer shape + * \param kernel_shape kernel filter shape + * \param pad pad shape + * \param stride stride shape + * \param dilation dilation shape + * \param deformable_group #offset group that deformable convolution use + * \param grad_offset pointer of the offset (C, H, W,...) in the offset batch + */ + +template +inline void deformable_col2im_coord(mshadow::Stream* s, + const DType* data_col, const DType* data_im, const DType* data_offset, const TShape& im_shape, + const TShape& col_shape, const TShape& kernel_shape, + const TShape& pad, const TShape& stride, + const TShape& dilation, const uint32_t deformable_group, DType* grad_offset, OpReqType req) { + LOG(FATAL) << "not implemented"; +} + +} // namespace op +} // namespace mxnet +#ifdef __CUDACC__ +#include "./deformable_im2col.cuh" +#endif +#endif // MXNET_OPERATOR_CONTRIB_NN_DEFORMABLE_IM2COL_H_ diff --git a/faster_rcnn/operator_cxx/psroi_pooling-inl.h b/faster_rcnn/operator_cxx/psroi_pooling-inl.h new file mode 100644 index 0000000..39489f4 --- /dev/null +++ b/faster_rcnn/operator_cxx/psroi_pooling-inl.h @@ -0,0 +1,235 @@ +/*! + * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file psroi_pooling-inl.h + * \brief psroi pooling operator and symbol + * \author Yi Li, Tairui Chen, Guodong Zhang, Jifeng Dai +*/ +#ifndef MXNET_OPERATOR_PSROI_POOLING_INL_H_ +#define MXNET_OPERATOR_PSROI_POOLING_INL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include "../mshadow_op.h" +#include "../operator_common.h" + + +namespace mxnet { +namespace op { + +// Declare enumeration of input order to make code more intuitive. +// These enums are only visible within this header +namespace psroipool { +enum PSROIPoolingOpInputs {kData, kBox}; +enum PSROIPoolingOpOutputs {kOut, kMappingChannel}; +} // psroipool + +struct PSROIPoolingParam : public dmlc::Parameter { + // TShape pooled_size; + float spatial_scale; + int output_dim; + int pooled_size; + int group_size; + DMLC_DECLARE_PARAMETER(PSROIPoolingParam) { + DMLC_DECLARE_FIELD(spatial_scale).set_range(0.0, 1.0) + .describe("Ratio of input feature map height (or w) to raw image height (or w). 
" + "Equals the reciprocal of total stride in convolutional layers"); + DMLC_DECLARE_FIELD(output_dim).describe("fix output dim"); + DMLC_DECLARE_FIELD(pooled_size).describe("fix pooled size"); + DMLC_DECLARE_FIELD(group_size).set_default(0).describe("fix group size"); + } +}; + +template +class PSROIPoolingOp : public Operator { + public: + explicit PSROIPoolingOp(PSROIPoolingParam p) { + this->param_ = p; + } + + virtual void Forward(const OpContext &ctx, + const std::vector &in_data, + const std::vector &req, + const std::vector &out_data, + const std::vector &aux_args) { + using namespace mshadow; + size_t expected = 2; + CHECK_EQ(in_data.size(), expected); + CHECK_EQ(out_data.size(), expected); + CHECK_EQ(out_data[psroipool::kOut].shape_[0], in_data[psroipool::kBox].shape_[0]); + CHECK_EQ(out_data[psroipool::kMappingChannel].shape_[0], in_data[psroipool::kBox].shape_[0]); + Stream *s = ctx.get_stream(); + + Tensor data = in_data[psroipool::kData].get(s); + Tensor bbox = in_data[psroipool::kBox].get(s); + Tensor out = out_data[psroipool::kOut].get(s); + Tensor mapping_channel = out_data[psroipool::kMappingChannel].get(s); + CHECK_EQ(data.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(out.CheckContiguous(), true); + CHECK_EQ(mapping_channel.CheckContiguous(), true); + out = -FLT_MAX; + mapping_channel = -1.0f; + PSROIPoolForward(out, data, bbox, mapping_channel, param_.spatial_scale, param_.output_dim, param_.group_size); + } + + virtual void Backward(const OpContext &ctx, + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data, + const std::vector &req, + const std::vector &in_grad, + const std::vector &aux_args) { + using namespace mshadow; + size_t expected = 2; + CHECK_EQ(in_data.size(), expected); + CHECK_EQ(out_data.size(), expected); + CHECK_EQ(out_grad[psroipool::kOut].shape_[0], in_data[psroipool::kBox].shape_[0]); + CHECK_EQ(out_data[psroipool::kMappingChannel].shape_[0], in_data[psroipool::kBox].shape_[0]); + CHECK_NE(req[psroipool::kData], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + CHECK_NE(req[psroipool::kBox], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + Stream *s = ctx.get_stream(); + + Tensor grad_out = out_grad[psroipool::kOut].get(s); + Tensor bbox = in_data[psroipool::kBox].get(s); + Tensor mapping_channel = out_data[psroipool::kMappingChannel].get(s); + Tensor grad_in = in_grad[psroipool::kData].get(s); + Tensor grad_roi = in_grad[psroipool::kBox].get(s); + + CHECK_EQ(grad_out.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(mapping_channel.CheckContiguous(), true); + CHECK_EQ(grad_in.CheckContiguous(), true); + + if (kAddTo == req[psroipool::kData] || kWriteTo == req[psroipool::kData]) { + if (kWriteTo == req[psroipool::kData]) { + grad_in = 0.0f; + } + PSROIPoolBackwardAcc(grad_in, grad_out, bbox, mapping_channel, param_.spatial_scale, param_.output_dim); + } + if (kWriteTo == req[psroipool::kBox]) { + grad_roi = 0.0f; + } + + } + + private: + PSROIPoolingParam param_; +}; // class PSROIPoolingOp + +// Decalre Factory function, used for dispatch specialization +template +Operator* CreateOp(PSROIPoolingParam param, int dtype); + +#if DMLC_USE_CXX11 +class PSROIPoolingProp : public OperatorProperty { + public: + std::vector ListArguments() const override { + return {"data", "rois"}; + } + + std::vector ListOutputs() const override { + return {"output", "maxidx"}; + } + + int NumOutputs() const override { + 
return 2; + } + + int NumVisibleOutputs() const override { + return 1; + } + + void Init(const std::vector >& kwargs) override { + param_.Init(kwargs); + if (param_.group_size == 0) { + param_.group_size = param_.pooled_size; + } + } + + std::map GetParams() const override { + return param_.__DICT__(); + } + + bool InferShape(std::vector *in_shape, + std::vector *out_shape, + std::vector *aux_shape) const override { + using namespace mshadow; + CHECK_EQ(in_shape->size(), 2) << "Input:[data, rois]"; + + // data: [batch_size, c, h, w] + TShape dshape = in_shape->at(psroipool::kData); + CHECK_EQ(dshape.ndim(), 4) << "data should be a 4D tensor"; + + // bbox: [num_rois, 5] + TShape bshape = in_shape->at(psroipool::kBox); + CHECK_EQ(bshape.ndim(), 2) << "bbox should be a 2D tensor of shape [batch, 5]"; + CHECK_EQ(bshape[1], 5) << "bbox should be a 2D tensor of shape [batch, 5]"; + + // out: [num_rois, c, pooled_h, pooled_w] + // mapping_channel: [num_rois, c, pooled_h, pooled_w] + out_shape->clear(); + out_shape->push_back( + Shape4(bshape[0], param_.output_dim, param_.pooled_size, param_.pooled_size)); + out_shape->push_back( + Shape4(bshape[0], param_.output_dim, param_.pooled_size, param_.pooled_size)); + return true; + } + + bool InferType(std::vector *in_type, + std::vector *out_type, + std::vector *aux_type) const override { + CHECK_EQ(in_type->size(), 2); + int dtype = (*in_type)[0]; + CHECK_EQ(dtype, (*in_type)[1]); + CHECK_NE(dtype, -1) << "Input must have specified type"; + + out_type->clear(); + out_type->push_back(dtype); + out_type->push_back(dtype); + return true; + } + + OperatorProperty* Copy() const override { + PSROIPoolingProp* psroi_pooling_sym = new PSROIPoolingProp(); + psroi_pooling_sym->param_ = this->param_; + return psroi_pooling_sym; + } + + std::string TypeString() const override { + return "_contrib_PSROIPooling"; + } + + // decalre dependency and inplace optimization options + std::vector DeclareBackwardDependency( + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data) const override { + return {out_grad[psroipool::kOut], in_data[psroipool::kBox], out_data[psroipool::kMappingChannel]}; + } + + + Operator* CreateOperator(Context ctx) const override { + LOG(FATAL) << "Not Implemented."; + return NULL; + } + + Operator* CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const override; + + + private: + PSROIPoolingParam param_; +}; // class PSROIPoolingProp +#endif +} // namespace op +} // namespace mxnet +#endif // MXNET_OPERATOR_PSROI_POOLING_INL_H_ \ No newline at end of file diff --git a/faster_rcnn/operator_cxx/psroi_pooling.cc b/faster_rcnn/operator_cxx/psroi_pooling.cc new file mode 100644 index 0000000..5c7e126 --- /dev/null +++ b/faster_rcnn/operator_cxx/psroi_pooling.cc @@ -0,0 +1,81 @@ +/*! 
+ * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file psroi_pooling.cc + * \brief psroi pooling operator + * \author Yi Li, Tairui Chen, Guodong Zhang, Jifeng Dai +*/ +#include "./psroi_pooling-inl.h" +#include +#include +#include +#include +#include + +using std::max; +using std::min; +using std::floor; +using std::ceil; + +namespace mshadow { +template +inline void PSROIPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const Tensor &mapping_channel, + const float spatial_scale_, + const int output_dim_, + const int group_size_) { + // NOT_IMPLEMENTED; + return; +} + +template +inline void PSROIPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const Tensor &mapping_channel, + const float spatial_scale_, + const int output_dim_) { + // NOT_IMPLEMENTED; + return; +} +} // namespace mshadow + +namespace mxnet { +namespace op { + +template<> +Operator *CreateOp(PSROIPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new PSROIPoolingOp(param); + }); + return op; +} + +Operator *PSROIPoolingProp::CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const { + std::vector out_shape, aux_shape; + std::vector out_type, aux_type; + CHECK(InferType(in_type, &out_type, &aux_type)); + CHECK(InferShape(in_shape, &out_shape, &aux_shape)); + DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0)); +} + +DMLC_REGISTER_PARAMETER(PSROIPoolingParam); + +MXNET_REGISTER_OP_PROPERTY(_contrib_PSROIPooling, PSROIPoolingProp) +.describe("Performs region-of-interest pooling on inputs. Resize bounding box coordinates by " +"spatial_scale and crop input feature maps accordingly. The cropped feature maps are pooled " +"by max pooling to a fixed size output indicated by pooled_size. batch_size will change to " +"the number of region bounding boxes after PSROIPooling") +.add_argument("data", "Symbol", "Input data to the pooling operator, a 4D Feature maps") +.add_argument("rois", "Symbol", "Bounding box coordinates, a 2D array of " +"[[batch_index, x1, y1, x2, y2]]. (x1, y1) and (x2, y2) are top left and down right corners " +"of designated region of interest. batch_index indicates the index of corresponding image " +"in the input data") +.add_arguments(PSROIPoolingParam::__FIELDS__()); +} // namespace op +} // namespace mxnet \ No newline at end of file diff --git a/faster_rcnn/operator_cxx/psroi_pooling.cu b/faster_rcnn/operator_cxx/psroi_pooling.cu new file mode 100644 index 0000000..96f5398 --- /dev/null +++ b/faster_rcnn/operator_cxx/psroi_pooling.cu @@ -0,0 +1,263 @@ +/*! 
+ * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The MIT License [see LICENSE for details] + * \file psroi_pooling.cu + * \brief psroi pooling operator + * \author Yi Li, Tairui Chen, Guodong Zhang, Jifeng Dai +*/ +#include "./psroi_pooling-inl.h" +#include +#include +#include +#include +#include "../../common/cuda_utils.h" +#include "../mxnet_op.h" + +#define PSROIPOOLING_CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \ + } while (0) +#define CUDA_KERNEL_LOOP(i, n) \ +for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ + i < (n); \ + i += blockDim.x * gridDim.x) + +namespace mshadow { +namespace cuda { + +template +__global__ void PSROIPoolForwardKernel( + const int count, + const DType* bottom_data, + const DType spatial_scale,f + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const DType* bottom_rois, + const int output_dim, + const int group_size, + DType* top_data, + DType* mapping_channel) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 5; + int roi_batch_ind = offset_bottom_rois[0]; + DType roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale; + DType roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale; + DType roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale; + DType roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) * spatial_scale; + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_end_w - roi_start_w, 0.1); //avoid 0 + DType roi_height = max(roi_end_h - roi_start_h, 0.1); + + // Compute w and h at bottom + DType bin_size_h = roi_height / static_cast(pooled_height); + DType bin_size_w = roi_width / static_cast(pooled_width); + + int hstart = floor(static_cast(ph) * bin_size_h + + roi_start_h); + int wstart = floor(static_cast(pw)* bin_size_w + + roi_start_w); + int hend = ceil(static_cast(ph + 1) * bin_size_h + + roi_start_h); + int wend = ceil(static_cast(pw + 1) * bin_size_w + + roi_start_w); + // Add roi offsets and clip to input boundaries + hstart = min(max(hstart, 0), height); + hend = min(max(hend, 0), height); + wstart = min(max(wstart, 0),width); + wend = min(max(wend, 0), width); + bool is_empty = (hend <= hstart) || (wend <= wstart); + + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + int c = (ctop*group_size + gh)*group_size + gw; + + const DType* offset_bottom_data = bottom_data + (roi_batch_ind * channels + c) * height * width; + DType out_sum = 0; + for (int h = hstart; h < hend; ++h){ + for (int w = wstart; w < wend; ++w){ + int bottom_index = h*width + w; + out_sum += offset_bottom_data[bottom_index]; + } + } + + DType bin_area = (hend - hstart)*(wend - wstart); + top_data[index] = is_empty? (DType)0. 
: out_sum/bin_area; + mapping_channel[index] = c; + } +} + +template +inline void PSROIPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const Tensor &mapping_channel, + const float spatial_scale, + const int output_dim_, + const int group_size_) { + // LOG(INFO) << "PSROIPoolForward"; + const DType *bottom_data = data.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *top_data = out.dptr_; + DType *mapping_channel_ptr = mapping_channel.dptr_; + const int count = out.shape_.Size(); + const int channels = data.size(1); + const int height = data.size(2); + const int width = data.size(3); + const int pooled_height = out.size(2); + const int pooled_width = out.size(3); + cudaStream_t stream = Stream::GetStream(out.stream_); + PSROIPoolForwardKernel << > >( + count, bottom_data, spatial_scale, channels, height, width, + pooled_height, pooled_width, bottom_rois, output_dim_, group_size_, top_data, mapping_channel_ptr); + PSROIPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + + +template +__global__ void PSROIPoolBackwardAccKernel( + const int count, + const DType* top_diff, + const DType* mapping_channel, + const int num_rois, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const int output_dim, + DType* bottom_diff, + const DType* bottom_rois) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 5; + int roi_batch_ind = offset_bottom_rois[0]; + DType roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale; + DType roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale; + DType roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale; + DType roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) * spatial_scale; + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_end_w - roi_start_w, 0.1); //avoid 0 + DType roi_height = max(roi_end_h - roi_start_h, 0.1); + + // Compute w and h at bottom + DType bin_size_h = roi_height / static_cast(pooled_height); + DType bin_size_w = roi_width / static_cast(pooled_width); + + int hstart = floor(static_cast(ph)* bin_size_h + + roi_start_h); + int wstart = floor(static_cast(pw)* bin_size_w + + roi_start_w); + int hend = ceil(static_cast(ph + 1) * bin_size_h + + roi_start_h); + int wend = ceil(static_cast(pw + 1) * bin_size_w + + roi_start_w); + // Add roi offsets and clip to input boundaries + hstart = min(max(hstart, 0), height); + hend = min(max(hend, 0), height); + wstart = min(max(wstart, 0), width); + wend = min(max(wend, 0), width); + bool is_empty = (hend <= hstart) || (wend <= wstart); + + // Compute c at bottom + int c = mapping_channel[index]; + DType* offset_bottom_diff = bottom_diff + (roi_batch_ind * channels + c) * height * width; + DType bin_area = (hend - hstart)*(wend - wstart); + DType diff_val = is_empty ? (DType)0. 
: top_diff[index] / bin_area; + for (int h = hstart; h < hend; ++h){ + for (int w = wstart; w < wend; ++w){ + int bottom_index = h*width + w; + // mxnet_gpu_atomic_add(diff_val, offset_bottom_diff + bottom_index); + atomicAdd(offset_bottom_diff + bottom_index, diff_val); + } + } + } +} + + +template +inline void PSROIPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const Tensor &mapping_channel, + const float spatial_scale, + const int output_dim_) { + // LOG(INFO) << "PSROIPoolBackward"; + const DType *top_diff = out_grad.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *bottom_diff = in_grad.dptr_; + DType *mapping_channel_ptr = mapping_channel.dptr_; + const int count = out_grad.shape_.Size(); + const int num_rois = bbox.size(0); + const int channels = in_grad.size(1); + const int height = in_grad.size(2); + const int width = in_grad.size(3); + const int pooled_height = out_grad.size(2); + const int pooled_width = out_grad.size(3); + cudaStream_t stream = Stream::GetStream(in_grad.stream_); + PSROIPoolBackwardAccKernel << > >( + count, top_diff, mapping_channel_ptr, num_rois, spatial_scale, channels, height, width, + pooled_height, pooled_width, output_dim_, bottom_diff, bottom_rois); + PSROIPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + +} // namespace cuda + +template +inline void PSROIPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const Tensor &mapping_channel, + const float spatial_scale, + const int output_dim_, + const int group_size_) { + cuda::PSROIPoolForward(out, data, bbox, mapping_channel, spatial_scale, output_dim_, group_size_); +} + +template +inline void PSROIPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const Tensor &mapping_channel, + const float spatial_scale, + const int output_dim_) { + cuda::PSROIPoolBackwardAcc(in_grad, out_grad, bbox, mapping_channel, spatial_scale, output_dim_); +} + +} // namespace mshadow + + +namespace mxnet { +namespace op { + +template<> +Operator* CreateOp(PSROIPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new PSROIPoolingOp(param); + }); + return op; +} + +} // namespace op +} // namespace mxnet diff --git a/faster_rcnn/operator_py/RRoIDecoder.py b/faster_rcnn/operator_py/RRoIDecoder.py new file mode 100644 index 0000000..afc94db --- /dev/null +++ b/faster_rcnn/operator_py/RRoIDecoder.py @@ -0,0 +1,167 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +import numpy.random as npr +from distutils.util import strtobool + +from bbox.bbox_transform import bbox_pred, clip_boxes +from rpn.generate_anchor import generate_anchors +from nms.nms import gpu_nms_wrapper +# from poly_nms_gpu.poly_nms import poly_gpu_nms +from poly_nms_gpu.nms import poly_gpu_nms_wrapper +from bbox.bbox_transform import dbbox_transform2_inv_warp +from bbox.bbox_transform import clip_polys, RotBox2Polys, polygonToRotRectangle_batch, choose_best_Rroi_batch +import cPickle +import pdb +import copy +DEBUG = False + +## version 2 did not apply nms +class RRoIDecoderOperator(mx.operator.CustomOp): + def __init__(self, pre_nms_top_n, post_nms_top_n, threshold, min_size, cfg): + super(RRoIDecoderOperator, self).__init__() + 
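+        # forward() filters out RRois whose area is below (min_size * im_scale)^2, sorts the
+        # survivors by score, keeps the top pre_nms_top_n, then truncates (padding by random
+        # re-sampling if needed) to exactly post_nms_top_n so the output shape stays fixed;
+        # this v2 decoder does not apply rotated NMS, so self._threshold is not used in forward().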
self._pre_nms_top_n = pre_nms_top_n + self._post_nms_top_n = post_nms_top_n + self._threshold = threshold + self._min_size = min_size + self._cfg = cfg + + def forward(self, is_train, req, in_data, out_data, aux): + # batch_size = in_data[0].shape[0] + batch_size = in_data[0][0][0] + if batch_size.asnumpy() > 1: + raise ValueError("Sorry, multiple images each device is not implemented") + + rois = in_data[0].asnumpy() + st_pred = in_data[1].asnumpy() + # st_score: shape (n, 2) + st_score = in_data[2].asnumpy()[:, :, 1].reshape(-1, 1) + im_info = in_data[-1].asnumpy()[0, :] + + pre_nms_topN = self._pre_nms_top_n + post_nms_topN = self._post_nms_top_n + min_size = self._min_size + # 1. generate Rrois + cfg = self._cfg + # checked it, yes, the weights is different in training and testing, so the st_pred is different in training and testing + # this is very critical + if is_train: + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + means = np.tile(np.array(cfg.TRAIN.BBOX_MEANS), 2 if cfg.CLASS_AGNOSTIC else cfg.dataset.NUM_CLASSES) + stds = np.tile(np.array(cfg.TRAIN.BBOX_STDS), 2 if cfg.CLASS_AGNOSTIC else cfg.dataset.NUM_CLASSES) + st_pred = st_pred * stds + means + Rrois = dbbox_transform2_inv_warp(rois[:, 1:], st_pred)[:, 5:] + if (len(Rrois) == 0): + pdb.set_trace() + # remove Rrois with either height or width < thredhold + keep = self._filter_boxes_v2(Rrois, min_size * im_info[2] * min_size * im_info[2]) + keep_Rrois = Rrois[keep] + scores = st_score[keep] + + if len(keep_Rrois) == 0: + Rrois[:, 2] = np.maximum(Rrois[:, 2], min_size * im_info[2]) + Rrois[:, 3] = np.maximum(Rrois[:, 3], min_size * im_info[2]) + # if after filter, there are no instances, clip all Rrois' size + keep_Rrois = Rrois + scores = st_score + proposals = RotBox2Polys(keep_Rrois) + + # sort all (proposal, score) pairs by score from highest to lowest + # take top pre_nms_topN (e.g. 6000) + order = scores.ravel().argsort()[::-1] + if pre_nms_topN > 0: + order = order[:pre_nms_topN] + proposals = proposals[order, :] + scores = scores[order] + # take after_nms_topN (e.g. 
300) + # return the top proposals (-> RoIs top) + det = np.hstack((proposals, scores)).astype(np.float32) + + keep = np.arange(len(det)) + if post_nms_topN > 0: + keep = keep[:post_nms_topN] + # pad to ensure output size remains unchanged + if len(keep) < post_nms_topN: + pad = npr.choice(keep, size=post_nms_topN - len(keep)) + keep = np.hstack((keep, pad)) + proposals = proposals[keep, :] + + scores = scores[keep] + # ----------------------------- + # trans polys to rotboxes + proposals = polygonToRotRectangle_batch(proposals) + # range angle in [0, 180] to eliminate ambiguity of orientation agnostic instance regression + proposals = choose_best_Rroi_batch(proposals) + # proposals: (x_ctr, y_ctr, w, h, angle) + # Output rois array + # Our RPN implementation only supports a single input image, so all + # batch inds are 0 + batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32) + blob = np.hstack((batch_inds, proposals.astype(np.float32, copy=False))) + # if is_train: + self.assign(out_data[0], req[0], blob) + + # elarged area for feature extraction + elarge_proposals = copy.deepcopy(proposals) + elarge_proposals[:, 2] = proposals[:, 2] * 1.2 + elarge_proposals[:, 3] = proposals[:, 3] * 1.4 + elarge_blob = np.hstack((batch_inds, elarge_proposals.astype(np.float32, copy=False))) + self.assign(out_data[1], req[1], elarge_blob) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + @staticmethod + def _filter_boxes(boxes, min_size): + """ Remove all boxes with any side smaller than min_size """ + ws = boxes[:, 2] + hs = boxes[:, 3] + keep = np.where((ws >= min_size) & (hs >= min_size))[0] + return keep + + @staticmethod + def _filter_boxes_v2(boxes, area): + """ Remove all boxes with area below 10 * 10 """ + ws = boxes[:, 2] + hs = boxes[:, 3] + # keep = np.where((ws >= min_size) & (hs >= min_size))[0] + keep = np.where(ws * hs >= area)[0] + return keep + +@mx.operator.register("RRoIDecoder") +class RRoIDecoderProp(mx.operator.CustomOpProp): + def __init__(self, cfg, Rroi_pre_nms_top_n='12000', Rroi_post_nms_top_n='2000', threshold='0.5', min_size='10'): + super(RRoIDecoderProp, self).__init__(need_top_grad=False) + self._cfg = cPickle.loads(cfg) + self._Rroi_pre_nms_top_n = int(Rroi_pre_nms_top_n) + self._Rroi_post_nms_top_n = int(Rroi_post_nms_top_n) + self._threshold = float(threshold) + self._min_size = int(min_size) + + def list_arguments(self): + + return ['rois', 'bbox_pred', 'cls_prob', 'im_info'] + + def list_outputs(self): + + # return ['output_rois', 'output_rois_L'] + return ['output', 'output_rois_L'] + + def infer_shape(self, in_shape): + output_shape = (self._Rroi_post_nms_top_n, 6) + + return in_shape, [output_shape, output_shape] + + def create_operator(self, ctx, shapes, dtypes): + return RRoIDecoderOperator(self._Rroi_pre_nms_top_n, self._Rroi_post_nms_top_n, + self._threshold, self._min_size, self._cfg) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/faster_rcnn/operator_py/RRoI_target_rotbox_v2.py b/faster_rcnn/operator_py/RRoI_target_rotbox_v2.py new file mode 100644 index 0000000..1e59aa4 --- /dev/null +++ b/faster_rcnn/operator_py/RRoI_target_rotbox_v2.py @@ -0,0 +1,134 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# 
-------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background roi and assigns label, bbox_transform to them. +""" + +import mxnet as mx +import numpy as np +from distutils.util import strtobool +from easydict import EasyDict as edict +import cPickle +from bbox.bbox_transform import bbox_poly2hbb, poly2bbox, polygonToRotRectangle_batch, choose_best_Rroi_batch + +from core.rcnn import sample_Rrois +import copy +import pdb +DEBUG = False + +# v2 is the version with Rroi elarge +class RRoITargetRotBox_v2Operator(mx.operator.CustomOp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction): + super(RRoITargetRotBox_v2Operator, self).__init__() + self._num_classes = num_classes + self._batch_images = batch_images + self._batch_rois = batch_rois + self._cfg = cfg + self._fg_fraction = fg_fraction + + if DEBUG: + self._count = 0 + self._fg_num = 0 + self._bg_num = 0 + + def forward(self, is_train, req, in_data, out_data, aux): + assert self._batch_rois == -1 or self._batch_rois % self._batch_images == 0, \ + 'batchimages {} must devide batch_rois {}'.format(self._batch_images, self._batch_rois) + all_rois = in_data[0].asnumpy() + gt_boxes = in_data[1].asnumpy() + + if self._batch_rois == -1: + rois_per_image = all_rois.shape[0] + gt_boxes.shape[0] + fg_rois_per_image = rois_per_image + else: + rois_per_image = self._batch_rois / self._batch_images + fg_rois_per_image = np.round(self._fg_fraction * rois_per_image).astype(int) + + # Include ground-truth boxes in the set of candidate rois + zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype) + # pdb.set_trace() + gt_rotboxes = np.concatenate((polygonToRotRectangle_batch(gt_boxes[:, :-1]), gt_boxes[:, -1][:, np.newaxis]), axis=1).astype(np.float32) + + all_rois = np.vstack((all_rois, np.hstack((zeros, choose_best_Rroi_batch(gt_rotboxes[:, :-1]))))) + # Sanity check: single batch only + assert np.all(all_rois[:, 0] == 0), 'Only single item batches are supported' + gpu_id = in_data[0].context.device_id + rois, labels, bbox_targets, bbox_weights = \ + sample_Rrois(all_rois, fg_rois_per_image, rois_per_image, self._num_classes, self._cfg, gt_boxes=gt_rotboxes, device_id=gpu_id) + + # elarge roi for feature extraction + # rois: (n, 6) (batch, x, y, w, h ,theta) + # pdb.set_trace() + elarge_rois = copy.deepcopy(rois) + elarge_rois[:, 3] = rois[:, 3] * 1.2 + elarge_rois[:, 4] = rois[:, 4] * 1.4 + + if DEBUG: + print "labels=", labels + print 'num fg: {}'.format((labels > 0).sum()) + print 'num bg: {}'.format((labels == 0).sum()) + self._count += 1 + self._fg_num += (labels > 0).sum() + self._bg_num += (labels == 0).sum() + print "self._count=", self._count + print 'num fg avg: {}'.format(self._fg_num / self._count) + print 'num bg avg: {}'.format(self._bg_num / self._count) + print 'ratio: {:.3f}'.format(float(self._fg_num) / float(self._bg_num)) + # pdb.set_trace() + for ind, val in enumerate([rois, elarge_rois, labels, bbox_targets, bbox_weights]): + self.assign(out_data[ind], req[ind], val) + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + +@mx.operator.register('RRoI_target_rotbox_v2') +class RRoITargetRotbox_v2Prop(mx.operator.CustomOpProp): + def __init__(self, 
num_classes, batch_images, batch_rois, cfg, fg_fraction='0.25'): + super(RRoITargetRotbox_v2Prop, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._batch_images = int(batch_images) + self._batch_rois = int(batch_rois) + self._cfg = cPickle.loads(cfg) + self._fg_fraction = float(fg_fraction) + + def list_arguments(self): + return ['Rrois', 'gt_boxes'] + + def list_outputs(self): + return ['Rrois_output', 'Rrois_output_elarge', 'Rlabel', 'Rbbox_target', 'Rbbox_weight'] + + def infer_shape(self, in_shape): + rpn_rois_shape = in_shape[0] + gt_boxes_shape = in_shape[1] + + rois = rpn_rois_shape[0] + gt_boxes_shape[0] if self._batch_rois == -1 else self._batch_rois + # rois = rpn_rois_shape[0] if self._batch_rois == -1 else self._batch_rois + + output_rois_shape = (rois, 6) + label_shape = (rois, ) + bbox_target_shape = (rois, 5 * self._num_classes) + bbox_weight_shape = (rois, 5 * self._num_classes) + + return [rpn_rois_shape, gt_boxes_shape], \ + [output_rois_shape, output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + # return [rpn_rois_shape], \ + # [output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + + def create_operator(self, ctx, shapes, dtypes): + return RRoITargetRotBox_v2Operator(self._num_classes, self._batch_images, self._batch_rois, self._cfg, self._fg_fraction) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/faster_rcnn/operator_py/__init__.py b/faster_rcnn/operator_py/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/faster_rcnn/operator_py/box_annotator_ohem.py b/faster_rcnn/operator_py/box_annotator_ohem.py new file mode 100644 index 0000000..1ad2a7b --- /dev/null +++ b/faster_rcnn/operator_py/box_annotator_ohem.py @@ -0,0 +1,87 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background roi and assigns label, bbox_transform to them. 
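+BoxAnnotatorOHEM implements online hard example mining (OHEM): it ranks RoIs by the sum of
+their classification and bbox-regression losses, keeps the roi_per_img hardest ones, and
+ignores the rest by setting their labels to -1 and zeroing their bbox weights.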
+""" + +import mxnet as mx +import numpy as np +from distutils.util import strtobool +import pdb + + + +class BoxAnnotatorOHEMOperator(mx.operator.CustomOp): + def __init__(self, num_classes, num_reg_classes, roi_per_img): + super(BoxAnnotatorOHEMOperator, self).__init__() + self._num_classes = num_classes + self._num_reg_classes = num_reg_classes + self._roi_per_img = roi_per_img + + def forward(self, is_train, req, in_data, out_data, aux): + + cls_score = in_data[0] + bbox_pred = in_data[1] + labels = in_data[2].asnumpy() + bbox_targets = in_data[3] + bbox_weights = in_data[4] + + # pdb.set_trace() + per_roi_loss_cls = mx.nd.SoftmaxActivation(cls_score) + 1e-14 + per_roi_loss_cls = per_roi_loss_cls.asnumpy() + ## get the probilities of category corresponding to category, which should be 1 in ground truth + per_roi_loss_cls = per_roi_loss_cls[np.arange(per_roi_loss_cls.shape[0], dtype='int'), labels.astype('int')] + per_roi_loss_cls = -1 * np.log(per_roi_loss_cls) + per_roi_loss_cls = np.reshape(per_roi_loss_cls, newshape=(-1,)) + + per_roi_loss_bbox = bbox_weights * mx.nd.smooth_l1((bbox_pred - bbox_targets), scalar=1.0) + per_roi_loss_bbox = mx.nd.sum(per_roi_loss_bbox, axis=1).asnumpy() + + top_k_per_roi_loss = np.argsort(per_roi_loss_cls + per_roi_loss_bbox) + labels_ohem = labels + labels_ohem[top_k_per_roi_loss[::-1][self._roi_per_img:]] = -1 + bbox_weights_ohem = bbox_weights.asnumpy() + bbox_weights_ohem[top_k_per_roi_loss[::-1][self._roi_per_img:]] = 0 + + labels_ohem = mx.nd.array(labels_ohem) + bbox_weights_ohem = mx.nd.array(bbox_weights_ohem) + + for ind, val in enumerate([labels_ohem, bbox_weights_ohem]): + self.assign(out_data[ind], req[ind], val) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + +@mx.operator.register('BoxAnnotatorOHEM') +class BoxAnnotatorOHEMProp(mx.operator.CustomOpProp): + def __init__(self, num_classes, num_reg_classes, roi_per_img): + super(BoxAnnotatorOHEMProp, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._num_reg_classes = int(num_reg_classes) + self._roi_per_img = int(roi_per_img) + + def list_arguments(self): + return ['cls_score', 'bbox_pred', 'labels', 'bbox_targets', 'bbox_weights'] + + def list_outputs(self): + return ['labels_ohem', 'bbox_weights_ohem'] + + def infer_shape(self, in_shape): + labels_shape = in_shape[2] + bbox_weights_shape = in_shape[4] + + return in_shape, \ + [labels_shape, bbox_weights_shape] + + def create_operator(self, ctx, shapes, dtypes): + return BoxAnnotatorOHEMOperator(self._num_classes, self._num_reg_classes, self._roi_per_img) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/faster_rcnn/operator_py/proposal.py b/faster_rcnn/operator_py/proposal.py new file mode 100644 index 0000000..815ad58 --- /dev/null +++ b/faster_rcnn/operator_py/proposal.py @@ -0,0 +1,242 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +""" +Proposal Operator transform anchor coordinates into ROI coordinates with prediction results on 
+classification probability and bounding box prediction results, and image size and scale information. +""" + +import mxnet as mx +import numpy as np +import numpy.random as npr +from distutils.util import strtobool + +from bbox.bbox_transform import bbox_pred, clip_boxes +from rpn.generate_anchor import generate_anchors +from nms.nms import py_nms_wrapper, cpu_nms_wrapper, gpu_nms_wrapper + +DEBUG = False + + +class ProposalOperator(mx.operator.CustomOp): + def __init__(self, feat_stride, scales, ratios, output_score, + rpn_pre_nms_top_n, rpn_post_nms_top_n, threshold, rpn_min_size): + super(ProposalOperator, self).__init__() + self._feat_stride = feat_stride + self._scales = np.fromstring(scales[1:-1], dtype=float, sep=',') + self._ratios = np.fromstring(ratios[1:-1], dtype=float, sep=',') + self._anchors = generate_anchors(base_size=self._feat_stride, scales=self._scales, ratios=self._ratios) + self._num_anchors = self._anchors.shape[0] + self._output_score = output_score + self._rpn_pre_nms_top_n = rpn_pre_nms_top_n + self._rpn_post_nms_top_n = rpn_post_nms_top_n + self._threshold = threshold + self._rpn_min_size = rpn_min_size + + if DEBUG: + print 'feat_stride: {}'.format(self._feat_stride) + print 'anchors:' + print self._anchors + + def forward(self, is_train, req, in_data, out_data, aux): + nms = gpu_nms_wrapper(self._threshold, in_data[0].context.device_id) + + batch_size = in_data[0].shape[0] + if batch_size > 1: + raise ValueError("Sorry, multiple images each device is not implemented") + + # for each (H, W) location i + # generate A anchor boxes centered on cell i + # apply predicted bbox deltas at cell i to each of the A anchors + # clip predicted boxes to image + # remove predicted boxes with either height or width < threshold + # sort all (proposal, score) pairs by score from highest to lowest + # take top pre_nms_topN proposals before NMS + # apply NMS with threshold 0.7 to remaining proposals + # take after_nms_topN proposals after NMS + # return the top proposals (-> RoIs top, scores top) + + pre_nms_topN = self._rpn_pre_nms_top_n + post_nms_topN = self._rpn_post_nms_top_n + min_size = self._rpn_min_size + + # the first set of anchors are background probabilities + # keep the second part + scores = in_data[0].asnumpy()[:, self._num_anchors:, :, :] + bbox_deltas = in_data[1].asnumpy() + im_info = in_data[2].asnumpy()[0, :] + + if DEBUG: + print 'im_size: ({}, {})'.format(im_info[0], im_info[1]) + print 'scale: {}'.format(im_info[2]) + + # 1. 
Generate proposals from bbox_deltas and shifted anchors + # use real image size instead of padded feature map sizes + height, width = int(im_info[0] / self._feat_stride), int(im_info[1] / self._feat_stride) + + if DEBUG: + print 'score map size: {}'.format(scores.shape) + print "resudial: {}".format((scores.shape[2] - height, scores.shape[3] - width)) + + # Enumerate all shifts + shift_x = np.arange(0, width) * self._feat_stride + shift_y = np.arange(0, height) * self._feat_stride + shift_x, shift_y = np.meshgrid(shift_x, shift_y) + shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose() + + # Enumerate all shifted anchors: + # + # add A anchors (1, A, 4) to + # cell K shifts (K, 1, 4) to get + # shift anchors (K, A, 4) + # reshape to (K*A, 4) shifted anchors + A = self._num_anchors + K = shifts.shape[0] + anchors = self._anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2)) + anchors = anchors.reshape((K * A, 4)) + + # Transpose and reshape predicted bbox transformations to get them + # into the same order as the anchors: + # + # bbox deltas will be (1, 4 * A, H, W) format + # transpose to (1, H, W, 4 * A) + # reshape to (1 * H * W * A, 4) where rows are ordered by (h, w, a) + # in slowest to fastest order + bbox_deltas = self._clip_pad(bbox_deltas, (height, width)) + bbox_deltas = bbox_deltas.transpose((0, 2, 3, 1)).reshape((-1, 4)) + + # Same story for the scores: + # + # scores are (1, A, H, W) format + # transpose to (1, H, W, A) + # reshape to (1 * H * W * A, 1) where rows are ordered by (h, w, a) + scores = self._clip_pad(scores, (height, width)) + scores = scores.transpose((0, 2, 3, 1)).reshape((-1, 1)) + + # Convert anchors into proposals via bbox transformations + proposals = bbox_pred(anchors, bbox_deltas) + + # 2. clip predicted boxes to image + proposals = clip_boxes(proposals, im_info[:2]) + + # 3. remove predicted boxes with either height or width < threshold + # (NOTE: convert min_size to input image scale stored in im_info[2]) + keep = self._filter_boxes(proposals, min_size * im_info[2]) + proposals = proposals[keep, :] + scores = scores[keep] + + # 4. sort all (proposal, score) pairs by score from highest to lowest + # 5. take top pre_nms_topN (e.g. 6000) + order = scores.ravel().argsort()[::-1] + if pre_nms_topN > 0: + order = order[:pre_nms_topN] + proposals = proposals[order, :] + scores = scores[order] + + # 6. apply nms (e.g. threshold = 0.7) + # 7. take after_nms_topN (e.g. 300) + # 8. 
return the top proposals (-> RoIs top) + det = np.hstack((proposals, scores)).astype(np.float32) + keep = nms(det) + if post_nms_topN > 0: + keep = keep[:post_nms_topN] + # pad to ensure output size remains unchanged + if len(keep) < post_nms_topN: + pad = npr.choice(keep, size=post_nms_topN - len(keep)) + keep = np.hstack((keep, pad)) + proposals = proposals[keep, :] + scores = scores[keep] + + # Output rois array + # Our RPN implementation only supports a single input image, so all + # batch inds are 0 + batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32) + blob = np.hstack((batch_inds, proposals.astype(np.float32, copy=False))) + self.assign(out_data[0], req[0], blob) + + if self._output_score: + self.assign(out_data[1], req[1], scores.astype(np.float32, copy=False)) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + self.assign(in_grad[0], req[0], 0) + self.assign(in_grad[1], req[1], 0) + self.assign(in_grad[2], req[2], 0) + + @staticmethod + def _filter_boxes(boxes, min_size): + """ Remove all boxes with any side smaller than min_size """ + ws = boxes[:, 2] - boxes[:, 0] + 1 + hs = boxes[:, 3] - boxes[:, 1] + 1 + keep = np.where((ws >= min_size) & (hs >= min_size))[0] + return keep + + @staticmethod + def _clip_pad(tensor, pad_shape): + """ + Clip boxes of the pad area. + :param tensor: [n, c, H, W] + :param pad_shape: [h, w] + :return: [n, c, h, w] + """ + H, W = tensor.shape[2:] + h, w = pad_shape + + if h < H or w < W: + tensor = tensor[:, :, :h, :w].copy() + + return tensor + + +@mx.operator.register("proposal") +class ProposalProp(mx.operator.CustomOpProp): + def __init__(self, feat_stride='16', scales='(8, 16, 32)', ratios='(0.5, 1, 2)', output_score='False', + rpn_pre_nms_top_n='6000', rpn_post_nms_top_n='300', threshold='0.3', rpn_min_size='16'): + super(ProposalProp, self).__init__(need_top_grad=False) + self._feat_stride = int(feat_stride) + self._scales = scales + self._ratios = ratios + self._output_score = strtobool(output_score) + self._rpn_pre_nms_top_n = int(rpn_pre_nms_top_n) + self._rpn_post_nms_top_n = int(rpn_post_nms_top_n) + self._threshold = float(threshold) + self._rpn_min_size = int(rpn_min_size) + + def list_arguments(self): + return ['cls_prob', 'bbox_pred', 'im_info'] + + def list_outputs(self): + if self._output_score: + return ['output', 'score'] + else: + return ['output'] + + def infer_shape(self, in_shape): + cls_prob_shape = in_shape[0] + bbox_pred_shape = in_shape[1] + assert cls_prob_shape[0] == bbox_pred_shape[0], 'ROI number does not equal in cls and reg' + + batch_size = cls_prob_shape[0] + im_info_shape = (batch_size, 3) + output_shape = (self._rpn_post_nms_top_n, 5) + score_shape = (self._rpn_post_nms_top_n, 1) + + if self._output_score: + return [cls_prob_shape, bbox_pred_shape, im_info_shape], [output_shape, score_shape] + else: + return [cls_prob_shape, bbox_pred_shape, im_info_shape], [output_shape] + + def create_operator(self, ctx, shapes, dtypes): + return ProposalOperator(self._feat_stride, self._scales, self._ratios, self._output_score, + self._rpn_pre_nms_top_n, self._rpn_post_nms_top_n, self._threshold, self._rpn_min_size) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/faster_rcnn/operator_py/proposal_target.py b/faster_rcnn/operator_py/proposal_target.py new file mode 100644 index 0000000..f837742 --- /dev/null +++ b/faster_rcnn/operator_py/proposal_target.py @@ -0,0 +1,121 @@ +# -------------------------------------------------------- +# 
Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background roi and assigns label, bbox_transform to them. +""" + +import mxnet as mx +import numpy as np +from distutils.util import strtobool +from easydict import EasyDict as edict +import cPickle + + +from core.rcnn import sample_rois + +DEBUG = False + + +class ProposalTargetOperator(mx.operator.CustomOp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction): + super(ProposalTargetOperator, self).__init__() + self._num_classes = num_classes + self._batch_images = batch_images + self._batch_rois = batch_rois + self._cfg = cfg + self._fg_fraction = fg_fraction + + if DEBUG: + self._count = 0 + self._fg_num = 0 + self._bg_num = 0 + + def forward(self, is_train, req, in_data, out_data, aux): + assert self._batch_rois == -1 or self._batch_rois % self._batch_images == 0, \ + 'batchimages {} must devide batch_rois {}'.format(self._batch_images, self._batch_rois) + all_rois = in_data[0].asnumpy() + gt_boxes = in_data[1].asnumpy() + + if self._batch_rois == -1: + rois_per_image = all_rois.shape[0] + gt_boxes.shape[0] + fg_rois_per_image = rois_per_image + else: + rois_per_image = self._batch_rois / self._batch_images + fg_rois_per_image = np.round(self._fg_fraction * rois_per_image).astype(int) + + + # Include ground-truth boxes in the set of candidate rois + zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype) + all_rois = np.vstack((all_rois, np.hstack((zeros, gt_boxes[:, :-1])))) + # Sanity check: single batch only + assert np.all(all_rois[:, 0] == 0), 'Only single item batches are supported' + + rois, labels, bbox_targets, bbox_weights = \ + sample_rois(all_rois, fg_rois_per_image, rois_per_image, self._num_classes, self._cfg, gt_boxes=gt_boxes) + + if DEBUG: + print "labels=", labels + print 'num fg: {}'.format((labels > 0).sum()) + print 'num bg: {}'.format((labels == 0).sum()) + self._count += 1 + self._fg_num += (labels > 0).sum() + self._bg_num += (labels == 0).sum() + print "self._count=", self._count + print 'num fg avg: {}'.format(self._fg_num / self._count) + print 'num bg avg: {}'.format(self._bg_num / self._count) + print 'ratio: {:.3f}'.format(float(self._fg_num) / float(self._bg_num)) + + for ind, val in enumerate([rois, labels, bbox_targets, bbox_weights]): + self.assign(out_data[ind], req[ind], val) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + self.assign(in_grad[0], req[0], 0) + self.assign(in_grad[1], req[1], 0) + + +@mx.operator.register('proposal_target') +class ProposalTargetProp(mx.operator.CustomOpProp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction='0.25'): + super(ProposalTargetProp, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._batch_images = int(batch_images) + self._batch_rois = int(batch_rois) + self._cfg = cPickle.loads(cfg) + self._fg_fraction = float(fg_fraction) + + def list_arguments(self): + return ['rois', 'gt_boxes'] + + def list_outputs(self): + return ['rois_output', 'label', 'bbox_target', 'bbox_weight'] + + def infer_shape(self, in_shape): + rpn_rois_shape = 
in_shape[0] + gt_boxes_shape = in_shape[1] + + rois = rpn_rois_shape[0] + gt_boxes_shape[0] if self._batch_rois == -1 else self._batch_rois + + output_rois_shape = (rois, 5) + label_shape = (rois, ) + bbox_target_shape = (rois, self._num_classes * 4) + bbox_weight_shape = (rois, self._num_classes * 4) + + return [rpn_rois_shape, gt_boxes_shape], \ + [output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + def create_operator(self, ctx, shapes, dtypes): + return ProposalTargetOperator(self._num_classes, self._batch_images, self._batch_rois, self._cfg, self._fg_fraction) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/faster_rcnn/operator_py/proposal_target_rotbox.py b/faster_rcnn/operator_py/proposal_target_rotbox.py new file mode 100644 index 0000000..70043f9 --- /dev/null +++ b/faster_rcnn/operator_py/proposal_target_rotbox.py @@ -0,0 +1,123 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background roi and assigns label, bbox_transform to them. +""" + +import mxnet as mx +import numpy as np +import cPickle +from bbox.bbox_transform import poly2bbox + +from core.rcnn import sample_rotbox_rois +import pdb +DEBUG = False + + +class ProposalTargetRotBoxOperator(mx.operator.CustomOp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction, fg_class_agnostic): + super(ProposalTargetRotBoxOperator, self).__init__() + self._num_classes = num_classes + self._batch_images = batch_images + self._batch_rois = batch_rois + self._cfg = cfg + self._fg_fraction = fg_fraction + self._fg_class_agnostic = fg_class_agnostic + + if DEBUG: + self._count = 0 + self._fg_num = 0 + self._bg_num = 0 + + def forward(self, is_train, req, in_data, out_data, aux): + assert self._batch_rois == -1 or self._batch_rois % self._batch_images == 0, \ + 'batchimages {} must devide batch_rois {}'.format(self._batch_images, self._batch_rois) + # pdb.set_trace() + all_rois = in_data[0].asnumpy() + gt_boxes = in_data[1].asnumpy() + + if self._batch_rois == -1: + rois_per_image = all_rois.shape[0] + gt_boxes.shape[0] + fg_rois_per_image = rois_per_image + else: + rois_per_image = self._batch_rois / self._batch_images + fg_rois_per_image = np.round(self._fg_fraction * rois_per_image).astype(int) + + # Include ground-truth boxes in the set of candidate rois + zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype) + # pdb.set_trace() + all_rois = np.vstack((all_rois, np.hstack((zeros, poly2bbox(gt_boxes[:, :-1]))))) + # Sanity check: single batch only + assert np.all(all_rois[:, 0] == 0), 'Only single item batches are supported' + rois, labels, bbox_targets, bbox_weights = \ + sample_rotbox_rois(all_rois, fg_rois_per_image, rois_per_image, self._num_classes, self._cfg, gt_boxes=gt_boxes) + + if self._fg_class_agnostic: + fg_indexes = labels > 0 + labels[fg_indexes] = 1 + # pdb.set_trace() + if DEBUG: + print "labels=", labels + print 'num fg: {}'.format((labels > 0).sum()) + print 'num bg: {}'.format((labels == 0).sum()) + self._count += 1 + self._fg_num += 
(labels > 0).sum() + self._bg_num += (labels == 0).sum() + print "self._count=", self._count + print 'num fg avg: {}'.format(self._fg_num / self._count) + print 'num bg avg: {}'.format(self._bg_num / self._count) + print 'ratio: {:.3f}'.format(float(self._fg_num) / float(self._bg_num)) + # pdb.set_trace() + for ind, val in enumerate([rois, labels, bbox_targets, bbox_weights]): + self.assign(out_data[ind], req[ind], val) + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + +@mx.operator.register('proposal_target_rotbox') +class ProposalTargetRotboxtProp(mx.operator.CustomOpProp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_class_agnostic='False', fg_fraction='0.25'): + super(ProposalTargetRotboxtProp, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._batch_images = int(batch_images) + self._batch_rois = int(batch_rois) + self._cfg = cPickle.loads(cfg) + self._fg_class_agnostic = fg_class_agnostic == 'True' + self._fg_fraction = float(fg_fraction) + + def list_arguments(self): + return ['rois', 'gt_boxes'] + + def list_outputs(self): + return ['rois_output', 'label', 'bbox_target', 'bbox_weight'] + + def infer_shape(self, in_shape): + rpn_rois_shape = in_shape[0] + gt_boxes_shape = in_shape[1] + + rois = rpn_rois_shape[0] + gt_boxes_shape[0] if self._batch_rois == -1 else self._batch_rois + + output_rois_shape = (rois, 5) + label_shape = (rois, ) + bbox_target_shape = (rois, 5 * self._num_classes) + bbox_weight_shape = (rois, 5 * self._num_classes) + + return [rpn_rois_shape, gt_boxes_shape], \ + [output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + def create_operator(self, ctx, shapes, dtypes): + return ProposalTargetRotBoxOperator(self._num_classes, self._batch_images, self._batch_rois, self._cfg, self._fg_fraction, self._fg_class_agnostic) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/faster_rcnn/symbols/__init__.py b/faster_rcnn/symbols/__init__.py new file mode 100644 index 0000000..51ccc9b --- /dev/null +++ b/faster_rcnn/symbols/__init__.py @@ -0,0 +1,6 @@ +import resnet_v1_101_rcnn +import resnet_v1_101_rcnn_dcn +import resnet_v1_101_rcnn_light_head +import resnet_v1_101_rcnn_light_head_deformpsroi +import resnet_v1_101_rcnn_light_head_RoITransformer +import resnet_v1_101_rcnn_obb \ No newline at end of file diff --git a/faster_rcnn/symbols/resnet_v1_101_rcnn.py b/faster_rcnn/symbols/resnet_v1_101_rcnn.py new file mode 100644 index 0000000..290e2a3 --- /dev/null +++ b/faster_rcnn/symbols/resnet_v1_101_rcnn.py @@ -0,0 +1,1014 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang, Bin Xiao +# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.proposal import * +from operator_py.proposal_target import * +from operator_py.box_annotator_ohem import * +import pdb + +class resnet_v1_101_rcnn(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + self.eps = 1e-5 + self.use_global_stats = True + self.workspace = 512 + self.units = (3, 4, 23, 3) # use for 101 + self.filter_list = [256, 512, 1024, 2048] + + def get_resnet_v1_conv4(self, data): + conv1 = 
mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), + no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, eps=self.eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), + stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, fix_gamma=False, eps=self.eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = 
mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, fix_gamma=False, eps=self.eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu = mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = 
mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, act_type='relu') + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 
1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, fix_gamma=False, eps=self.eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, act_type='relu') + res4b1_branch2b = mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = 
mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, 
use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', 
data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = 
mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = 
mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, 
act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), 
kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = 
mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2b = 
bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, act_type='relu') + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + return res4b22_relu + + def get_resnet_v1_conv5(self, conv_feat): + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=conv_feat, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, fix_gamma=False, eps=self.eps) + scale5a_branch1 = bn5a_branch1 + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=conv_feat, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, pad=(0, 0), + kernel=(1, 
1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2c = bn5a_branch2c + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2c = bn5b_branch2c + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', data=res5c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2c = bn5c_branch2c + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + return res5c_relu + + def get_rpn(self, conv_feat, num_anchors): + rpn_conv = mx.sym.Convolution( + data=conv_feat, kernel=(3, 3), pad=(1, 1), num_filter=512, name="rpn_conv_3x3") + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type="relu", name="rpn_relu") + rpn_cls_score = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * 
num_anchors, name="rpn_cls_score") + rpn_bbox_pred = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name="rpn_bbox_pred") + return rpn_cls_score, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + gt_boxes = mx.sym.Variable(name="gt_boxes") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, name="rpn_cls_prob") + + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, data=(rpn_bbox_pred - rpn_bbox_target)) + + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + # ROI proposal + rpn_cls_act = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_act") + rpn_cls_act_reshape = mx.sym.Reshape( + data=rpn_cls_act, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_act_reshape') + if cfg.TRAIN.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + # print 'in get_symbol, after proposal' + # pdb.set_trace() + group = mx.sym.Group([rois]) + print group.list_outputs() + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 5), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, + op_type='proposal_target', + num_classes=num_reg_classes, + batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, + cfg=cPickle.dumps(cfg), + fg_fraction=cfg.TRAIN.FG_FRACTION) + # print 'in get_symbol, after proposal_target' + # pdb.set_trace() + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), 
name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + + conv_new_1 = mx.sym.Convolution(data=relu1, kernel=(1, 1), num_filter=256, name="conv_new_1") + conv_new_1_relu = mx.sym.Activation(data=conv_new_1, act_type='relu', name='conv_new_1_relu') + + roi_pool = mx.symbol.ROIPooling( + name='roi_pool', data=conv_new_1_relu, rois=rois, pooled_size=(7, 7), spatial_scale=0.0625) + + # 2 fc + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 4) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_loss_reshape') + group = 
mx.sym.Group([rpn_cls_prob, rpn_bbox_loss, cls_prob, bbox_loss, mx.sym.BlockGrad(rcnn_label)]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + + self.sym = group + return group + + def get_symbol_rpn(self, cfg, is_train=True): + # config alias for convenient + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob", + grad_scale=1.0) + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + group = mx.symbol.Group([rpn_cls_prob, rpn_bbox_loss]) + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois, score = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois, score = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + group = mx.symbol.Group([rois, score]) + self.sym = group + return group + + def get_symbol_rcnn(self, cfg, is_train=True): + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + + # input init + if is_train: + data = mx.symbol.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + label = mx.symbol.Variable(name='label') + 
bbox_target = mx.symbol.Variable(name='bbox_target') + bbox_weight = mx.symbol.Variable(name='bbox_weight') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + label = mx.symbol.Reshape(data=label, shape=(-1,), name='label_reshape') + bbox_target = mx.symbol.Reshape(data=bbox_target, shape=(-1, 4 * num_classes), name='bbox_target_reshape') + bbox_weight = mx.symbol.Reshape(data=bbox_weight, shape=(-1, 4 * num_classes), name='bbox_weight_reshape') + else: + data = mx.sym.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + conv_new_1 = mx.sym.Convolution(data=relu1, kernel=(1, 1), num_filter=256, name="conv_new_1") + conv_new_1_relu = mx.sym.Activation(data=conv_new_1, act_type='relu', name='conv_new_1_relu') + + roi_pool = mx.symbol.ROIPooling( + name='roi_pool', data=conv_new_1_relu, rois=rois, pooled_size=(7, 7), spatial_scale=0.0625) + + # 2 fc + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 4) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1, grad_scale=1.0) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid', + grad_scale=1.0) + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + + # reshape output + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([cls_prob, bbox_loss]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([cls_prob, bbox_pred]) + + self.sym = group + return group + + def 
init_weight_rcnn(self, cfg, arg_params, aux_params): + arg_params['conv_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['conv_new_1_weight']) + arg_params['conv_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['conv_new_1_bias']) + arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['fc_new_2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_2_weight']) + arg_params['fc_new_2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_2_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + def init_weight_rpn(self, cfg, arg_params, aux_params): + arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_conv_3x3_weight']) + arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_conv_3x3_bias']) + arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_cls_score_weight']) + arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_cls_score_bias']) + arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_bbox_pred_weight']) + arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_bbox_pred_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + self.init_weight_rpn(cfg, arg_params, aux_params) + self.init_weight_rcnn(cfg, arg_params, aux_params) + diff --git a/faster_rcnn/symbols/resnet_v1_101_rcnn_dcn.py b/faster_rcnn/symbols/resnet_v1_101_rcnn_dcn.py new file mode 100644 index 0000000..fc6fed0 --- /dev/null +++ b/faster_rcnn/symbols/resnet_v1_101_rcnn_dcn.py @@ -0,0 +1,1131 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang +# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.proposal import * +from operator_py.proposal_target import * +from operator_py.box_annotator_ohem import * +import pdb + +class resnet_v1_101_rcnn_dcn(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + self.eps = 1e-5 + self.use_global_stats = True + self.workspace = 512 + self.units = (3, 4, 23, 3) # use for 101 + self.filter_list = [256, 512, 1024, 2048] + + def get_resnet_v1_conv4(self, data): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), + no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, + eps=self.eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), + stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, 
pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, 
num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu = mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = 
mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = 
mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 
1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2c = bn4b4_branch2c + 
res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', 
data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, 
eps=self.eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = 
mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = 
mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, 
pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', 
data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = 
mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + return res4b22_relu + + def get_resnet_v1_conv5(self, conv_feat): + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=conv_feat, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, fix_gamma=False, eps=self.eps) + scale5a_branch1 = bn5a_branch1 + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=conv_feat, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + res5a_branch2b_offset = mx.symbol.Convolution(name='res5a_branch2b_offset', data = res5a_branch2a_relu, + num_filter=72, pad=(2, 2), kernel=(3, 3), stride=(1, 1), dilate=(2, 2), cudnn_off=True) + res5a_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5a_branch2b', data=res5a_branch2a_relu, offset=res5a_branch2b_offset, + num_filter=512, pad=(2, 2), kernel=(3, 3), num_deformable_group=4, + stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2c = bn5a_branch2c + res5a = 
mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + res5b_branch2b_offset = mx.symbol.Convolution(name='res5b_branch2b_offset', data = res5b_branch2a_relu, + num_filter=72, pad=(2, 2), kernel=(3, 3), stride=(1, 1), dilate=(2, 2), cudnn_off=True) + res5b_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5b_branch2b', data=res5b_branch2a_relu, offset=res5b_branch2b_offset, + num_filter=512, pad=(2, 2), kernel=(3, 3), num_deformable_group=4, + stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2c = bn5b_branch2c + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + res5c_branch2b_offset = mx.symbol.Convolution(name='res5c_branch2b_offset', data = res5c_branch2a_relu, + num_filter=72, pad=(2, 2), kernel=(3, 3), stride=(1, 1), dilate=(2, 2), cudnn_off=True) + res5c_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5c_branch2b', data=res5c_branch2a_relu, offset=res5c_branch2b_offset, + num_filter=512, pad=(2, 2), kernel=(3, 3), num_deformable_group=4, + stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', data=res5c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2c = bn5c_branch2c + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + return res5c_relu + + + def 
get_rpn(self, conv_feat, num_anchors): + rpn_conv = mx.sym.Convolution( + data=conv_feat, kernel=(3, 3), pad=(1, 1), num_filter=512, name="rpn_conv_3x3") + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type="relu", name="rpn_relu") + rpn_cls_score = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name="rpn_cls_score") + rpn_bbox_pred = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name="rpn_bbox_pred") + return rpn_cls_score, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + gt_boxes = mx.sym.Variable(name="gt_boxes") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob") + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + # ROI proposal + rpn_cls_act = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_act") + rpn_cls_act_reshape = mx.sym.Reshape( + data=rpn_cls_act, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_act_reshape') + if cfg.TRAIN.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + print 'in get_symbol, after proposal' + # pdb.set_trace() + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 5), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, + op_type='proposal_target', + num_classes=num_reg_classes, + batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, 
+ cfg=cPickle.dumps(cfg), + fg_fraction=cfg.TRAIN.FG_FRACTION) + print 'in get_symbol, after proposal_target' + # pdb.set_trace() + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + + conv_new_1 = mx.sym.Convolution(data=relu1, kernel=(1, 1), num_filter=256, name="conv_new_1") + conv_new_1_relu = mx.sym.Activation(data=conv_new_1, act_type='relu', name='conv_new_1_relu') + + offset_t = mx.contrib.sym.DeformablePSROIPooling(name='offset_t', data=conv_new_1_relu, rois=rois, group_size=1, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, output_dim=256, spatial_scale=0.0625) + offset = mx.sym.FullyConnected(name='offset', data=offset_t, num_hidden=7 * 7 * 2, lr_mult=0.01) + offset_reshape = mx.sym.Reshape(data=offset, shape=(-1, 2, 7, 7), name="offset_reshape") + + deformable_roi_pool = mx.contrib.sym.DeformablePSROIPooling(name='deformable_roi_pool', data=conv_new_1_relu, rois=rois, + trans=offset_reshape, group_size=1, pooled_size=7, sample_per_part=4, + no_trans=False, part_size=7, output_dim=256, spatial_scale=0.0625, trans_std=0.1) + # 2 fc + fc_new_1 = mx.sym.FullyConnected(name='fc_new_1', data=deformable_roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + fc_new_2 = mx.sym.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.sym.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.sym.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 4) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + 
grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([rpn_cls_prob, rpn_bbox_loss, cls_prob, bbox_loss, mx.sym.BlockGrad(rcnn_label)]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + + self.sym = group + return group + + def get_symbol_rpn(self, cfg, is_train=True): + # config alias for convenient + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob", + grad_scale=1.0) + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + group = mx.symbol.Group([rpn_cls_prob, rpn_bbox_loss]) + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois, score = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois, score = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, 
bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + group = mx.symbol.Group([rois, score]) + self.sym = group + return group + + def get_symbol_rcnn(self, cfg, is_train=True): + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + + # input init + if is_train: + data = mx.symbol.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + label = mx.symbol.Variable(name='label') + bbox_target = mx.symbol.Variable(name='bbox_target') + bbox_weight = mx.symbol.Variable(name='bbox_weight') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + label = mx.symbol.Reshape(data=label, shape=(-1,), name='label_reshape') + bbox_target = mx.symbol.Reshape(data=bbox_target, shape=(-1, 4 * num_classes), name='bbox_target_reshape') + bbox_weight = mx.symbol.Reshape(data=bbox_weight, shape=(-1, 4 * num_classes), name='bbox_weight_reshape') + else: + data = mx.sym.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + conv_new_1 = mx.sym.Convolution(data=relu1, kernel=(1, 1), num_filter=256, name="conv_new_1") + conv_new_1_relu = mx.sym.Activation(data=conv_new_1, act_type='relu', name='conv_new_1_relu') + + offset_t = mx.contrib.sym.DeformablePSROIPooling(name='offset_t', data=conv_new_1_relu, rois=rois, group_size=1, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, output_dim=256, spatial_scale=0.0625) + offset = mx.sym.FullyConnected(name='offset', data=offset_t, num_hidden=7 * 7 * 2, lr_mult=0.01) + offset_reshape = mx.sym.Reshape(data=offset, shape=(-1, 2, 7, 7), name="offset_reshape") + + deformable_roi_pool = mx.contrib.sym.DeformablePSROIPooling(name='deformable_roi_pool', data=conv_new_1_relu, rois=rois, + trans=offset_reshape, group_size=1, pooled_size=7, sample_per_part=4, + no_trans=False, part_size=7, output_dim=256, spatial_scale=0.0625, trans_std=0.1) + + # 2 fc + fc_new_1 = mx.sym.FullyConnected(name='fc_new_1', data=deformable_roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + fc_new_2 = mx.sym.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.sym.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.sym.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 4) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', 
use_ignore=True, ignore_label=-1, grad_scale=1.0) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid', + grad_scale=1.0) + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + + # reshape output + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([cls_prob, bbox_loss]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 4 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([cls_prob, bbox_pred]) + + self.sym = group + return group + + def init_weight(self, cfg, arg_params, aux_params): + arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_conv_3x3_weight']) + arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_conv_3x3_bias']) + arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_cls_score_weight']) + arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_cls_score_bias']) + arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_bbox_pred_weight']) + arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_bbox_pred_bias']) + arg_params['res5a_branch2b_offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['res5a_branch2b_offset_weight']) + arg_params['res5a_branch2b_offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['res5a_branch2b_offset_bias']) + arg_params['res5b_branch2b_offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['res5b_branch2b_offset_weight']) + arg_params['res5b_branch2b_offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['res5b_branch2b_offset_bias']) + arg_params['res5c_branch2b_offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['res5c_branch2b_offset_weight']) + arg_params['res5c_branch2b_offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['res5c_branch2b_offset_bias']) + arg_params['conv_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['conv_new_1_weight']) + arg_params['conv_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['conv_new_1_bias']) + arg_params['offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['offset_weight']) + arg_params['offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['offset_bias']) + arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['fc_new_2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_2_weight']) + arg_params['fc_new_2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_2_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 
0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + def init_weight_rpn(self, cfg, arg_params, aux_params): + arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_conv_3x3_weight']) + arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_conv_3x3_bias']) + arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_cls_score_weight']) + arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_cls_score_bias']) + arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_bbox_pred_weight']) + arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_bbox_pred_bias']) + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + arg_params['res5a_branch2b_offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['res5a_branch2b_offset_weight']) + arg_params['res5a_branch2b_offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['res5a_branch2b_offset_bias']) + arg_params['res5b_branch2b_offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['res5b_branch2b_offset_weight']) + arg_params['res5b_branch2b_offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['res5b_branch2b_offset_bias']) + arg_params['res5c_branch2b_offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['res5c_branch2b_offset_weight']) + arg_params['res5c_branch2b_offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['res5c_branch2b_offset_bias']) + arg_params['conv_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['conv_new_1_weight']) + arg_params['conv_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['conv_new_1_bias']) + arg_params['offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['offset_weight']) + arg_params['offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['offset_bias']) + arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['fc_new_2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_2_weight']) + arg_params['fc_new_2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_2_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + diff --git a/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head.py b/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head.py new file mode 100644 index 0000000..1e925ef --- /dev/null +++ b/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head.py @@ -0,0 +1,1110 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang, Bin Xiao
+# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.proposal import * +from operator_py.proposal_target import * +from operator_py.proposal_target_rotbox import * +from operator_py.box_annotator_ohem import * +import pdb + + +class resnet_v1_101_rcnn_light_head(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + self.eps = 1e-5 + self.use_global_stats = True + self.workspace = 512 + self.units = (3, 4, 23, 3) # use for 101 + self.filter_list = [256, 512, 1024, 2048] + self.shared_param_list = ['conv_new_1', 'conv_new_2', 'conv_new_3', 'conv_new_4'] + self.shared_param_dict = {} + for name in self.shared_param_list: + self.shared_param_dict[name + '_weight'] = mx.sym.Variable(name + '_weight') + self.shared_param_dict[name + '_bias'] = mx.sym.Variable(name + '_bias') + + + def get_resnet_v1_conv4(self, data): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), + no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, + eps=self.eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), + stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = 
mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu 
= mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, 
num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = 
mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', 
data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = 
mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 
1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + 
fix_gamma=False, eps=self.eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = 
mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, 
act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), 
kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, 
use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + return res4b22_relu + + def get_resnet_v1_conv5(self, conv_feat): + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=conv_feat, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, + fix_gamma=False, 
eps=self.eps) + scale5a_branch1 = bn5a_branch1 + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=conv_feat, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2c = bn5a_branch2c + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2c = bn5b_branch2c + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', 
data=res5c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2c = bn5c_branch2c + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + return res5c_relu + + def get_light_head(self, data, mid_num_filter=256, suffix='separable'): + # mid_num_filter=256 + conv_new_1 = mx.sym.Convolution(data=data, kernel=(15, 1), pad=(7, 0), num_filter=mid_num_filter, name="conv_new_1" + suffix, + weight=self.shared_param_dict['conv_new_1_weight'], bias=self.shared_param_dict['conv_new_1_bias'], lr_mult=3.0) + + relu_new_1 = mx.sym.Activation(data=conv_new_1, act_type='relu', name='relu1' + suffix) + conv_new_2 = mx.sym.Convolution(data=relu_new_1, kernel=(1, 15), pad=(0, 7), num_filter=10 * 7 * 7, name="conv_new_2" + suffix, + weight=self.shared_param_dict['conv_new_2_weight'], bias=self.shared_param_dict['conv_new_2_bias'], + lr_mult=3.0) + relu_new_2 = mx.sym.Activation(data=conv_new_2, act_type='relu', name='relu2' + suffix) + conv_new_3 = mx.sym.Convolution(data=data, kernel=(1, 15), pad=(0, 7), num_filter=mid_num_filter, name="conv_new_3" + suffix, + weight=self.shared_param_dict['conv_new_3_weight'], bias=self.shared_param_dict['conv_new_3_bias'], + lr_mult=3.0) + relu_new_3 = mx.sym.Activation(data=conv_new_3, act_type='relu', name='relu3' + suffix) + conv_new_4 = mx.sym.Convolution(data=relu_new_3, kernel=(15, 1), pad=(7, 0), num_filter=10 * 7 * 7, name="conv_new_4" + suffix, + weight=self.shared_param_dict['conv_new_4_weight'], bias=self.shared_param_dict['conv_new_4_bias'], + lr_mult=3.0) + relu_new_4 = mx.sym.Activation(data=conv_new_4, act_type='relu', name='relu4' + suffix) + light_head = mx.symbol.broadcast_add(name='light_head', *[relu_new_2, relu_new_4]) + return light_head + + def get_rpn(self, conv_feat, num_anchors): + rpn_conv = mx.sym.Convolution( + data=conv_feat, kernel=(3, 3), pad=(1, 1), num_filter=512, name="rpn_conv_3x3") + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type="relu", name="rpn_relu") + rpn_cls_score = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name="rpn_cls_score") + rpn_bbox_pred = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name="rpn_bbox_pred") + return rpn_cls_score, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + gt_boxes = mx.sym.Variable(name="gt_boxes") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # 
res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob") + + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + # ROI proposal + rpn_cls_act = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_act") + rpn_cls_act_reshape = mx.sym.Reshape( + data=rpn_cls_act, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_act_reshape') + if cfg.TRAIN.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + # print 'in get_symbol, after proposal' + # pdb.set_trace() + group = mx.sym.Group([rois]) + print group.list_outputs() + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 9), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, + op_type='proposal_target_rotbox', + num_classes=num_reg_classes, + batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, + cfg=cPickle.dumps(cfg), + fg_fraction=cfg.TRAIN.FG_FRACTION) + # print 'in get_symbol, after proposal_target' + # pdb.set_trace() + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + 
scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + + # largeseparable + conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + roi_pool = mx.contrib.sym.PSROIALIGNAVEPooling(name='psroialignave', data=conv_thin_feat, rois=rois, spatial_scale=0.065, + group_size=7, pooled_size=7, + output_dim=10) + # 2 fc + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=2048) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_1_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_1_relu, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([rpn_cls_prob, rpn_bbox_loss, cls_prob, bbox_loss, mx.sym.BlockGrad(rcnn_label)]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + + self.sym = group + return group + + def get_symbol_rpn(self, cfg, is_train=True): + # config alias for convenient + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + if is_train: + # prepare rpn data + 
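# the reshape below folds the 2 * num_anchors score channels into (N, 2, A*H, W) so that
+            # SoftmaxOutput(multi_output=True) yields a binary fg/bg softmax per anchor position;
+            # anchors that were not sampled carry label -1 and are skipped (use_ignore / ignore_label),
+            # while the box branch uses a smooth-L1 loss masked by bbox_weight and normalised by
+            # cfg.TRAIN.RPN_BATCH_SIZE +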
rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob", + grad_scale=1.0) + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + group = mx.symbol.Group([rpn_cls_prob, rpn_bbox_loss]) + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois, score = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois, score = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + group = mx.symbol.Group([rois, score]) + self.sym = group + return group + + def get_symbol_rcnn(self, cfg, is_train=True): + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + + # input init + if is_train: + data = mx.symbol.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + label = mx.symbol.Variable(name='label') + bbox_target = mx.symbol.Variable(name='bbox_target') + bbox_weight = mx.symbol.Variable(name='bbox_weight') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + label = mx.symbol.Reshape(data=label, shape=(-1,), name='label_reshape') + bbox_target = mx.symbol.Reshape(data=bbox_target, shape=(-1, 5 * num_classes), name='bbox_target_reshape') + bbox_weight = mx.symbol.Reshape(data=bbox_weight, shape=(-1, 5 * num_classes), name='bbox_weight_reshape') + else: + data = mx.sym.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + + roi_pool = mx.contrib.sym.PSROIPooling(name='psroipooling', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + output_dim=10, spatial_scale=0.0625) + + # 2 fc + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + 
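# a single large fully connected layer maps the flattened 10 x 7 x 7 position-sensitive
+        # pooled feature to a 1024-d descriptor; the ReLU below and the two sibling heads
+        # (cls_score with num_classes outputs, bbox_pred with 5 regression values per class
+        # for the rotated boxes) all operate on this one feature +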
fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_1_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_1_relu, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1, grad_scale=1.0) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid', + grad_scale=1.0) + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + + # reshape output + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([cls_prob, bbox_loss]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([cls_prob, bbox_pred]) + + self.sym = group + return group + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + def init_weight_rpn(self, cfg, arg_params, aux_params): + arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_conv_3x3_weight']) + arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_conv_3x3_bias']) + arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_cls_score_weight']) + arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_cls_score_bias']) + arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_bbox_pred_weight']) + arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_bbox_pred_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + self.init_weight_rpn(cfg, 
arg_params, aux_params) + self.init_weight_rcnn(cfg, arg_params, aux_params) + for name in self.shared_param_list: + arg_params[name + '_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict[name + '_weight']) + arg_params[name + '_bias'] = mx.nd.zeros(shape=self.arg_shape_dict[name + '_bias']) + diff --git a/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head_RoITransformer.py b/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head_RoITransformer.py new file mode 100644 index 0000000..7532c27 --- /dev/null +++ b/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head_RoITransformer.py @@ -0,0 +1,1216 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang, Bin Xiao +# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.proposal import * +from operator_py.proposal_target import * +from operator_py.proposal_target_rotbox import * +from operator_py.box_annotator_ohem import * +from operator_py.RRoIDecoder import * +from operator_py.RRoI_target_rotbox_v2 import * +import pdb +DEBUG = False +class resnet_v1_101_rcnn_light_head_RoITransformer(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + self.eps = 1e-5 + self.use_global_stats = True + self.workspace = 512 + self.units = (3, 4, 23, 3) # use for 101 + self.filter_list = [256, 512, 1024, 2048] + self.shared_param_list = ['conv_new_1', 'conv_new_2', 'conv_new_3', 'conv_new_4'] + self.shared_param_dict = {} + for name in self.shared_param_list: + self.shared_param_dict[name + '_weight'] = mx.sym.Variable(name + '_weight') + self.shared_param_dict[name + '_bias'] = mx.sym.Variable(name + '_bias') + + + def get_resnet_v1_conv4(self, data): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), + no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, + eps=self.eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), + stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = 
mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = 
mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu = mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), 
kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = 
mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = 
mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = 
mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', 
data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + 
fix_gamma=False, eps=self.eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, 
scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = 
mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), 
no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', 
data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = 
mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + return res4b22_relu + + def get_resnet_v1_conv5(self, conv_feat): + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=conv_feat, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch1 = bn5a_branch1 + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=conv_feat, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2c = bn5a_branch2c + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = 
mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2c = bn5b_branch2c + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', data=res5c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2c = bn5c_branch2c + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + return res5c_relu + + def get_light_head(self, data, mid_num_filter=256, suffix='separable'): + # mid_num_filter=256 + conv_new_1 = mx.sym.Convolution(data=data, kernel=(15, 1), pad=(7, 0), num_filter=mid_num_filter, name="conv_new_1" + suffix, + weight=self.shared_param_dict['conv_new_1_weight'], bias=self.shared_param_dict['conv_new_1_bias'], lr_mult=3.0) + + relu_new_1 = mx.sym.Activation(data=conv_new_1, act_type='relu', name='relu1' + suffix) + conv_new_2 = mx.sym.Convolution(data=relu_new_1, kernel=(1, 15), pad=(0, 7), num_filter=10 * 7 * 7, name="conv_new_2" + suffix, + weight=self.shared_param_dict['conv_new_2_weight'], bias=self.shared_param_dict['conv_new_2_bias'], + lr_mult=3.0) + relu_new_2 = mx.sym.Activation(data=conv_new_2, act_type='relu', name='relu2' + suffix) + conv_new_3 = mx.sym.Convolution(data=data, kernel=(1, 15), pad=(0, 7), num_filter=mid_num_filter, name="conv_new_3" + suffix, + weight=self.shared_param_dict['conv_new_3_weight'], bias=self.shared_param_dict['conv_new_3_bias'], + lr_mult=3.0) + relu_new_3 = mx.sym.Activation(data=conv_new_3, act_type='relu', name='relu3' + suffix) + conv_new_4 = mx.sym.Convolution(data=relu_new_3, kernel=(15, 1), pad=(7, 0), num_filter=10 * 7 * 7, name="conv_new_4" + suffix, + weight=self.shared_param_dict['conv_new_4_weight'], bias=self.shared_param_dict['conv_new_4_bias'], + lr_mult=3.0) + relu_new_4 = mx.sym.Activation(data=conv_new_4, act_type='relu', name='relu4' + suffix) + light_head = mx.symbol.broadcast_add(name='light_head', *[relu_new_2, relu_new_4]) + return light_head + + def get_rpn(self, conv_feat, num_anchors): + rpn_conv = mx.sym.Convolution( + data=conv_feat, kernel=(3, 3), pad=(1, 1), num_filter=512, name="rpn_conv_3x3") + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type="relu", name="rpn_relu") + rpn_cls_score 
= mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name="rpn_cls_score") + rpn_bbox_pred = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name="rpn_bbox_pred") + return rpn_cls_score, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + Rroi_num_reg_classes = (2 if cfg.network.RRoI_CLASS_AGNOSTIC else num_classes) + + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + gt_boxes = mx.sym.Variable(name="gt_boxes") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob") + + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + # ROI proposal + rpn_cls_act = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_act") + rpn_cls_act_reshape = mx.sym.Reshape( + data=rpn_cls_act, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_act_reshape') + if cfg.TRAIN.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + + group = mx.sym.Group([rois]) + print group.list_outputs() + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 9), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, + op_type='proposal_target_rotbox', + num_classes=num_reg_classes, + batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, + cfg=cPickle.dumps(cfg), + fg_class_agnostic=True, + fg_fraction=cfg.TRAIN.FG_FRACTION) + else: + # ROI Proposal + rpn_cls_score_reshape = 
mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + + # largeseparable + conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + roi_pool = mx.contrib.sym.PSROIALIGNAVEPooling(name='psroialign', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + output_dim=10, spatial_scale=0.0625, sampling_ratio=4) + # RRoI Learner + cls_score = mx.symbol.FullyConnected(name='cls_score', data=roi_pool, num_hidden=num_reg_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=roi_pool, num_hidden=num_reg_classes * 5) + + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_reg_classes), + name='cls_prob_reshape') + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob_loss = mx.sym.SoftmaxOutput(name='cls_prob_loss', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob_loss = mx.sym.SoftmaxOutput(name='cls_prob_loss', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob_loss = mx.sym.Reshape(data=cls_prob_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_reg_classes), + name='cls_prob_loss_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + + # ---------------------------------------------------------------------------------------------------------------------------------- + # shape of 
bbox_pred (n, 2 * 5), shape of cls_prob (n, 2) + # implement Decoder + Rroi_arg_dict = {'rois': rois, 'bbox_pred': bbox_pred, 'cls_prob': cls_prob} + if is_train: + Rroi_aux_dict = { + 'op_type': 'RRoIDecoder', 'name': 'Rrois', 'im_info': im_info, + 'Rroi_pre_nms_top_n': cfg.TRAIN.RRoI_PRE_NMS_TOP_N, + 'Rroi_post_nms_top_n': cfg.TRAIN.RRoI_POST_NMS_TOP_N, + 'threshold': cfg.TRAIN.RRoI_NMS_THRESH, 'min_size': cfg.TRAIN.RRoI_MIN_SIZE, + 'cfg': cPickle.dumps(cfg) + } + Rrois, Rrois_elarge = mx.symbol.Custom(**dict(Rroi_arg_dict.items() + Rroi_aux_dict.items())) + + # rotated proposal target + Rrois, Rrois_elarge_gt_ag, Rroi_label, Rroi_bbox_target, Rroi_bbox_weight = mx.symbol.Custom(Rrois=Rrois, + gt_boxes=gt_boxes_reshape, + op_type='RRoI_target_rotbox_v2', + num_classes=Rroi_num_reg_classes, + batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.RRoI_BATCH_ROIS, + cfg=cPickle.dumps(cfg), + fg_fraction=cfg.TRAIN.RRoI_FG_FRACTION) + else: + Rroi_aux_dict = { + 'op_type': 'RRoIDecoder', 'name': 'Rrois', 'im_info': im_info, + 'Rroi_pre_nms_top_n': cfg.TEST.RRoI_PRE_NMS_TOP_N, + 'Rroi_post_nms_top_n': cfg.TEST.RRoI_POST_NMS_TOP_N, + 'threshold': cfg.TEST.RRoI_NMS_THRESH, 'min_size': cfg.TEST.RRoI_MIN_SIZE, + 'cfg': cPickle.dumps(cfg) + } + Rrois, Rrois_elarge = mx.symbol.Custom(**dict(Rroi_arg_dict.items() + Rroi_aux_dict.items())) + if is_train: + rotated_pool = mx.contrib.sym.PSROIALIGNAVERotatedPooling(name='psroialign_rotated', data=conv_thin_feat, rois=Rrois_elarge_gt_ag, group_size=7, pooled_size=7, + output_dim=10, spatial_scale=0.0625, sampling_ratio=4) + else: + rotated_pool = mx.contrib.sym.PSROIALIGNAVERotatedPooling(name='psroialign_rotated', data=conv_thin_feat, rois=Rrois_elarge, group_size=7, pooled_size=7, + output_dim=10, spatial_scale=0.0625, sampling_ratio=4) + fc_new_3 = mx.symbol.FullyConnected(name='fc_new_3', data=rotated_pool, num_hidden=2048) + fc_new_3_relu = mx.sym.Activation(data=fc_new_3, act_type='relu', name='fc_new_3_relu') + + # 2 fc + # cls_score/bbox_pred + Rroi_cls_score = mx.symbol.FullyConnected(name='Rroi_cls_score', data=fc_new_3_relu, + num_hidden=num_classes) + Rroi_bbox_pred = mx.symbol.FullyConnected(name='Rroi_bbox_pred', data=fc_new_3_relu, + num_hidden=Rroi_num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.RRoI_ENABLE_OHEM: + # turn off ohem + Rroi_labels_ohem, Rroi_bbox_weights_ohem = mx.sym.Custom(name='Rroi_ohem', + op_type='BoxAnnotatorOHEM', + num_classes=num_classes, + num_reg_classes=Rroi_num_reg_classes, + roi_per_img=cfg.TRAIN.RRoI_BATCH_ROIS_OHEM, + cls_score=Rroi_cls_score, + bbox_pred=Rroi_bbox_pred, + labels=Rroi_label, + bbox_targets=Rroi_bbox_target, + bbox_weights=Rroi_bbox_weight) + Rroi_cls_prob = mx.sym.SoftmaxOutput(name='Rroi_cls_prob', data=Rroi_cls_score, + label=Rroi_labels_ohem, normalization='valid', + use_ignore=True, ignore_label=-1) + Rroi_bbox_loss_ = Rroi_bbox_weights_ohem * mx.sym.smooth_l1(name='Rroi_bbox_loss_', scalar=1.0, + data=( + Rroi_bbox_pred - Rroi_bbox_target)) + Rroi_bbox_loss = mx.sym.MakeLoss(name='Rroi_bbox_loss', data=Rroi_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RRoI_BATCH_ROIS_OHEM) + Rroi_rcnn_label = Rroi_labels_ohem + else: + + Rroi_cls_prob = mx.sym.SoftmaxOutput(name='Rroi_cls_prob', data=Rroi_cls_score, + label=Rroi_label, normalization='valid') + Rroi_bbox_loss_ = Rroi_bbox_weight * mx.sym.smooth_l1(name='Rroi_bbox_loss_', scalar=1.0, + data=( + Rroi_bbox_pred - Rroi_bbox_target)) + Rroi_bbox_loss = mx.sym.MakeLoss(name='Rroi_bbox_loss', data=Rroi_bbox_loss_, + grad_scale=1.0 / 
cfg.TRAIN.RRoI_BATCH_ROIS) + Rroi_rcnn_label = Rroi_label + + # reshape output + Rroi_rcnn_label = mx.sym.Reshape(data=Rroi_rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), + name='Rroi_label_reshape') + Rroi_cls_prob = mx.sym.Reshape(data=Rroi_cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='Rroi_cls_prob_reshape') + Rroi_bbox_loss = mx.sym.Reshape(data=Rroi_bbox_loss, + shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * Rroi_num_reg_classes), + name='Rroi_bbox_loss_reshape') + group = mx.sym.Group([rpn_cls_prob, rpn_bbox_loss, cls_prob_loss, bbox_loss, mx.sym.BlockGrad(rcnn_label), + Rroi_cls_prob, Rroi_bbox_loss, mx.sym.BlockGrad(Rroi_rcnn_label)]) + else: + Rroi_cls_prob = mx.sym.SoftmaxActivation(name='Rroi_cls_prob', data=Rroi_cls_score) + Rroi_cls_prob = mx.sym.Reshape(data=Rroi_cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='Rroi_cls_prob_reshape') + Rroi_bbox_pred = mx.sym.Reshape(data=Rroi_bbox_pred, + shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * Rroi_num_reg_classes), + name='Rroi_bbox_pred_reshape') + + if DEBUG: + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + else: + group = mx.sym.Group([Rrois, Rroi_cls_prob, Rroi_bbox_pred]) + + self.sym = group + return group + + def get_symbol_rpn(self, cfg, is_train=True): + # config alias for convenient + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob", + grad_scale=1.0) + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + group = mx.symbol.Group([rpn_cls_prob, rpn_bbox_loss]) + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois, score = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois, score = mx.sym.Custom( + 
cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + group = mx.symbol.Group([rois, score]) + self.sym = group + return group + + def get_symbol_rcnn(self, cfg, is_train=True): + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + print 'should not in this function' + pdb.set_trace() + # input init + if is_train: + data = mx.symbol.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + label = mx.symbol.Variable(name='label') + bbox_target = mx.symbol.Variable(name='bbox_target') + bbox_weight = mx.symbol.Variable(name='bbox_weight') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + label = mx.symbol.Reshape(data=label, shape=(-1,), name='label_reshape') + bbox_target = mx.symbol.Reshape(data=bbox_target, shape=(-1, 5 * num_classes), name='bbox_target_reshape') + bbox_weight = mx.symbol.Reshape(data=bbox_weight, shape=(-1, 5 * num_classes), name='bbox_weight_reshape') + else: + data = mx.sym.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + + roi_pool = mx.contrib.sym.PSROIPooling(name='psroipooling', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + output_dim=10, spatial_scale=0.0625) + + # 2 fc + # fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + # fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=roi_pool, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=roi_pool, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1, grad_scale=1.0) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid', + grad_scale=1.0) + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + + # reshape output + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + 
name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([cls_prob, bbox_loss]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([cls_prob, bbox_pred]) + + self.sym = group + return group + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + # arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + # arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + arg_params['fc_new_3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_3_weight']) + arg_params['fc_new_3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_3_bias']) + arg_params['Rroi_cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['Rroi_cls_score_weight']) + arg_params['Rroi_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['Rroi_cls_score_bias']) + arg_params['Rroi_bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['Rroi_bbox_pred_weight']) + arg_params['Rroi_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['Rroi_bbox_pred_bias']) + + + def init_weight_rpn(self, cfg, arg_params, aux_params): + arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_conv_3x3_weight']) + arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_conv_3x3_bias']) + arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_cls_score_weight']) + arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_cls_score_bias']) + arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_bbox_pred_weight']) + arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_bbox_pred_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + self.init_weight_rpn(cfg, arg_params, aux_params) + self.init_weight_rcnn(cfg, arg_params, aux_params) + for name in self.shared_param_list: + arg_params[name + '_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict[name + '_weight']) + arg_params[name + '_bias'] = mx.nd.zeros(shape=self.arg_shape_dict[name + '_bias']) + diff --git a/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head_deformpsroi.py b/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head_deformpsroi.py new file mode 100644 index 0000000..559b0b4 --- /dev/null +++ b/faster_rcnn/symbols/resnet_v1_101_rcnn_light_head_deformpsroi.py @@ -0,0 +1,1146 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang, Bin Xiao +# 
-------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.proposal import * +from operator_py.proposal_target import * +from operator_py.proposal_target_rotbox import * +from operator_py.box_annotator_ohem import * +import pdb + + +class resnet_v1_101_rcnn_light_head_deformpsroi(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + self.eps = 1e-5 + self.use_global_stats = True + self.workspace = 512 + self.units = (3, 4, 23, 3) # use for 101 + self.filter_list = [256, 512, 1024, 2048] + self.shared_param_list = ['conv_new_1', 'conv_new_2', 'conv_new_3', 'conv_new_4'] + self.shared_param_dict = {} + for name in self.shared_param_list: + self.shared_param_dict[name + '_weight'] = mx.sym.Variable(name + '_weight') + self.shared_param_dict[name + '_bias'] = mx.sym.Variable(name + '_bias') + + + def get_resnet_v1_conv4(self, data): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), + no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, + eps=self.eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), + stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = 
mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu 
= mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, 
num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = 
mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', 
data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = 
mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 
1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + 
fix_gamma=False, eps=self.eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = 
mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, 
act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), 
kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, 
use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + return res4b22_relu + + def get_resnet_v1_conv5(self, conv_feat): + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=conv_feat, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, + fix_gamma=False, 
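+        # conv5 keeps stride 1 everywhere and dilates its 3x3 convolutions by 2
+        # (pad=(2, 2), dilate=(2, 2)), so the output stays at the conv4 feature
+        # stride of 16 instead of being downsampled to 32.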
eps=self.eps) + scale5a_branch1 = bn5a_branch1 + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=conv_feat, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2c = bn5a_branch2c + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2c = bn5b_branch2c + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', 
data=res5c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2c = bn5c_branch2c + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + return res5c_relu + + def get_light_head(self, data, mid_num_filter=256, suffix='separable'): + # mid_num_filter=256 + conv_new_1 = mx.sym.Convolution(data=data, kernel=(15, 1), pad=(7, 0), num_filter=mid_num_filter, name="conv_new_1" + suffix, + weight=self.shared_param_dict['conv_new_1_weight'], bias=self.shared_param_dict['conv_new_1_bias'], lr_mult=3.0) + + relu_new_1 = mx.sym.Activation(data=conv_new_1, act_type='relu', name='relu1' + suffix) + conv_new_2 = mx.sym.Convolution(data=relu_new_1, kernel=(1, 15), pad=(0, 7), num_filter=10 * 7 * 7, name="conv_new_2" + suffix, + weight=self.shared_param_dict['conv_new_2_weight'], bias=self.shared_param_dict['conv_new_2_bias'], + lr_mult=3.0) + relu_new_2 = mx.sym.Activation(data=conv_new_2, act_type='relu', name='relu2' + suffix) + conv_new_3 = mx.sym.Convolution(data=data, kernel=(1, 15), pad=(0, 7), num_filter=mid_num_filter, name="conv_new_3" + suffix, + weight=self.shared_param_dict['conv_new_3_weight'], bias=self.shared_param_dict['conv_new_3_bias'], + lr_mult=3.0) + relu_new_3 = mx.sym.Activation(data=conv_new_3, act_type='relu', name='relu3' + suffix) + conv_new_4 = mx.sym.Convolution(data=relu_new_3, kernel=(15, 1), pad=(7, 0), num_filter=10 * 7 * 7, name="conv_new_4" + suffix, + weight=self.shared_param_dict['conv_new_4_weight'], bias=self.shared_param_dict['conv_new_4_bias'], + lr_mult=3.0) + relu_new_4 = mx.sym.Activation(data=conv_new_4, act_type='relu', name='relu4' + suffix) + light_head = mx.symbol.broadcast_add(name='light_head', *[relu_new_2, relu_new_4]) + return light_head + + def get_rpn(self, conv_feat, num_anchors): + rpn_conv = mx.sym.Convolution( + data=conv_feat, kernel=(3, 3), pad=(1, 1), num_filter=512, name="rpn_conv_3x3") + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type="relu", name="rpn_relu") + rpn_cls_score = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name="rpn_cls_score") + rpn_bbox_pred = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name="rpn_bbox_pred") + return rpn_cls_score, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + gt_boxes = mx.sym.Variable(name="gt_boxes") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # 
res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob") + + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + # ROI proposal + rpn_cls_act = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_act") + rpn_cls_act_reshape = mx.sym.Reshape( + data=rpn_cls_act, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_act_reshape') + if cfg.TRAIN.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + # print 'in get_symbol, after proposal' + # pdb.set_trace() + group = mx.sym.Group([rois]) + print group.list_outputs() + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 9), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, + op_type='proposal_target_rotbox', + num_classes=num_reg_classes, + batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, + cfg=cPickle.dumps(cfg), + fg_fraction=cfg.TRAIN.FG_FRACTION) + # print 'in get_symbol, after proposal_target' + # pdb.set_trace() + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + 
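+            # The head built just below is the "light head" variant: relu1 (conv5) is
+            # squeezed by two large separable-convolution branches (15x1 -> 1x15 and
+            # 1x15 -> 15x1, summed) into a thin 10*7*7 = 490 channel map, which is
+            # then pooled per RoI with deformable position-sensitive RoI pooling: a
+            # first pooling pass (no_trans=True) feeds an FC layer that predicts a
+            # 2x7x7 offset field, and a second pass applies those offsets (trans_std=0.1).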
scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + + # largeseparable + conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + # + # roi_pool = mx.contrib.symbol.P( + # name='roi_pool', data=conv_thin_feat, rois=rois, pooled_size=(7, 7), spatial_scale=0.0625) + # + # roi_pool = mx.contrib.sym.PSROIPooling(name='psroipooling', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + # output_dim=10, spatial_scale=0.0625) + # pdb.set_trace() + # roi_pool = mx.contrib.sym.PSROIALIGNAVEPooling(name='psroialignave', data=conv_thin_feat, rois=rois, spatial_scale=0.065, + # group_size=7, pooled_size=7, + # output_dim=10) + + # deformable psroipooling + # pdb.set_trace() + offset_t = mx.contrib.sym.DeformablePSROIPooling(name='offset_t', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, output_dim=10, spatial_scale=0.0625) + offset = mx.sym.FullyConnected(name='offset', data=offset_t, num_hidden=7 * 7 * 2, lr_mult=0.01) + offset_reshape = mx.sym.Reshape(data=offset, shape=(-1, 2, 7, 7), name="offset_reshape") + + deformable_roi_pool = mx.contrib.sym.DeformablePSROIPooling(name='deformable_roi_pool', data=conv_thin_feat, rois=rois, + trans=offset_reshape, group_size=7, pooled_size=7, sample_per_part=4, + no_trans=False, part_size=7, output_dim=10, spatial_scale=0.0625, trans_std=0.1) + + # 2 fc + # fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=deformable_roi_pool, num_hidden=2048) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + # fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + # fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_1_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_1_relu, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob = mx.sym.Reshape(data=cls_prob, 
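+            # bbox_pred and bbox_loss carry 5 regression values per class rather than
+            # 4, matching the rotated-box targets (centre x/y, width, height, angle)
+            # produced by the proposal_target_rotbox operator; outputs are reshaped to
+            # (BATCH_IMAGES, -1, ...) so the training metrics can index RoIs per image.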
shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([rpn_cls_prob, rpn_bbox_loss, cls_prob, bbox_loss, mx.sym.BlockGrad(rcnn_label)]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + + self.sym = group + return group + + def get_symbol_rpn(self, cfg, is_train=True): + # config alias for convenient + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob", + grad_scale=1.0) + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + group = mx.symbol.Group([rpn_cls_prob, rpn_bbox_loss]) + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois, score = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois, score = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + group = mx.symbol.Group([rois, score]) + self.sym = group + return group + + def get_symbol_rcnn(self, cfg, is_train=True): + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + 
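+        # get_symbol_rcnn builds the R-CNN-only graph: it consumes externally supplied
+        # proposals (rois reshaped to (-1, 5): batch index plus four box coordinates)
+        # and is meant to be paired with get_symbol_rpn above when the two stages are
+        # trained or tested separately.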
num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + + # input init + if is_train: + data = mx.symbol.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + label = mx.symbol.Variable(name='label') + bbox_target = mx.symbol.Variable(name='bbox_target') + bbox_weight = mx.symbol.Variable(name='bbox_weight') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + label = mx.symbol.Reshape(data=label, shape=(-1,), name='label_reshape') + bbox_target = mx.symbol.Reshape(data=bbox_target, shape=(-1, 5 * num_classes), name='bbox_target_reshape') + bbox_weight = mx.symbol.Reshape(data=bbox_weight, shape=(-1, 5 * num_classes), name='bbox_weight_reshape') + else: + data = mx.sym.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + + # roi_pool = mx.contrib.sym.PSROIPooling(name='psroipooling', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + # output_dim=10, spatial_scale=0.0625) + + # deformable psroipooling + offset_t = mx.contrib.sym.DeformablePSROIPooling(name='offset_t', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, output_dim=10, spatial_scale=0.0625) + offset = mx.sym.FullyConnected(name='offset', data=offset_t, num_hidden=7 * 7 * 2, lr_mult=0.01) + offset_reshape = mx.sym.Reshape(data=offset, shape=(-1, 2, 7, 7), name="offset_reshape") + + deformable_roi_pool = mx.contrib.sym.DeformablePSROIPooling(name='deformable_roi_pool', data=conv_thin_feat, rois=rois, + trans=offset_reshape, group_size=7, pooled_size=7, sample_per_part=4, + no_trans=False, part_size=7, output_dim=10, spatial_scale=0.0625, trans_std=0.1) + + # 2 fc + # fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=deformable_roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_1_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_1_relu, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1, grad_scale=1.0) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid', + grad_scale=1.0) + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / 
cfg.TRAIN.BATCH_ROIS) + + # reshape output + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([cls_prob, bbox_loss]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([cls_prob, bbox_pred]) + + self.sym = group + return group + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['offset_weight'] = mx.nd.zeros(shape=self.arg_shape_dict['offset_weight']) + arg_params['offset_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['offset_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + def init_weight_rpn(self, cfg, arg_params, aux_params): + arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_conv_3x3_weight']) + arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_conv_3x3_bias']) + arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_cls_score_weight']) + arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_cls_score_bias']) + arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_bbox_pred_weight']) + arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_bbox_pred_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + self.init_weight_rpn(cfg, arg_params, aux_params) + self.init_weight_rcnn(cfg, arg_params, aux_params) + for name in self.shared_param_list: + arg_params[name + '_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict[name + '_weight']) + arg_params[name + '_bias'] = mx.nd.zeros(shape=self.arg_shape_dict[name + '_bias']) + diff --git a/faster_rcnn/symbols/resnet_v1_101_rcnn_obb.py b/faster_rcnn/symbols/resnet_v1_101_rcnn_obb.py new file mode 100644 index 0000000..4f5a3a6 --- /dev/null +++ b/faster_rcnn/symbols/resnet_v1_101_rcnn_obb.py @@ -0,0 +1,1131 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang, Bin Xiao +# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.proposal import * +from operator_py.proposal_target import * +from operator_py.proposal_target_rotbox import * +from operator_py.box_annotator_ohem import * +import pdb + + +class resnet_v1_101_rcnn_obb(Symbol): + def __init__(self): + """ + Use __init__ to define parameter 
network needs + """ + self.eps = 1e-5 + self.use_global_stats = True + self.workspace = 512 + self.units = (3, 4, 23, 3) # use for 101 + self.filter_list = [256, 512, 1024, 2048] + # self.shared_param_list = ['conv_new_1', 'conv_new_2', 'conv_new_3', 'conv_new_4'] + # self.shared_param_dict = {} + # for name in self.shared_param_list: + # self.shared_param_dict[name + '_weight'] = mx.sym.Variable(name + '_weight') + # self.shared_param_dict[name + '_bias'] = mx.sym.Variable(name + '_bias') + + + def get_resnet_v1_conv4(self, data): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), + no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, + eps=self.eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), + stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), + kernel=(1, 1), + stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2b = bn2b_branch2b + 
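+        # The scale* symbols are plain aliases of the BatchNorm outputs: the Scale
+        # layers of the original Caffe ResNet definition are already folded into
+        # MXNet's BatchNorm (gamma/beta), and use_global_stats=True keeps the
+        # pretrained statistics frozen during detection training.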
res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu = mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3a_branch2c = bn3a_branch2c 
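+        # Each stage transition (res3a, res4a, res5a) pairs a 1x1 projection shortcut
+        # (branch1) with the three-layer bottleneck branch (branch2a-c) and merges
+        # them with broadcast_add before the block-level ReLU; the identity units
+        # (res3b*, res4b*) instead add their unchanged input.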
+ res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', 
data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + 
res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = 
mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = 
mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', 
data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, 
use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = 
mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + 
act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), 
kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = 
mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + return res4b22_relu + + def get_resnet_v1_conv5(self, conv_feat): + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=conv_feat, num_filter=2048, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch1 = bn5a_branch1 + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=conv_feat, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', 
data=scale5a_branch2a, act_type='relu') + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5a_branch2c = bn5a_branch2c + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5b_branch2c = bn5b_branch2c + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, + pad=(2, 2), + kernel=(3, 3), stride=(1, 1), dilate=(2, 2), no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', data=res5c_branch2b, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = 
mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, + fix_gamma=False, eps=self.eps) + scale5c_branch2c = bn5c_branch2c + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + return res5c_relu + + # def get_light_head(self, data, mid_num_filter=256, suffix='separable'): + # # mid_num_filter=256 + # conv_new_1 = mx.sym.Convolution(data=data, kernel=(15, 1), pad=(7, 0), num_filter=mid_num_filter, name="conv_new_1" + suffix, + # weight=self.shared_param_dict['conv_new_1_weight'], bias=self.shared_param_dict['conv_new_1_bias'], lr_mult=3.0) + # + # relu_new_1 = mx.sym.Activation(data=conv_new_1, act_type='relu', name='relu1' + suffix) + # conv_new_2 = mx.sym.Convolution(data=relu_new_1, kernel=(1, 15), pad=(0, 7), num_filter=10 * 7 * 7, name="conv_new_2" + suffix, + # weight=self.shared_param_dict['conv_new_2_weight'], bias=self.shared_param_dict['conv_new_2_bias'], + # lr_mult=3.0) + # relu_new_2 = mx.sym.Activation(data=conv_new_2, act_type='relu', name='relu2' + suffix) + # conv_new_3 = mx.sym.Convolution(data=data, kernel=(1, 15), pad=(0, 7), num_filter=mid_num_filter, name="conv_new_3" + suffix, + # weight=self.shared_param_dict['conv_new_3_weight'], bias=self.shared_param_dict['conv_new_3_bias'], + # lr_mult=3.0) + # relu_new_3 = mx.sym.Activation(data=conv_new_3, act_type='relu', name='relu3' + suffix) + # conv_new_4 = mx.sym.Convolution(data=relu_new_3, kernel=(15, 1), pad=(7, 0), num_filter=10 * 7 * 7, name="conv_new_4" + suffix, + # weight=self.shared_param_dict['conv_new_4_weight'], bias=self.shared_param_dict['conv_new_4_bias'], + # lr_mult=3.0) + # relu_new_4 = mx.sym.Activation(data=conv_new_4, act_type='relu', name='relu4' + suffix) + # light_head = mx.symbol.broadcast_add(name='light_head', *[relu_new_2, relu_new_4]) + # return light_head + + def get_rpn(self, conv_feat, num_anchors): + rpn_conv = mx.sym.Convolution( + data=conv_feat, kernel=(3, 3), pad=(1, 1), num_filter=512, name="rpn_conv_3x3") + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type="relu", name="rpn_relu") + rpn_cls_score = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name="rpn_cls_score") + rpn_bbox_pred = mx.sym.Convolution( + data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name="rpn_bbox_pred") + return rpn_cls_score, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + gt_boxes = mx.sym.Variable(name="gt_boxes") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, 
label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob") + + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + # ROI proposal + rpn_cls_act = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_act") + rpn_cls_act_reshape = mx.sym.Reshape( + data=rpn_cls_act, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_act_reshape') + if cfg.TRAIN.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_act_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TRAIN.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TRAIN.RPN_POST_NMS_TOP_N, + threshold=cfg.TRAIN.RPN_NMS_THRESH, rpn_min_size=cfg.TRAIN.RPN_MIN_SIZE) + # print 'in get_symbol, after proposal' + # pdb.set_trace() + group = mx.sym.Group([rois]) + print group.list_outputs() + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 9), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, + op_type='proposal_target_rotbox', + num_classes=num_reg_classes, + batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, + cfg=cPickle.dumps(cfg), + fg_fraction=cfg.TRAIN.FG_FRACTION) + # print 'in get_symbol, after proposal_target' + # pdb.set_trace() + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + + # largeseparable + conv_new_1 = mx.sym.Convolution(data=relu1, kernel=(1, 1), num_filter=256, name="conv_new_1") + conv_new_1_relu = 
mx.sym.Activation(data=conv_new_1, act_type='relu', name='conv_new_1_relu') + # conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + # roi_pool = mx.contrib.sym.PSROIALIGNAVEPooling(name='psroialignave', data=conv_thin_feat, rois=rois, spatial_scale=0.065, + # group_size=7, pooled_size=7, + # output_dim=10) + roi_pool = mx.symbol.ROIPooling( + name='roi_pool', data=conv_new_1_relu, rois=rois, pooled_size=(7, 7), spatial_scale=0.0625) + + # 2 fc + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + # fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=2048) + # fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([rpn_cls_prob, rpn_bbox_loss, cls_prob, bbox_loss, mx.sym.BlockGrad(rcnn_label)]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + + self.sym = group + return group + + def get_symbol_rpn(self, cfg, is_train=True): + # config alias for convenient + num_anchors = cfg.network.NUM_ANCHORS + + # input init + if is_train: + data = mx.sym.Variable(name="data") + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = 
mx.sym.Variable(name='bbox_weight') + else: + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + rpn_cls_score, rpn_bbox_pred = self.get_rpn(conv_feat, num_anchors) + if is_train: + # prepare rpn data + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + + # classification + rpn_cls_prob = mx.sym.SoftmaxOutput(data=rpn_cls_score_reshape, label=rpn_label, multi_output=True, + normalization='valid', use_ignore=True, ignore_label=-1, + name="rpn_cls_prob", + grad_scale=1.0) + # bounding box regression + rpn_bbox_loss_ = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_', scalar=3.0, + data=(rpn_bbox_pred - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + group = mx.symbol.Group([rpn_cls_prob, rpn_bbox_loss]) + else: + # ROI Proposal + rpn_cls_score_reshape = mx.sym.Reshape( + data=rpn_cls_score, shape=(0, 2, -1, 0), name="rpn_cls_score_reshape") + rpn_cls_prob = mx.sym.SoftmaxActivation( + data=rpn_cls_score_reshape, mode="channel", name="rpn_cls_prob") + rpn_cls_prob_reshape = mx.sym.Reshape( + data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_reshape') + if cfg.TEST.CXX_PROPOSAL: + rois, score = mx.contrib.sym.Proposal( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + feature_stride=cfg.network.RPN_FEAT_STRIDE, scales=tuple(cfg.network.ANCHOR_SCALES), + ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + else: + rois, score = mx.sym.Custom( + cls_prob=rpn_cls_prob_reshape, bbox_pred=rpn_bbox_pred, im_info=im_info, name='rois', + output_score=True, + op_type='proposal', feat_stride=cfg.network.RPN_FEAT_STRIDE, + scales=tuple(cfg.network.ANCHOR_SCALES), ratios=tuple(cfg.network.ANCHOR_RATIOS), + rpn_pre_nms_top_n=cfg.TEST.RPN_PRE_NMS_TOP_N, rpn_post_nms_top_n=cfg.TEST.RPN_POST_NMS_TOP_N, + threshold=cfg.TEST.RPN_NMS_THRESH, rpn_min_size=cfg.TEST.RPN_MIN_SIZE) + group = mx.symbol.Group([rois, score]) + self.sym = group + return group + + def get_symbol_rcnn(self, cfg, is_train=True): + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + + # input init + if is_train: + data = mx.symbol.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + label = mx.symbol.Variable(name='label') + bbox_target = mx.symbol.Variable(name='bbox_target') + bbox_weight = mx.symbol.Variable(name='bbox_weight') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + label = mx.symbol.Reshape(data=label, shape=(-1,), name='label_reshape') + bbox_target = mx.symbol.Reshape(data=bbox_target, shape=(-1, 5 * num_classes), name='bbox_target_reshape') + bbox_weight = mx.symbol.Reshape(data=bbox_weight, shape=(-1, 5 * num_classes), name='bbox_weight_reshape') + else: + data = mx.sym.Variable(name="data") + rois = mx.symbol.Variable(name='rois') + # reshape input + rois = mx.symbol.Reshape(data=rois, shape=(-1, 5), name='rois_reshape') + + # shared convolutional layers + conv_feat = self.get_resnet_v1_conv4(data) + # res5 + relu1 = self.get_resnet_v1_conv5(conv_feat) + + conv_new_1 = 
mx.sym.Convolution(data=relu1, kernel=(1, 1), num_filter=256, name="conv_new_1") + conv_new_1_relu = mx.sym.Activation(data=conv_new_1, act_type='relu', name='conv_new_1_relu') + + roi_pool = mx.symbol.ROIPooling( + name='roi_pool', data=conv_new_1_relu, rois=rois, pooled_size=(7, 7), spatial_scale=0.0625) + # conv_thin_feat = self.get_light_head(data=relu1, mid_num_filter=256) + # + # roi_pool = mx.contrib.sym.PSROIPooling(name='psroipooling', data=conv_thin_feat, rois=rois, group_size=7, pooled_size=7, + # output_dim=10, spatial_scale=0.0625) + + # 2 fc + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, + roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, + normalization='valid', use_ignore=True, ignore_label=-1, grad_scale=1.0) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, + grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid', + grad_scale=1.0) + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, + data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + + # reshape output + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_loss_reshape') + group = mx.sym.Group([cls_prob, bbox_loss]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), + name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([cls_prob, bbox_pred]) + + self.sym = group + return group + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + arg_params['conv_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['conv_new_1_weight']) + arg_params['conv_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['conv_new_1_bias']) + arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['fc_new_2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_2_weight']) + arg_params['fc_new_2_bias'] = 
mx.nd.zeros(shape=self.arg_shape_dict['fc_new_2_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + def init_weight_rpn(self, cfg, arg_params, aux_params): + arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['rpn_conv_3x3_weight']) + arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_conv_3x3_bias']) + arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_cls_score_weight']) + arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_cls_score_bias']) + arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, + shape=self.arg_shape_dict['rpn_bbox_pred_weight']) + arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['rpn_bbox_pred_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + self.init_weight_rpn(cfg, arg_params, aux_params) + self.init_weight_rcnn(cfg, arg_params, aux_params) + # for name in self.shared_param_list: + # arg_params[name + '_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict[name + '_weight']) + # arg_params[name + '_bias'] = mx.nd.zeros(shape=self.arg_shape_dict[name + '_bias']) + diff --git a/faster_rcnn/test.py b/faster_rcnn/test.py new file mode 100644 index 0000000..7381e78 --- /dev/null +++ b/faster_rcnn/test.py @@ -0,0 +1,60 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import os +import sys +import time +import logging +from config.config import config, update_config + +def parse_args(): + parser = argparse.ArgumentParser(description='Test a Faster R-CNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + update_config(args.cfg) + + # rcnn + parser.add_argument('--vis', help='turn on visualization', action='store_true') + parser.add_argument('--ignore_cache', help='ignore cached results boxes', action='store_true') + parser.add_argument('--thresh', help='valid detection threshold', default=1e-3, type=float) + parser.add_argument('--shuffle', help='shuffle data on visualization', action='store_true') + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import mxnet as mx +from function.test_rcnn import test_rcnn +from utils.create_logger import create_logger + + +def main(): + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + print args + + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.test_image_set) + + test_rcnn(config, config.dataset.dataset, 
config.dataset.test_image_set, config.dataset.root_path, config.dataset.dataset_path, + ctx, os.path.join(final_output_path, '..', '_'.join([iset for iset in config.dataset.image_set.split('+')]), config.TRAIN.model_prefix), config.TEST.test_epoch, + args.vis, args.ignore_cache, args.shuffle, config.TEST.HAS_RPN, config.dataset.proposal, args.thresh, logger=logger, output_path=final_output_path) + +if __name__ == '__main__': + main() diff --git a/faster_rcnn/test_poly.py b/faster_rcnn/test_poly.py new file mode 100644 index 0000000..735c9f3 --- /dev/null +++ b/faster_rcnn/test_poly.py @@ -0,0 +1,68 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import os +import sys +import time +import logging +from config.config import config, update_config + + +def parse_args(): + parser = argparse.ArgumentParser(description='Test a Faster R-CNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + update_config(args.cfg) + + # rcnn + parser.add_argument('--vis', help='turn on visualization', action='store_true') + parser.add_argument('--draw', help='turn on draw visualization', action='store_true') + parser.add_argument('--ignore_cache', help='ignore cached results boxes', action='store_true') + parser.add_argument('--thresh', help='valid detection threshold', default=1e-3, type=float) + parser.add_argument('--shuffle', help='shuffle data on visualization', action='store_true') + args = parser.parse_args() + return args + + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import mxnet as mx +from function.test_rcnn_poly import test_rcnn_poly +from utils.create_logger import create_logger + + +def main(): + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + print args + + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.test_image_set) + + test_rcnn_poly(config, config.dataset.dataset, config.dataset.test_image_set, config.dataset.root_path, + config.dataset.dataset_path, + ctx, + os.path.join(final_output_path, '..', '_'.join([iset for iset in config.dataset.image_set.split('+')]), + config.TRAIN.model_prefix), config.TEST.test_epoch, + args.vis, args.draw, args.ignore_cache, args.shuffle, config.TEST.HAS_RPN, config.dataset.proposal, args.thresh, + logger=logger, output_path=final_output_path) + + +if __name__ == '__main__': + main() diff --git a/faster_rcnn/train_end2end.py b/faster_rcnn/train_end2end.py new file mode 100644 index 0000000..3039497 --- /dev/null +++ b/faster_rcnn/train_end2end.py @@ -0,0 +1,169 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence 
under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import argparse +import os +import pprint +import sys + +from config.config import config, update_config + + +def parse_args(): + parser = argparse.ArgumentParser(description='Train Faster-RCNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + # update config + update_config(args.cfg) + + # training + parser.add_argument('--frequent', help='frequency of logging', default=config.default.frequent, type=int) + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import shutil +import numpy as np +import mxnet as mx + +from core import metric, callback +from core.loader import AnchorLoader +from core.module import MutableModule +from utils.create_logger import create_logger +from utils.load_data import load_gt_roidb, merge_roidb, filter_roidb +from utils.load_model import load_param +from utils.PrefetchingIter import PrefetchingIter +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_net(args, ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, lr, lr_step): + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.image_set) + prefix = os.path.join(final_output_path, prefix) + + # load symbol + shutil.copy2(os.path.join(curr_path, 'symbols', config.symbol + '.py'), final_output_path) + sym_instance = eval(config.symbol + '.' + config.symbol)() + sym = sym_instance.get_symbol(config, is_train=True) + feat_sym = sym.get_internals()['rpn_cls_score_output'] + + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size + + # print config + pprint.pprint(config) + logger.info('training config:{}\n'.format(pprint.pformat(config))) + + # load dataset and prepare imdb for training + image_sets = [iset for iset in config.dataset.image_set.split('+')] + roidbs = [load_gt_roidb(config.dataset.dataset, image_set, config.dataset.root_path, config.dataset.dataset_path, + flip=config.TRAIN.FLIP) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, config) + + # load training data + train_data = AnchorLoader(feat_sym, roidb, config, batch_size=input_batch_size, shuffle=config.TRAIN.SHUFFLE, ctx=ctx, + feat_stride=config.network.RPN_FEAT_STRIDE, anchor_scales=config.network.ANCHOR_SCALES, + anchor_ratios=config.network.ANCHOR_RATIOS, aspect_grouping=config.TRAIN.ASPECT_GROUPING) + + # infer max shape + max_data_shape = [('data', (config.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))] + max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape) + max_data_shape.append(('gt_boxes', (config.TRAIN.BATCH_IMAGES, 100, 5))) + print 'providing maximum shape', max_data_shape, max_label_shape + + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + pprint.pprint(data_shape_dict) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if config.TRAIN.RESUME: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + 
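+        # init_weight randomly initialises the newly added RPN/R-CNN head parameters
+        # (e.g. rpn_conv_3x3, rpn_cls_score, rpn_bbox_pred, conv_new_1, fc_new_1/2,
+        # cls_score, bbox_pred in the symbol above) that have no counterpart in the
+        # ImageNet-pretrained checkpoint: weights drawn from N(0, 0.01), biases set to zero.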
sym_instance.init_weight(config, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # create solver + fixed_param_prefix = config.network.FIXED_PARAMS + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, max_data_shapes=[max_data_shape for _ in range(batch_size)], + max_label_shapes=[max_label_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if config.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + # decide training params + # metric + rpn_eval_metric = metric.RPNAccMetric() + rpn_cls_metric = metric.RPNLogLossMetric() + rpn_bbox_metric = metric.RPNL1LossMetric() + eval_metric = metric.RCNNAccMetric(config) + cls_metric = metric.RCNNLogLossMetric(config) + bbox_metric = metric.RCNNL1LossMetric(config) + eval_metrics = mx.metric.CompositeEvalMetric() + # rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, eval_metric, cls_metric, bbox_metric + for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, eval_metric, cls_metric, bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=args.frequent) + means = np.tile(np.array(config.TRAIN.BBOX_MEANS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + stds = np.tile(np.array(config.TRAIN.BBOX_STDS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), callback.do_checkpoint(prefix, means, stds)] + # decide learning rate + base_lr = lr + lr_factor = config.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, config.TRAIN.warmup, config.TRAIN.warmup_lr, config.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': config.TRAIN.momentum, + 'wd': config.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'rescale_grad': 1.0, + 'clip_gradient': None} + + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + # train + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=config.default.kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + + +def main(): + print('Called with argument:', args) + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + train_net(args, ctx, config.network.pretrained, config.network.pretrained_epoch, config.TRAIN.model_prefix, + config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step) + +if __name__ == '__main__': + main() diff --git a/faster_rcnn/train_end2end_poly.py b/faster_rcnn/train_end2end_poly.py new file mode 100644 index 0000000..9fa4538 --- /dev/null +++ b/faster_rcnn/train_end2end_poly.py @@ -0,0 +1,175 @@ +# 
-------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import pprint +import os +import sys +from config.config import config, update_config + + +def parse_args(): + parser = argparse.ArgumentParser(description='Train Faster-RCNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + # update config + update_config(args.cfg) + + # training + parser.add_argument('--frequent', help='frequency of logging', default=config.default.frequent, type=int) + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import shutil +import numpy as np +import mxnet as mx + +from symbols import * +from core import metric, callback +# from core.loader import AnchorLoader +from core.loader import AnchorLoader_poly +from core.module import MutableModule +from utils.create_logger import create_logger +from utils.load_data import merge_roidb, filter_roidb, load_gt_roidb_poly +from utils.load_model import load_param +from utils.PrefetchingIter import PrefetchingIter +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_net(args, ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, lr, lr_step): + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.image_set) + prefix = os.path.join(final_output_path, prefix) + + # load symbol + shutil.copy2(os.path.join(curr_path, 'symbols', config.symbol + '.py'), final_output_path) + sym_instance = eval(config.symbol + '.' 
+ config.symbol)() + sym = sym_instance.get_symbol(config, is_train=True) + feat_sym = sym.get_internals()['rpn_cls_score_output'] + + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size + + # print config + pprint.pprint(config) + logger.info('training config:{}\n'.format(pprint.pformat(config))) + + # load dataset and prepare imdb for training + image_sets = [iset for iset in config.dataset.image_set.split('+')] + roidbs = [load_gt_roidb_poly(config.dataset.dataset, image_set, config.dataset.root_path, config.dataset.dataset_path, + flip=config.TRAIN.FLIP) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, config) + + # load training data + train_data = AnchorLoader_poly(feat_sym, roidb, config, batch_size=input_batch_size, shuffle=config.TRAIN.SHUFFLE, ctx=ctx, + feat_stride=config.network.RPN_FEAT_STRIDE, anchor_scales=config.network.ANCHOR_SCALES, + anchor_ratios=config.network.ANCHOR_RATIOS, aspect_grouping=config.TRAIN.ASPECT_GROUPING) + + # infer max shape + max_data_shape = [('data', (config.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))] + max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape) + max_data_shape.append(('gt_boxes', (config.TRAIN.BATCH_IMAGES, 100, 9))) + print 'providing maximum shape', max_data_shape, max_label_shape + + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + pprint.pprint(data_shape_dict) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if config.TRAIN.RESUME: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight(config, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # create solver + fixed_param_prefix = config.network.FIXED_PARAMS + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, max_data_shapes=[max_data_shape for _ in range(batch_size)], + max_label_shapes=[max_label_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if config.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + # decide training params + # metric + rpn_eval_metric = metric.RPNAccMetric() + rpn_cls_metric = metric.RPNLogLossMetric() + rpn_bbox_metric = metric.RPNL1LossMetric() + eval_metric = metric.RCNNAccMetric(config) + cls_metric = metric.RCNNLogLossMetric(config) + bbox_metric = metric.RCNNL1LossMetric(config) + rpn_fg_metric = metric.RPNFGFraction(config) + eval_fg_metric = metric.RCNNFGAccuracy(config) + eval_metrics = mx.metric.CompositeEvalMetric() + # rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, eval_metric, cls_metric, bbox_metric + for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, rpn_fg_metric, eval_fg_metric, eval_metric, cls_metric, bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=args.frequent) + means = np.tile(np.array(config.TRAIN.BBOX_MEANS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + stds = 
np.tile(np.array(config.TRAIN.BBOX_STDS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), callback.do_checkpoint(prefix, means, stds)] + # decide learning rate + base_lr = lr + lr_factor = config.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, config.TRAIN.warmup, config.TRAIN.warmup_lr, config.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': config.TRAIN.momentum, + 'wd': config.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'rescale_grad': 1.0, + 'clip_gradient': None} + + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + # train + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=config.default.kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + + +def main(): + print('Called with argument:', args) + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + train_net(args, ctx, config.network.pretrained, config.network.pretrained_epoch, config.TRAIN.model_prefix, + config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step) + +if __name__ == '__main__': + main() diff --git a/faster_rcnn/train_end2end_poly_RoITransformer.py b/faster_rcnn/train_end2end_poly_RoITransformer.py new file mode 100644 index 0000000..9acd260 --- /dev/null +++ b/faster_rcnn/train_end2end_poly_RoITransformer.py @@ -0,0 +1,184 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +import _init_paths + +import cv2 +import time +import argparse +import logging +import pprint +import os +import sys +from config.config import config, update_config + +def parse_args(): + parser = argparse.ArgumentParser(description='Train Faster-RCNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + # update config + update_config(args.cfg) + + # training + parser.add_argument('--frequent', help='frequency of logging', default=config.default.frequent, type=int) + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import shutil +import numpy as np +import mxnet as mx + +from symbols import * +from core import callback, metric +# from core.loader import AnchorLoader +from core.loader import AnchorLoader_poly +from core.module import MutableModule +from 
utils.create_logger import create_logger +from utils.load_data import merge_roidb, filter_roidb, load_gt_roidb_poly +from utils.load_model import load_param +from utils.PrefetchingIter import PrefetchingIter +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_net(args, ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, lr, lr_step): + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.image_set) + prefix = os.path.join(final_output_path, prefix) + + # load symbol + shutil.copy2(os.path.join(curr_path, 'symbols', config.symbol + '.py'), final_output_path) + sym_instance = eval(config.symbol + '.' + config.symbol)() + sym = sym_instance.get_symbol(config, is_train=True) + feat_sym = sym.get_internals()['rpn_cls_score_output'] + + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size + + # print config + pprint.pprint(config) + logger.info('training config:{}\n'.format(pprint.pformat(config))) + + # load dataset and prepare imdb for training + image_sets = [iset for iset in config.dataset.image_set.split('+')] + roidbs = [load_gt_roidb_poly(config.dataset.dataset, image_set, config.dataset.root_path, config.dataset.dataset_path, + flip=config.TRAIN.FLIP) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, config) + + # load training data + train_data = AnchorLoader_poly(feat_sym, roidb, config, batch_size=input_batch_size, shuffle=config.TRAIN.SHUFFLE, ctx=ctx, + feat_stride=config.network.RPN_FEAT_STRIDE, anchor_scales=config.network.ANCHOR_SCALES, + anchor_ratios=config.network.ANCHOR_RATIOS, aspect_grouping=config.TRAIN.ASPECT_GROUPING) + + # infer max shape + max_data_shape = [('data', (config.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))] + max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape) + max_data_shape.append(('gt_boxes', (config.TRAIN.BATCH_IMAGES, 100, 9))) + print 'providing maximum shape', max_data_shape, max_label_shape + + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + pprint.pprint(data_shape_dict) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if config.TRAIN.RESUME: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight(config, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # create solver + fixed_param_prefix = config.network.FIXED_PARAMS + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, max_data_shapes=[max_data_shape for _ in range(batch_size)], + max_label_shapes=[max_label_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if config.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + # decide training params + # metric + rpn_eval_metric = metric.RPNAccMetric() + rpn_cls_metric = metric.RPNLogLossMetric() + rpn_bbox_metric = metric.RPNL1LossMetric() + rpn_fg_metric = metric.RPNFGFraction(config) + eval_fg_metric = metric.RCNNFGAccuracy(config) + eval_metric = 
metric.RCNNAccMetric(config) + cls_metric = metric.RCNNLogLossMetric(config) + bbox_metric = metric.RCNNL1LossMetric(config) + # add Rroi loss here + RCNN_proposal_fraction_metric = metric.RCNNFGFraction(config) + Rroi_fg_accuracy = metric.RRoIRCNNFGAccuracy(config) + Rroi_accuracy = metric.RRoIAccMetric(config) + Rroi_cls_metric = metric.RRoIRCNNLogLossMetric(config) + Rroi_bbox_metric = metric.RRoIRCNNL1LossMetric(config) + eval_metrics = mx.metric.CompositeEvalMetric() + # rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, eval_metric, cls_metric, bbox_metric + for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, rpn_fg_metric, eval_fg_metric, eval_metric, cls_metric, bbox_metric, + RCNN_proposal_fraction_metric, Rroi_fg_accuracy, Rroi_accuracy, Rroi_cls_metric, Rroi_bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=args.frequent) + means = np.tile(np.array(config.TRAIN.BBOX_MEANS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + stds = np.tile(np.array(config.TRAIN.BBOX_STDS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + Rroi_means = np.tile(np.array(config.TRAIN.RRoI_BBOX_MEANS), 2 if config.network.RRoI_CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + Rroi_stds = np.tile(np.array(config.TRAIN.RRoI_BBOX_STDS), 2 if config.network.RRoI_CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), callback.do_checkpoint_Rroi(prefix, means, stds, Rroi_means, Rroi_stds)] + # decide learning rate + base_lr = lr + lr_factor = config.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, config.TRAIN.warmup, config.TRAIN.warmup_lr, config.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': config.TRAIN.momentum, + 'wd': config.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'rescale_grad': 1.0, + 'clip_gradient': None} + + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + # train + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=config.default.kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + + +def main(): + print('Called with argument:', args) + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + train_net(args, ctx, config.network.pretrained, config.network.pretrained_epoch, config.TRAIN.model_prefix, + config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step) + +if __name__ == '__main__': + main() diff --git a/faster_rcnn/train_rcnn.py b/faster_rcnn/train_rcnn.py new file mode 100644 index 0000000..1182c47 --- /dev/null +++ b/faster_rcnn/train_rcnn.py @@ -0,0 +1,69 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen 
Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import time +import argparse +import logging +import pprint +import os +import sys +from config.config import config, update_config + +def parse_args(): + parser = argparse.ArgumentParser(description='Train Faster-RCNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + # update config + update_config(args.cfg) + + # training + parser.add_argument('--frequent', help='frequency of logging', default=config.default.frequent, type=int) + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import shutil +import numpy as np +import mxnet as mx + +from function.train_rpn import train_rpn +from function.test_rpn import test_rpn +from function.train_rcnn import train_rcnn +from utils.combine_model import combine_model +from utils.create_logger import create_logger + + +def main(): + print ('Called with argument:', args) + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + logger, output_path = create_logger(config.output_path, args.cfg, config.dataset.image_set) + shutil.copy2(os.path.join(curr_path, 'symbols', config.symbol + '.py'), output_path) + + prefix = os.path.join(output_path, 'rcnn') + logging.info('########## TRAIN rcnn WITH IMAGENET INIT AND RPN DETECTION') + train_rcnn(config, config.dataset.dataset, config.dataset.image_set, config.dataset.root_path, config.dataset.dataset_path, + args.frequent, config.default.kvstore, config.TRAIN.FLIP, config.TRAIN.SHUFFLE, config.TRAIN.RESUME, + ctx, config.network.pretrained, config.network.pretrained_epoch, prefix, config.TRAIN.begin_epoch, + config.TRAIN.end_epoch, train_shared=False, lr=config.TRAIN.lr, lr_step=config.TRAIN.lr_step, + proposal=config.dataset.proposal, logger=logger) + +if __name__ == '__main__': + main() diff --git a/fpn/__init__.py b/fpn/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/fpn/_init_paths.py b/fpn/_init_paths.py new file mode 100644 index 0000000..5bbe057 --- /dev/null +++ b/fpn/_init_paths.py @@ -0,0 +1,11 @@ +import os.path as osp +import sys + +def add_path(path): + if path not in sys.path: + sys.path.insert(0, path) + +this_dir = osp.dirname(__file__) + +lib_path = osp.join(this_dir, '..', 'lib') +add_path(lib_path) diff --git a/fpn/config/__init__.py b/fpn/config/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/fpn/config/config.py b/fpn/config/config.py new file mode 100644 index 0000000..f5f59bf --- /dev/null +++ b/fpn/config/config.py @@ -0,0 +1,195 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong, Bin Xiao +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import yaml +import numpy as np +from easydict import EasyDict as 
edict + +config = edict() + +config.MXNET_VERSION = '' +config.output_path = '' +config.symbol = '' +config.gpus = '' +config.CLASS_AGNOSTIC = True +config.SCALES = [(600, 1000)] # first is scale (the shorter side); second is max size +config.TEST_SCALES = [(600, 1000)] +# default training +config.default = edict() +config.default.frequent = 20 +config.default.kvstore = 'device' + +# network related params +config.network = edict() +config.network.pretrained = '' +config.network.pretrained_epoch = 0 +config.network.PIXEL_MEANS = np.array([0, 0, 0]) +config.network.IMAGE_STRIDE = 0 +config.network.RPN_FEAT_STRIDE = 16 +config.network.RCNN_FEAT_STRIDE = 16 +config.network.FIXED_PARAMS = ['gamma', 'beta'] +config.network.FIXED_PARAMS_SHARED = ['gamma', 'beta'] +config.network.ANCHOR_SCALES = (8, 16, 32) +config.network.ANCHOR_RATIOS = (0.5, 1, 2) +config.network.NUM_ANCHORS = len(config.network.ANCHOR_SCALES) * len(config.network.ANCHOR_RATIOS) + +# dataset related params +config.dataset = edict() +config.dataset.dataset = 'PascalVOC' +config.dataset.image_set = '2007_trainval' +config.dataset.test_image_set = '2007_test' +config.dataset.root_path = './data' +config.dataset.dataset_path = './data/VOCdevkit' +config.dataset.NUM_CLASSES = 21 + + +config.TRAIN = edict() + +config.TRAIN.lr = 0 +config.TRAIN.lr_step = '' +config.TRAIN.lr_factor = 0.1 +config.TRAIN.warmup = False +config.TRAIN.warmup_lr = 0 +config.TRAIN.warmup_step = 0 +config.TRAIN.momentum = 0.9 +config.TRAIN.wd = 0.0005 +config.TRAIN.begin_epoch = 0 +config.TRAIN.end_epoch = 0 +config.TRAIN.model_prefix = '' + +config.TRAIN.ALTERNATE = edict() +config.TRAIN.ALTERNATE.RPN_BATCH_IMAGES = 0 +config.TRAIN.ALTERNATE.RCNN_BATCH_IMAGES = 0 +config.TRAIN.ALTERNATE.rpn1_lr = 0 +config.TRAIN.ALTERNATE.rpn1_lr_step = '' # recommend '2' +config.TRAIN.ALTERNATE.rpn1_epoch = 0 # recommend 3 +config.TRAIN.ALTERNATE.rfcn1_lr = 0 +config.TRAIN.ALTERNATE.rfcn1_lr_step = '' # recommend '5' +config.TRAIN.ALTERNATE.rfcn1_epoch = 0 # recommend 8 +config.TRAIN.ALTERNATE.rpn2_lr = 0 +config.TRAIN.ALTERNATE.rpn2_lr_step = '' # recommend '2' +config.TRAIN.ALTERNATE.rpn2_epoch = 0 # recommend 3 +config.TRAIN.ALTERNATE.rfcn2_lr = 0 +config.TRAIN.ALTERNATE.rfcn2_lr_step = '' # recommend '5' +config.TRAIN.ALTERNATE.rfcn2_epoch = 0 # recommend 8 +# optional +config.TRAIN.ALTERNATE.rpn3_lr = 0 +config.TRAIN.ALTERNATE.rpn3_lr_step = '' # recommend '2' +config.TRAIN.ALTERNATE.rpn3_epoch = 0 # recommend 3 + +# whether resume training +config.TRAIN.RESUME = False +# whether flip image +config.TRAIN.FLIP = True +# whether shuffle image +config.TRAIN.SHUFFLE = True +# whether use OHEM +config.TRAIN.ENABLE_OHEM = False +# size of images for each device, 2 for rcnn, 1 for rpn and e2e +config.TRAIN.BATCH_IMAGES = 2 +# e2e changes behavior of anchor loader and metric +config.TRAIN.END2END = False +# group images with similar aspect ratio +config.TRAIN.ASPECT_GROUPING = True + +# R-CNN +# rcnn rois batch size +config.TRAIN.BATCH_ROIS = 128 +config.TRAIN.BATCH_ROIS_OHEM = 128 +# rcnn rois sampling params +config.TRAIN.FG_FRACTION = 0.25 +config.TRAIN.FG_THRESH = 0.5 +config.TRAIN.BG_THRESH_HI = 0.5 +config.TRAIN.BG_THRESH_LO = 0.0 +# rcnn bounding box regression params +config.TRAIN.BBOX_REGRESSION_THRESH = 0.5 +config.TRAIN.BBOX_WEIGHTS = np.array([1.0, 1.0, 1.0, 1.0]) + +# RPN anchor loader +# rpn anchors batch size +config.TRAIN.RPN_BATCH_SIZE = 256 +# rpn anchors sampling params +config.TRAIN.RPN_FG_FRACTION = 0.5 +config.TRAIN.RPN_POSITIVE_OVERLAP = 0.7 
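+# Standard Faster R-CNN anchor assignment (performed by the anchor loader): anchors
+# with IoU >= RPN_POSITIVE_OVERLAP against some ground-truth box are sampled as
+# foreground, anchors with IoU <= RPN_NEGATIVE_OVERLAP as background, and the rest
+# are labelled -1 and ignored by the RPN losses (ignore_label=-1).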
+config.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3 +config.TRAIN.RPN_CLOBBER_POSITIVES = False +# rpn bounding box regression params +config.TRAIN.RPN_BBOX_WEIGHTS = (1.0, 1.0, 1.0, 1.0) +config.TRAIN.RPN_POSITIVE_WEIGHT = -1.0 + +# used for end2end training +# RPN proposal +config.TRAIN.CXX_PROPOSAL = True +config.TRAIN.RPN_NMS_THRESH = 0.7 +config.TRAIN.RPN_PRE_NMS_TOP_N = 12000 +config.TRAIN.RPN_POST_NMS_TOP_N = 2000 +config.TRAIN.RPN_MIN_SIZE = config.network.RPN_FEAT_STRIDE +# approximate bounding box regression +config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED = False +config.TRAIN.BBOX_MEANS = (0.0, 0.0, 0.0, 0.0) +config.TRAIN.BBOX_STDS = (0.1, 0.1, 0.2, 0.2) + +config.TEST = edict() + +# R-CNN testing +# use rpn to generate proposal +config.TEST.HAS_RPN = False +# size of images for each device +config.TEST.BATCH_IMAGES = 1 + +# RPN proposal +config.TEST.CXX_PROPOSAL = True +config.TEST.RPN_NMS_THRESH = 0.7 +config.TEST.RPN_PRE_NMS_TOP_N = 6000 +config.TEST.RPN_POST_NMS_TOP_N = 300 +config.TEST.RPN_MIN_SIZE = config.network.RPN_FEAT_STRIDE + +# RPN generate proposal +config.TEST.PROPOSAL_NMS_THRESH = 0.7 +config.TEST.PROPOSAL_PRE_NMS_TOP_N = 20000 +config.TEST.PROPOSAL_POST_NMS_TOP_N = 2000 +config.TEST.PROPOSAL_MIN_SIZE = config.network.RPN_FEAT_STRIDE + +# RCNN nms +config.TEST.NMS = 0.3 + +config.TEST.max_per_image = 300 + +# Test Model Epoch +config.TEST.test_epoch = 0 + +config.TEST.USE_SOFTNMS = False + + +def update_config(config_file): + exp_config = None + with open(config_file) as f: + exp_config = edict(yaml.load(f)) + for k, v in exp_config.items(): + if k in config: + if isinstance(v, dict): + if k == 'TRAIN': + if 'BBOX_WEIGHTS' in v: + v['BBOX_WEIGHTS'] = np.array(v['BBOX_WEIGHTS']) + elif k == 'network': + if 'PIXEL_MEANS' in v: + v['PIXEL_MEANS'] = np.array(v['PIXEL_MEANS']) + for vk, vv in v.items(): + config[k][vk] = vv + else: + if k == 'SCALES': + config[k][0] = (tuple(v)) + else: + config[k] = v + else: + raise ValueError("key must exist in config.py") diff --git a/fpn/core/DataParallelExecutorGroup.py b/fpn/core/DataParallelExecutorGroup.py new file mode 100644 index 0000000..69fdd5c --- /dev/null +++ b/fpn/core/DataParallelExecutorGroup.py @@ -0,0 +1,596 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import logging +import numpy as np + +from mxnet import context as ctx +from mxnet import ndarray as nd +from mxnet.io import DataDesc +from mxnet.executor_manager import _split_input_slice + + + +def _load_general(data, targets, major_axis): + """Load a list of arrays into a list of arrays specified by slices""" + for d_src, d_targets in zip(data, targets): + if isinstance(d_targets, nd.NDArray): + d_src.copyto(d_targets) + elif isinstance(d_src, (list, tuple)): + for src, dst in zip(d_src, d_targets): + src.copyto(dst) + else: + raise NotImplementedError + + + +def _load_data(batch, targets, major_axis): + """Load data into sliced arrays""" + _load_general(batch.data, targets, major_axis) + + +def _load_label(batch, targets, major_axis): + """Load label into sliced arrays""" + _load_general(batch.label, targets, major_axis) + + +def 
_merge_multi_context(outputs, major_axis): + """Merge outputs that lives on multiple context into one, so that they look + like living on one context. + """ + rets = [] + for tensors, axis in zip(outputs, major_axis): + if axis >= 0: + rets.append(nd.concatenate(tensors, axis=axis, always_copy=False)) + else: + # negative axis means the there is no batch_size axis, and all the + # results should be the same on each device. We simply take the + # first one, without checking they are actually the same + rets.append(tensors[0]) + return rets + + + +class DataParallelExecutorGroup(object): + """DataParallelExecutorGroup is a group of executors that lives on a group of devices. + This is a helper class used to implement data parallelization. Each mini-batch will + be split and run on the devices. + + Parameters + ---------- + symbol : Symbol + The common symbolic computation graph for all executors. + contexts : list + A list of contexts. + workload : list + If not `None`, could be a list of numbers that specify the workload to be assigned + to different context. Larger number indicate heavier workload. + data_shapes : list + Should be a list of (name, shape) tuples, for the shapes of data. Note the order is + important and should be the same as the order that the `DataIter` provide the data. + label_shapes : list + Should be a list of (name, shape) tuples, for the shapes of label. Note the order is + important and should be the same as the order that the `DataIter` provide the label. + param_names : list + A list of strings, indicating the names of parameters (e.g. weights, filters, etc.) + in the computation graph. + for_training : bool + Indicate whether the executors should be bind for training. When not doing training, + the memory for gradients will not be allocated. + inputs_need_grad : bool + Indicate whether the gradients for the input data should be computed. This is currently + not used. It will be useful for implementing composition of modules. + shared_group : DataParallelExecutorGroup + Default is `None`. This is used in bucketing. When not `None`, it should be a executor + group corresponding to a different bucket. In other words, it will correspond to a different + symbol but with the same set of parameters (e.g. unrolled RNNs with different lengths). + In this case, many memory will be shared. + logger : Logger + Default is `logging`. + fixed_param_names: list of str + Indicate parameters to be fixed during training. Parameters in this list will not allocate + space for gradient, nor do gradient calculation. + grad_req : str, list of str, dict of str to str + Requirement for gradient accumulation. Can be 'write', 'add', or 'null' + (default to 'write'). + Can be specified globally (str) or for each argument (list, dict). 
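+    state_names : list of str
+        Default is `None`. Names of internal state arrays; their per-device values
+        are exposed through `get_states` and `set_states`.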
+ """ + def __init__(self, symbol, contexts, workload, data_shapes, label_shapes, param_names, + for_training, inputs_need_grad, shared_group=None, logger=logging, + fixed_param_names=None, grad_req='write', state_names=None): + self.param_names = param_names + self.arg_names = symbol.list_arguments() + self.aux_names = symbol.list_auxiliary_states() + + self.symbol = symbol + self.contexts = contexts + self.workload = workload + + self.for_training = for_training + self.inputs_need_grad = inputs_need_grad + + self.logger = logger + #In the future we should have a better way to profile memory per device (haibin) + # self._total_exec_bytes = 0 + self.fixed_param_names = fixed_param_names + if self.fixed_param_names is None: + self.fixed_param_names = [] + + self.state_names = state_names + if self.state_names is None: + self.state_names = [] + + if not for_training: + grad_req = 'null' + + # data_shapes = [x if isinstance(x, DataDesc) else DataDesc(*x) for x in data_shapes] + # if label_shapes is not None: + # label_shapes = [x if isinstance(x, DataDesc) else DataDesc(*x) for x in label_shapes] + + data_names = [x.name for x in data_shapes[0]] + + if isinstance(grad_req, str): + self.grad_req = {} + for k in self.arg_names: + if k in self.param_names: + self.grad_req[k] = 'null' if k in self.fixed_param_names else grad_req + elif k in data_names: + self.grad_req[k] = grad_req if self.inputs_need_grad else 'null' + else: + self.grad_req[k] = 'null' + elif isinstance(grad_req, (list, tuple)): + assert len(grad_req) == len(self.arg_names) + self.grad_req = dict(zip(self.arg_names, grad_req)) + elif isinstance(grad_req, dict): + self.grad_req = {} + for k in self.arg_names: + if k in self.param_names: + self.grad_req[k] = 'null' if k in self.fixed_param_names else 'write' + elif k in data_names: + self.grad_req[k] = 'write' if self.inputs_need_grad else 'null' + else: + self.grad_req[k] = 'null' + self.grad_req.update(grad_req) + else: + raise ValueError("grad_req must be one of str, list, tuple, or dict.") + + if shared_group is not None: + self.shared_data_arrays = shared_group.shared_data_arrays + else: + self.shared_data_arrays = [{} for _ in contexts] + + # initialize some instance variables + self.batch_size = len(data_shapes) + self.slices = None + self.execs = [] + self._default_execs = None + self.data_arrays = None + self.label_arrays = None + self.param_arrays = None + self.state_arrays = None + self.grad_arrays = None + self.aux_arrays = None + self.input_grad_arrays = None + + self.data_shapes = None + self.label_shapes = None + self.data_layouts = None + self.label_layouts = None + self.output_layouts = [DataDesc.get_batch_axis(self.symbol[name].attr('__layout__')) + for name in self.symbol.list_outputs()] + self.bind_exec(data_shapes, label_shapes, shared_group) + + def decide_slices(self, data_shapes): + """Decide the slices for each context according to the workload. + + Parameters + ---------- + data_shapes : list + list of (name, shape) specifying the shapes for the input data or label. 
+ """ + assert len(data_shapes) > 0 + major_axis = [DataDesc.get_batch_axis(x.layout) for x in data_shapes] + + for (name, shape), axis in zip(data_shapes, major_axis): + if axis == -1: + continue + + batch_size = shape[axis] + if self.batch_size is not None: + assert batch_size == self.batch_size, ("all data must have the same batch size: " + + ("batch_size = %d, but " % self.batch_size) + + ("%s has shape %s" % (name, shape))) + else: + self.batch_size = batch_size + self.slices = _split_input_slice(self.batch_size, self.workload) + + return major_axis + + def _collect_arrays(self): + """Collect internal arrays from executors.""" + # convenient data structures + self.data_arrays = [[e.arg_dict[name] for name, _ in self.data_shapes[0]] for e in self.execs] + + self.state_arrays = [[e.arg_dict[name] for e in self.execs] + for name in self.state_names] + + if self.label_shapes is not None: + self.label_arrays = [[e.arg_dict[name] for name, _ in self.label_shapes[0]] for e in self.execs] + else: + self.label_arrays = None + + self.param_arrays = [[exec_.arg_arrays[i] for exec_ in self.execs] + for i, name in enumerate(self.arg_names) + if name in self.param_names] + if self.for_training: + self.grad_arrays = [[exec_.grad_arrays[i] for exec_ in self.execs] + for i, name in enumerate(self.arg_names) + if name in self.param_names] + else: + self.grad_arrays = None + + data_names = [x[0] for x in self.data_shapes] + if self.inputs_need_grad: + self.input_grad_arrays = [[exec_.grad_arrays[i] for exec_ in self.execs] + for i, name in enumerate(self.arg_names) + if name in data_names] + else: + self.input_grad_arrays = None + + self.aux_arrays = [[exec_.aux_arrays[i] for exec_ in self.execs] + for i in range(len(self.aux_names))] + + def bind_exec(self, data_shapes, label_shapes, shared_group=None, reshape=False): + """Bind executors on their respective devices. + + Parameters + ---------- + data_shapes : list + label_shapes : list + shared_group : DataParallelExecutorGroup + reshape : bool + """ + assert reshape or not self.execs + + for i in range(len(self.contexts)): + data_shapes_i = data_shapes[i] + if label_shapes is not None: + label_shapes_i = label_shapes[i] + else: + label_shapes_i = [] + + if reshape: + self.execs[i] = self._default_execs[i].reshape( + allow_up_sizing=True, **dict(data_shapes_i + label_shapes_i)) + else: + self.execs.append(self._bind_ith_exec(i, data_shapes_i, label_shapes_i, + shared_group)) + + self.data_shapes = data_shapes + self.label_shapes = label_shapes + self._collect_arrays() + + def reshape(self, data_shapes, label_shapes): + """Reshape executors. + + Parameters + ---------- + data_shapes : list + label_shapes : list + """ + if self._default_execs is None: + self._default_execs = [i for i in self.execs] + for i in range(len(self.contexts)): + self.execs[i] = self._default_execs[i].reshape( + allow_up_sizing=True, **dict(data_shapes[i] + (label_shapes[i] if label_shapes is not None else [])) + ) + self.data_shapes = data_shapes + self.label_shapes = label_shapes + self._collect_arrays() + + + def set_params(self, arg_params, aux_params): + """Assign, i.e. copy parameters to all the executors. + + Parameters + ---------- + arg_params : dict + A dictionary of name to `NDArray` parameter mapping. + aux_params : dict + A dictionary of name to `NDArray` auxiliary variable mapping. 
+ """ + for exec_ in self.execs: + exec_.copy_params_from(arg_params, aux_params) + + def get_params(self, arg_params, aux_params): + """ Copy data from each executor to `arg_params` and `aux_params`. + + Parameters + ---------- + arg_params : list of NDArray + target parameter arrays + aux_params : list of NDArray + target aux arrays + + Notes + ----- + - This function will inplace update the NDArrays in arg_params and aux_params. + """ + for name, block in zip(self.param_names, self.param_arrays): + weight = sum(w.copyto(ctx.cpu()) for w in block) / len(block) + weight.astype(arg_params[name].dtype).copyto(arg_params[name]) + for name, block in zip(self.aux_names, self.aux_arrays): + weight = sum(w.copyto(ctx.cpu()) for w in block) / len(block) + weight.astype(aux_params[name].dtype).copyto(aux_params[name]) + + def forward(self, data_batch, is_train=None): + """Split `data_batch` according to workload and run forward on each devices. + + Parameters + ---------- + data_batch : DataBatch + Or could be any object implementing similar interface. + is_train : bool + The hint for the backend, indicating whether we are during training phase. + Default is `None`, then the value `self.for_training` will be used. + Returns + ------- + + """ + _load_data(data_batch, self.data_arrays, self.data_layouts) + if is_train is None: + is_train = self.for_training + + if self.label_arrays is not None: + assert not is_train or data_batch.label + if data_batch.label: + _load_label(data_batch, self.label_arrays, self.label_layouts) + + for exec_ in self.execs: + exec_.forward(is_train=is_train) + + + def get_outputs(self, merge_multi_context=True): + """Get outputs of the previous forward computation. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + outputs = [[exec_.outputs[i] for exec_ in self.execs] + for i in range(len(self.execs[0].outputs))] + if merge_multi_context: + outputs = _merge_multi_context(outputs, self.output_layouts) + return outputs + + def get_states(self, merge_multi_context=True): + """Get states from all devices + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the states + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + assert not merge_multi_context, \ + "merge_multi_context=True is not supported for get_states yet." + return self.state_arrays + + def set_states(self, states=None, value=None): + """Set value for states. Only one of states & value can be specified. + + Parameters + ---------- + states : list of list of NDArrays + source states arrays formatted like [[state1_dev1, state1_dev2], + [state2_dev1, state2_dev2]]. + value : number + a single scalar value for all state arrays. 
+ """ + if states is not None: + assert value is None, "Only one of states & value can be specified." + _load_general(states, self.state_arrays, (0,)*len(states)) + else: + assert value is not None, "At least one of states & value must be specified." + assert states is None, "Only one of states & value can be specified." + for d_dst in self.state_arrays: + for dst in d_dst: + dst[:] = value + + def get_input_grads(self, merge_multi_context=True): + """Get the gradients with respect to the inputs of the module. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[grad1, grad2]`. Otherwise, it + is like `[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.inputs_need_grad + if merge_multi_context: + return _merge_multi_context(self.input_grad_arrays, self.data_layouts) + return self.input_grad_arrays + + def backward(self, out_grads=None): + """Run backward on all devices. A backward should be called after + a call to the forward function. Backward cannot be called unless + `self.for_training` is `True`. + + Parameters + ---------- + out_grads : NDArray or list of NDArray, optional + Gradient on the outputs to be propagated back. + This parameter is only needed when bind is called + on outputs that are not a loss function. + """ + assert self.for_training, 're-bind with for_training=True to run backward' + if out_grads is None: + out_grads = [] + + for i, exec_ in enumerate(self.execs): + out_grads_slice = [] + exec_.backward(out_grads=out_grads_slice) + + def update_metric(self, eval_metric, labels): + """Accumulate the performance according to `eval_metric` on all devices. + + Parameters + ---------- + eval_metric : EvalMetric + The metric used for evaluation. + labels : list of NDArray + Typically comes from `label` of a `DataBatch`. + """ + for texec, labels in zip(self.execs, labels): + eval_metric.update(labels, texec.outputs) + + def _bind_ith_exec(self, i, data_shapes, label_shapes, shared_group): + """Internal utility function to bind the i-th executor. 
+ """ + shared_exec = None if shared_group is None else shared_group.execs[i] + context = self.contexts[i] + shared_data_arrays = self.shared_data_arrays[i] + + input_shapes = dict(data_shapes) + if label_shapes is not None: + input_shapes.update(dict(label_shapes)) + + arg_shapes, _, aux_shapes = self.symbol.infer_shape(**input_shapes) + assert arg_shapes is not None, "shape inference failed" + + input_types = {x.name: x.dtype for x in data_shapes} + if label_shapes is not None: + input_types.update({x.name: x.dtype for x in label_shapes}) + arg_types, _, aux_types = self.symbol.infer_type(**input_types) + assert arg_types is not None, "type inference failed" + + arg_arrays = [] + grad_arrays = {} if self.for_training else None + + def _get_or_reshape(name, shared_data_arrays, arg_shape, arg_type, context, logger): + """Internal helper to get a memory block or re-use by re-shaping""" + if name in shared_data_arrays: + arg_arr = shared_data_arrays[name] + + if np.prod(arg_arr.shape) >= np.prod(arg_shape): + # nice, we can directly re-use this data blob + assert arg_arr.dtype == arg_type + arg_arr = arg_arr.reshape(arg_shape) + else: + logger.warning(('bucketing: data "%s" has a shape %s' % (name, arg_shape)) + + (', which is larger than already allocated ') + + ('shape %s' % (arg_arr.shape,)) + + ('. Need to re-allocate. Consider putting ') + + ('default_bucket_key to') + + (' be the bucket taking the largest input for better ') + + ('memory sharing.')) + arg_arr = nd.zeros(arg_shape, context, dtype=arg_type) + + # replace existing shared array because the new one is bigger + shared_data_arrays[name] = arg_arr + else: + arg_arr = nd.zeros(arg_shape, context, dtype=arg_type) + shared_data_arrays[name] = arg_arr + + return arg_arr + + # create or borrow arguments and gradients + for j in range(len(self.arg_names)): + name = self.arg_names[j] + if name in self.param_names: # model parameters + if shared_exec is None: + arg_arr = nd.zeros(arg_shapes[j], context, dtype=arg_types[j]) + if self.grad_req[name] != 'null': + grad_arr = nd.zeros(arg_shapes[j], context, dtype=arg_types[j]) + grad_arrays[name] = grad_arr + else: + arg_arr = shared_exec.arg_dict[name] + assert arg_arr.shape == arg_shapes[j] + assert arg_arr.dtype == arg_types[j] + if self.grad_req[name] != 'null': + grad_arrays[name] = shared_exec.grad_dict[name] + else: # data, label, or states + arg_arr = _get_or_reshape(name, shared_data_arrays, arg_shapes[j], arg_types[j], + context, self.logger) + + # data might also need grad if inputs_need_grad is True + if self.grad_req[name] != 'null': + grad_arrays[name] = _get_or_reshape('grad of ' + name, shared_data_arrays, + arg_shapes[j], arg_types[j], context, + self.logger) + + arg_arrays.append(arg_arr) + + # create or borrow aux variables + if shared_exec is None: + aux_arrays = [nd.zeros(s, context, dtype=t) for s, t in zip(aux_shapes, aux_types)] + else: + for j, arr in enumerate(shared_exec.aux_arrays): + assert aux_shapes[j] == arr.shape + assert aux_types[j] == arr.dtype + aux_arrays = shared_exec.aux_arrays[:] + + executor = self.symbol.bind(ctx=context, args=arg_arrays, + args_grad=grad_arrays, aux_states=aux_arrays, + grad_req=self.grad_req, shared_exec=shared_exec) + # Get the total bytes allocated for this executor + return executor + + def _sliced_shape(self, shapes, i, major_axis): + """Get the sliced shapes for the i-th executor. + + Parameters + ---------- + shapes : list of (str, tuple) + The original (name, shape) pairs. + i : int + Which executor we are dealing with. 
+ """ + sliced_shapes = [] + for desc, axis in zip(shapes, major_axis): + shape = list(desc.shape) + if axis >= 0: + shape[axis] = self.slices[i].stop - self.slices[i].start + sliced_shapes.append(DataDesc(desc.name, tuple(shape), desc.dtype, desc.layout)) + return sliced_shapes + + def install_monitor(self, mon): + """Install monitor on all executors""" + for exe in self.execs: + mon.install(exe) diff --git a/fpn/core/__init__.py b/fpn/core/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/fpn/core/callback.py b/fpn/core/callback.py new file mode 100644 index 0000000..24460eb --- /dev/null +++ b/fpn/core/callback.py @@ -0,0 +1,77 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import time +import logging +import mxnet as mx + + +class Speedometer(object): + def __init__(self, batch_size, frequent=50): + self.batch_size = batch_size + self.frequent = frequent + self.init = False + self.tic = 0 + self.last_count = 0 + + def __call__(self, param): + """Callback to Show speed.""" + count = param.nbatch + if self.last_count > count: + self.init = False + self.last_count = count + + if self.init: + if count % self.frequent == 0: + speed = self.frequent * self.batch_size / (time.time() - self.tic) + s = '' + if param.eval_metric is not None: + name, value = param.eval_metric.get() + s = "Epoch[%d] Batch [%d]\tSpeed: %.2f samples/sec\tTrain-" % (param.epoch, count, speed) + for n, v in zip(name, value): + s += "%s=%f,\t" % (n, v) + else: + s = "Iter[%d] Batch [%d]\tSpeed: %.2f samples/sec" % (param.epoch, count, speed) + + logging.info(s) + print(s) + self.tic = time.time() + else: + self.init = True + self.tic = time.time() + + +def do_checkpoint(prefix, means, stds): + def _callback(iter_no, sym, arg, aux): + arg['bbox_pred_weight_test'] = (arg['bbox_pred_weight'].T * mx.nd.array(stds)).T + arg['bbox_pred_bias_test'] = arg['bbox_pred_bias'] * mx.nd.array(stds) + mx.nd.array(means) + mx.model.save_checkpoint(prefix, iter_no + 1, sym, arg, aux) + arg.pop('bbox_pred_weight_test') + arg.pop('bbox_pred_bias_test') + return _callback + +def do_checkpoint_Rroi(prefix, means, stds, Rroi_means, Rroi_stds): + def _callback(iter_no, sym, arg, aux): + # pdb.set_trace() + arg['bbox_pred_weight_test'] = (arg['bbox_pred_weight'].T * mx.nd.array(stds)).T + arg['bbox_pred_bias_test'] = arg['bbox_pred_bias'] * mx.nd.array(stds) + mx.nd.array(means) + # params for Rroi regression + arg['Rroi_bbox_pred_weight_test'] = (arg['Rroi_bbox_pred_weight'].T * mx.nd.array(Rroi_stds)).T + arg['Rroi_bbox_pred_bias_test'] = arg['Rroi_bbox_pred_bias'] * mx.nd.array(Rroi_stds) + mx.nd.array(Rroi_means) + + mx.model.save_checkpoint(prefix, iter_no + 1, sym, arg, aux) + arg.pop('bbox_pred_weight_test') + arg.pop('bbox_pred_bias_test') + arg.pop('Rroi_bbox_pred_weight_test') + arg.pop('Rroi_bbox_pred_bias_test') + return _callback \ No newline at end of file diff --git a/fpn/core/loader.py b/fpn/core/loader.py new file mode 100644 index 0000000..b7dc9c7 --- /dev/null +++ b/fpn/core/loader.py @@ -0,0 +1,483 @@ +# -------------------------------------------------------- +# Deformable 
Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import numpy as np +import mxnet as mx +from mxnet.executor_manager import _split_input_slice + +from config.config import config +from rpn.rpn import get_rpn_testbatch, get_rpn_batch, get_rpn_batch_poly, assign_pyramid_anchor, assign_pyramid_anchor_poly +from rcnn import get_rcnn_testbatch +import pdb + +def par_assign_anchor_wrapper(cfg, iroidb, feat_sym, feat_strides, anchor_scales, anchor_ratios, allowed_border): + # get testing data for multigpu + data, rpn_label = get_rpn_batch(iroidb, cfg) + data_shape = {k: v.shape for k, v in data.items()} + del data_shape['im_info'] + + # add gt_boxes to data for e2e + data['gt_boxes'] = rpn_label['gt_boxes'][np.newaxis, :, :] + + feat_shape = [y[1] for y in [x.infer_shape(**data_shape) for x in feat_sym]] + label = assign_pyramid_anchor(feat_shape, rpn_label['gt_boxes'], data['im_info'], cfg, + feat_strides, anchor_scales, anchor_ratios, allowed_border) + return {'data': data, 'label': label} + +def par_assign_anchor_wrapper_poly(cfg, iroidb, feat_sym, feat_strides, anchor_scales, anchor_ratios, allowed_border): + # get testing data for multigpu + data, rpn_label = get_rpn_batch_poly(iroidb, cfg) + data_shape = {k: v.shape for k, v in data.items()} + del data_shape['im_info'] + # pdb.set_trace() + # add gt_boxes to data for e2e + data['gt_boxes'] = rpn_label['gt_boxes'][np.newaxis, :, :] + + feat_shape = [y[1] for y in [x.infer_shape(**data_shape) for x in feat_sym]] + label = assign_pyramid_anchor_poly(feat_shape, rpn_label['gt_boxes'], data['im_info'], cfg, + feat_strides, anchor_scales, anchor_ratios, allowed_border) + return {'data': data, 'label': label} + +class TestLoader(mx.io.DataIter): + def __init__(self, roidb, config, batch_size=1, shuffle=False, + has_rpn=False): + super(TestLoader, self).__init__() + + # save parameters as properties + self.cfg = config + self.roidb = roidb + self.batch_size = batch_size + self.shuffle = shuffle + self.has_rpn = has_rpn + + # infer properties from roidb + self.size = len(self.roidb) + self.index = np.arange(self.size) + + # decide data and label names (only for training) + if has_rpn: + self.data_name = ['data', 'im_info'] + else: + self.data_name = ['data', 'rois'] + self.label_name = None + + # status variable for synchronization between get_data and get_label + self.cur = 0 + self.data = None + self.label = [] + self.im_info = None + + # get first batch to fill in provide_data and provide_label + self.reset() + self.get_batch() + + @property + def provide_data(self): + return [[(k, v.shape) for k, v in zip(self.data_name, idata)] for idata in self.data] + + @property + def provide_label(self): + return [None for _ in range(len(self.data))] + + @property + def provide_data_single(self): + return [(k, v.shape) for k, v in zip(self.data_name, self.data[0])] + + @property + def provide_label_single(self): + return None + + def reset(self): + self.cur = 0 + if self.shuffle: + np.random.shuffle(self.index) + + def iter_next(self): + return self.cur < self.size + + def next(self): + if self.iter_next(): + self.get_batch() + self.cur += self.batch_size + return self.im_info, mx.io.DataBatch(data=self.data, 
label=self.label, + pad=self.getpad(), index=self.getindex(), + provide_data=self.provide_data, provide_label=self.provide_label) + else: + raise StopIteration + + def getindex(self): + return self.cur / self.batch_size + + def getpad(self): + if self.cur + self.batch_size > self.size: + return self.cur + self.batch_size - self.size + else: + return 0 + + def get_batch(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + if self.has_rpn: + data, label, im_info = get_rpn_testbatch(roidb, self.cfg) + else: + data, label, im_info = get_rcnn_testbatch(roidb, self.cfg) + self.data = [[mx.nd.array(idata[name]) for name in self.data_name] for idata in data] + self.im_info = im_info + + def get_batch_individual(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + if self.has_rpn: + data, label, im_info = get_rpn_testbatch(roidb, self.cfg) + else: + data, label, im_info = get_rcnn_testbatch(roidb, self.cfg) + self.data = [mx.nd.array(data[name]) for name in self.data_name] + self.im_info = im_info + + + + + +class PyramidAnchorIterator(mx.io.DataIter): + + # pool = Pool(processes=4) + def __init__(self, feat_sym, roidb, cfg, batch_size=1, shuffle=False, ctx=None, work_load_list=None, + feat_strides=(4, 8, 16, 32, 64), anchor_scales=(8, ), anchor_ratios=(0.5, 1, 2), allowed_border=0, + aspect_grouping=False): + """ + This Iter will provide roi data to Fast R-CNN network + :param feat_sym: to infer shape of assign_output + :param roidb: must be preprocessed + :param batch_size: must divide BATCH_SIZE(128) + :param shuffle: bool + :param ctx: list of contexts + :param work_load_list: list of work load + :param aspect_grouping: group images with similar aspects + :return: AnchorLoader + """ + super(PyramidAnchorIterator, self).__init__() + + # save parameters as properties + self.feat_sym = feat_sym + self.roidb = roidb + self.cfg = cfg + self.batch_size = batch_size + self.shuffle = shuffle + self.ctx = ctx + if self.ctx is None: + self.ctx = [mx.cpu()] + self.work_load_list = work_load_list + self.feat_strides = feat_strides + self.anchor_scales = anchor_scales + self.anchor_ratios = anchor_ratios + self.allowed_border = allowed_border + self.aspect_grouping = aspect_grouping + + # infer properties from roidb + self.size = len(roidb) + self.index = np.arange(self.size) + + # decide data and label names + if self.cfg.TRAIN.END2END: + self.data_name = ['data', 'im_info', 'gt_boxes'] + else: + self.data_name = ['data'] + self.feat_pyramid_level = np.log2(self.cfg.network.RPN_FEAT_STRIDE).astype(int) + # self.label_name = ['label_p' + str(x) for x in self.feat_pyramid_level] +\ + # ['bbox_target_p' + str(x) for x in self.feat_pyramid_level] +\ + # ['bbox_weight_p' + str(x) for x in self.feat_pyramid_level] + + self.label_name = ['label', 'bbox_target', 'bbox_weight'] + + # status variable for synchronization between get_data and get_label + self.cur = 0 + self.batch = None + self.data = None + self.label = None + + # get first batch to fill in provide_data and provide_label + self.reset() + #gaihuilai + self.get_batch_parallel() + #self.get_batch() + + @property + def provide_data(self): + return [[(k, v.shape) for k, v in zip(self.data_name, self.data[i])] for i in xrange(len(self.data))] + + @property + def provide_label(self): + return [[(k, v.shape) for k, v in zip(self.label_name, self.label[i])] 
for i in xrange(len(self.data))] + + @property + def provide_data_single(self): + return [(k, v.shape) for k, v in zip(self.data_name, self.data[0])] + + @property + def provide_label_single(self): + return [(k, v.shape) for k, v in zip(self.label_name, self.label[0])] + + def reset(self): + self.cur = 0 + if self.shuffle: + if self.aspect_grouping: + widths = np.array([r['width'] for r in self.roidb]) + heights = np.array([r['height'] for r in self.roidb]) + horz = (widths >= heights) + vert = np.logical_not(horz) + horz_inds = np.where(horz)[0] + vert_inds = np.where(vert)[0] + inds = np.hstack((np.random.permutation(horz_inds), np.random.permutation(vert_inds))) + extra = inds.shape[0] % self.batch_size + inds_ = np.reshape(inds[:-extra], (-1, self.batch_size)) + row_perm = np.random.permutation(np.arange(inds_.shape[0])) + inds[:-extra] = np.reshape(inds_[row_perm, :], (-1,)) + self.index = inds + else: + np.random.shuffle(self.index) + + def iter_next(self): + return self.cur + self.batch_size <= self.size + + def next(self): + if self.iter_next(): + self.get_batch_parallel() + #self.get_batch() + self.cur += self.batch_size + return mx.io.DataBatch(data=self.data, label=self.label, + pad=self.getpad(), index=self.getindex(), + provide_data=self.provide_data, provide_label=self.provide_label) + else: + raise StopIteration + + def getindex(self): + return self.cur / self.batch_size + + def getpad(self): + if self.cur + self.batch_size > self.size: + return self.cur + self.batch_size - self.size + else: + return 0 + + def infer_shape(self, max_data_shape=None, max_label_shape=None): + """ Return maximum data and label shape for single gpu """ + if max_data_shape is None: + max_data_shape = [] + if max_label_shape is None: + max_label_shape = [] + max_shapes = dict(max_data_shape + max_label_shape) + input_batch_size = max_shapes['data'][0] + im_info = [[max_shapes['data'][2], max_shapes['data'][3], 1.0]] + + feat_shape = [y[1] for y in [x.infer_shape(**max_shapes) for x in self.feat_sym]] + label = assign_pyramid_anchor(feat_shape, np.zeros((0, 5)), im_info, self.cfg, + self.feat_strides, self.anchor_scales, self.anchor_ratios, self.allowed_border) + label = [label[k] for k in self.label_name] + label_shape = [(k, tuple([input_batch_size] + list(v.shape[1:]))) for k, v in zip(self.label_name, label)] + + return max_data_shape, label_shape + + def get_batch_parallel(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + # decide multi device slice + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. 
" + slices = _split_input_slice(self.batch_size, work_load_list) + + rst = [] + # print (roidb) + # print (roidb.__len__()) + for idx, islice in enumerate(slices): + #print(range(islice.start, islice.stop)) + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + rst.append(par_assign_anchor_wrapper(self.cfg, iroidb, self.feat_sym, self.feat_strides, self.anchor_scales, + self.anchor_ratios, self.allowed_border)) + + all_data = [_['data'] for _ in rst] + all_label = [_['label'] for _ in rst] + self.data = [[mx.nd.array(data[key]) for key in self.data_name] for data in all_data] + self.label = [[mx.nd.array(label[key]) for key in self.label_name] for label in all_label] + +class PyramidAnchorIterator_poly(mx.io.DataIter): + + # pool = Pool(processes=4) + def __init__(self, feat_sym, roidb, cfg, batch_size=1, shuffle=False, ctx=None, work_load_list=None, + feat_strides=(4, 8, 16, 32, 64), anchor_scales=(8, ), anchor_ratios=(0.5, 1, 2), allowed_border=0, + aspect_grouping=False): + """ + This Iter will provide roi data to Fast R-CNN network + :param feat_sym: to infer shape of assign_output + :param roidb: must be preprocessed + :param batch_size: must divide BATCH_SIZE(128) + :param shuffle: bool + :param ctx: list of contexts + :param work_load_list: list of work load + :param aspect_grouping: group images with similar aspects + :return: AnchorLoader + """ + super(PyramidAnchorIterator_poly, self).__init__() + + # save parameters as properties + self.feat_sym = feat_sym + self.roidb = roidb + self.cfg = cfg + self.batch_size = batch_size + self.shuffle = shuffle + self.ctx = ctx + if self.ctx is None: + self.ctx = [mx.cpu()] + self.work_load_list = work_load_list + self.feat_strides = feat_strides + self.anchor_scales = anchor_scales + self.anchor_ratios = anchor_ratios + self.allowed_border = allowed_border + self.aspect_grouping = aspect_grouping + + # infer properties from roidb + self.size = len(roidb) + self.index = np.arange(self.size) + + # decide data and label names + if self.cfg.TRAIN.END2END: + self.data_name = ['data', 'im_info', 'gt_boxes'] + else: + self.data_name = ['data'] + self.feat_pyramid_level = np.log2(self.cfg.network.RPN_FEAT_STRIDE).astype(int) + # self.label_name = ['label_p' + str(x) for x in self.feat_pyramid_level] +\ + # ['bbox_target_p' + str(x) for x in self.feat_pyramid_level] +\ + # ['bbox_weight_p' + str(x) for x in self.feat_pyramid_level] + + self.label_name = ['label', 'bbox_target', 'bbox_weight'] + + # status variable for synchronization between get_data and get_label + self.cur = 0 + self.batch = None + self.data = None + self.label = None + + # get first batch to fill in provide_data and provide_label + self.reset() + #gaihuilai + self.get_batch_parallel() + #self.get_batch() + + @property + def provide_data(self): + return [[(k, v.shape) for k, v in zip(self.data_name, self.data[i])] for i in xrange(len(self.data))] + + @property + def provide_label(self): + return [[(k, v.shape) for k, v in zip(self.label_name, self.label[i])] for i in xrange(len(self.data))] + + @property + def provide_data_single(self): + return [(k, v.shape) for k, v in zip(self.data_name, self.data[0])] + + @property + def provide_label_single(self): + return [(k, v.shape) for k, v in zip(self.label_name, self.label[0])] + + def reset(self): + self.cur = 0 + if self.shuffle: + if self.aspect_grouping: + widths = np.array([r['width'] for r in self.roidb]) + heights = np.array([r['height'] for r in self.roidb]) + horz = (widths >= heights) + vert = np.logical_not(horz) 
+ horz_inds = np.where(horz)[0] + vert_inds = np.where(vert)[0] + inds = np.hstack((np.random.permutation(horz_inds), np.random.permutation(vert_inds))) + extra = inds.shape[0] % self.batch_size + inds_ = np.reshape(inds[:-extra], (-1, self.batch_size)) + row_perm = np.random.permutation(np.arange(inds_.shape[0])) + inds[:-extra] = np.reshape(inds_[row_perm, :], (-1,)) + self.index = inds + else: + np.random.shuffle(self.index) + + def iter_next(self): + return self.cur + self.batch_size <= self.size + + def next(self): + if self.iter_next(): + self.get_batch_parallel() + #self.get_batch() + self.cur += self.batch_size + return mx.io.DataBatch(data=self.data, label=self.label, + pad=self.getpad(), index=self.getindex(), + provide_data=self.provide_data, provide_label=self.provide_label) + else: + raise StopIteration + + def getindex(self): + return self.cur / self.batch_size + + def getpad(self): + if self.cur + self.batch_size > self.size: + return self.cur + self.batch_size - self.size + else: + return 0 + + def infer_shape(self, max_data_shape=None, max_label_shape=None): + """ Return maximum data and label shape for single gpu """ + if max_data_shape is None: + max_data_shape = [] + if max_label_shape is None: + max_label_shape = [] + max_shapes = dict(max_data_shape + max_label_shape) + input_batch_size = max_shapes['data'][0] + im_info = [[max_shapes['data'][2], max_shapes['data'][3], 1.0]] + + feat_shape = [y[1] for y in [x.infer_shape(**max_shapes) for x in self.feat_sym]] + label = assign_pyramid_anchor_poly(feat_shape, np.zeros((0, 9)), im_info, self.cfg, + self.feat_strides, self.anchor_scales, self.anchor_ratios, self.allowed_border) + label = [label[k] for k in self.label_name] + label_shape = [(k, tuple([input_batch_size] + list(v.shape[1:]))) for k, v in zip(self.label_name, label)] + + return max_data_shape, label_shape + + def get_batch_parallel(self): + cur_from = self.cur + cur_to = min(cur_from + self.batch_size, self.size) + roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)] + # decide multi device slice + work_load_list = self.work_load_list + ctx = self.ctx + if work_load_list is None: + work_load_list = [1] * len(ctx) + assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \ + "Invalid settings for work load. 
" + slices = _split_input_slice(self.batch_size, work_load_list) + + rst = [] + # print (roidb) + # print (roidb.__len__()) + for idx, islice in enumerate(slices): + #print(range(islice.start, islice.stop)) + iroidb = [roidb[i] for i in range(islice.start, islice.stop)] + rst.append(par_assign_anchor_wrapper_poly(self.cfg, iroidb, self.feat_sym, self.feat_strides, self.anchor_scales, + self.anchor_ratios, self.allowed_border)) + + all_data = [_['data'] for _ in rst] + all_label = [_['label'] for _ in rst] + self.data = [[mx.nd.array(data[key]) for key in self.data_name] for data in all_data] + self.label = [[mx.nd.array(label[key]) for key in self.label_name] for label in all_label] \ No newline at end of file diff --git a/fpn/core/metric.py b/fpn/core/metric.py new file mode 100644 index 0000000..6e0ea56 --- /dev/null +++ b/fpn/core/metric.py @@ -0,0 +1,406 @@ +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by JianDing +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np + +def get_rpn_names(): + pred = ['rpn_cls_prob', 'rpn_bbox_loss'] + label = ['rpn_label', 'rpn_bbox_target', 'rpn_bbox_weight'] + return pred, label + + +def get_rcnn_names(cfg): + pred = ['rcnn_cls_prob', 'rcnn_bbox_loss'] + label = ['rcnn_label', 'rcnn_bbox_target', 'rcnn_bbox_weight'] + if cfg.TRAIN.ENABLE_OHEM or cfg.TRAIN.END2END: + pred.append('rcnn_label') + if cfg.TRAIN.END2END: + rpn_pred, rpn_label = get_rpn_names() + pred = rpn_pred + pred + label = rpn_label + return pred, label + +def get_Rroi_names(cfg): + pred, label = get_rcnn_names(cfg) + pred.append('Rroi_rcnn_cls_prob') + pred.append('Rroi_rcnn_bbox_loss') + + if cfg.TRAIN.ENABLE_OHEM or cfg.TRAIN.END2END: + pred.append('Rroi_rcnn_label') + + return pred, label + +class RPNAccMetric(mx.metric.EvalMetric): + def __init__(self): + super(RPNAccMetric, self).__init__('RPNAcc') + self.pred, self.label = get_rpn_names() + + def update(self, labels, preds): + pred = preds[self.pred.index('rpn_cls_prob')] + label = labels[self.label.index('rpn_label')] + + # pred (b, c, p) or (b, c, h, w) + pred_label = mx.ndarray.argmax_channel(pred).asnumpy().astype('int32') + pred_label = pred_label.reshape((pred_label.shape[0], -1)) + # label (b, p) + label = label.asnumpy().astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1) + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(pred_label.flat == label.flat) + self.num_inst += len(pred_label.flat) + +class RPNLogLossMetric(mx.metric.EvalMetric): + def __init__(self): + super(RPNLogLossMetric, self).__init__('RPNLogLoss') + self.pred, self.label = get_rpn_names() + + def update(self, labels, preds): + pred = preds[self.pred.index('rpn_cls_prob')] + label = labels[self.label.index('rpn_label')] + + # label (b, p) + label = label.asnumpy().astype('int32').reshape((-1)) + # pred (b, c, p) or (b, c, h, w) --> (b, p, c) --> (b*p, c) + pred = pred.asnumpy().reshape((pred.shape[0], pred.shape[1], -1)).transpose((0, 2, 1)) + pred = pred.reshape((label.shape[0], -1)) + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * 
np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + +class RPNL1LossMetric(mx.metric.EvalMetric): + def __init__(self): + super(RPNL1LossMetric, self).__init__('RPNL1Loss') + self.pred, self.label = get_rpn_names() + + def update(self, labels, preds): + bbox_loss = preds[self.pred.index('rpn_bbox_loss')].asnumpy() + + # calculate num_inst (average on those kept anchors) + label = labels[self.label.index('rpn_label')].asnumpy() + num_inst = np.sum(label != -1) + + self.sum_metric += np.sum(bbox_loss) + self.num_inst += num_inst + +class RPNFGFraction(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RPNFGFraction, self).__init__('Proposal FG Fraction') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + num_classes = pred.shape[-1] + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + fg_inds = np.where(label > 0)[0] + bg_inds = np.where(label == 0)[0] + self.sum_metric += fg_inds.shape[0] + self.num_inst += (fg_inds.shape[0] + bg_inds.shape[0]) + +class RCNNFGAccuracy(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNFGAccuracy, self).__init__('R-CNN FG Accuracy') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + num_classes = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, num_classes).argmax(axis=1).astype('int32') + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + keep_inds = np.where(label > 0) + # filter out -1 label because of OHEM or invalid samples + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(np.equal(pred_label.flat, label.flat)) + self.num_inst += pred_label.shape[0] + +class RCNNAccMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNAccMetric, self).__init__('RCNNAcc') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + + last_dim = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, last_dim).argmax(axis=1).astype('int32') + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1) + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(pred_label.flat == label.flat) + self.num_inst += len(pred_label.flat) + +class RCNNLogLossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNLogLossMetric, self).__init__('RCNNLogLoss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + pred = 
preds[self.pred.index('rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('rcnn_label')] + else: + label = labels[self.label.index('rcnn_label')] + + last_dim = pred.shape[-1] + pred = pred.asnumpy().reshape(-1, last_dim) + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + +class RCNNL1LossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNL1LossMetric, self).__init__('RCNNL1Loss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + bbox_loss = preds[self.pred.index('rcnn_bbox_loss')].asnumpy() + if self.ohem: + label = preds[self.pred.index('rcnn_label')].asnumpy() + else: + if self.e2e: + label = preds[self.pred.index('rcnn_label')].asnumpy() + else: + label = labels[self.label.index('rcnn_label')].asnumpy() + + # calculate num_inst (average on those kept anchors) + num_inst = np.sum(label != -1) + + self.sum_metric += np.sum(bbox_loss) + self.num_inst += num_inst + +class RCNNFGFraction(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RCNNFGFraction, self).__init__('RRoI Proposal FG Fraction') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label = labels[self.label.index('Rroi_rcnn_label')] + num_classes = pred.shape[-1] + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + fg_inds = np.where(label > 0)[0] + bg_inds = np.where(label == 0)[0] + self.sum_metric += fg_inds.shape[0] + self.num_inst += (fg_inds.shape[0] + bg_inds.shape[0]) + +class RRoIRCNNFGAccuracy(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIRCNNFGAccuracy, self).__init__('RRoIR-CNN FG Accuracy') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label = labels[self.label.index('Rroi_rcnn_label')] + num_classes = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, num_classes).argmax(axis=1).astype('int32') + # selection of ground truth label is different from softmax or sigmoid classifier + label = label.asnumpy().reshape(-1, ).astype('int32') + keep_inds = np.where(label > 0) + # filter out -1 label because of OHEM or invalid samples + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(np.equal(pred_label.flat, label.flat)) + self.num_inst += pred_label.shape[0] + +class RRoIAccMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIAccMetric, self).__init__('RRoIAcc') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label 
= labels[self.label.index('Rroi_rcnn_label')] + + last_dim = pred.shape[-1] + pred_label = pred.asnumpy().reshape(-1, last_dim).argmax(axis=1).astype('int32') + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1) + pred_label = pred_label[keep_inds] + label = label[keep_inds] + + self.sum_metric += np.sum(pred_label.flat == label.flat) + self.num_inst += len(pred_label.flat) + +class RRoIRCNNLogLossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIRCNNLogLossMetric, self).__init__('RRoIRCNNLogLoss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + pred = preds[self.pred.index('Rroi_rcnn_cls_prob')] + if self.ohem or self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')] + else: + label = labels[self.label.index('Rroi_rcnn_label')] + + last_dim = pred.shape[-1] + pred = pred.asnumpy().reshape(-1, last_dim) + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + +class RRoIRCNNL1LossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(RRoIRCNNL1LossMetric, self).__init__('RRoIRCNNL1Loss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_Rroi_names(cfg) + + def update(self, labels, preds): + # pdb.set_trace() + bbox_loss = preds[self.pred.index('Rroi_rcnn_bbox_loss')].asnumpy() + if self.ohem: + label = preds[self.pred.index('Rroi_rcnn_label')].asnumpy() + else: + if self.e2e: + label = preds[self.pred.index('Rroi_rcnn_label')].asnumpy() + else: + label = labels[self.label.index('Rroi_rcnn_label')].asnumpy() + # pdb.set_trace() + # calculate num_inst (average on those kept anchors) + num_inst = np.sum(label != -1) + + self.sum_metric += np.sum(bbox_loss) + self.num_inst += num_inst + +class STLogLossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(STLogLossMetric, self).__init__('STLogLoss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + + def update(self, labels, preds): + # pred = preds[self.pred.index('rcnn_cls_prob')] + # if self.ohem or self.e2e: + # label = preds[self.pred.index('rcnn_label')] + # else: + # label = labels[self.label.index('rcnn_label')] + + pred = preds[-1] + label = preds[-2] + + last_dim = pred.shape[-1] + pred = pred.asnumpy().reshape(-1, last_dim) + label = label.asnumpy().reshape(-1,).astype('int32') + + # filter with keep_inds + keep_inds = np.where(label != -1)[0] + label = label[keep_inds] + cls = pred[keep_inds, label] + + cls += 1e-14 + cls_loss = -1 * np.log(cls) + cls_loss = np.sum(cls_loss) + self.sum_metric += cls_loss + self.num_inst += label.shape[0] + + +class STL1LossMetric(mx.metric.EvalMetric): + def __init__(self, cfg): + super(STL1LossMetric, self).__init__('STL1Loss') + self.e2e = cfg.TRAIN.END2END + self.ohem = cfg.TRAIN.ENABLE_OHEM + self.pred, self.label = get_rcnn_names(cfg) + # add st loss here + self.pred.append('st_loss') + + def update(self, labels, preds): + st_loss = preds[self.pred.index('st_loss')].asnumpy() + + label = preds[-2].asnumpy() + # pdb.set_trace() + num_inst = np.sum(label != 0) + + self.sum_metric += np.sum(st_loss) + 
self.num_inst += num_inst \ No newline at end of file diff --git a/fpn/core/module.py b/fpn/core/module.py new file mode 100644 index 0000000..0aae9bd --- /dev/null +++ b/fpn/core/module.py @@ -0,0 +1,1086 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Zheng Zhang +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +"""A `MutableModule` implement the `BaseModule` API, and allows input shape +varying with training iterations. If shapes vary, executors will rebind, +using shared arrays from the initial module binded with maximum shape. +""" + +import time +import logging +import warnings + +from mxnet import context as ctx +from mxnet.initializer import Uniform, InitDesc +from mxnet.module.base_module import BaseModule, _check_input_names, _parse_data_desc, _as_list +from mxnet.model import _create_kvstore, _initialize_kvstore, _update_params, _update_params_on_kvstore, load_checkpoint, BatchEndParam +from mxnet import metric +# from mxnet.module.executor_group import DataParallelExecutorGroup + +from .DataParallelExecutorGroup import DataParallelExecutorGroup +from mxnet import ndarray as nd +from mxnet import optimizer as opt +import sys + +class Module(BaseModule): + """Module is a basic module that wrap a `Symbol`. It is functionally the same + as the `FeedForward` model, except under the module API. + + Parameters + ---------- + symbol : Symbol + data_names : list of str + Default is `('data')` for a typical model used in image classification. + label_names : list of str + Default is `('softmax_label')` for a typical model used in image + classification. + logger : Logger + Default is `logging`. + context : Context or list of Context + Default is `cpu()`. + work_load_list : list of number + Default `None`, indicating uniform workload. + fixed_param_names: list of str + Default `None`, indicating no network parameters are fixed. + state_names : list of str + states are similar to data and label, but not provided by data iterator. 
+ Instead they are initialized to 0 and can be set by set_states() + """ + def __init__(self, symbol, data_names=('data',), label_names=('softmax_label',), + logger=logging, context=ctx.cpu(), work_load_list=None, + fixed_param_names=None, state_names=None): + super(Module, self).__init__(logger=logger) + + if isinstance(context, ctx.Context): + context = [context] + self._context = context + if work_load_list is None: + work_load_list = [1] * len(self._context) + assert len(work_load_list) == len(self._context) + self._work_load_list = work_load_list + + self._symbol = symbol + + data_names = list(data_names) if data_names is not None else [] + label_names = list(label_names) if label_names is not None else [] + state_names = list(state_names) if state_names is not None else [] + fixed_param_names = list(fixed_param_names) if fixed_param_names is not None else [] + + _check_input_names(symbol, data_names, "data", True) + _check_input_names(symbol, label_names, "label", False) + _check_input_names(symbol, state_names, "state", True) + _check_input_names(symbol, fixed_param_names, "fixed_param", True) + + arg_names = symbol.list_arguments() + input_names = data_names + label_names + state_names + self._param_names = [x for x in arg_names if x not in input_names] + self._fixed_param_names = fixed_param_names + self._aux_names = symbol.list_auxiliary_states() + self._data_names = data_names + self._label_names = label_names + self._state_names = state_names + self._output_names = symbol.list_outputs() + + self._arg_params = None + self._aux_params = None + self._params_dirty = False + + self._optimizer = None + self._kvstore = None + self._update_on_kvstore = None + self._updater = None + self._preload_opt_states = None + self._grad_req = None + + self._exec_group = None + self._data_shapes = None + self._label_shapes = None + + @staticmethod + def load(prefix, epoch, load_optimizer_states=False, **kwargs): + """Create a model from previously saved checkpoint. + + Parameters + ---------- + prefix : str + path prefix of saved model files. You should have + "prefix-symbol.json", "prefix-xxxx.params", and + optionally "prefix-xxxx.states", where xxxx is the + epoch number. + epoch : int + epoch to load. + load_optimizer_states : bool + whether to load optimizer states. Checkpoint needs + to have been made with save_optimizer_states=True. + data_names : list of str + Default is `('data')` for a typical model used in image classification. + label_names : list of str + Default is `('softmax_label')` for a typical model used in image + classification. + logger : Logger + Default is `logging`. + context : Context or list of Context + Default is `cpu()`. + work_load_list : list of number + Default `None`, indicating uniform workload. + fixed_param_names: list of str + Default `None`, indicating no network parameters are fixed. + """ + sym, args, auxs = load_checkpoint(prefix, epoch) + mod = Module(symbol=sym, **kwargs) + mod._arg_params = args + mod._aux_params = auxs + mod.params_initialized = True + if load_optimizer_states: + mod._preload_opt_states = '%s-%04d.states'%(prefix, epoch) + return mod + + def save_checkpoint(self, prefix, epoch, save_optimizer_states=False): + """Save current progress to checkpoint. + Use mx.callback.module_checkpoint as epoch_end_callback to save during training. 
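+        A minimal illustration (``mod`` is a trained `Module`; the prefix is only a
+        placeholder)::
+            >>> mod.save_checkpoint('output/fpn_dota', 8)
+        This writes 'output/fpn_dota-symbol.json' and 'output/fpn_dota-0008.params'.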
+ + Parameters + ---------- + prefix : str + The file prefix to checkpoint to + epoch : int + The current epoch number + save_optimizer_states : bool + Whether to save optimizer states for continue training + """ + self._symbol.save('%s-symbol.json'%prefix) + param_name = '%s-%04d.params' % (prefix, epoch) + self.save_params(param_name) + logging.info('Saved checkpoint to \"%s\"', param_name) + if save_optimizer_states: + state_name = '%s-%04d.states' % (prefix, epoch) + self.save_optimizer_states(state_name) + logging.info('Saved optimizer state to \"%s\"', state_name) + + def _reset_bind(self): + """Internal function to reset binded state.""" + self.binded = False + self._exec_group = None + self._data_shapes = None + self._label_shapes = None + + @property + def data_names(self): + """A list of names for data required by this module.""" + return self._data_names + + @property + def label_names(self): + """A list of names for labels required by this module.""" + return self._label_names + + @property + def output_names(self): + """A list of names for the outputs of this module.""" + return self._output_names + + @property + def data_shapes(self): + """Get data shapes. + Returns + ------- + A list of `(name, shape)` pairs. + """ + assert self.binded + return self._data_shapes + + @property + def label_shapes(self): + """Get label shapes. + Returns + ------- + A list of `(name, shape)` pairs. The return value could be `None` if + the module does not need labels, or if the module is not binded for + training (in this case, label information is not available). + """ + assert self.binded + return self._label_shapes + + @property + def output_shapes(self): + """Get output shapes. + Returns + ------- + A list of `(name, shape)` pairs. + """ + assert self.binded + return self._exec_group.get_output_shapes() + + def get_params(self): + """Get current parameters. + Returns + ------- + `(arg_params, aux_params)`, each a dictionary of name to parameters (in + `NDArray`) mapping. + """ + assert self.binded and self.params_initialized + + if self._params_dirty: + self._sync_params_from_devices() + return (self._arg_params, self._aux_params) + + def init_params(self, initializer=Uniform(0.01), arg_params=None, aux_params=None, + allow_missing=False, force_init=False, allow_extra=False): + """Initialize the parameters and auxiliary states. + + Parameters + ---------- + initializer : Initializer + Called to initialize parameters if needed. + arg_params : dict + If not None, should be a dictionary of existing arg_params. Initialization + will be copied from that. + aux_params : dict + If not None, should be a dictionary of existing aux_params. Initialization + will be copied from that. + allow_missing : bool + If true, params could contain missing values, and the initializer will be + called to fill those missing params. + force_init : bool + If true, will force re-initialize even if already initialized. + """ + if self.params_initialized and not force_init: + warnings.warn("Parameters already initialized and force_init=False. 
" + "init_params call ignored.", stacklevel=2) + return + assert self.binded, 'call bind before initializing the parameters' + + def _impl(name, arr, cache): + """Internal helper for parameter initialization""" + if cache is not None: + if name in cache: + cache_arr = cache[name] + + # just in case the cached array is just the target itself + if cache_arr is not arr: + cache_arr.copyto(arr) + else: + if not allow_missing: + raise RuntimeError("%s is not presented" % name) + if initializer != None: + initializer(name, arr) + else: + initializer(name, arr) + + attrs = self._symbol.attr_dict() + for name, arr in self._arg_params.items(): + desc = InitDesc(name, attrs.get(name, None)) + _impl(desc, arr, arg_params) + + for name, arr in self._aux_params.items(): + desc = InitDesc(name, attrs.get(name, None)) + _impl(desc, arr, aux_params) + + self.params_initialized = True + self._params_dirty = False + + # copy the initialized parameters to devices + self._exec_group.set_params(self._arg_params, self._aux_params) + + def set_params(self, arg_params, aux_params, allow_missing=False, force_init=True): + """Assign parameter and aux state values. + + Parameters + ---------- + arg_params : dict + Dictionary of name to value (`NDArray`) mapping. + aux_params : dict + Dictionary of name to value (`NDArray`) mapping. + allow_missing : bool + If true, params could contain missing values, and the initializer will be + called to fill those missing params. + force_init : bool + If true, will force re-initialize even if already initialized. + + Examples + -------- + An example of setting module parameters:: + >>> sym, arg_params, aux_params = \ + >>> mx.model.load_checkpoint(model_prefix, n_epoch_load) + >>> mod.set_params(arg_params=arg_params, aux_params=aux_params) + """ + if not allow_missing: + self.init_params(initializer=None, arg_params=arg_params, aux_params=aux_params, + allow_missing=allow_missing, force_init=force_init) + return + + if self.params_initialized and not force_init: + warnings.warn("Parameters already initialized and force_init=False. " + "set_params call ignored.", stacklevel=2) + return + + self._exec_group.set_params(arg_params, aux_params) + + # because we didn't update self._arg_params, they are dirty now. + self._params_dirty = True + self.params_initialized = True + + def bind(self, data_shapes, label_shapes=None, for_training=True, + inputs_need_grad=False, force_rebind=False, shared_module=None, + grad_req='write'): + """Bind the symbols to construct executors. This is necessary before one + can perform computation with the module. + + Parameters + ---------- + data_shapes : list of (str, tuple) + Typically is `data_iter.provide_data`. + label_shapes : list of (str, tuple) + Typically is `data_iter.provide_label`. + for_training : bool + Default is `True`. Whether the executors should be bind for training. + inputs_need_grad : bool + Default is `False`. Whether the gradients to the input data need to be computed. + Typically this is not needed. But this might be needed when implementing composition + of modules. + force_rebind : bool + Default is `False`. This function does nothing if the executors are already + binded. But with this `True`, the executors will be forced to rebind. + shared_module : Module + Default is `None`. This is used in bucketing. When not `None`, the shared module + essentially corresponds to a different bucket -- a module with different symbol + but with the same sets of parameters (e.g. unrolled RNNs with different lengths). 
+ """ + # force rebinding is typically used when one want to switch from + # training to prediction phase. + if force_rebind: + self._reset_bind() + + if self.binded: + self.logger.warning('Already binded, ignoring bind()') + return + + self.for_training = for_training + self.inputs_need_grad = inputs_need_grad + self.binded = True + self._grad_req = grad_req + + if not for_training: + assert not inputs_need_grad + else: + pass + # this is not True, as some module might not contains a loss function + # that consumes the labels + # assert label_shapes is not None + + # self._data_shapes, self._label_shapes = _parse_data_desc( + # self.data_names, self.label_names, data_shapes, label_shapes) + self._data_shapes, self._label_shapes = zip(*[_parse_data_desc(self.data_names, self.label_names, data_shape, label_shape) + for data_shape, label_shape in zip(data_shapes, label_shapes)]) + if self._label_shapes.count(None) == len(self._label_shapes): + self._label_shapes = None + + if shared_module is not None: + assert isinstance(shared_module, Module) and \ + shared_module.binded and shared_module.params_initialized + shared_group = shared_module._exec_group + else: + shared_group = None + self._exec_group = DataParallelExecutorGroup(self._symbol, self._context, + self._work_load_list, self._data_shapes, + self._label_shapes, self._param_names, + for_training, inputs_need_grad, + shared_group, logger=self.logger, + fixed_param_names=self._fixed_param_names, + grad_req=grad_req, + state_names=self._state_names) + # self._total_exec_bytes = self._exec_group._total_exec_bytes + if shared_module is not None: + self.params_initialized = True + self._arg_params = shared_module._arg_params + self._aux_params = shared_module._aux_params + elif self.params_initialized: + # if the parameters are already initialized, we are re-binding + # so automatically copy the already initialized params + self._exec_group.set_params(self._arg_params, self._aux_params) + else: + assert self._arg_params is None and self._aux_params is None + param_arrays = [ + nd.zeros(x[0].shape, dtype=x[0].dtype) + for x in self._exec_group.param_arrays + ] + self._arg_params = {name:arr for name, arr in zip(self._param_names, param_arrays)} + + aux_arrays = [ + nd.zeros(x[0].shape, dtype=x[0].dtype) + for x in self._exec_group.aux_arrays + ] + self._aux_params = {name:arr for name, arr in zip(self._aux_names, aux_arrays)} + + if shared_module is not None and shared_module.optimizer_initialized: + self.borrow_optimizer(shared_module) + + + def reshape(self, data_shapes, label_shapes=None): + """Reshape the module for new input shapes. + + Parameters + ---------- + data_shapes : list of (str, tuple) + Typically is `data_iter.provide_data`. + label_shapes : list of (str, tuple) + Typically is `data_iter.provide_label`. + """ + assert self.binded + # self._data_shapes, self._label_shapes = _parse_data_desc( + # self.data_names, self.label_names, data_shapes, label_shapes) + self._data_shapes, self._label_shapes = zip(*[_parse_data_desc(self.data_names, self.label_names, data_shape, label_shape) + for data_shape, label_shape in zip(data_shapes, label_shapes)]) + + self._exec_group.reshape(self._data_shapes, self._label_shapes) + + + def init_optimizer(self, kvstore='local', optimizer='sgd', + optimizer_params=(('learning_rate', 0.01),), force_init=False): + """Install and initialize optimizers. + + Parameters + ---------- + kvstore : str or KVStore + Default `'local'`. 
+ optimizer : str or Optimizer + Default `'sgd'` + optimizer_params : dict + Default `(('learning_rate', 0.01),)`. The default value is not a dictionary, + just to avoid pylint warning of dangerous default values. + force_init : bool + Default `False`, indicating whether we should force re-initializing the + optimizer in the case an optimizer is already installed. + """ + assert self.binded and self.params_initialized + + if self.optimizer_initialized and not force_init: + self.logger.warning('optimizer already initialized, ignoring...') + return + + (kvstore, update_on_kvstore) = \ + _create_kvstore(kvstore, len(self._context), self._arg_params) + + batch_size = self._exec_group.batch_size + if kvstore and 'dist' in kvstore.type and '_sync' in kvstore.type: + batch_size *= kvstore.num_workers + rescale_grad = 1.0/batch_size + + if isinstance(optimizer, str): + idx2name = {} + if update_on_kvstore: + idx2name.update(enumerate(self._exec_group.param_names)) + else: + for k in range(len(self._context)): + idx2name.update({i*len(self._context)+k: n + for i, n in enumerate(self._exec_group.param_names)}) + optimizer_params = dict(optimizer_params) + if 'rescale_grad' not in optimizer_params: + optimizer_params['rescale_grad'] = rescale_grad + optimizer = opt.create(optimizer, + sym=self.symbol, param_idx2name=idx2name, + **optimizer_params) + else: + assert isinstance(optimizer, opt.Optimizer) + if optimizer.rescale_grad != rescale_grad: + #pylint: disable=no-member + warnings.warn( + "Optimizer created manually outside Module but rescale_grad " + + "is not normalized to 1.0/batch_size/num_workers (%s vs. %s). "%( + optimizer.rescale_grad, rescale_grad) + + "Is this intended?", stacklevel=2) + + self._optimizer = optimizer + self._kvstore = kvstore + self._update_on_kvstore = update_on_kvstore + self._updater = None + + if kvstore: + # copy initialized local parameters to kvstore + _initialize_kvstore(kvstore=kvstore, + param_arrays=self._exec_group.param_arrays, + arg_params=self._arg_params, + param_names=self._param_names, + update_on_kvstore=update_on_kvstore) + if update_on_kvstore: + kvstore.set_optimizer(self._optimizer) + else: + self._updater = opt.get_updater(optimizer) + + self.optimizer_initialized = True + + if self._preload_opt_states is not None: + self.load_optimizer_states(self._preload_opt_states) + self._preload_opt_states = None + + def borrow_optimizer(self, shared_module): + """Borrow optimizer from a shared module. Used in bucketing, where exactly the same + optimizer (esp. kvstore) is used. + + Parameters + ---------- + shared_module : Module + """ + assert shared_module.optimizer_initialized + self._optimizer = shared_module._optimizer + self._kvstore = shared_module._kvstore + self._update_on_kvstore = shared_module._update_on_kvstore + self._updater = shared_module._updater + self.optimizer_initialized = True + + def forward(self, data_batch, is_train=None): + """Forward computation. + + Parameters + ---------- + data_batch : DataBatch + Could be anything with similar API implemented. + is_train : bool + Default is `None`, which means `is_train` takes the value of `self.for_training`. + """ + assert self.binded and self.params_initialized + self._exec_group.forward(data_batch, is_train) + + def backward(self, out_grads=None): + """Backward computation. + + Parameters + ---------- + out_grads : NDArray or list of NDArray, optional + Gradient on the outputs to be propagated back. 
+ This parameter is only needed when bind is called + on outputs that are not a loss function. + """ + assert self.binded and self.params_initialized + self._exec_group.backward(out_grads=out_grads) + + def update(self): + """Update parameters according to the installed optimizer and the gradients computed + in the previous forward-backward batch. + """ + assert self.binded and self.params_initialized and self.optimizer_initialized + + self._params_dirty = True + if self._update_on_kvstore: + try: + _update_params_on_kvstore(self._exec_group.param_arrays, + self._exec_group.grad_arrays, + self._kvstore) + except: + _update_params_on_kvstore(self._exec_group.param_arrays, + self._exec_group.grad_arrays, + self._kvstore, param_names=self._exec_group.param_names) + else: + _update_params(self._exec_group.param_arrays, + self._exec_group.grad_arrays, + updater=self._updater, + num_device=len(self._context), + kvstore=self._kvstore) + + def get_outputs(self, merge_multi_context=True): + """Get outputs of the previous forward computation. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.binded and self.params_initialized + return self._exec_group.get_outputs(merge_multi_context=merge_multi_context) + + def get_input_grads(self, merge_multi_context=True): + """Get the gradients with respect to the inputs of the module. + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the outputs + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[grad1, grad2]`. Otherwise, it + is like `[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.binded and self.params_initialized and self.inputs_need_grad + return self._exec_group.get_input_grads(merge_multi_context=merge_multi_context) + + def get_states(self, merge_multi_context=True): + """Get states from all devices + + Parameters + ---------- + merge_multi_context : bool + Default is `True`. In the case when data-parallelism is used, the states + will be collected from multiple devices. A `True` value indicate that we + should merge the collected results so that they look like from a single + executor. + + Returns + ------- + If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it + is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output + elements are `NDArray`. + """ + assert self.binded and self.params_initialized + return self._exec_group.get_states(merge_multi_context=merge_multi_context) + + def set_states(self, states=None, value=None): + """Set value for states. Only one of states & value can be specified. + + Parameters + ---------- + states : list of list of NDArrays + source states arrays formatted like [[state1_dev1, state1_dev2], + [state2_dev1, state2_dev2]]. + value : number + a single scalar value for all state arrays. 
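+
+        Examples
+        --------
+        A minimal sketch; ``mod`` is assumed to be a bound module whose symbol
+        defines state arrays (e.g. an unrolled RNN)::
+            >>> mod.set_states(value=0)           # reset every state array to zero
+            >>> states = mod.get_states(merge_multi_context=False)
+            >>> mod.set_states(states=states)     # push explicit per-device values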
+ """ + assert self.binded and self.params_initialized + self._exec_group.set_states(states, value) + + def update_metric(self, eval_metric, labels): + """Evaluate and accumulate evaluation metric on outputs of the last forward computation. + + Parameters + ---------- + eval_metric : EvalMetric + labels : list of NDArray + Typically `data_batch.label`. + """ + self._exec_group.update_metric(eval_metric, labels) + + def _sync_params_from_devices(self): + """Synchronize parameters from devices to CPU. This function should be called after + calling `update` that updates the parameters on the devices, before one can read the + latest parameters from `self._arg_params` and `self._aux_params`. + """ + self._exec_group.get_params(self._arg_params, self._aux_params) + self._params_dirty = False + + def save_optimizer_states(self, fname): + """Save optimizer (updater) state to file + + Parameters + ---------- + fname : str + Path to output states file. + """ + assert self.optimizer_initialized + + if self._update_on_kvstore: + self._kvstore.save_optimizer_states(fname) + else: + with open(fname, 'wb') as fout: + fout.write(self._updater.get_states()) + + def load_optimizer_states(self, fname): + """Load optimizer (updater) state from file + + Parameters + ---------- + fname : str + Path to input states file. + """ + assert self.optimizer_initialized + + if self._update_on_kvstore: + self._kvstore.load_optimizer_states(fname) + else: + self._updater.set_states(open(fname, 'rb').read()) + + def install_monitor(self, mon): + """ Install monitor on all executors """ + assert self.binded + self._exec_group.install_monitor(mon) + + + + +class MutableModule(BaseModule): + """A mutable module is a module that supports variable input data. + + Parameters + ---------- + symbol : Symbol + data_names : list of str + label_names : list of str + logger : Logger + context : Context or list of Context + work_load_list : list of number + max_data_shapes : list of (name, shape) tuple, designating inputs whose shape vary + max_label_shapes : list of (name, shape) tuple, designating inputs whose shape vary + fixed_param_prefix : list of str, indicating fixed parameters + """ + def __init__(self, symbol, data_names, label_names, + logger=logging, context=ctx.cpu(), work_load_list=None, + max_data_shapes=None, max_label_shapes=None, fixed_param_prefix=None): + super(MutableModule, self).__init__(logger=logger) + self._symbol = symbol + self._data_names = data_names + self._label_names = label_names + self._context = context + self._work_load_list = work_load_list + + self._curr_module = None + self._max_data_shapes = max_data_shapes + self._max_label_shapes = max_label_shapes + self._fixed_param_prefix = fixed_param_prefix + + fixed_param_names = list() + if fixed_param_prefix is not None: + for name in self._symbol.list_arguments(): + for prefix in self._fixed_param_prefix: + if prefix in name: + fixed_param_names.append(name) + self._fixed_param_names = fixed_param_names + self._preload_opt_states = None + + def _reset_bind(self): + self.binded = False + self._curr_module = None + + @property + def data_names(self): + return self._data_names + + @property + def output_names(self): + return self._symbol.list_outputs() + + @property + def data_shapes(self): + assert self.binded + return self._curr_module.data_shapes + + @property + def label_shapes(self): + assert self.binded + return self._curr_module.label_shapes + + @property + def output_shapes(self): + assert self.binded + return self._curr_module.output_shapes + + 
def get_params(self): + assert self.binded and self.params_initialized + return self._curr_module.get_params() + + def init_params(self, initializer=Uniform(0.01), arg_params=None, aux_params=None, + allow_missing=False, force_init=False, allow_extra=False): + if self.params_initialized and not force_init: + return + assert self.binded, 'call bind before initializing the parameters' + self._curr_module.init_params(initializer=initializer, arg_params=arg_params, + aux_params=aux_params, allow_missing=allow_missing, + force_init=force_init) + self.params_initialized = True + + def bind(self, data_shapes, label_shapes=None, for_training=True, + inputs_need_grad=False, force_rebind=False, shared_module=None, grad_req='write'): + # in case we already initialized params, keep it + if self.params_initialized: + arg_params, aux_params = self.get_params() + + # force rebinding is typically used when one want to switch from + # training to prediction phase. + if force_rebind: + self._reset_bind() + + if self.binded: + self.logger.warning('Already binded, ignoring bind()') + return + + assert shared_module is None, 'shared_module for MutableModule is not supported' + + self.for_training = for_training + self.inputs_need_grad = inputs_need_grad + self.binded = True + + max_shapes_dict = dict() + if self._max_data_shapes is not None: + max_shapes_dict.update(dict(self._max_data_shapes[0])) + if self._max_label_shapes is not None: + max_shapes_dict.update(dict(self._max_label_shapes[0])) + + max_data_shapes = list() + for name, shape in data_shapes[0]: + if name in max_shapes_dict: + max_data_shapes.append((name, max_shapes_dict[name])) + else: + max_data_shapes.append((name, shape)) + + max_label_shapes = list() + if not label_shapes.count(None) == len(label_shapes): + for name, shape in label_shapes[0]: + if name in max_shapes_dict: + max_label_shapes.append((name, max_shapes_dict[name])) + else: + max_label_shapes.append((name, shape)) + + if len(max_label_shapes) == 0: + max_label_shapes = None + + module = Module(self._symbol, self._data_names, self._label_names, logger=self.logger, + context=self._context, work_load_list=self._work_load_list, + fixed_param_names=self._fixed_param_names) + module.bind([max_data_shapes for _ in range(len(self._context))], [max_label_shapes for _ in range(len(self._context))], + for_training, inputs_need_grad, force_rebind=False, shared_module=None) + self._curr_module = module + + # copy back saved params, if already initialized + if self.params_initialized: + self.set_params(arg_params, aux_params) + + def save_checkpoint(self, prefix, epoch, save_optimizer_states=False): + """Save current progress to checkpoint. + Use mx.callback.module_checkpoint as epoch_end_callback to save during training. 
+ + Parameters + ---------- + prefix : str + The file prefix to checkpoint to + epoch : int + The current epoch number + save_optimizer_states : bool + Whether to save optimizer states for continue training + """ + self._curr_module.save_checkpoint(prefix, epoch, save_optimizer_states) + + def init_optimizer(self, kvstore='local', optimizer='sgd', + optimizer_params=(('learning_rate', 0.01),), force_init=False): + assert self.binded and self.params_initialized + if self.optimizer_initialized and not force_init: + self.logger.warning('optimizer already initialized, ignoring.') + return + + self._curr_module._preload_opt_states = self._preload_opt_states + self._curr_module.init_optimizer(kvstore, optimizer, optimizer_params, + force_init=force_init) + self.optimizer_initialized = True + + def fit(self, train_data, eval_data=None, eval_metric='acc', + epoch_end_callback=None, batch_end_callback=None, kvstore='local', + optimizer='sgd', optimizer_params=(('learning_rate', 0.01),), + eval_end_callback=None, + eval_batch_end_callback=None, initializer=Uniform(0.01), + arg_params=None, aux_params=None, allow_missing=False, + force_rebind=False, force_init=False, begin_epoch=0, num_epoch=None, + validation_metric=None, monitor=None, prefix=None, state=None): + """Train the module parameters. + + Parameters + ---------- + train_data : DataIter + eval_data : DataIter + If not `None`, will be used as validation set and evaluate the performance + after each epoch. + eval_metric : str or EvalMetric + Default `'acc'`. The performance measure used to display during training. + epoch_end_callback : function or list of function + Each callback will be called with the current `epoch`, `symbol`, `arg_params` + and `aux_params`. + batch_end_callback : function or list of function + Each callback will be called with a `BatchEndParam`. + kvstore : str or KVStore + Default `'local'`. + optimizer : str or Optimizer + Default `'sgd'` + optimizer_params : dict + Default `(('learning_rate', 0.01),)`. The parameters for the optimizer constructor. + The default value is not a `dict`, just to avoid pylint warning on dangerous + default values. + eval_end_callback : function or list of function + These will be called at the end of each full evaluation, with the metrics over + the entire evaluation set. + eval_batch_end_callback : function or list of function + These will be called at the end of each minibatch during evaluation + initializer : Initializer + Will be called to initialize the module parameters if not already initialized. + arg_params : dict + Default `None`, if not `None`, should be existing parameters from a trained + model or loaded from a checkpoint (previously saved model). In this case, + the value here will be used to initialize the module parameters, unless they + are already initialized by the user via a call to `init_params` or `fit`. + `arg_params` has higher priority to `initializer`. + aux_params : dict + Default `None`. Similar to `arg_params`, except for auxiliary states. + allow_missing : bool + Default `False`. Indicate whether we allow missing parameters when `arg_params` + and `aux_params` are not `None`. If this is `True`, then the missing parameters + will be initialized via the `initializer`. + force_rebind : bool + Default `False`. Whether to force rebinding the executors if already binded. + force_init : bool + Default `False`. Indicate whether we should force initialization even if the + parameters are already initialized. + begin_epoch : int + Default `0`. 
Indicate the starting epoch. Usually, if we are resuming from a + checkpoint saved at a previous training phase at epoch N, then we should specify + this value as N+1. + num_epoch : int + Number of epochs to run training. + + Examples + -------- + An example of using fit for training:: + >>> #Assume training dataIter and validation dataIter are ready + >>> mod.fit(train_data=train_dataiter, eval_data=val_dataiter, + optimizer_params={'learning_rate':0.01, 'momentum': 0.9}, + num_epoch=10) + """ + assert num_epoch is not None, 'please specify number of epochs' + + self.bind(data_shapes=train_data.provide_data, label_shapes=train_data.provide_label, + for_training=True, force_rebind=force_rebind) + if monitor is not None: + self.install_monitor(monitor) + self.init_params(initializer=initializer, arg_params=arg_params, aux_params=aux_params, + allow_missing=allow_missing, force_init=force_init) + self.init_optimizer(kvstore=kvstore, optimizer=optimizer, + optimizer_params=optimizer_params) + if state is not None: + self._curr_module.load_optimizer_states(state) + + if validation_metric is None: + validation_metric = eval_metric + if not isinstance(eval_metric, metric.EvalMetric): + eval_metric = metric.create(eval_metric) + + ################################################################################ + # training loop + ################################################################################ + for epoch in range(begin_epoch, num_epoch): + tic = time.time() + eval_metric.reset() + ct = 0 + for nbatch, data_batch in enumerate(train_data): + if monitor is not None: + monitor.tic() + self.forward_backward(data_batch) + self.update() + ct = ct + 1 + # print 'ct: ', ct + # pdb.set_trace() + if ct % 50 == 0: + ct = 0 + self.update_metric(eval_metric, data_batch.label) + sys.stdout.flush() + if monitor is not None: + monitor.toc_print() + + if batch_end_callback is not None: + batch_end_params = BatchEndParam(epoch=epoch, nbatch=nbatch, + eval_metric=eval_metric, + locals=locals()) + for callback in _as_list(batch_end_callback): + callback(batch_end_params) + + # one epoch of training is finished + for name, val in eval_metric.get_name_value(): + self.logger.info('Epoch[%d] Train-%s=%f', epoch, name, val) + toc = time.time() + self.logger.info('Epoch[%d] Time cost=%.3f', epoch, (toc-tic)) + + # sync aux params across devices + arg_params, aux_params = self.get_params() + self.set_params(arg_params, aux_params) + + if epoch_end_callback is not None: + for callback in _as_list(epoch_end_callback): + callback(epoch, self.symbol, arg_params, aux_params) + + #---------------------------------------- + # evaluation on validation set + if eval_data: + res = self.score(eval_data, validation_metric, + score_end_callback=eval_end_callback, + batch_end_callback=eval_batch_end_callback, epoch=epoch) + #TODO: pull this into default + for name, val in res: + self.logger.info('Epoch[%d] Validation-%s=%f', epoch, name, val) + + # end of 1 epoch, reset the data-iter for another epoch + train_data.reset() + + + def forward(self, data_batch, is_train=None): + assert self.binded and self.params_initialized + + # get current_shapes + if self._curr_module.label_shapes is not None: + current_shapes = [dict(self._curr_module.data_shapes[i] + self._curr_module.label_shapes[i]) for i in range(len(self._context))] + else: + current_shapes = [dict(self._curr_module.data_shapes[i]) for i in range(len(self._context))] + + # get input_shapes + if is_train: + input_shapes = [dict(data_batch.provide_data[i] + 
data_batch.provide_label[i]) for i in range(len(self._context))] + else: + input_shapes = [dict(data_batch.provide_data[i]) for i in range(len(data_batch.provide_data))] + + # decide if shape changed + shape_changed = len(current_shapes) != len(input_shapes) + for pre, cur in zip(current_shapes, input_shapes): + for k, v in pre.items(): + if v != cur[k]: + shape_changed = True + + if shape_changed: + # self._curr_module.reshape(data_batch.provide_data, data_batch.provide_label) + module = Module(self._symbol, self._data_names, self._label_names, + logger=self.logger, context=[self._context[i] for i in range(len(data_batch.provide_data))], + work_load_list=self._work_load_list, + fixed_param_names=self._fixed_param_names) + module.bind(data_batch.provide_data, data_batch.provide_label, self._curr_module.for_training, + self._curr_module.inputs_need_grad, force_rebind=False, + shared_module=self._curr_module) + self._curr_module = module + + self._curr_module.forward(data_batch, is_train=is_train) + + def backward(self, out_grads=None): + assert self.binded and self.params_initialized + self._curr_module.backward(out_grads=out_grads) + + def update(self): + assert self.binded and self.params_initialized and self.optimizer_initialized + self._curr_module.update() + + def get_outputs(self, merge_multi_context=True): + assert self.binded and self.params_initialized + return self._curr_module.get_outputs(merge_multi_context=merge_multi_context) + def get_input_grads(self, merge_multi_context=True): + assert self.binded and self.params_initialized and self.inputs_need_grad + return self._curr_module.get_input_grads(merge_multi_context=merge_multi_context) + + def update_metric(self, eval_metric, labels): + assert self.binded and self.params_initialized + self._curr_module.update_metric(eval_metric, labels) + + def install_monitor(self, mon): + """ Install monitor on all executors """ + assert self.binded + self._curr_module.install_monitor(mon) diff --git a/fpn/core/rcnn.py b/fpn/core/rcnn.py new file mode 100644 index 0000000..66a8902 --- /dev/null +++ b/fpn/core/rcnn.py @@ -0,0 +1,713 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +""" +Fast R-CNN: +data = + {'data': [num_images, c, h, w], + 'rois': [num_rois, 5]} +label = + {'label': [num_rois], + 'bbox_target': [num_rois, 4 * num_classes], + 'bbox_weight': [num_rois, 4 * num_classes]} +roidb extended format [image_index] + ['image', 'height', 'width', 'flipped', + 'boxes', 'gt_classes', 'gt_overlaps', 'max_classes', 'max_overlaps', 'bbox_targets'] +""" + +import numpy as np +import numpy.random as npr +import mxnet as mx + +from utils.image import get_image, tensor_vstack +from bbox.bbox_transform import bbox_overlaps, bbox_transform, bbox_poly2hbb, dbbox_transform, dbbox_transform2 +from bbox.bbox_transform import dbbox_transform2_warp, dbbox_transform2_inv_warp +from bbox.bbox_transform import dbboxtransform3_inv_warp, dbboxtransform3_warp, dbboxtransform3, dbboxtransform3_inv +from bbox.bbox_regression import expand_bbox_regression_targets, expand_bbox_regression_targets_base, expand_bbox_regression_targets_base_new 
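The `rcnn.py` module docstring above fixes the minibatch layout that the batch and sampling helpers below are expected to produce. A small sketch with made-up sizes (2 images, 8 RoIs, 16 classes are arbitrary), only to make those documented shapes concrete:

```python
import numpy as np

# Hypothetical sizes, chosen only to illustrate the documented layout.
num_images, c, h, w = 2, 3, 1024, 1024
num_rois, num_classes = 8, 16

data = {
    'data': np.zeros((num_images, c, h, w), dtype=np.float32),
    # one row per RoI: (batch_index, x1, y1, x2, y2)
    'rois': np.zeros((num_rois, 5), dtype=np.float32),
}
label = {
    'label': np.zeros((num_rois,), dtype=np.float32),
    'bbox_target': np.zeros((num_rois, 4 * num_classes), dtype=np.float32),
    'bbox_weight': np.zeros((num_rois, 4 * num_classes), dtype=np.float32),
}
```

The rotated-box and polygon sampling variants later in this file regress 5 or 8 values per class instead of 4, matching the `dbbox_*` transforms they use, so the last dimension of `bbox_target`/`bbox_weight` scales accordingly.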
+from bbox.bbox_transform import * +from dota_kit.poly_nms_gpu.poly_overlaps import poly_overlaps + +import pdb + +def get_rcnn_testbatch(roidb, cfg): + """ + return a dict of testbatch + :param roidb: ['image', 'flipped'] + ['boxes'] + :return: data, label, im_info + """ + # assert len(roidb) == 1, 'Single batch only' + imgs, roidb = get_image(roidb, cfg) + im_array = imgs + im_info = [np.array([roidb[i]['im_info']], dtype=np.float32) for i in range(len(roidb))] + + im_rois = [roidb[i]['boxes'] for i in range(len(roidb))] + rois = im_rois + rois_array = [np.hstack((0 * np.ones((rois[i].shape[0], 1)), rois[i])) for i in range(len(rois))] + + data = [{'data': im_array[i], + 'rois': rois_array[i]} for i in range(len(roidb))] + label = {} + + return data, label, im_info + + +def get_rcnn_batch(roidb, cfg): + """ + return a dict of multiple images + :param roidb: a list of dict, whose length controls batch size + ['images', 'flipped'] + ['gt_boxes', 'boxes', 'gt_overlap'] => ['bbox_targets'] + :return: data, label + """ + num_images = len(roidb) + imgs, roidb = get_image(roidb, cfg) + im_array = tensor_vstack(imgs) + + assert cfg.TRAIN.BATCH_ROIS == -1 or cfg.TRAIN.BATCH_ROIS % cfg.TRAIN.BATCH_IMAGES == 0, \ + 'BATCHIMAGES {} must divide BATCH_ROIS {}'.format(cfg.TRAIN.BATCH_IMAGES, cfg.TRAIN.BATCH_ROIS) + + if cfg.TRAIN.BATCH_ROIS == -1: + rois_per_image = np.sum([iroidb['boxes'].shape[0] for iroidb in roidb]) + fg_rois_per_image = rois_per_image + else: + rois_per_image = cfg.TRAIN.BATCH_ROIS / cfg.TRAIN.BATCH_IMAGES + fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image).astype(int) + + rois_array = list() + labels_array = list() + bbox_targets_array = list() + bbox_weights_array = list() + + for im_i in range(num_images): + roi_rec = roidb[im_i] + + # infer num_classes from gt_overlaps + num_classes = roi_rec['gt_overlaps'].shape[1] + + # label = class RoI has max overlap with + rois = roi_rec['boxes'] + labels = roi_rec['max_classes'] + overlaps = roi_rec['max_overlaps'] + bbox_targets = roi_rec['bbox_targets'] + + im_rois, labels, bbox_targets, bbox_weights = \ + sample_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels, overlaps, bbox_targets) + + # project im_rois + # do not round roi + rois = im_rois + batch_index = im_i * np.ones((rois.shape[0], 1)) + rois_array_this_image = np.hstack((batch_index, rois)) + rois_array.append(rois_array_this_image) + + # add labels + labels_array.append(labels) + bbox_targets_array.append(bbox_targets) + bbox_weights_array.append(bbox_weights) + + rois_array = np.array(rois_array) + labels_array = np.array(labels_array) + bbox_targets_array = np.array(bbox_targets_array) + bbox_weights_array = np.array(bbox_weights_array) + + data = {'data': im_array, + 'rois': rois_array} + label = {'label': labels_array, + 'bbox_target': bbox_targets_array, + 'bbox_weight': bbox_weights_array} + + return data, label + +def sample_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, bbox_targets=None, gt_boxes=None): + """ + generate random sample of ROIs comprising foreground and background examples + :param rois: all_rois [n, 4]; e2e: [n, 5] with batch_index + :param fg_rois_per_image: foreground roi number + :param rois_per_image: total roi number + :param num_classes: number of classes + :param labels: maybe precomputed + :param overlaps: maybe precomputed (max_overlaps) + :param bbox_targets: maybe precomputed + :param gt_boxes: optional for e2e [n, 5] (x1, y1, x2, y2, cls) + :return: 
(labels, rois, bbox_targets, bbox_weights) + """ + if labels is None: + overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), gt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = gt_boxes[gt_assignment, 4] + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + + # load or compute bbox_target + if bbox_targets is not None: + bbox_target_data = bbox_targets[keep_indexes, :] + else: + targets = bbox_transform(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :4]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets(bbox_target_data, num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_rotbox_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None): + """ + + :param rois: al_rois [n, 4]; e2e [n, 5] with batch_index + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 9] (x1, y1, ..., x4, y4, cls) + :return: + """ + if labels is None: + # hgt_boxes = np.hstack((bbox_poly2hbb(gt_boxes[:, :-1]), gt_boxes[:, -1])) + hgt_boxes = bbox_poly2hbb(gt_boxes) + ## rois: (xmin, ymin, xmax, ymax) + overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = hgt_boxes[gt_assignment, 4] + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions 
without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + # pdb.set_trace() + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + # pdb.set_trace() + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + targets = dbbox_transform2_warp(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + # pdb.set_trace() + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base(bbox_target_data, num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_Rrois(Rrois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None, device_id=0): + """ + + :param Rrois: all_rois [n, 5]; e2e [n, 6] with batch_index (x_ctr, y_ctr, w, h, theta) + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 6] (x_ctr, y_ctr, w, h, theta, cls) + :return: + """ + if labels is None: + ## rois: (xmin, ymin, xmax, ymax) + # poly_overlaps = poly_overlaps_nms_wrapper(Rrois.context.device_id) + overlaps = poly_overlaps(Rrois[:, 1:].astype(np.float32), gt_boxes[:, :5].astype(np.float32), device_id) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = gt_boxes[gt_assignment, 5] + + # pdb.set_trace() + # foreground RoI with FG_THRESH overlap + # fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + fg_indexes = np.where(overlaps >= cfg.TRAIN.RRoI_FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against 
there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(Rrois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(Rrois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + # pdb.set_trace() + labels[fg_rois_per_this_image:] = 0 + Rrois = Rrois[keep_indexes] + # pdb.set_trace() + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + # targets = dbbox_transform2(Rrois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :5]) + targets = dbbox_transform2_best_match_warp(Rrois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :5]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + # targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + # / np.array(cfg.TRAIN.BBOX_STDS)) + targets = ((targets - np.array(cfg.TRAIN.RRoI_BBOX_STDS)) + / np.array(cfg.TRAIN.RRoI_BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + # pdb.set_trace() + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base_new(bbox_target_data, num_classes, cfg.network.RRoI_CLASS_AGNOSTIC) + + return Rrois, labels, bbox_targets, bbox_weights + +def sample_poly_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None): + """ + + :param rois: al_rois [n, 4]; e2e [n, 5] with batch_index + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 9] (x1, y1, ..., x4, y4, cls) + :return: + """ + if labels is None: + # hgt_boxes = np.hstack((bbox_poly2hbb(gt_boxes[:, :-1]), gt_boxes[:, -1])) + hgt_boxes = bbox_poly2hbb(gt_boxes) + ## rois: (xmin, ymin, xmax, ymax) + overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = hgt_boxes[gt_assignment, 4] + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > 
bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + targets = dbbox_transform(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base(bbox_target_data, num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_rotbox_rois_region_pred(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None): + """ + + :param rois: al_rois [n, 4]; e2e [n, 5] with batch_index + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 9] (x1, y1, ..., x4, y4, cls) + :return: + """ + if labels is None: + # hgt_boxes = np.hstack((bbox_poly2hbb(gt_boxes[:, :-1]), gt_boxes[:, -1])) + hgt_boxes = bbox_poly2hbb(gt_boxes) + ## rois: (xmin, ymin, xmax, ymax) + overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = hgt_boxes[gt_assignment, 4] + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + # 
pdb.set_trace() + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + # pdb.set_trace() + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + targets = dbbox_transform2_warp(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + # pdb.set_trace() + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base(bbox_target_data, num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_rotbox_rois_nd(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None): + """ + + :param rois: al_rois [n, 4]; e2e [n, 5] with batch_index + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 9] (x1, y1, ..., x4, y4, cls) + :return: + """ + if labels is None: + # hgt_boxes = np.hstack((bbox_poly2hbb(gt_boxes[:, :-1]), gt_boxes[:, -1])) + hgt_boxes = bbox_poly2hbb_nd(gt_boxes) + ## rois: (xmin, ymin, xmax, ymax) + # overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + overlaps = mx.nd.contrib.box_iou(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = hgt_boxes[gt_assignment, 4] + + overlaps = overlaps.asnumpy() + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # trans it to mx.nd + keep_indexes = mx.nd.array(keep_indexes) + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + # pdb.set_trace() + if (fg_rois_per_this_image < labels.size): + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + # pdb.set_trace() + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + targets = 
dbbox_transform2_warp_nd(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - mx.nd.array(cfg.TRAIN.BBOX_MEANS)) + / mx.nd.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = mx.nd.concat(labels.expand_dims(1), targets, dim=1) + # bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base(bbox_target_data.asnumpy(), num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_xyhs_rois(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None): + """ + + :param rois: al_rois [n, 4]; e2e [n, 5] with batch_index + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 9] (x1, y1, ..., x4, y4, cls) + :return: + """ + if labels is None: + # hgt_boxes = np.hstack((bbox_poly2hbb(gt_boxes[:, :-1]), gt_boxes[:, -1])) + hgt_boxes = bbox_poly2hbb(gt_boxes) + ## rois: (xmin, ymin, xmax, ymax) + overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = hgt_boxes[gt_assignment, 4] + + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + # targets = dbbox_transform2_warp(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + targets = dbboxtransform3_warp(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base(bbox_target_data, 
num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + +def sample_xyhs_rois_nd(rois, fg_rois_per_image, rois_per_image, num_classes, cfg, + labels=None, overlaps=None, dbbox_targets=None, gt_boxes=None): + """ + this is an mxnet version + :param rois: al_rois [n, 4]; e2e [n, 5] with batch_index + :param fg_rois_per_image: + :param rois_per_image: + :param num_clases: + :param cfg: + :param labels: + :param overlaps: + :param dbbox_targets: + :param gt_boxes: optional for e2e [n, 9] (x1, y1, ..., x4, y4, cls) + :return: + """ + if labels is None: + # hgt_boxes = np.hstack((bbox_poly2hbb(gt_boxes[:, :-1]), gt_boxes[:, -1])) + hgt_boxes = bbox_poly2hbb_nd(gt_boxes) + ## rois: (xmin, ymin, xmax, ymax) + overlaps = mx.nd.contrib.box_iou(rois[:, 1:].astype('float32'), hgt_boxes[:, :4].astype('float32')) + # overlaps = bbox_overlaps(rois[:, 1:].astype(np.float), hgt_boxes[:, :4].astype(np.float)) + gt_assignment = overlaps.argmax(axis=1) + overlaps = overlaps.max(axis=1) + labels = hgt_boxes[gt_assignment, 4] + + # tmp trans it to numpy + overlaps = overlaps.asnumpy() + # foreground RoI with FG_THRESH overlap + fg_indexes = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0] + # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs + fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size) + # Sample foreground regions without replacement + if len(fg_indexes) > fg_rois_per_this_image: + fg_indexes = npr.choice(fg_indexes, size=fg_rois_per_this_image, replace=False) + + # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI) + bg_indexes = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) & (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0] + # Compute number of background RoIs to take from this image (guarding against there being fewer than desired) + bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image + bg_rois_per_this_image = np.minimum(bg_rois_per_this_image, bg_indexes.size) + # Sample foreground regions without replacement + if len(bg_indexes) > bg_rois_per_this_image: + bg_indexes = npr.choice(bg_indexes, size=bg_rois_per_this_image, replace=False) + + # indexes selected + keep_indexes = np.append(fg_indexes, bg_indexes) + + # pad more to ensure a fixed minibatch size + while keep_indexes.shape[0] < rois_per_image: + gap = np.minimum(len(rois), rois_per_image - keep_indexes.shape[0]) + gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False) + keep_indexes = np.append(keep_indexes, gap_indexes) + + # select labels + labels = labels[keep_indexes] + # set labels of bg_rois to be 0 + labels[fg_rois_per_this_image:] = 0 + rois = rois[keep_indexes] + + # load or compute bbox_target + if dbbox_targets is not None: + bbox_target_data = dbbox_targets[keep_indexes, :] + else: + # targets = dbbox_transform2_warp(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + targets = dbboxtransform3_warp(rois[:, 1:], gt_boxes[gt_assignment[keep_indexes], :8]) + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + targets = ((targets - np.array(cfg.TRAIN.BBOX_MEANS)) + / np.array(cfg.TRAIN.BBOX_STDS)) + bbox_target_data = np.hstack((labels[:, np.newaxis], targets)) + bbox_targets, bbox_weights = \ + expand_bbox_regression_targets_base(bbox_target_data, num_classes, cfg) + + return rois, labels, bbox_targets, bbox_weights + + + diff --git a/fpn/core/tester.py b/fpn/core/tester.py new file mode 100644 index 0000000..7647f19 --- /dev/null +++ b/fpn/core/tester.py @@ -0,0 +1,1210 @@ +# 
-------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +import cv2 +import cPickle +import os +import time +import mxnet as mx +import numpy as np + +from module import MutableModule +from utils import image +from bbox.bbox_transform import bbox_pred, dbbox_pred, clip_boxes, clip_polys, dbbox_transform2_inv_warp, dbbox_transform2_warp +from bbox.bbox_transform import polygonToRotRectangle_batch, RotBox2Polys_multi_class +from bbox.bbox_transform import dbboxtransform3_inv_warp, xyhs2polys, xyhs2polys_muli_class +from bbox.bbox_transform import dbbox_transform2_inv, dbbox_transform2_inv_new +from nms.nms import py_nms_wrapper, py_softnms_wrapper +from dota_kit.ResultMerge import py_cpu_nms_poly +from utils.PrefetchingIter import PrefetchingIter +import pdb + +DEBUG = False + +class Predictor(object): + def __init__(self, symbol, data_names, label_names, + context=mx.cpu(), max_data_shapes=None, + provide_data=None, provide_label=None, + arg_params=None, aux_params=None): + self._mod = MutableModule(symbol, data_names, label_names, + context=context, max_data_shapes=max_data_shapes) + self._mod.bind(provide_data, provide_label, for_training=False) + self._mod.init_params(arg_params=arg_params, aux_params=aux_params) + + def predict(self, data_batch): + self._mod.forward(data_batch) + # [dict(zip(self._mod.output_names, _)) for _ in zip(*self._mod.get_outputs(merge_multi_context=False))] + return [dict(zip(self._mod.output_names, _)) for _ in zip(*self._mod.get_outputs(merge_multi_context=False))] + + +def im_detect(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + rois = output['rois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0] + + # post processing + pred_boxes = bbox_pred(rois, bbox_deltas) + pred_boxes = clip_boxes(pred_boxes, im_shape[-2:]) + + # we used scaled image & roi to train, so it is necessary to transform them back + pred_boxes = pred_boxes / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_boxes) + return scores_all, pred_boxes_all, data_dict_all + +# def im_detect_poly_abstract(predictor, data_batch, data_names, scales, cfg): +# """ +# The function merge poly and rotbox encoding +# :param predictor: +# :param data_batch: +# :param data_names: +# :param scales: +# :param cfg: +# :return: +# """ +# output_all = predictor.predict(data_batch) +# +# data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] +# scores_all = [] +# pred_boxes_all = [] +# for output, data_dict, scale in zip(output_all, data_dict_all, scales): +# if cfg.TEST.HAS_RPN: +# rois = output['rois_output'].asnumpy()[:, 1:] +# else: +# rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 
1:] +# im_shape = data_dict['data'].shape +# +# # save output +# scores = output['cls_prob_reshape_output'].asnumpy()[0] +# bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0] +# +# # post processing +# # pdb.set_trace() +# if cfg.network.BOXENCODING == 'rotbox': +# +# pred_boxes = box_decode(rois, bbox_deltas) +# pred_boxes = clip_polys(pred_boxes, im_shape[-2:]) +# +# # we used scaled image & roi to train, so it is necessary to transform them back +# pred_boxes = pred_boxes / scale +# +# scores_all.append(scores) +# pred_boxes_all.append(pred_boxes) +# +# return scores_all, pred_boxes_all, data_dict_all +def im_detect_poly(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + rois = output['rois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0] + + # post processing + # pdb.set_trace() + pred_boxes = dbbox_pred(rois, bbox_deltas) + pred_boxes = clip_polys(pred_boxes, im_shape[-2:]) + + # we used scaled image & roi to train, so it is necessary to transform them back + pred_boxes = pred_boxes / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_boxes) + + return scores_all, pred_boxes_all, data_dict_all + +def im_detect_rotbox(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + rois = output['rois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0] + + pred_boxes = dbbox_transform2_inv_warp(rois, bbox_deltas) + pred_polys = RotBox2Polys_multi_class(pred_boxes) + pred_polys = clip_polys(pred_polys, im_shape[-2:]) + + pred_polys = pred_polys / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_polys) + + return scores_all, pred_boxes_all, data_dict_all + +def im_detect_rotbox_Rroi(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + # pdb.set_trace() + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + # rois = output['rois_output'].asnumpy()[:, 1:] + rois = output['Rrois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['rois'].asnumpy().reshape((-1, 6))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['Rroi_cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['Rroi_bbox_pred_reshape_output'].asnumpy()[0] + + # DEBUG = True + if DEBUG: + bbox_deltas = np.zeros_like(bbox_deltas) + + # pred_boxes = dbbox_transform2_inv(rois, bbox_deltas) + # pdb.set_trace() + pred_boxes = dbbox_transform2_inv_new(rois, bbox_deltas, np.pi/2.) 
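+ # dbbox_transform2_inv_new has decoded the Rroi deltas into rotated boxes
+ # (x_ctr, y_ctr, w, h, theta); the lines below convert them to per-class
+ # 8-point polygons, clip them to the image boundary, and undo the test-time
+ # scaling so the coordinates are in the original image frame.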
+ + pred_polys = RotBox2Polys_multi_class(pred_boxes) + pred_polys = clip_polys(pred_polys, im_shape[-2:]) + + pred_polys = pred_polys / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_polys) + + return scores_all, pred_boxes_all, data_dict_all + +def im_detect_xyhs(predictor, data_batch, data_names, scales, cfg): + output_all = predictor.predict(data_batch) + + data_dict_all = [dict(zip(data_names, idata)) for idata in data_batch.data] + scores_all = [] + pred_boxes_all = [] + for output, data_dict, scale in zip(output_all, data_dict_all, scales): + if cfg.TEST.HAS_RPN: + rois = output['rois_output'].asnumpy()[:, 1:] + else: + rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 1:] + im_shape = data_dict['data'].shape + + # save output + scores = output['cls_prob_reshape_output'].asnumpy()[0] + bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0] + + # pred_boxes = dbbox_transform2_inv_warp(rois, bbox_deltas) + pred_boxes = dbboxtransform3_inv_warp(rois, bbox_deltas) + # pred_polys = RotBox2Polys_multi_class(pred_boxes) + pred_polys = xyhs2polys_muli_class(pred_boxes) + pred_polys = clip_polys(pred_polys, im_shape[-2:]) + + pred_polys = pred_polys / scale + + scores_all.append(scores) + pred_boxes_all.append(pred_polys) + + return scores_all, pred_boxes_all, data_dict_all + +def detect_at_single_scale(predictor, data_names, imdb, test_data, cfg, thresh, vis, all_boxes_single_scale, logger): + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + print test_data + for im_info, data_batch in test_data: + #print 'im_info:',im_info + # print 'data_batch:', data_batch + + t1 = time.time() - t + t = time.time() + scales = [iim_info[0, 2] for iim_info in im_info] + scores_all, boxes_all, data_dict_all = im_detect(predictor, data_batch, data_names, scales, cfg) + print 'scales',scales.__len__() + print 'boxes_all', boxes_all.__len__() + print 'data_dict_all', data_dict_all.__len__() + + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 4:8] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 4:(j + 1) * 4] + cls_dets = np.hstack((cls_boxes, cls_scores)).copy() + #print j,idx,delta + all_boxes_single_scale[j][idx + delta] = cls_dets + if vis: + boxes_this_image = [[]] + [all_boxes_single_scale[j][idx + delta] for j in range(1, imdb.num_classes)] + data_for_vis = data_dict['data'].asnumpy().copy() + vis_all_detection(data_for_vis, boxes_this_image, imdb.classes, scales[delta], cfg) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + #print imdb.image_path_at(4370) + print 'testing {}/{} with scale {}: data {:.4f}s net {:.4f}s post {:.4f}s' \ + .format(idx, imdb.num_images, cfg.SCALES, data_time / idx * test_data.batch_size, + net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} with scale {}: data {:.4f}s net {:.4f}s post {:.4f}s' + .format(idx, imdb.num_images, cfg.SCALES, data_time / idx * test_data.batch_size, + net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + +def detect_at_single_scale_poly(predictor, data_names, imdb, test_data, cfg, thresh, vis, all_boxes_single_scale, logger): + idx = 0 + data_time, net_time, post_time = 0.0, 
0.0, 0.0 + t = time.time() + print test_data + for im_info, data_batch in test_data: + #print 'im_info:',im_info + # print 'data_batch:', data_batch + + t1 = time.time() - t + t = time.time() + scales = [iim_info[0, 2] for iim_info in im_info] + scores_all, boxes_all, data_dict_all = im_detect_poly(predictor, data_batch, data_names, scales, cfg) + print 'scales',scales.__len__() + print 'boxes_all', boxes_all.__len__() + print 'data_dict_all', data_dict_all.__len__() + + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + cls_dets = np.hstack((cls_boxes, cls_scores)).copy() + #print j,idx,delta + all_boxes_single_scale[j][idx + delta] = cls_dets + if vis: + boxes_this_image = [[]] + [all_boxes_single_scale[j][idx + delta] for j in range(1, imdb.num_classes)] + data_for_vis = data_dict['data'].asnumpy().copy() + vis_all_detection(data_for_vis, boxes_this_image, imdb.classes, scales[delta], cfg) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + #print imdb.image_path_at(4370) + print 'testing {}/{} with scale {}: data {:.4f}s net {:.4f}s post {:.4f}s' \ + .format(idx, imdb.num_images, cfg.SCALES, data_time / idx * test_data.batch_size, + net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} with scale {}: data {:.4f}s net {:.4f}s post {:.4f}s' + .format(idx, imdb.num_images, cfg.SCALES, data_time / idx * test_data.batch_size, + net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + +def pred_eval_poly(predictor, test_data, imdb, cfg, vis=False, thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + # print det_file + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + info_str = imdb.evaluate_detections(all_boxes) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + num_images = imdb.num_images + print "In tester.py line 147 num_images:",num_images + + for test_scale_index, test_scale in enumerate(cfg.TEST_SCALES): + det_file_single_scale = os.path.join(imdb.result_path, imdb.name + '_detections_' + str(test_scale_index) + '.pkl') + # if os.path.exists(det_file_single_scale): + # continue + cfg.SCALES = [test_scale] + # print test_data.shape()[0] + test_data.reset() + + # all detections are collected into: + # all_boxes[cls][image] = N x 5 array of detections in + # (x1, y1, x2, y2, score) + all_boxes_single_scale = [[[] for _ in 
range(num_images)] + for _ in range(imdb.num_classes)] + + detect_at_single_scale_poly(predictor, data_names, imdb, test_data, cfg, thresh, vis, all_boxes_single_scale, logger) + + with open(det_file_single_scale, 'wb') as f: + cPickle.dump(all_boxes_single_scale, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # all detections are collected into: + # all_boxes[cls][image] = N x 5 array of detections in + # (x1, y1, x2, y2, score) + all_boxes = [[[] for _ in range(num_images)] for _ in range(imdb.num_classes)] + # print all_boxes.__len__() + + for test_scale_index, test_scale in enumerate(cfg.TEST_SCALES): + det_file_single_scale = os.path.join(imdb.result_path, imdb.name + '_detections_' + str(test_scale_index) + '.pkl') + if os.path.exists(det_file_single_scale): + with open(det_file_single_scale, 'rb') as fid: + all_boxes_single_scale = cPickle.load(fid) + for idx_class in range(1, imdb.num_classes): + for idx_im in range(0, num_images): + if len(all_boxes[idx_class][idx_im]) == 0: + all_boxes[idx_class][idx_im] = all_boxes_single_scale[idx_class][idx_im] + else: + all_boxes[idx_class][idx_im] = np.vstack((all_boxes[idx_class][idx_im], all_boxes_single_scale[idx_class][idx_im])) + #print "ALL_BOXES:",idx_class, all_boxes[idx_class].__len__() + + + # for idx_class in range(1, imdb.num_classes): + # for idx_im in range(0, num_images): ############################################################################################################ + # if cfg.TEST.USE_SOFTNMS: + # + # soft_nms = py_softnms_wrapper(cfg.TEST.SOFTNMS_THRESH, max_dets=max_per_image) + # print all_boxes[idx_class][idx_im] + # all_boxes[idx_class][idx_im] = soft_nms(all_boxes[idx_class][idx_im]) + # else: + # nms = py_nms_wrapper(cfg.TEST.NMS) + # keep = nms(all_boxes[idx_class][idx_im]) + # all_boxes[idx_class][idx_im] = all_boxes[idx_class][idx_im][keep, :] + + + + + + + if max_per_image > 0: + for idx_im in range(0, num_images): + image_scores = np.hstack([all_boxes[j][idx_im][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx_im][:, -1] >= image_thresh)[0] + all_boxes[j][idx_im] = all_boxes[j][idx_im][keep, :] + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # info_str = imdb.evaluate_detections(all_boxes) + # if logger: + # logger.info('evaluate detections: \n{}'.format(info_str)) + + +def pred_eval_poly_multiscale(prefix, predictor, test_data, imdb, cfg, vis=False, draw=False, + thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + + det_file = os.path.join(imdb.result_path, imdb.name + '{}'.format(prefix) + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + # nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected 
into: + # all_boxes[cls][image] = N x 9 array of detections in + # (x1, y1, x2, y2, x3, y3, x4, y4, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + scores_all, boxes_all, data_dict_all= im_detect_poly(predictor, data_batch, data_names, scales, cfg) + + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + # idx = int(data_dict['im_index'])-1 + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + # temp_cls_boxes = np.zeros((cls_boxes.shape[0], 4), dtype=cls_boxes.dtype) + # temp_cls_boxes_x = np.vstack((cls_boxes[:, 0], cls_boxes[:, 2], cls_boxes[:, 4], cls_boxes[:, 6])) + # temp_cls_boxes_y = np.vstack((cls_boxes[:, 1], cls_boxes[:, 3], cls_boxes[:, 5], cls_boxes[:, 7])) + # temp_cls_boxes[:, 0] = np.amin(temp_cls_boxes_x, axis=0) + # temp_cls_boxes[:, 1] = np.amin(temp_cls_boxes_y, axis=0) + # temp_cls_boxes[:, 2] = np.amax(temp_cls_boxes_x, axis=0) + # temp_cls_boxes[:, 3] = np.amax(temp_cls_boxes_y, axis=0) + # cls_dets = np.hstack((temp_cls_boxes, cls_scores)) + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + all_boxes[j][idx+delta] = cls_quadrangle_dets + + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + # if vis: + # boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + # vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + # + if draw: + if not os.path.isdir(cfg.TEST.save_img_path): + os.mkdir(cfg.TEST.save_img_path) + path = os.path.join(cfg.TEST.save_img_path, '{}_'.format(prefix) + str(idx) + '.jpg') + boxes_this_image = [[]] + [all_boxes[j][idx + delta] for j in range(1, imdb.num_classes)] + im = draw_all_poly_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg, threshold=0.2) + print path + cv2.imwrite(path, im) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + + threshold = 0.1 + detection_results_path = os.path.join(imdb.result_path, 'test_{}_results'.format(prefix)) + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + for cls_ind, cls in enumerate(imdb.classes): + if cls == '__background__': + continue + for im_ind, index in 
enumerate(imdb.image_set_index): + dets = all_boxes[cls_ind][im_ind] + with open(os.path.join(imdb.result_path, 'test_{}_results'.format(prefix), 'res_img_{}.txt'.format(index)), 'wt') as f: + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + if imdb.validate_clockwise_points(dets[k, 0:8]): + f.write( + '{},{},{},{},{},{},{},{},{}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), int(dets[k, 6]), + int(dets[k, 7]), dets[k, 8])) + else: + print 'A detected box is anti-clockwise! Index:{}'.format(index) + print dets[k, 0:8] + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + +def pred_eval_dota_poly(predictor, test_data, imdb, cfg, vis=False, draw=False, + thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + # ignore_cache = True + # pdb.set_trace() + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + # imdb.count_ar() + #imdb.check_transform() + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + #nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected into: + # all_boxes[cls][image] = N x 9 array of detections in + # (x1, y1, x2, y2, x3, y3, x4, y4, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + scores_all, boxes_all, data_dict_all= im_detect_poly(predictor, data_batch, data_names, scales, cfg) + # pdb.set_trace() + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + # idx = int(data_dict['im_index'])-1 + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + # pdb.set_trace() + all_boxes[j][idx+delta] = cls_quadrangle_dets[keep, :] + + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + 
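                    # image_thresh is the score of the max_per_image-th strongest detection in this image, so the slice below keeps at most cfg.TEST.max_per_image polygons per image across all foreground classes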
all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + if vis: + boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + + if draw: + if not os.path.isdir(cfg.TEST.save_img_path): + os.mkdir(cfg.TEST.save_img_path) + path = os.path.join(cfg.TEST.save_img_path, str(idx) + '.jpg') + boxes_this_image = [[]] + [all_boxes[j][idx + delta] for j in range(1, imdb.num_classes)] + im = draw_all_poly_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg, threshold=0.2) + print path + cv2.imwrite(path, im) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + +def pred_eval_dota_rotbox(predictor, test_data, imdb, cfg, vis=False, draw=False, + thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + # ignore_cache = True + # pdb.set_trace() + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + # imdb.count_ar() + #imdb.check_transform() + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + #nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected into: + # all_boxes[cls][image] = N x 9 array of detections in + # (x1, y1, x2, y2, x3, y3, x4, y4, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + # scores_all, boxes_all, data_dict_all= im_detect_poly(predictor, data_batch, data_names, scales, cfg) + scores_all, boxes_all, data_dict_all = im_detect_rotbox(predictor, data_batch, data_names, scales, cfg) + # pdb.set_trace() + t2 = time.time() - t + t = 
time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + # idx = int(data_dict['im_index'])-1 + if DEBUG: + imdb.num_classes = 2 + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + # pdb.set_trace() + all_boxes[j][idx+delta] = cls_quadrangle_dets[keep, :] + + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + if vis: + boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + + if draw: + if not os.path.isdir(cfg.TEST.save_img_path): + os.mkdir(cfg.TEST.save_img_path) + path = os.path.join(cfg.TEST.save_img_path, str(idx) + '.jpg') + boxes_this_image = [[]] + [all_boxes[j][idx + delta] for j in range(1, imdb.num_classes)] + im = draw_all_poly_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg, threshold=0.2) + print path + cv2.imwrite(path, im) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + +def pred_eval_dota_rotbox_Rroi(predictor, test_data, imdb, cfg, vis=False, draw=False, + thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + # ignore_cache = True + # pdb.set_trace() + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + # imdb.count_ar() + #imdb.check_transform() + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if 
not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + #nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected into: + # all_boxes[cls][image] = N x 9 array of detections in + # (x1, y1, x2, y2, x3, y3, x4, y4, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + # scores_all, boxes_all, data_dict_all= im_detect_poly(predictor, data_batch, data_names, scales, cfg) + scores_all, boxes_all, data_dict_all = im_detect_rotbox_Rroi(predictor, data_batch, data_names, scales, cfg) + # pdb.set_trace() + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + # idx = int(data_dict['im_index'])-1 + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.network.RRoI_CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + # pdb.set_trace() + all_boxes[j][idx+delta] = cls_quadrangle_dets[keep, :] + + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + if vis: + boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + + if draw: + if not os.path.isdir(cfg.TEST.save_img_path): + os.mkdir(cfg.TEST.save_img_path) + path = os.path.join(cfg.TEST.save_img_path, str(idx) + '.jpg') + boxes_this_image = [[]] + [all_boxes[j][idx + delta] for j in range(1, imdb.num_classes)] + im = draw_all_poly_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg, threshold=0.2) + print path + cv2.imwrite(path, im) + + idx += test_data.batch_size + t3 = time.time() - t + t = time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + +def pred_eval_dota_wyhs(predictor, test_data, imdb, cfg, vis=False, draw=False, + thresh=1e-3, logger=None, 
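                           # draw: write images with the predicted quadrangles rendered on them to cfg.TEST.save_img_path; ignore_cache: recompute detections even when a cached *_detections.pkl already exists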
ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + # ignore_cache = True + # pdb.set_trace() + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + # imdb.count_ar() + #imdb.check_transform() + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + #nms = py_nms_wrapper(cfg.TEST.NMS) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + + num_images = imdb.num_images + # all detections are collected into: + # all_boxes[cls][image] = N x 9 array of detections in + # (x1, y1, x2, y2, x3, y3, x4, y4, score) + all_boxes = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + idx = 0 + data_time, net_time, post_time = 0.0, 0.0, 0.0 + t = time.time() + for im_info, data_batch in test_data: + t1 = time.time() - t + t = time.time() + + scales = [iim_info[0, 2] for iim_info in im_info] + # scores_all, boxes_all, data_dict_all= im_detect_poly(predictor, data_batch, data_names, scales, cfg) + scores_all, boxes_all, data_dict_all = im_detect_xyhs(predictor, data_batch, data_names, scales, cfg) + # pdb.set_trace() + t2 = time.time() - t + t = time.time() + for delta, (scores, boxes, data_dict) in enumerate(zip(scores_all, boxes_all, data_dict_all)): + # idx = int(data_dict['im_index'])-1 + for j in range(1, imdb.num_classes): + indexes = np.where(scores[:, j] > thresh)[0] + cls_scores = scores[indexes, j, np.newaxis] + cls_boxes = boxes[indexes, 8:16] if cfg.CLASS_AGNOSTIC else boxes[indexes, j * 8:(j + 1) * 8] + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + # pdb.set_trace() + all_boxes[j][idx+delta] = cls_quadrangle_dets[keep, :] + + if max_per_image > 0: + image_scores = np.hstack([all_boxes[j][idx+delta][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx+delta][:, -1] >= image_thresh)[0] + all_boxes[j][idx+delta] = all_boxes[j][idx+delta][keep, :] + + if vis: + boxes_this_image = [[]] + [all_boxes[j][idx+delta] for j in range(1, imdb.num_classes)] + vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg) + + if draw: + if not os.path.isdir(cfg.TEST.save_img_path): + os.mkdir(cfg.TEST.save_img_path) + path = os.path.join(cfg.TEST.save_img_path, str(idx) + '.jpg') + boxes_this_image = [[]] + [all_boxes[j][idx + delta] for j in range(1, imdb.num_classes)] + im = draw_all_poly_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scales[delta], cfg, threshold=0.2) + print path + cv2.imwrite(path, im) + + idx += test_data.batch_size + t3 = time.time() - t + t = 
time.time() + data_time += t1 + net_time += t2 + post_time += t3 + print 'testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size) + if logger: + logger.info('testing {}/{} data {:.4f}s net {:.4f}s post {:.4f}s'.format(idx, imdb.num_images, data_time / idx * test_data.batch_size, net_time / idx * test_data.batch_size, post_time / idx * test_data.batch_size)) + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # imdb.draw_gt_and_detections(all_boxes, thresh=0.1) + info_str = imdb.evaluate_detections(all_boxes, ignore_cache) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + +def pred_eval(predictor, test_data, imdb, cfg, vis=False, thresh=1e-3, logger=None, ignore_cache=True): + """ + wrapper for calculating offline validation for faster data analysis + in this example, all threshold are set by hand + :param predictor: Predictor + :param test_data: data iterator, must be non-shuffle + :param imdb: image database + :param vis: controls visualization + :param thresh: valid detection threshold + :return: + """ + + det_file = os.path.join(imdb.result_path, imdb.name + '_detections.pkl') + # print det_file + if os.path.exists(det_file) and not ignore_cache: + with open(det_file, 'rb') as fid: + all_boxes = cPickle.load(fid) + info_str = imdb.evaluate_detections(all_boxes) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + return + + assert vis or not test_data.shuffle + data_names = [k[0] for k in test_data.provide_data[0]] + + if not isinstance(test_data, PrefetchingIter): + test_data = PrefetchingIter(test_data) + + # limit detections to max_per_image over all classes + max_per_image = cfg.TEST.max_per_image + num_images = imdb.num_images + print "In tester.py line 147 num_images:",num_images + + for test_scale_index, test_scale in enumerate(cfg.TEST_SCALES): + det_file_single_scale = os.path.join(imdb.result_path, imdb.name + '_detections_' + str(test_scale_index) + '.pkl') + # if os.path.exists(det_file_single_scale): + # continue + cfg.SCALES = [test_scale] + # print test_data.shape()[0] + test_data.reset() + + # all detections are collected into: + # all_boxes[cls][image] = N x 5 array of detections in + # (x1, y1, x2, y2, score) + all_boxes_single_scale = [[[] for _ in range(num_images)] + for _ in range(imdb.num_classes)] + + detect_at_single_scale(predictor, data_names, imdb, test_data, cfg, thresh, vis, all_boxes_single_scale, logger) + + with open(det_file_single_scale, 'wb') as f: + cPickle.dump(all_boxes_single_scale, f, protocol=cPickle.HIGHEST_PROTOCOL) + + # all detections are collected into: + # all_boxes[cls][image] = N x 5 array of detections in + # (x1, y1, x2, y2, score) + all_boxes = [[[] for _ in range(num_images)] for _ in range(imdb.num_classes)] + # print all_boxes.__len__() + + for test_scale_index, test_scale in enumerate(cfg.TEST_SCALES): + det_file_single_scale = os.path.join(imdb.result_path, imdb.name + '_detections_' + str(test_scale_index) + '.pkl') + if os.path.exists(det_file_single_scale): + with open(det_file_single_scale, 'rb') as fid: + all_boxes_single_scale = cPickle.load(fid) + for idx_class in range(1, imdb.num_classes): + for idx_im in range(0, num_images): + if len(all_boxes[idx_class][idx_im]) == 0: + all_boxes[idx_class][idx_im] = all_boxes_single_scale[idx_class][idx_im] + else: + 
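                            # detections gathered at this test scale are stacked under those from the scales already processed; the (soft-)NMS pass just below then suppresses the cross-scale duplicates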
all_boxes[idx_class][idx_im] = np.vstack((all_boxes[idx_class][idx_im], all_boxes_single_scale[idx_class][idx_im])) + #print "ALL_BOXES:",idx_class, all_boxes[idx_class].__len__() + + + for idx_class in range(1, imdb.num_classes): + for idx_im in range(0, num_images): ############################################################################################################ + if cfg.TEST.USE_SOFTNMS: + + soft_nms = py_softnms_wrapper(cfg.TEST.SOFTNMS_THRESH, max_dets=max_per_image) + print all_boxes[idx_class][idx_im] + all_boxes[idx_class][idx_im] = soft_nms(all_boxes[idx_class][idx_im]) + else: + nms = py_nms_wrapper(cfg.TEST.NMS) + keep = nms(all_boxes[idx_class][idx_im]) + all_boxes[idx_class][idx_im] = all_boxes[idx_class][idx_im][keep, :] + + + + + + + if max_per_image > 0: + for idx_im in range(0, num_images): + image_scores = np.hstack([all_boxes[j][idx_im][:, -1] + for j in range(1, imdb.num_classes)]) + if len(image_scores) > max_per_image: + image_thresh = np.sort(image_scores)[-max_per_image] + for j in range(1, imdb.num_classes): + keep = np.where(all_boxes[j][idx_im][:, -1] >= image_thresh)[0] + all_boxes[j][idx_im] = all_boxes[j][idx_im][keep, :] + + with open(det_file, 'wb') as f: + cPickle.dump(all_boxes, f, protocol=cPickle.HIGHEST_PROTOCOL) + + info_str = imdb.evaluate_detections(all_boxes) + if logger: + logger.info('evaluate detections: \n{}'.format(info_str)) + + +def vis_all_detection(im_array, detections, class_names, scale, cfg, threshold=1e-3): + """ + visualize all detections in one image + :param im_array: [b=1 c h w] in rgb + :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ] + :param class_names: list of names in imdb + :param scale: visualize the scaled image + :return: + """ + import matplotlib.pyplot as plt + import random + im = image.transform_inverse(im_array, cfg.network.PIXEL_MEANS) + plt.imshow(im) + for j, name in enumerate(class_names): + if name == '__background__': + continue + color = (random.random(), random.random(), random.random()) # generate a random color + dets = detections[j] + for det in dets: + bbox = det[:4] * scale + score = det[-1] + if score < threshold: + continue + rect = plt.Rectangle((bbox[0], bbox[1]), + bbox[2] - bbox[0], + bbox[3] - bbox[1], fill=False, + edgecolor=color, linewidth=3.5) + plt.gca().add_patch(rect) + plt.gca().text(bbox[0], bbox[1] - 2, + '{:s} {:.3f}'.format(name, score), + bbox=dict(facecolor=color, alpha=0.5), fontsize=12, color='white') + plt.show() + + +def draw_all_detection(im_array, detections, class_names, scale, cfg, threshold=1e-1): + """ + visualize all detections in one image + :param im_array: [b=1 c h w] in rgb + :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ] + :param class_names: list of names in imdb + :param scale: visualize the scaled image + :return: + """ + import cv2 + import random + color_white = (255, 255, 255) + im = image.transform_inverse(im_array, cfg.network.PIXEL_MEANS) + # change to bgr + im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) + for j, name in enumerate(class_names): + if name == '__background__': + continue + color = (random.randint(0, 256), random.randint(0, 256), random.randint(0, 256)) # generate a random color + dets = detections[j] + for det in dets: + bbox = det[:4] * scale + score = det[-1] + if score < threshold: + continue + bbox = map(int, bbox) + cv2.rectangle(im, (bbox[0], bbox[1]), (bbox[2], bbox[3]), color=color, thickness=2) + cv2.putText(im, '%s %.3f' % (class_names[j], score), (bbox[0], bbox[1] + 10), + 
color=color_white, fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + return im + +def draw_all_poly_detection(im_array, detections, class_names, scale, cfg, threshold=0.2): + """ + visualize all detections in one image + :param im_array: [b=1 c h w] in rgb + :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ] + :param class_names: list of names in imdb + :param scale: visualize the scaled image + :return: + """ + import cv2 + import random + color_white = (255, 255, 255) + im = image.transform_inverse(im_array, cfg.network.PIXEL_MEANS) + # change to bgr + im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) + if DEBUG: + class_names = ['__background__', 'fg'] + for j, name in enumerate(class_names): + if name == '__background__': + continue + color = (random.randint(0, 256), random.randint(0, 256), random.randint(0, 256)) # generate a random color + dets = detections[j] + for det in dets: + bbox = det[:8] * scale + score = det[-1] + if score < threshold: + continue + bbox = map(int, bbox) + # draw first point + cv2.circle(im, (bbox[0], bbox[1]), 3, (0, 0, 255), -1) + for i in range(3): + cv2.line(im, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i+1) * 2], bbox[(i+1) * 2 + 1]), color=color, thickness=2) + cv2.line(im, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=2) + cv2.putText(im, '%s %.3f' % (class_names[j], score), (bbox[0], bbox[1] + 10), + color=color_white, fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + return im \ No newline at end of file diff --git a/fpn/demo.py b/fpn/demo.py new file mode 100644 index 0000000..a843ddf --- /dev/null +++ b/fpn/demo.py @@ -0,0 +1,184 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yi Li, Haochen Zhang +# -------------------------------------------------------- + +import _init_paths + +import argparse +import os +import sys +import logging +import pprint +import cv2 +from config.config import config, update_config +from utils.image import resize, transform +import numpy as np +# get config +os.environ['PYTHONUNBUFFERED'] = '1' +os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' +os.environ['MXNET_ENABLE_GPU_P2P'] = '0' +cur_path = os.path.abspath(os.path.dirname(__file__)) +update_config(cur_path + '/../experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_RoITransformer_trainval_fpn_end2end.yaml') + +sys.path.insert(0, os.path.join(cur_path, '../external/mxnet', config.MXNET_VERSION)) +sys.path.insert(0, '../') +import mxnet as mx +from core.tester import im_detect, Predictor, im_detect_rotbox_Rroi +from symbols import * +from utils.load_model import load_param +from utils.show_boxes import show_boxes +from utils.tictoc import tic, toc +from nms.nms import py_nms_wrapper, cpu_nms_wrapper, gpu_nms_wrapper +from dota_kit.ResultMerge import py_cpu_nms_poly +from utils import image +import pdb + +def parse_args(): + parser = argparse.ArgumentParser(description='Show Deformable ConvNets demo') + # general + parser.add_argument('--rfcn_only', help='whether use fpn only (w/o Deformable ConvNets)', default=False, action='store_true') + + args = parser.parse_args() + return args + +args = parse_args() + +def draw_all_poly_detection(im_array, detections, class_names, scale, cfg, threshold=0.2): + """ + visualize all detections in one image + :param im_array: [b=1 c h w] in rgb + :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ] + :param 
class_names: list of names in imdb + :param scale: visualize the scaled image + :return: + """ + # pdb.set_trace() + import cv2 + import random + color_white = (255, 255, 255) + im = image.transform_inverse(im_array, cfg.network.PIXEL_MEANS) + # change to bgr + im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) + for j, name in enumerate(class_names): + if name == '__background__': + continue + color = (random.randint(0, 256), random.randint(0, 256), random.randint(0, 256)) # generate a random color + try: + dets = detections[j] + except: + pdb.set_trace() + for det in dets: + bbox = det[:8] * scale + score = det[-1] + if score < threshold: + continue + bbox = map(int, bbox) + # draw first point + cv2.circle(im, (bbox[0], bbox[1]), 3, (0, 0, 255), -1) + for i in range(3): + cv2.line(im, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i+1) * 2], bbox[(i+1) * 2 + 1]), color=color, thickness=2) + cv2.line(im, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=2) + cv2.putText(im, '%s %.3f' % (class_names[j], score), (bbox[0], bbox[1] + 10), + color=color_white, fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + return im + +def main(): + # get symbol + pprint.pprint(config) + # config.symbol = 'resnet_v1_101_rfcn_dcn' if not args.rfcn_only else 'resnet_v1_101_rfcn' + config.symbol = 'resnet_v1_101_fpn_rcnn_rotbox_light_head_RoITransformer' + sym_instance = eval(config.symbol + '.' + config.symbol)() + sym = sym_instance.get_symbol(config, is_train=False) + + # set up class names + num_classes = 15 + classes = ['__background__', # always index 0 + 'plane', 'baseball-diamond', + 'bridge', 'ground-track-field', + 'small-vehicle', 'large-vehicle', + 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'harbor', 'swimming-pool', + 'helicopter'] + # load demo data + image_names = ['P0004__1__0___0.png', 'P0053__1__0___0.png', 'P0060__1__1648___824.png'] + data = [] + for im_name in image_names: + # pdb.set_trace() + assert os.path.exists(cur_path + '/../demo/' + im_name), ('%s does not exist'.format('../demo/' + im_name)) + im = cv2.imread(cur_path + '/../demo/' + im_name, cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION) + target_size = config.SCALES[0][0] + max_size = config.SCALES[0][1] + im, im_scale = resize(im, target_size, max_size, stride=config.network.IMAGE_STRIDE) + im_tensor = transform(im, config.network.PIXEL_MEANS) + im_info = np.array([[im_tensor.shape[2], im_tensor.shape[3], im_scale]], dtype=np.float32) + data.append({'data': im_tensor, 'im_info': im_info}) + + + # get predictor + data_names = ['data', 'im_info'] + label_names = [] + data = [[mx.nd.array(data[i][name]) for name in data_names] for i in xrange(len(data))] + max_data_shape = [[('data', (1, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))]] + provide_data = [[(k, v.shape) for k, v in zip(data_names, data[i])] for i in xrange(len(data))] + provide_label = [None for i in xrange(len(data))] + # arg_params, aux_params = load_param(cur_path + '/../model/' + ('rfcn_dcn_coco' if not args.rfcn_only else 'rfcn_coco'), 0, process=True) + # TODO: change this path + arg_params, aux_params = load_param(r'/home/dj/code/Deformable_FPN_DOTA/output/fpn/DOTA/resnet_v1_101_dota_rotbox_light_head_Rroi_v6_trainval_fpn_end2end/train/fpn_DOTA_oriented', + config.TEST.test_epoch, process=True) + predictor = Predictor(sym, data_names, label_names, + context=[mx.gpu(0)], max_data_shapes=max_data_shape, + provide_data=provide_data, provide_label=provide_label, + 
arg_params=arg_params, aux_params=aux_params) + nms = gpu_nms_wrapper(config.TEST.NMS, 0) + + # warm up + for j in xrange(2): + data_batch = mx.io.DataBatch(data=[data[0]], label=[], pad=0, index=0, + provide_data=[[(k, v.shape) for k, v in zip(data_names, data[0])]], + provide_label=[None]) + scales = [data_batch.data[i][1].asnumpy()[0, 2] for i in xrange(len(data_batch.data))] + scores, boxes, data_dict = im_detect_rotbox_Rroi(predictor, data_batch, data_names, scales, config) + + # test + for idx, im_name in enumerate(image_names): + data_batch = mx.io.DataBatch(data=[data[idx]], label=[], pad=0, index=idx, + provide_data=[[(k, v.shape) for k, v in zip(data_names, data[idx])]], + provide_label=[None]) + scales = [data_batch.data[i][1].asnumpy()[0, 2] for i in xrange(len(data_batch.data))] + + tic() + scores, boxes, data_dict = im_detect_rotbox_Rroi(predictor, data_batch, data_names, scales, config) + # boxes = boxes[0].astype('f') + # scores = scores[0].astype('f') + boxes = boxes[0].astype('float64') + scores = scores[0].astype('float64') + dets_nms = [] + for j in range(1, scores.shape[1]): + cls_scores = scores[:, j, np.newaxis] + # cls_boxes = boxes[:, 4:8] if config.CLASS_AGNOSTIC else boxes[:, j * 4:(j + 1) * 4] + cls_boxes = boxes[:, 8:16] if config.CLASS_AGNOSTIC else boxes[:, j * 8:(j + 1) * 8] + cls_quadrangle_dets = np.hstack((cls_boxes, cls_scores)) + # keep = nms(cls_dets) + keep = py_cpu_nms_poly(cls_quadrangle_dets, 0.3) + cls_quadrangle_dets = cls_quadrangle_dets[keep, :] + cls_quadrangle_dets = cls_quadrangle_dets[cls_quadrangle_dets[:, -1] > 0.7, :] + dets_nms.append(cls_quadrangle_dets) + print 'testing {} {:.4f}s'.format(im_name, toc()) + # visualize + # im = cv2.imread(cur_path + '/../demo/' + im_name) + # im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) + # pdb.set_trace() + im = draw_all_poly_detection(data_dict[0]['data'].asnumpy(), dets_nms, classes[1:], data[idx][1].asnumpy()[0][2], config, + threshold=0.2) + cv2.imwrite(cur_path + '/../demo/' + 'results' + im_name, im) + # show_boxes(im, dets_nms, classes, 1) + + print 'done' + +if __name__ == '__main__': + main() diff --git a/fpn/function/__init__.py b/fpn/function/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/fpn/function/test_rcnn.py b/fpn/function/test_rcnn.py new file mode 100644 index 0000000..e9815a1 --- /dev/null +++ b/fpn/function/test_rcnn.py @@ -0,0 +1,83 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import argparse +import pprint +import logging +import time +import os +import mxnet as mx + +from symbols import * +from dataset import * +from core.loader import TestLoader +from core.tester import Predictor, pred_eval +from utils.load_model import load_param +import numpy as np +import matplotlib.pyplot as plt + + +def test_rcnn(cfg, dataset, image_set, root_path, dataset_path, + ctx, prefix, epoch, + vis, ignore_cache, shuffle, has_rpn, proposal, thresh, logger=None, output_path=None): + if not logger: + assert False, 'require a logger' + + # print cfg + pprint.pprint(cfg) + logger.info('testing cfg:{}\n'.format(pprint.pformat(cfg))) + + # load symbol and 
testing data + if has_rpn: + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol(cfg, is_train=False) + # imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + # imdb = eval(dataset)(image_set, root_path, '/data/ARD/test', result_path=output_path) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + # imdb = eval(dataset)(image_set, root_path, '/data/ARD/dota/dota1024/little/', result_path=output_path) + roidb = imdb.gt_roidb() + else: + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol_rcnn(cfg, is_train=False) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + gt_roidb = imdb.gt_roidb() + roidb = eval('imdb.' + proposal + '_roidb')(gt_roidb) + + # get test data iter + test_data = TestLoader(roidb, cfg, batch_size=len(ctx), shuffle=shuffle, has_rpn=has_rpn) + print 'test_data size is :',test_data.size + # load model + arg_params, aux_params = load_param(prefix, epoch, process=True) + + # infer shape + data_shape_dict = dict(test_data.provide_data_single) + sym_instance.infer_shape(data_shape_dict) + + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict, is_train=False) + + # decide maximum shape + data_names = [k[0] for k in test_data.provide_data_single] + label_names = None + max_data_shape = [[('data', (1, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))]] + if not has_rpn: + max_data_shape.append(('rois', (cfg.TEST.PROPOSAL_POST_NMS_TOP_N + 30, 5))) + + # create predictor + predictor = Predictor(sym, data_names, label_names, + context=ctx, max_data_shapes=max_data_shape, + provide_data=test_data.provide_data, provide_label=test_data.provide_label, + arg_params=arg_params, aux_params=aux_params) + + # start detection + pred_eval(predictor, test_data, imdb, cfg, vis=vis, ignore_cache=ignore_cache, thresh=thresh, logger=logger) + diff --git a/fpn/function/test_rcnn_poly.py b/fpn/function/test_rcnn_poly.py new file mode 100644 index 0000000..9ec20dd --- /dev/null +++ b/fpn/function/test_rcnn_poly.py @@ -0,0 +1,95 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import argparse +import pprint +import logging +import time +import os +import mxnet as mx + +from symbols import * +from dataset import * +from core.loader import TestLoader +from core.tester import Predictor, pred_eval, pred_eval_poly_multiscale, \ + pred_eval_dota_poly, pred_eval_dota_rotbox, pred_eval_dota_wyhs, pred_eval_dota_rotbox_Rroi +from utils.load_model import load_param +import numpy as np +import matplotlib.pyplot as plt +import pdb +## TODO: fix bug for multi-gpu test + +def test_rcnn_poly(cfg, dataset, image_set, root_path, dataset_path, + ctx, prefix, epoch, + vis, ignore_cache, shuffle, has_rpn, proposal, thresh, logger=None, output_path=None): + if not logger: + assert False, 'require a logger' + + # print cfg + pprint.pprint(cfg) + logger.info('testing cfg:{}\n'.format(pprint.pformat(cfg))) + + # load symbol and testing data + if has_rpn: + sym_instance = eval(cfg.symbol + '.' 
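                                    # cfg.symbol names both the module and the class (they share the same name under symbols/), so this resolves '<symbol>.<symbol>' and the trailing () instantiates the network builder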
+ cfg.symbol)() + sym = sym_instance.get_symbol(cfg, is_train=False) + # imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + # imdb = eval(dataset)(image_set, root_path, '/data/ARD/test', result_path=output_path) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + # imdb = eval(dataset)(image_set, root_path, '/data/ARD/dota/dota1024/little/', result_path=output_path) + roidb = imdb.gt_roidb() + else: + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol_rcnn(cfg, is_train=False) + imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path) + gt_roidb = imdb.gt_roidb() + roidb = eval('imdb.' + proposal + '_roidb')(gt_roidb) + + # get test data iter + test_data = TestLoader(roidb, cfg, batch_size=len(ctx), shuffle=shuffle, has_rpn=has_rpn) + print 'test_data size is :',test_data.size + # load model + arg_params, aux_params = load_param(prefix, epoch, process=True) + + # infer shape + data_shape_dict = dict(test_data.provide_data_single) + sym_instance.infer_shape(data_shape_dict) + + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict, is_train=False) + + # decide maximum shape + data_names = [k[0] for k in test_data.provide_data_single] + label_names = None + max_data_shape = [[('data', (1, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))]] + if not has_rpn: + max_data_shape.append(('rois', (cfg.TEST.PROPOSAL_POST_NMS_TOP_N + 30, 5))) + + # create predictor + predictor = Predictor(sym, data_names, label_names, + context=ctx, max_data_shapes=max_data_shape, + provide_data=test_data.provide_data, provide_label=test_data.provide_label, + arg_params=arg_params, aux_params=aux_params) + + # set ignore_cache as true + ignore_cache = True + # start detection + if cfg.network.RRoI_REGRESSION: + pred_eval_dota_rotbox_Rroi(predictor, test_data, imdb, cfg, vis=False, draw=True, ignore_cache=ignore_cache, thresh=thresh, logger=logger) + else: + if cfg.network.BOXENCODING == 'rotbox': + pred_eval_dota_rotbox(predictor, test_data, imdb, cfg, vis=False, draw=True, ignore_cache=ignore_cache, thresh=thresh, logger=logger) + elif cfg.network.BOXENCODING == 'xyhs': + pred_eval_dota_wyhs(predictor, test_data, imdb, cfg, vis=False, draw=True, ignore_cache=ignore_cache, thresh=thresh, logger=logger) + else: + pred_eval_dota_poly(predictor, test_data, imdb, cfg, vis=False, draw=True, ignore_cache=ignore_cache, thresh=thresh, logger=logger) + diff --git a/fpn/function/train_rcnn.py b/fpn/function/train_rcnn.py new file mode 100644 index 0000000..6444901 --- /dev/null +++ b/fpn/function/train_rcnn.py @@ -0,0 +1,139 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import logging +import pprint + +import mxnet as mx +import numpy as np + +from bbox.bbox_regression import add_bbox_regression_targets +from core import metric, callback +from core.loader import ROIIter +from core.module import MutableModule +from utils.PrefetchingIter import PrefetchingIter +from utils.load_data import load_proposal_roidb, merge_roidb, filter_roidb 
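# Illustrative sketch (hypothetical helper, not from this codebase): train_rcnn()
# below converts the epoch-based `lr_step` string into iteration boundaries and
# then relies on WarmupMultiFactorScheduler to decay the rate by `lr_factor` at
# each boundary after a warmup phase.  Every number here is hypothetical, and a
# constant-warmup variant of the scheduler is assumed.
def _sketch_lr_at_iter(cur_iter, base_lr=0.005, lr_iters=(60000, 80000),
                       lr_factor=0.1, warmup_lr=0.0005, warmup_step=500):
    """Learning rate such a stepped schedule would yield at `cur_iter`."""
    if cur_iter < warmup_step:
        return warmup_lr          # flat warmup rate for the first iterations
    lr = base_lr
    for boundary in lr_iters:     # one multiplicative decay per passed boundary
        if cur_iter >= boundary:
            lr *= lr_factor
    return lr
# e.g. _sketch_lr_at_iter(70000) == 0.0005 once the first boundary is passed.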
+from utils.load_model import load_param +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_rcnn(cfg, dataset, image_set, root_path, dataset_path, + frequent, kvstore, flip, shuffle, resume, + ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, + train_shared, lr, lr_step, proposal, logger=None, output_path=None): + mx.random.seed(3) + np.random.seed(3) + # set up logger + if not logger: + logging.basicConfig() + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + # load symbol + sym_instance = eval(cfg.symbol + '.' + cfg.symbol)() + sym = sym_instance.get_symbol_rcnn(cfg, is_train=True) + + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = cfg.TRAIN.BATCH_IMAGES * batch_size + + # print cfg + pprint.pprint(cfg) + logger.info('training rcnn cfg:{}\n'.format(pprint.pformat(cfg))) + + # load dataset and prepare imdb for training + image_sets = [iset for iset in image_set.split('+')] + roidbs = [load_proposal_roidb(dataset, image_set, root_path, dataset_path, + proposal=proposal, append_gt=True, flip=flip, result_path=output_path) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, cfg) + means, stds = add_bbox_regression_targets(roidb, cfg) + + # load training data + train_data = ROIIter(roidb, cfg, batch_size=input_batch_size, shuffle=shuffle, + ctx=ctx, aspect_grouping=cfg.TRAIN.ASPECT_GROUPING) + + # infer max shape + max_data_shape = [('data', (cfg.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in cfg.SCALES]), max([v[1] for v in cfg.SCALES])))] + + # infer shape + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if resume: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight_rcnn(cfg, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # prepare training + # create solver + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + if train_shared: + fixed_param_prefix = cfg.network.FIXED_PARAMS_SHARED + else: + fixed_param_prefix = cfg.network.FIXED_PARAMS + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, + max_data_shapes=[max_data_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if cfg.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + + # decide training params + # metric + eval_metric = metric.RCNNAccMetric(cfg) + cls_metric = metric.RCNNLogLossMetric(cfg) + bbox_metric = metric.RCNNL1LossMetric(cfg) + eval_metrics = mx.metric.CompositeEvalMetric() + for child_metric in [eval_metric, cls_metric, bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=frequent) + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), + callback.do_checkpoint(prefix, means, stds)] + # decide learning rate + base_lr = lr + lr_factor = cfg.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) 
- len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, cfg.TRAIN.warmup, cfg.TRAIN.warmup_lr, cfg.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': cfg.TRAIN.momentum, + 'wd': cfg.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'rescale_grad': 1.0, + 'clip_gradient': None} + + # train + + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + diff --git a/fpn/operator_cxx/psroi_align_ave-inl.h b/fpn/operator_cxx/psroi_align_ave-inl.h new file mode 100644 index 0000000..951baab --- /dev/null +++ b/fpn/operator_cxx/psroi_align_ave-inl.h @@ -0,0 +1,226 @@ +/*! + * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling-inl.h + * \brief psroi pooling operator and symbol + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#ifndef MXNET_OPERATOR_CONTRIB_PSROI_ALIGN_AVE_POOLING_INL_H_ +#define MXNET_OPERATOR_CONTRIB_PSROI_ALIGN_AVE_POOLING_INL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include "../mshadow_op.h" +#include "../operator_common.h" + + +namespace mxnet { +namespace op { + +// Declare enumeration of input order to make code more intuitive. +// These enums are only visible within this header +namespace psroialignavepool { +enum PSROIALIGNAVEPoolingOpInputs {kData, kBox}; +enum PSROIALIGNAVEPoolingOpOutputs {kOut}; +} // psroialignavepool + +struct PSROIALIGNAVEPoolingParam : public dmlc::Parameter { + // TShape pooled_size; + float spatial_scale; + int output_dim; + int sampling_ratio; + int pooled_size; + int group_size; + DMLC_DECLARE_PARAMETER(PSROIALIGNAVEPoolingParam) { + DMLC_DECLARE_FIELD(spatial_scale).set_range(0.0, 1.0) + .describe("Ratio of input feature map height (or w) to raw image height (or w). 
" + "Equals the reciprocal of total stride in convolutional layers"); + DMLC_DECLARE_FIELD(sampling_ratio).set_default(2) + .describe("sampling ratio"); + DMLC_DECLARE_FIELD(output_dim).describe("fix output dim"); + DMLC_DECLARE_FIELD(pooled_size).describe("fix pooled size"); + DMLC_DECLARE_FIELD(group_size).set_default(0).describe("fix group size"); + } +}; + +template +class PSROIALIGNAVEPoolingOp : public Operator { + public: + explicit PSROIALIGNAVEPoolingOp(PSROIALIGNAVEPoolingParam p) { + this->param_ = p; + } + + virtual void Forward(const OpContext &ctx, + const std::vector &in_data, + const std::vector &req, + const std::vector &out_data, + const std::vector &aux_args) { + using namespace mshadow; + CHECK_EQ(in_data.size(), 2); + CHECK_EQ(out_data.size(), 1); + CHECK_EQ(out_data[psroialignavepool::kOut].shape_[0], in_data[psroialignavepool::kBox].shape_[0]); + Stream *s = ctx.get_stream(); + + Tensor data = in_data[psroialignavepool::kData].get(s); + Tensor bbox = in_data[psroialignavepool::kBox].get(s); + Tensor out = out_data[psroialignavepool::kOut].get(s); + CHECK_EQ(data.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(out.CheckContiguous(), true); + out = -FLT_MAX; + PSROIALIGNAVEPoolForward(out, data, bbox, param_.spatial_scale, param_.sampling_ratio, param_.output_dim, param_.group_size); + } + + virtual void Backward(const OpContext &ctx, + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data, + const std::vector &req, + const std::vector &in_grad, + const std::vector &aux_args) { + using namespace mshadow; + CHECK_EQ(in_data.size(), 2); + CHECK_EQ(out_data.size(), 1); + CHECK_EQ(out_grad[psroialignavepool::kOut].shape_[0], in_data[psroialignavepool::kBox].shape_[0]); + CHECK_NE(req[psroialignavepool::kData], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + CHECK_NE(req[psroialignavepool::kBox], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + Stream *s = ctx.get_stream(); + + Tensor grad_out = out_grad[psroialignavepool::kOut].get(s); + Tensor bbox = in_data[psroialignavepool::kBox].get(s); + Tensor grad_in = in_grad[psroialignavepool::kData].get(s); + Tensor grad_roi = in_grad[psroialignavepool::kBox].get(s); + + CHECK_EQ(grad_out.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(grad_in.CheckContiguous(), true); + + if (kAddTo == req[psroialignavepool::kData] || kWriteTo == req[psroialignavepool::kData]) { + if (kWriteTo == req[psroialignavepool::kData]) { + grad_in = 0.0f; + } + PSROIALIGNAVEPoolBackwardAcc(grad_in, grad_out, bbox, param_.spatial_scale, param_.sampling_ratio, + param_.output_dim, param_.group_size); + } + if (kWriteTo == req[psroialignavepool::kBox]) { + grad_roi = 0.0f; + } + } + + private: + PSROIALIGNAVEPoolingParam param_; +}; // class PSROIALIGNAVEPoolingOp + +// Decalre Factory function, used for dispatch specialization +template +Operator* CreateOp(PSROIALIGNAVEPoolingParam param, int dtype); + +#if DMLC_USE_CXX11 +class PSROIALIGNAVEPoolingProp : public OperatorProperty { + public: + std::vector ListArguments() const override { + return {"data", "rois"}; + } + + std::vector ListOutputs() const override { + return {"output"}; + } + + int NumOutputs() const override { + return 1; + } + + int NumVisibleOutputs() const override { + return 1; + } + + void Init(const std::vector >& kwargs) override { + param_.Init(kwargs); + if (param_.group_size == 0) { + param_.group_size = param_.pooled_size; + } + 
} + + std::map GetParams() const override { + return param_.__DICT__(); + } + + bool InferShape(std::vector *in_shape, + std::vector *out_shape, + std::vector *aux_shape) const override { + using namespace mshadow; + CHECK_EQ(in_shape->size(), 2) << "Input:[data, rois]"; + + // data: [batch_size, c, h, w] + TShape dshape = in_shape->at(psroialignavepool::kData); + CHECK_EQ(dshape.ndim(), 4) << "data should be a 4D tensor"; + + // bbox: [num_rois, 5] + TShape bshape = in_shape->at(psroialignavepool::kBox); + CHECK_EQ(bshape.ndim(), 2) << "bbox should be a 2D tensor of shape [batch, 5]"; + CHECK_EQ(bshape[1], 5) << "bbox should be a 2D tensor of shape [batch, 5]"; + + // out: [num_rois, c, pooled_h, pooled_w] + out_shape->clear(); + out_shape->push_back( + Shape4(bshape[0], param_.output_dim, param_.pooled_size, param_.pooled_size)); + return true; + } + + bool InferType(std::vector *in_type, + std::vector *out_type, + std::vector *aux_type) const override { + CHECK_EQ(in_type->size(), 2); + int dtype = (*in_type)[0]; + CHECK_EQ(dtype, (*in_type)[1]); + CHECK_NE(dtype, -1) << "Input must have specified type"; + + out_type->clear(); + out_type->push_back(dtype); + return true; + } + + OperatorProperty* Copy() const override { + PSROIALIGNAVEPoolingProp* psroi_pooling_sym = new PSROIALIGNAVEPoolingProp(); + psroi_pooling_sym->param_ = this->param_; + return psroi_pooling_sym; + } + + std::string TypeString() const override { + return "_contrib_PSROIALIGNAVEPooling"; + } + + // decalre dependency and inplace optimization options + std::vector DeclareBackwardDependency( + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data) const override { + return {out_grad[psroialignavepool::kOut], in_data[psroialignavepool::kBox]}; + } + + + Operator* CreateOperator(Context ctx) const override { + LOG(FATAL) << "Not Implemented."; + return NULL; + } + + Operator* CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const override; + + + private: + PSROIALIGNAVEPoolingParam param_; +}; // class PSROIALIGNAVEPoolingProp +#endif +} // namespace op +} // namespace mxnet +#endif // MXNET_OPERATOR_CONTRIB_PSROI_POOLING_INL_H_ diff --git a/fpn/operator_cxx/psroi_align_ave.cc b/fpn/operator_cxx/psroi_align_ave.cc new file mode 100644 index 0000000..94ab2c6 --- /dev/null +++ b/fpn/operator_cxx/psroi_align_ave.cc @@ -0,0 +1,83 @@ +/*! 
+ * Copyright (c) 2017 by Contributors
+ * Copyright (c) 2017 Microsoft
+ * Licensed under The Apache-2.0 License [see LICENSE for details]
+ * \file psroi_align_ave.cc
+ * \brief psroi align average pooling operator
+ * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai
+ * modified by Jian Ding
+*/
+#include "./psroi_align_ave-inl.h"
+#include <mshadow/base.h>
+#include <mshadow/tensor.h>
+#include <mshadow/packet-inl.h>
+#include <mshadow/dot_engine-inl.h>
+#include <cassert>
+
+using std::max;
+using std::min;
+using std::floor;
+using std::ceil;
+
+namespace mshadow {
+template<typename DType>
+inline void PSROIALIGNAVEPoolForward(const Tensor<cpu, 4, DType> &out,
+                                     const Tensor<cpu, 4, DType> &data,
+                                     const Tensor<cpu, 2, DType> &bbox,
+                                     const float spatial_scale_,
+                                     const int sampling_ratio,
+                                     const int output_dim_,
+                                     const int group_size_) {
+  // NOT_IMPLEMENTED;
+  return;
+}
+
+template<typename DType>
+inline void PSROIALIGNAVEPoolBackwardAcc(const Tensor<cpu, 4, DType> &in_grad,
+                                         const Tensor<cpu, 4, DType> &out_grad,
+                                         const Tensor<cpu, 2, DType> &bbox,
+                                         const float spatial_scale_,
+                                         const int sampling_ratio,
+                                         const int output_dim_,
+                                         const int group_size_) {
+  // NOT_IMPLEMENTED;
+  return;
+}
+}  // namespace mshadow
+
+namespace mxnet {
+namespace op {
+
+template<>
+Operator *CreateOp<cpu>(PSROIALIGNAVEPoolingParam param, int dtype) {
+  Operator* op = NULL;
+  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
+    op = new PSROIALIGNAVEPoolingOp<cpu, DType>(param);
+  });
+  return op;
+}
+
+Operator *PSROIALIGNAVEPoolingProp::CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
+                                                     std::vector<int> *in_type) const {
+  std::vector<TShape> out_shape, aux_shape;
+  std::vector<int> out_type, aux_type;
+  CHECK(InferType(in_type, &out_type, &aux_type));
+  CHECK(InferShape(in_shape, &out_shape, &aux_shape));
+  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
+}
+
+DMLC_REGISTER_PARAMETER(PSROIALIGNAVEPoolingParam);
+
+MXNET_REGISTER_OP_PROPERTY(_contrib_PSROIALIGNAVEPooling, PSROIALIGNAVEPoolingProp)
+.describe("Performs position-sensitive region-of-interest align (average) pooling on inputs. "
+"Resizes bounding box coordinates by spatial_scale and crops input feature maps accordingly. "
+"Each output bin is the average of bilinearly sampled values from the cropped region, giving a "
+"fixed size output indicated by pooled_size. batch_size will change to the number of region "
+"bounding boxes after PSROIALIGNAVEPooling")
+.add_argument("data", "Symbol", "Input data to the pooling operator, a 4D Feature maps")
+.add_argument("rois", "Symbol", "Bounding box coordinates, a 2D array of "
+"[[batch_index, x1, y1, x2, y2]]. (x1, y1) and (x2, y2) are top left and bottom right corners "
+"of designated region of interest. batch_index indicates the index of corresponding image "
+"in the input data")
+.add_arguments(PSROIALIGNAVEPoolingParam::__FIELDS__());
+}  // namespace op
+}  // namespace mxnet
diff --git a/fpn/operator_cxx/psroi_align_ave.cu b/fpn/operator_cxx/psroi_align_ave.cu
new file mode 100644
index 0000000..763d125
--- /dev/null
+++ b/fpn/operator_cxx/psroi_align_ave.cu
@@ -0,0 +1,469 @@
+/*!
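+ * Sketch of what the forward kernel in this file computes (mirroring the code,
+ * Python-like pseudocode for illustration): each output bin (ph, pw) of every
+ * RoI averages roi_bin_grid_h * roi_bin_grid_w bilinearly interpolated samples
+ * taken from its own position-sensitive channel c:
+ *
+ *     for iy in range(roi_bin_grid_h):
+ *         for ix in range(roi_bin_grid_w):
+ *             y = roi_start_h + ph * bin_size_h + (iy + 0.5) * bin_size_h / roi_bin_grid_h
+ *             x = roi_start_w + pw * bin_size_w + (ix + 0.5) * bin_size_w / roi_bin_grid_w
+ *             acc += bilinear_interpolate(feature[c], y, x)
+ *     out[n, ctop, ph, pw] = acc / (roi_bin_grid_h * roi_bin_grid_w)
+ *
+ * RoI coordinates are scaled by spatial_scale without rounding, unlike the
+ * original PSROIPooling.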
+ * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling.cu + * \brief psroi pooling operator + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#include "./psroi_align_ave-inl.h" +#include +#include +#include +#include +#include "../../common/cuda_utils.h" +#include "../mxnet_op.h" + +#define PSROIALIGNAVEPOOLING_CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \ + } while (0) +#define CUDA_KERNEL_LOOP(i, n) \ +for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ + i < (n); \ + i += blockDim.x * gridDim.x) + +namespace mshadow { +namespace cuda { + + template + __device__ T bilinear_interpolate( + const T* bottom_data, + const int height, + const int width, + T y, + T x, + const int index /* index for debug only*/) { + // deal with cases that inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + return 0; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + int y_low = static_cast(y); + int x_low = static_cast(x); + int y_high; + int x_high; + + if (y_low >= height - 1) { + y_high = y_low = height - 1; + y = (T)y_low; + } else { + y_high = y_low + 1; + } + + if (x_low >= width - 1) { + x_high = x_low = width - 1; + x = (T)x_low; + } else { + x_high = x_low + 1; + } + + T ly = y - y_low; + T lx = x - x_low; + T hy = 1. - ly, hx = 1. - lx; + // do bilinear interpolation + T v1 = bottom_data[y_low * width + x_low]; + T v2 = bottom_data[y_low * width + x_high]; + T v3 = bottom_data[y_high * width + x_low]; + T v4 = bottom_data[y_high * width + x_high]; + T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; + + T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); + + return val; + } + + +template +__global__ void PSROIALIGNAVEPoolForwardKernel( + const int count, + const DType* bottom_data, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const int sampling_ratio, + const DType* bottom_rois, + const int output_dim, + const int group_size, + DType* top_data) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 5; + int roi_batch_ind = offset_bottom_rois[0]; + // DType roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale; + // DType roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale; + // DType roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale; + // DType roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) 
* spatial_scale; + DType roi_start_w = (offset_bottom_rois[1]) * spatial_scale; + DType roi_start_h = (offset_bottom_rois[2]) * spatial_scale; + DType roi_end_w = (offset_bottom_rois[3]) * spatial_scale; + DType roi_end_h = (offset_bottom_rois[4]) * spatial_scale; + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_end_w - roi_start_w, (DType)1.); // avoid 0 + DType roi_height = max(roi_end_h - roi_start_h, (DType)1.); + + // Compute w and h at bottom + DType bin_size_h = static_cast(roi_height) / static_cast(pooled_height); + DType bin_size_w = static_cast(roi_width) / static_cast(pooled_width); + + // int hstart = floor(static_cast(ph) * bin_size_h + // + roi_start_h); + // int wstart = floor(static_cast(pw)* bin_size_w + // + roi_start_w); + // int hend = ceil(static_cast(ph + 1) * bin_size_h + // + roi_start_h); + // int wend = ceil(static_cast(pw + 1) * bin_size_w + // + roi_start_w); + // Add roi offsets and clip to input boundaries + // hstart = min(max(hstart, 0), height); + // hend = min(max(hend, 0), height); + // wstart = min(max(wstart, 0), width); + // wend = min(max(wend, 0), width); + // bool is_empty = (hend <= hstart) || (wend <= wstart); + + + + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + int c = (ctop*group_size + gh)*group_size + gw; + + const DType* offset_bottom_data = bottom_data + (roi_batch_ind * channels + c) * height * width; + + // We use roi_bin_grid to sample the grid and mimic integral + int roi_bin_grid_h = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_height / pooled_height); // e.g., = 2 + int roi_bin_grid_w = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); + + const DType sample_count = roi_bin_grid_h * roi_bin_grid_w; // e.g., iy = 0, 1 + DType output_val = 0.; + for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 + const DType y = roi_start_h + ph * bin_size_h + + static_cast(iy + .5f) * bin_size_h / + static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 + for (int ix = 0; ix < roi_bin_grid_w; ix++) { + const DType x = roi_start_w + pw * bin_size_w + + static_cast(ix + .5f) * bin_size_w / + static_cast(roi_bin_grid_w); + + DType val = bilinear_interpolate( + offset_bottom_data, height, width, y, x, index); + output_val += val; + } + } + output_val /= sample_count; + + top_data[index] = output_val; + // DType out_sum = 0; + // for (int h = hstart; h < hend; ++h) { + // for (int w = wstart; w < wend; ++w) { + // int bottom_index = h*width + w; + // out_sum += offset_bottom_data[bottom_index]; + // } + // } + + // DType bin_area = (hend - hstart)*(wend - wstart); + // top_data[index] = is_empty? (DType)0. 
: out_sum/bin_area; + } +} + +template +inline void PSROIALIGNAVEPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + const DType *bottom_data = data.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *top_data = out.dptr_; + const int count = out.shape_.Size(); + const int channels = data.size(1); + const int height = data.size(2); + const int width = data.size(3); + const int pooled_height = out.size(2); + const int pooled_width = out.size(3); + cudaStream_t stream = Stream::GetStream(out.stream_); + PSROIALIGNAVEPoolForwardKernel << > >( + count, bottom_data, spatial_scale, channels, height, width, + pooled_height, pooled_width, sampling_ratio, bottom_rois, output_dim_, group_size_, top_data); + PSROIALIGNAVEPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + +template +__device__ void bilinear_interpolate_gradient( + const int height, + const int width, + T y, + T x, + T* w1, + T* w2, + T* w3, + T* w4, + int* x_low, + int* x_high, + int* y_low, + int* y_high, + const int /*index*/ /* index for debug only*/) { + // deal with cases that inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + *w1 = *w2 = *w3 = *w4 = 0.; + *x_low = *x_high = *y_low = *y_high = -1; + return; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + *y_low = static_cast(y); + *x_low = static_cast(x); + + if (*y_low >= height - 1) { + *y_high = *y_low = height - 1; + y = (T)*y_low; + } else { + *y_high = *y_low + 1; + } + + if (*x_low >= width - 1) { + *x_high = *x_low = width - 1; + x = (T)*x_low; + } else { + *x_high = *x_low + 1; + } + + T ly = y - *y_low; + T lx = x - *x_low; + T hy = 1. - ly, hx = 1. - lx; + + *w1 = hy * hx, *w2 = hy * lx, *w3 = ly * hx, *w4 = ly * lx; + + return; +} + +template +__global__ void PSROIALIGNAVEPoolBackwardAccKernel( + const int count, + const DType* top_diff, + const int num_rois, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const int sampling_ratio, + const int group_size, + const int output_dim, + DType* bottom_diff, + const DType* bottom_rois) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 5; + int roi_batch_ind = offset_bottom_rois[0]; + // DType roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale; + // DType roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale; + // DType roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale; + // DType roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) 
* spatial_scale; + // Do not using rounding; this implementation detail is critical + DType roi_start_w = offset_bottom_rois[1] * spatial_scale; + DType roi_start_h = offset_bottom_rois[2] * spatial_scale; + DType roi_end_w = offset_bottom_rois[3] * spatial_scale; + DType roi_end_h = offset_bottom_rois[4] * spatial_scale; + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_end_w - roi_start_w, (DType)1.); // avoid 0 + DType roi_height = max(roi_end_h - roi_start_h, (DType)1.); + + // Compute w and h at bottom + DType bin_size_h = static_cast(roi_height) / static_cast(pooled_height); + DType bin_size_w = static_cast(roi_width) / static_cast(pooled_width); + + // int hstart = floor(static_cast(ph)* bin_size_h + // + roi_start_h); + // int wstart = floor(static_cast(pw)* bin_size_w + // + roi_start_w); + // int hend = ceil(static_cast(ph + 1) * bin_size_h + // + roi_start_h); + // int wend = ceil(static_cast(pw + 1) * bin_size_w + // + roi_start_w); + // // Add roi offsets and clip to input boundaries + // hstart = min(max(hstart, 0), height); + // hend = min(max(hend, 0), height); + // wstart = min(max(wstart, 0), width); + // wend = min(max(wend, 0), width); + // bool is_empty = (hend <= hstart) || (wend <= wstart); + + // Compute c at bottom + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + int c = (ctop*group_size + gh)*group_size + gw; + DType* offset_bottom_diff = bottom_diff + (roi_batch_ind * channels + c) * height * width; + // DType bin_area = (hend - hstart)*(wend - wstart); + // DType diff_val = is_empty ? (DType)0. : top_diff[index] / bin_area; + // for (int h = hstart; h < hend; ++h) { + // for (int w = wstart; w < wend; ++w) { + // int bottom_index = h*width + w; + // atomicAdd(offset_bottom_diff + bottom_index, diff_val); + // } + // } + // int top_offset = (n * channels + ctop) * pooled_height * pooled_width; + // const DType* offset_top_diff = top_diff + top_offset; + // const DType top_diff_this_bin = offset_top_diff[ph * pooled_width + pw]; + + const DType top_diff_this_bin = top_diff[index]; + + // We use roi_bin_grid to sample the grid and mimic integral + int roi_bin_grid_h = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_height / pooled_height); // e.g., = 2 + int roi_bin_grid_w = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); + // We do average (integral) pooling inside a bin + const DType sample_count = roi_bin_grid_h * roi_bin_grid_w; // e.g. 
= 4 + + for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 + const DType y = roi_start_h + ph * bin_size_h + + static_cast(iy + .5f) * bin_size_h / + static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 + for (int ix = 0; ix < roi_bin_grid_w; ix++) { + const DType x = roi_start_w + pw * bin_size_w + + static_cast(ix + .5f) * bin_size_w / + static_cast(roi_bin_grid_w); + + DType w1, w2, w3, w4; + int x_low, x_high, y_low, y_high; + + bilinear_interpolate_gradient( + height, + width, + y, + x, + &w1, + &w2, + &w3, + &w4, + &x_low, + &x_high, + &y_low, + &y_high, + index); // + + DType g1 = top_diff_this_bin * w1 / sample_count; + DType g2 = top_diff_this_bin * w2 / sample_count; + DType g3 = top_diff_this_bin * w3 / sample_count; + DType g4 = top_diff_this_bin * w4 / sample_count; + + if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { + atomicAdd( + offset_bottom_diff + y_low * width + x_low, static_cast(g1)); + atomicAdd( + offset_bottom_diff + y_low * width + x_high, static_cast(g2)); + atomicAdd( + offset_bottom_diff + y_high * width + x_low, static_cast(g3)); + atomicAdd( + offset_bottom_diff + y_high * width + x_high, static_cast(g4)); + } // if + } // ix + } // iy + } +} + + +template +inline void PSROIALIGNAVEPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + // LOG(INFO) << "PSROIALIGNAVEPoolBackward"; + const DType *top_diff = out_grad.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *bottom_diff = in_grad.dptr_; + const int count = out_grad.shape_.Size(); + const int num_rois = bbox.size(0); + const int channels = in_grad.size(1); + const int height = in_grad.size(2); + const int width = in_grad.size(3); + const int pooled_height = out_grad.size(2); + const int pooled_width = out_grad.size(3); + cudaStream_t stream = Stream::GetStream(in_grad.stream_); + PSROIALIGNAVEPoolBackwardAccKernel << > >( + count, top_diff, num_rois, spatial_scale, channels, height, width, + pooled_height, pooled_width, sampling_ratio, group_size_, output_dim_, bottom_diff, bottom_rois); + PSROIALIGNAVEPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + +} // namespace cuda + +template +inline void PSROIALIGNAVEPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + cuda::PSROIALIGNAVEPoolForward(out, data, bbox, spatial_scale, sampling_ratio,output_dim_, group_size_); +} + +template +inline void PSROIALIGNAVEPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + cuda::PSROIALIGNAVEPoolBackwardAcc(in_grad, out_grad, bbox, spatial_scale, sampling_ratio, output_dim_, group_size_); +} + +} // namespace mshadow + + +namespace mxnet { +namespace op { + +template<> +Operator* CreateOp(PSROIALIGNAVEPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new PSROIALIGNAVEPoolingOp(param); + }); + return op; +} + +} // namespace op +} // namespace mxnet diff --git a/fpn/operator_cxx/psroi_rotated_align_ave-inl.h b/fpn/operator_cxx/psroi_rotated_align_ave-inl.h new file mode 100644 index 0000000..d867b17 --- /dev/null +++ b/fpn/operator_cxx/psroi_rotated_align_ave-inl.h @@ -0,0 +1,225 @@ +/*! 
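+ * Rotated counterpart of psroi_align_ave-inl.h: every RoI is described by six
+ * numbers, [batch_index, x_ctr, y_ctr, w, h, theta] (center, size and rotation
+ * angle), which is why InferShape below checks bshape[1] == 6 instead of 5.
+ * The CUDA kernels use theta directly with cos()/sin(), i.e. it is expected in
+ * radians. One illustrative RoI row (made-up numbers):
+ *
+ *     roi = [0, 256.0, 180.5, 96.0, 48.0, 0.35]
+ *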
+ * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling-inl.h + * \brief psroi pooling operator and symbol + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#ifndef MXNET_OPERATOR_CONTRIB_PSROI_ALIGN_AVE_ROTATED_POOLING_INL_H_ +#define MXNET_OPERATOR_CONTRIB_PSROI_ALIGN_AVE_ROTATED_POOLING_INL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include "../mshadow_op.h" +#include "../operator_common.h" + + +namespace mxnet { +namespace op { + +// Declare enumeration of input order to make code more intuitive. +// These enums are only visible within this header +namespace psroialignaverotatedpool { +enum PSROIALIGNAVERotatedPoolingOpInputs {kData, kBox}; +enum PSROIALIGNAVERotatedPoolingOpOutputs {kOut}; +} // psroialignavepool + +struct PSROIALIGNAVERotatedPoolingParam : public dmlc::Parameter { + // TShape pooled_size; + float spatial_scale; + int output_dim; + int sampling_ratio; + int pooled_size; + int group_size; + DMLC_DECLARE_PARAMETER(PSROIALIGNAVERotatedPoolingParam) { + DMLC_DECLARE_FIELD(spatial_scale).set_range(0.0, 1.0) + .describe("Ratio of input feature map height (or w) to raw image height (or w). " + "Equals the reciprocal of total stride in convolutional layers"); + DMLC_DECLARE_FIELD(sampling_ratio).set_default(2) + .describe("sampling ratio"); + DMLC_DECLARE_FIELD(output_dim).describe("fix output dim"); + DMLC_DECLARE_FIELD(pooled_size).describe("fix pooled size"); + DMLC_DECLARE_FIELD(group_size).set_default(0).describe("fix group size"); + } +}; + +template +class PSROIALIGNAVERotatedPoolingOp : public Operator { + public: + explicit PSROIALIGNAVERotatedPoolingOp(PSROIALIGNAVERotatedPoolingParam p) { + this->param_ = p; + } + + virtual void Forward(const OpContext &ctx, + const std::vector &in_data, + const std::vector &req, + const std::vector &out_data, + const std::vector &aux_args) { + using namespace mshadow; + CHECK_EQ(in_data.size(), 2); + CHECK_EQ(out_data.size(), 1); + CHECK_EQ(out_data[psroialignaverotatedpool::kOut].shape_[0], in_data[psroialignaverotatedpool::kBox].shape_[0]); + Stream *s = ctx.get_stream(); + Tensor data = in_data[psroialignaverotatedpool::kData].get(s); + Tensor bbox = in_data[psroialignaverotatedpool::kBox].get(s); + Tensor out = out_data[psroialignaverotatedpool::kOut].get(s); + CHECK_EQ(data.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(out.CheckContiguous(), true); + out = -FLT_MAX; + PSROIALIGNAVERotatedPoolForward(out, data, bbox, param_.spatial_scale, param_.sampling_ratio, param_.output_dim, param_.group_size); + } + + virtual void Backward(const OpContext &ctx, + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data, + const std::vector &req, + const std::vector &in_grad, + const std::vector &aux_args) { + using namespace mshadow; + CHECK_EQ(in_data.size(), 2); + CHECK_EQ(out_data.size(), 1); + CHECK_EQ(out_grad[psroialignaverotatedpool::kOut].shape_[0], in_data[psroialignaverotatedpool::kBox].shape_[0]); + CHECK_NE(req[psroialignaverotatedpool::kData], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + CHECK_NE(req[psroialignaverotatedpool::kBox], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + Stream *s = ctx.get_stream(); + + Tensor grad_out = out_grad[psroialignaverotatedpool::kOut].get(s); + Tensor bbox = 
in_data[psroialignaverotatedpool::kBox].get(s); + Tensor grad_in = in_grad[psroialignaverotatedpool::kData].get(s); + Tensor grad_roi = in_grad[psroialignaverotatedpool::kBox].get(s); + + CHECK_EQ(grad_out.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(grad_in.CheckContiguous(), true); + + if (kAddTo == req[psroialignaverotatedpool::kData] || kWriteTo == req[psroialignaverotatedpool::kData]) { + if (kWriteTo == req[psroialignaverotatedpool::kData]) { + grad_in = 0.0f; + } + PSROIALIGNAVERotatedPoolBackwardAcc(grad_in, grad_out, bbox, param_.spatial_scale, param_.sampling_ratio, + param_.output_dim, param_.group_size); + } + if (kWriteTo == req[psroialignaverotatedpool::kBox]) { + grad_roi = 0.0f; + } + } + + private: + PSROIALIGNAVERotatedPoolingParam param_; +}; // class PSROIALIGNAVEPoolingOp + +// Decalre Factory function, used for dispatch specialization +template +Operator* CreateOp(PSROIALIGNAVERotatedPoolingParam param, int dtype); + +#if DMLC_USE_CXX11 +class PSROIALIGNAVERotatedPoolingProp : public OperatorProperty { + public: + std::vector ListArguments() const override { + return {"data", "rois"}; + } + + std::vector ListOutputs() const override { + return {"output"}; + } + + int NumOutputs() const override { + return 1; + } + + int NumVisibleOutputs() const override { + return 1; + } + + void Init(const std::vector >& kwargs) override { + param_.Init(kwargs); + if (param_.group_size == 0) { + param_.group_size = param_.pooled_size; + } + } + + std::map GetParams() const override { + return param_.__DICT__(); + } + + bool InferShape(std::vector *in_shape, + std::vector *out_shape, + std::vector *aux_shape) const override { + using namespace mshadow; + CHECK_EQ(in_shape->size(), 2) << "Input:[data, rois]"; + + // data: [batch_size, c, h, w] + TShape dshape = in_shape->at(psroialignaverotatedpool::kData); + CHECK_EQ(dshape.ndim(), 4) << "data should be a 4D tensor"; + + // bbox: [num_rois, 5] + TShape bshape = in_shape->at(psroialignaverotatedpool::kBox); + CHECK_EQ(bshape.ndim(), 2) << "bbox should be a 2D tensor of shape [batch, 6]"; + CHECK_EQ(bshape[1], 6) << "bbox should be a 2D tensor of shape [batch, 6]"; + + // out: [num_rois, c, pooled_h, pooled_w] + out_shape->clear(); + out_shape->push_back( + Shape4(bshape[0], param_.output_dim, param_.pooled_size, param_.pooled_size)); + return true; + } + + bool InferType(std::vector *in_type, + std::vector *out_type, + std::vector *aux_type) const override { + CHECK_EQ(in_type->size(), 2); + int dtype = (*in_type)[0]; + CHECK_EQ(dtype, (*in_type)[1]); + CHECK_NE(dtype, -1) << "Input must have specified type"; + + out_type->clear(); + out_type->push_back(dtype); + return true; + } + + OperatorProperty* Copy() const override { + PSROIALIGNAVERotatedPoolingProp* psroi_pooling_sym = new PSROIALIGNAVERotatedPoolingProp(); + psroi_pooling_sym->param_ = this->param_; + return psroi_pooling_sym; + } + + std::string TypeString() const override { + return "_contrib_PSROIALIGNAVERotatedPooling"; + } + + // decalre dependency and inplace optimization options + std::vector DeclareBackwardDependency( + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data) const override { + return {out_grad[psroialignaverotatedpool::kOut], in_data[psroialignaverotatedpool::kBox]}; + } + + + Operator* CreateOperator(Context ctx) const override { + LOG(FATAL) << "Not Implemented."; + return NULL; + } + + Operator* CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const 
override; + + + private: + PSROIALIGNAVERotatedPoolingParam param_; +}; // class PSROIALIGNAVEPoolingProp +#endif +} // namespace op +} // namespace mxnet +#endif // MXNET_OPERATOR_CONTRIB_PSROI_POOLING_INL_H_ diff --git a/fpn/operator_cxx/psroi_rotated_align_ave.cc b/fpn/operator_cxx/psroi_rotated_align_ave.cc new file mode 100644 index 0000000..41a4412 --- /dev/null +++ b/fpn/operator_cxx/psroi_rotated_align_ave.cc @@ -0,0 +1,83 @@ +/*! + * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling.cc + * \brief psroi pooling operator + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#include "./psroi_rotated_align_ave-inl.h" +#include +#include +#include +#include +#include + +using std::max; +using std::min; +using std::floor; +using std::ceil; + +namespace mshadow { +template +inline void PSROIALIGNAVERotatedPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale_, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + // NOT_IMPLEMENTED; + return; +} + +template +inline void PSROIALIGNAVERotatedPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale_, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + // NOT_IMPLEMENTED; + return; +} +} // namespace mshadow + +namespace mxnet { +namespace op { + +template<> +Operator *CreateOp(PSROIALIGNAVERotatedPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new PSROIALIGNAVERotatedPoolingOp(param); + }); + return op; +} + +Operator *PSROIALIGNAVERotatedPoolingProp::CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const { + std::vector out_shape, aux_shape; + std::vector out_type, aux_type; + CHECK(InferType(in_type, &out_type, &aux_type)); + CHECK(InferShape(in_shape, &out_shape, &aux_shape)); + DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0)); +} + +DMLC_REGISTER_PARAMETER(PSROIALIGNAVERotatedPoolingParam); + +MXNET_REGISTER_OP_PROPERTY(_contrib_PSROIALIGNAVERotatedPooling, PSROIALIGNAVERotatedPoolingProp) +.describe("Performs region-of-interest pooling on inputs. Resize bounding box coordinates by " +"spatial_scale and crop input feature maps accordingly. The cropped feature maps are pooled " +"by max pooling to a fixed size output indicated by pooled_size. batch_size will change to " +"the number of region bounding boxes after PSROIALIGNAVERotatedPooling") +.add_argument("data", "Symbol", "Input data to the pooling operator, a 4D Feature maps") +.add_argument("rois", "Symbol", "Bounding box coordinates, a 2D array of " +"[[batch_index, x1, y1, x2, y2]]. (x1, y1) and (x2, y2) are top left and down right corners " +"of designated region of interest. batch_index indicates the index of corresponding image " +"in the input data") +.add_arguments(PSROIALIGNAVERotatedPoolingParam::__FIELDS__()); +} // namespace op +} // namespace mxnet diff --git a/fpn/operator_cxx/psroi_rotated_align_ave.cu b/fpn/operator_cxx/psroi_rotated_align_ave.cu new file mode 100644 index 0000000..a68ecbc --- /dev/null +++ b/fpn/operator_cxx/psroi_rotated_align_ave.cu @@ -0,0 +1,492 @@ +/*! 
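+ * The forward kernel here differs from psroi_align_ave.cu only in how sampling
+ * points are generated: they are laid out relative to the RoI center and then
+ * rotated by theta before bilinear interpolation (Python-like pseudocode
+ * mirroring the kernel, for illustration):
+ *
+ *     import math
+ *     # (xx, yy) is a sample point relative to the RoI center, theta in radians
+ *     x = xx * math.cos(theta) - yy * math.sin(theta) + roi_center_w
+ *     y = xx * math.sin(theta) + yy * math.cos(theta) + roi_center_h
+ *     val = bilinear_interpolate(feature[c], y, x)
+ *
+ * Rois are 6-tuples [batch_index, x_ctr, y_ctr, w, h, theta]; width and height
+ * are clamped to at least 1 after scaling by spatial_scale, and no rounding is
+ * applied to the coordinates.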
+ * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling.cu + * \brief psroi pooling operator + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#include "./psroi_rotated_align_ave-inl.h" +#include +#include +#include +#include +#include "../../common/cuda_utils.h" +#include "../mxnet_op.h" + +#define PSROIALIGNAVEROTATEDPOOLING_CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \ + } while (0) +#define CUDA_KERNEL_LOOP(i, n) \ +for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ + i < (n); \ + i += blockDim.x * gridDim.x) + +namespace mshadow { +namespace cuda { + + template + __device__ T bilinear_interpolate( + const T* bottom_data, + const int height, + const int width, + T y, + T x, + const int index /* index for debug only*/) { + // deal with cases that inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + return 0; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + int y_low = static_cast(y); + int x_low = static_cast(x); + int y_high; + int x_high; + + if (y_low >= height - 1) { + y_high = y_low = height - 1; + y = (T)y_low; + } else { + y_high = y_low + 1; + } + + if (x_low >= width - 1) { + x_high = x_low = width - 1; + x = (T)x_low; + } else { + x_high = x_low + 1; + } + + T ly = y - y_low; + T lx = x - x_low; + T hy = 1. - ly, hx = 1. - lx; + // do bilinear interpolation + T v1 = bottom_data[y_low * width + x_low]; + T v2 = bottom_data[y_low * width + x_high]; + T v3 = bottom_data[y_high * width + x_low]; + T v4 = bottom_data[y_high * width + x_high]; + T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; + + T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); + + return val; + } + + +template +__global__ void PSROIALIGNAVERotatedPoolForwardKernel( + const int count, + const DType* bottom_data, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const int sampling_ratio, + const DType* bottom_rois, + const int output_dim, + const int group_size, + DType* top_data) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 6; + int roi_batch_ind = offset_bottom_rois[0]; + + // DType roi_start_w = (offset_bottom_rois[1]) * spatial_scale; + // DType roi_start_h = (offset_bottom_rois[2]) * spatial_scale; + // DType roi_end_w = (offset_bottom_rois[3]) * spatial_scale; + // DType roi_end_h = (offset_bottom_rois[4]) * spatial_scale; + + // Do not using rounding; this implementation detail is critical + DType roi_center_w = offset_bottom_rois[1] * spatial_scale; + DType roi_center_h = offset_bottom_rois[2] * spatial_scale; + DType roi_width = offset_bottom_rois[3] * spatial_scale; + DType roi_height = offset_bottom_rois[4] * spatial_scale; + // T theta = offset_bottom_rois[5] * M_PI / 180.0; + DType theta = offset_bottom_rois[5]; + + // // Force too small ROIs to be 1x1 + // DType 
roi_width = max(roi_end_w - roi_start_w, (DType)1.); // avoid 0 + // DType roi_height = max(roi_end_h - roi_start_h, (DType)1.); + + // Force malformed ROIs to be 1x1 + roi_width = max(roi_width, (DType)1.); + roi_height = max(roi_height, (DType)1.); + + // Compute w and h at bottom + DType bin_size_h = static_cast(roi_height) / static_cast(pooled_height); + DType bin_size_w = static_cast(roi_width) / static_cast(pooled_width); + + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + int c = (ctop*group_size + gh)*group_size + gw; + + const DType* offset_bottom_data = bottom_data + (roi_batch_ind * channels + c) * height * width; + + // We use roi_bin_grid to sample the grid and mimic integral + int roi_bin_grid_h = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_height / pooled_height); // e.g., = 2 + int roi_bin_grid_w = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); + + // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). + // Appropriate translation needs to be applied after. + DType roi_start_h = -roi_height / 2.0; + DType roi_start_w = -roi_width / 2.0; + DType cosTheta = cos(theta); + DType sinTheta = sin(theta); + + const DType sample_count = roi_bin_grid_h * roi_bin_grid_w; // e.g., iy = 0, 1 + DType output_val = 0.; + for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 + const DType yy = roi_start_h + ph * bin_size_h + + static_cast(iy + .5f) * bin_size_h / + static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 + for (int ix = 0; ix < roi_bin_grid_w; ix++) { + const DType xx = roi_start_w + pw * bin_size_w + + static_cast(ix + .5f) * bin_size_w / + static_cast(roi_bin_grid_w); + + // Rotate by theta around the center and translate + // T x = xx * cosTheta + yy * sinTheta + roi_center_w; + // T y = yy * cosTheta - xx * sinTheta + roi_center_h; + DType x = xx * cosTheta - yy * sinTheta + roi_center_w; + DType y = xx * sinTheta + yy * cosTheta + roi_center_h; + + DType val = bilinear_interpolate( + offset_bottom_data, height, width, y, x, index); + output_val += val; + } + } + output_val /= sample_count; + + top_data[index] = output_val; + // DType out_sum = 0; + // for (int h = hstart; h < hend; ++h) { + // for (int w = wstart; w < wend; ++w) { + // int bottom_index = h*width + w; + // out_sum += offset_bottom_data[bottom_index]; + // } + // } + + // DType bin_area = (hend - hstart)*(wend - wstart); + // top_data[index] = is_empty? (DType)0. 
: out_sum/bin_area; + } +} + +template +inline void PSROIALIGNAVERotatedPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + const DType *bottom_data = data.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *top_data = out.dptr_; + const int count = out.shape_.Size(); + const int channels = data.size(1); + const int height = data.size(2); + const int width = data.size(3); + const int pooled_height = out.size(2); + const int pooled_width = out.size(3); + cudaStream_t stream = Stream::GetStream(out.stream_); + PSROIALIGNAVERotatedPoolForwardKernel << > >( + count, bottom_data, spatial_scale, channels, height, width, + pooled_height, pooled_width, sampling_ratio, bottom_rois, output_dim_, group_size_, top_data); + PSROIALIGNAVEROTATEDPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + +template +__device__ void bilinear_interpolate_gradient( + const int height, + const int width, + T y, + T x, + T* w1, + T* w2, + T* w3, + T* w4, + int* x_low, + int* x_high, + int* y_low, + int* y_high, + const int /*index*/ /* index for debug only*/) { + // deal with cases that inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + *w1 = *w2 = *w3 = *w4 = 0.; + *x_low = *x_high = *y_low = *y_high = -1; + return; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + *y_low = static_cast(y); + *x_low = static_cast(x); + + if (*y_low >= height - 1) { + *y_high = *y_low = height - 1; + y = (T)*y_low; + } else { + *y_high = *y_low + 1; + } + + if (*x_low >= width - 1) { + *x_high = *x_low = width - 1; + x = (T)*x_low; + } else { + *x_high = *x_low + 1; + } + + T ly = y - *y_low; + T lx = x - *x_low; + T hy = 1. - ly, hx = 1. 
- lx; + + *w1 = hy * hx, *w2 = hy * lx, *w3 = ly * hx, *w4 = ly * lx; + + return; +} + +template +__global__ void PSROIALIGNAVERotatedPoolBackwardAccKernel( + const int count, + const DType* top_diff, + const int num_rois, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const int sampling_ratio, + const int group_size, + const int output_dim, + DType* bottom_diff, + const DType* bottom_rois) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 6; + int roi_batch_ind = offset_bottom_rois[0]; + // Do not round + DType roi_center_w = offset_bottom_rois[1] * spatial_scale; + DType roi_center_h = offset_bottom_rois[2] * spatial_scale; + DType roi_width = offset_bottom_rois[3] * spatial_scale; + DType roi_height = offset_bottom_rois[4] * spatial_scale; + // T theta = offset_bottom_rois[5] * M_PI / 180.0; + DType theta = offset_bottom_rois[5]; + + // DType roi_start_w = offset_bottom_rois[1] * spatial_scale; + // DType roi_start_h = offset_bottom_rois[2] * spatial_scale; + // DType roi_end_w = offset_bottom_rois[3] * spatial_scale; + // DType roi_end_h = offset_bottom_rois[4] * spatial_scale; + + // Force too small ROIs to be 1x1 + // DType roi_width = max(roi_end_w - roi_start_w, (DType)1.); // avoid 0 + // DType roi_height = max(roi_end_h - roi_start_h, (DType)1.); + roi_width = max(roi_width, (DType)1.); + roi_height = max(roi_height, (DType)1.); + // Compute w and h at bottom + DType bin_size_h = static_cast(roi_height) / static_cast(pooled_height); + DType bin_size_w = static_cast(roi_width) / static_cast(pooled_width); + + // int hstart = floor(static_cast(ph)* bin_size_h + // + roi_start_h); + // int wstart = floor(static_cast(pw)* bin_size_w + // + roi_start_w); + // int hend = ceil(static_cast(ph + 1) * bin_size_h + // + roi_start_h); + // int wend = ceil(static_cast(pw + 1) * bin_size_w + // + roi_start_w); + // // Add roi offsets and clip to input boundaries + // hstart = min(max(hstart, 0), height); + // hend = min(max(hend, 0), height); + // wstart = min(max(wstart, 0), width); + // wend = min(max(wend, 0), width); + // bool is_empty = (hend <= hstart) || (wend <= wstart); + + // Compute c at bottom + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + int c = (ctop*group_size + gh)*group_size + gw; + DType* offset_bottom_diff = bottom_diff + (roi_batch_ind * channels + c) * height * width; + // DType bin_area = (hend - hstart)*(wend - wstart); + // DType diff_val = is_empty ? (DType)0. 
: top_diff[index] / bin_area; + // for (int h = hstart; h < hend; ++h) { + // for (int w = wstart; w < wend; ++w) { + // int bottom_index = h*width + w; + // atomicAdd(offset_bottom_diff + bottom_index, diff_val); + // } + // } + // int top_offset = (n * channels + ctop) * pooled_height * pooled_width; + // const DType* offset_top_diff = top_diff + top_offset; + // const DType top_diff_this_bin = offset_top_diff[ph * pooled_width + pw]; + + const DType top_diff_this_bin = top_diff[index]; + + // We use roi_bin_grid to sample the grid and mimic integral + int roi_bin_grid_h = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_height / pooled_height); // e.g., = 2 + int roi_bin_grid_w = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); + + // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). + // Appropriate translation needs to be applied after. + DType roi_start_h = -roi_height / 2.0; + DType roi_start_w = -roi_width / 2.0; + DType cosTheta = cos(theta); + DType sinTheta = sin(theta); + + // We do average (integral) pooling inside a bin + const DType sample_count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4 + + for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 + const DType yy = roi_start_h + ph * bin_size_h + + static_cast(iy + .5f) * bin_size_h / + static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 + for (int ix = 0; ix < roi_bin_grid_w; ix++) { + const DType xx = roi_start_w + pw * bin_size_w + + static_cast(ix + .5f) * bin_size_w / + static_cast(roi_bin_grid_w); + + // Rotate by theta around the center and translate + // T x = xx * cosTheta + yy * sinTheta + roi_center_w; + // T y = yy * cosTheta - xx * sinTheta + roi_center_h; + DType x = xx * cosTheta - yy * sinTheta + roi_center_w; + DType y = xx * sinTheta + yy * cosTheta + roi_center_h; + + DType w1, w2, w3, w4; + int x_low, x_high, y_low, y_high; + + bilinear_interpolate_gradient( + height, + width, + y, + x, + &w1, + &w2, + &w3, + &w4, + &x_low, + &x_high, + &y_low, + &y_high, + index); // TODO: choose the index + + DType g1 = top_diff_this_bin * w1 / sample_count; + DType g2 = top_diff_this_bin * w2 / sample_count; + DType g3 = top_diff_this_bin * w3 / sample_count; + DType g4 = top_diff_this_bin * w4 / sample_count; + + if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { + atomicAdd( + offset_bottom_diff + y_low * width + x_low, static_cast(g1)); + atomicAdd( + offset_bottom_diff + y_low * width + x_high, static_cast(g2)); + atomicAdd( + offset_bottom_diff + y_high * width + x_low, static_cast(g3)); + atomicAdd( + offset_bottom_diff + y_high * width + x_high, static_cast(g4)); + } // if + } // ix + } // iy + } +} + + +template +inline void PSROIALIGNAVERotatedPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + // LOG(INFO) << "PSROIALIGNAVERotatedPoolBackward"; + const DType *top_diff = out_grad.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *bottom_diff = in_grad.dptr_; + const int count = out_grad.shape_.Size(); + const int num_rois = bbox.size(0); + const int channels = in_grad.size(1); + const int height = in_grad.size(2); + const int width = in_grad.size(3); + const int pooled_height = out_grad.size(2); + const int pooled_width = out_grad.size(3); + cudaStream_t stream = Stream::GetStream(in_grad.stream_); + PSROIALIGNAVERotatedPoolBackwardAccKernel << > >( + count, top_diff, num_rois, spatial_scale, 
channels, height, width, + pooled_height, pooled_width, sampling_ratio, group_size_, output_dim_, bottom_diff, bottom_rois); + PSROIALIGNAVEROTATEDPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + +} // namespace cuda + +template +inline void PSROIALIGNAVERotatedPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + cuda::PSROIALIGNAVERotatedPoolForward(out, data, bbox, spatial_scale, sampling_ratio,output_dim_, group_size_); +} + +template +inline void PSROIALIGNAVERotatedPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale, + const int sampling_ratio, + const int output_dim_, + const int group_size_) { + cuda::PSROIALIGNAVERotatedPoolBackwardAcc(in_grad, out_grad, bbox, spatial_scale, sampling_ratio, output_dim_, group_size_); +} + +} // namespace mshadow + + +namespace mxnet { +namespace op { + +template<> +Operator* CreateOp(PSROIALIGNAVERotatedPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new PSROIALIGNAVERotatedPoolingOp(param); + }); + return op; +} + +} // namespace op +} // namespace mxnet diff --git a/fpn/operator_cxx/psroi_rotatedpooling-inl.h b/fpn/operator_cxx/psroi_rotatedpooling-inl.h new file mode 100644 index 0000000..f99a2c6 --- /dev/null +++ b/fpn/operator_cxx/psroi_rotatedpooling-inl.h @@ -0,0 +1,223 @@ +/*! + * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling-inl.h + * \brief psroi pooling operator and symbol + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#ifndef MXNET_OPERATOR_CONTRIB_PSROIROTATED_POOLING_INL_H_ +#define MXNET_OPERATOR_CONTRIB_PSROIROTATED_POOLING_INL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include "../mshadow_op.h" +#include "../operator_common.h" + + +namespace mxnet { +namespace op { + +// Declare enumeration of input order to make code more intuitive. +// These enums are only visible within this header +namespace psroirotatedpool { +enum PSROIROTATEDPoolingOpInputs {kData, kBox}; +enum PSROIROTATEDPoolingOpOutputs {kOut}; +} // psroipool + +struct PSROIROTATEDPoolingParam : public dmlc::Parameter { + // TShape pooled_size; + float spatial_scale; + int output_dim; + int pooled_size; + int group_size; + DMLC_DECLARE_PARAMETER(PSROIROTATEDPoolingParam) { + DMLC_DECLARE_FIELD(spatial_scale).set_range(0.0, 1.0) + .describe("Ratio of input feature map height (or w) to raw image height (or w). 
" + "Equals the reciprocal of total stride in convolutional layers"); + DMLC_DECLARE_FIELD(output_dim).describe("fix output dim"); + DMLC_DECLARE_FIELD(pooled_size).describe("fix pooled size"); + DMLC_DECLARE_FIELD(group_size).set_default(0).describe("fix group size"); + } +}; + +template +class PSROIROTATEDPoolingOp : public Operator { + public: + explicit PSROIROTATEDPoolingOp(PSROIROTATEDPoolingParam p) { + this->param_ = p; + } + + virtual void Forward(const OpContext &ctx, + const std::vector &in_data, + const std::vector &req, + const std::vector &out_data, + const std::vector &aux_args) { + using namespace mshadow; + CHECK_EQ(in_data.size(), 2); + CHECK_EQ(out_data.size(), 1); + CHECK_EQ(out_data[psroirotatedpool::kOut].shape_[0], in_data[psroirotatedpool::kBox].shape_[0]); + Stream *s = ctx.get_stream(); + + Tensor data = in_data[psroirotatedpool::kData].get(s); + Tensor bbox = in_data[psroirotatedpool::kBox].get(s); + Tensor out = out_data[psroirotatedpool::kOut].get(s); + CHECK_EQ(data.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(out.CheckContiguous(), true); + out = -FLT_MAX; + PSROIROTATEDPoolForward(out, data, bbox, param_.spatial_scale, param_.output_dim, param_.group_size); + } + + virtual void Backward(const OpContext &ctx, + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data, + const std::vector &req, + const std::vector &in_grad, + const std::vector &aux_args) { + using namespace mshadow; + CHECK_EQ(in_data.size(), 2); + CHECK_EQ(out_data.size(), 1); + CHECK_EQ(out_grad[psroirotatedpool::kOut].shape_[0], in_data[psroirotatedpool::kBox].shape_[0]); + CHECK_NE(req[psroirotatedpool::kData], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + CHECK_NE(req[psroirotatedpool::kBox], kWriteInplace) << + "ROIPooling: Backward doesn't support kWriteInplace."; + Stream *s = ctx.get_stream(); + + Tensor grad_out = out_grad[psroirotatedpool::kOut].get(s); + Tensor bbox = in_data[psroirotatedpool::kBox].get(s); + Tensor grad_in = in_grad[psroirotatedpool::kData].get(s); + Tensor grad_roi = in_grad[psroirotatedpool::kBox].get(s); + + CHECK_EQ(grad_out.CheckContiguous(), true); + CHECK_EQ(bbox.CheckContiguous(), true); + CHECK_EQ(grad_in.CheckContiguous(), true); + + if (kAddTo == req[psroirotatedpool::kData] || kWriteTo == req[psroirotatedpool::kData]) { + if (kWriteTo == req[psroirotatedpool::kData]) { + grad_in = 0.0f; + } + PSROIROTATEDPoolBackwardAcc(grad_in, grad_out, bbox, param_.spatial_scale, + param_.output_dim, param_.group_size); + } + if (kWriteTo == req[psroirotatedpool::kBox]) { + grad_roi = 0.0f; + } + } + + private: + PSROIROTATEDPoolingParam param_; +}; // class PSROIPoolingOp + +// Decalre Factory function, used for dispatch specialization +template +Operator* CreateOp(PSROIROTATEDPoolingParam param, int dtype); + +#if DMLC_USE_CXX11 +class PSROIROTATEDPoolingProp : public OperatorProperty { + public: + std::vector ListArguments() const override { + return {"data", "rois"}; + } + + std::vector ListOutputs() const override { + return {"output"}; + } + + int NumOutputs() const override { + return 1; + } + + int NumVisibleOutputs() const override { + return 1; + } + + void Init(const std::vector >& kwargs) override { + param_.Init(kwargs); + if (param_.group_size == 0) { + param_.group_size = param_.pooled_size; + } + } + + std::map GetParams() const override { + return param_.__DICT__(); + } + + bool InferShape(std::vector *in_shape, + std::vector *out_shape, + std::vector 
*aux_shape) const override { + using namespace mshadow; + CHECK_EQ(in_shape->size(), 2) << "Input:[data, rois]"; + + // data: [batch_size, c, h, w] + TShape dshape = in_shape->at(psroirotatedpool::kData); + CHECK_EQ(dshape.ndim(), 4) << "data should be a 4D tensor"; + + // bbox: [num_rois, 5] + TShape bshape = in_shape->at(psroirotatedpool::kBox); + CHECK_EQ(bshape.ndim(), 2) << "bbox should be a 2D tensor of shape [batch, 6]"; + CHECK_EQ(bshape[1], 6) << "bbox should be a 2D tensor of shape [batch, 6]"; + + // out: [num_rois, c, pooled_h, pooled_w] + out_shape->clear(); + out_shape->push_back( + Shape4(bshape[0], param_.output_dim, param_.pooled_size, param_.pooled_size)); + return true; + } + + bool InferType(std::vector *in_type, + std::vector *out_type, + std::vector *aux_type) const override { + CHECK_EQ(in_type->size(), 2); + int dtype = (*in_type)[0]; + CHECK_EQ(dtype, (*in_type)[1]); + CHECK_NE(dtype, -1) << "Input must have specified type"; + + out_type->clear(); + out_type->push_back(dtype); + return true; + } + + OperatorProperty* Copy() const override { + PSROIROTATEDPoolingProp* psroi_pooling_sym = new PSROIROTATEDPoolingProp(); + psroi_pooling_sym->param_ = this->param_; + return psroi_pooling_sym; + } + + std::string TypeString() const override { + return "_contrib_PSROIROTATEDPooling"; + } + + // decalre dependency and inplace optimization options + std::vector DeclareBackwardDependency( + const std::vector &out_grad, + const std::vector &in_data, + const std::vector &out_data) const override { + return {out_grad[psroirotatedpool::kOut], in_data[psroirotatedpool::kBox]}; + } + + + Operator* CreateOperator(Context ctx) const override { + LOG(FATAL) << "Not Implemented."; + return NULL; + } + + Operator* CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const override; + + + private: + PSROIROTATEDPoolingParam param_; +}; // class PSROIPoolingProp +#endif +} // namespace op +} // namespace mxnet +#endif // MXNET_OPERATOR_CONTRIB_PSROI_POOLING_INL_H_ diff --git a/fpn/operator_cxx/psroi_rotatedpooling.cc b/fpn/operator_cxx/psroi_rotatedpooling.cc new file mode 100644 index 0000000..66f98a7 --- /dev/null +++ b/fpn/operator_cxx/psroi_rotatedpooling.cc @@ -0,0 +1,81 @@ +/*! 
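+ * Registration and CPU stubs for the plain rotated PSROIPooling: unlike the
+ * align variants there is no sampling_ratio parameter, and the CUDA kernel
+ * rounds each rotated sample to the nearest pixel instead of interpolating.
+ * Rois are again 6-tuples [batch_index, x_ctr, y_ctr, w, h, theta]. A rough
+ * Python-side call, assuming the same contrib exposure as the other operators
+ * in this directory (illustrative only):
+ *
+ *     pooled = mx.contrib.sym.PSROIROTATEDPooling(data=feat, rois=rrois,
+ *                                                 spatial_scale=1.0 / 16,
+ *                                                 output_dim=10, pooled_size=7)
+ *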
+ * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling.cc + * \brief psroi pooling operator + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#include "./psroi_rotatedpooling-inl.h" +#include +#include +#include +#include +#include + +using std::max; +using std::min; +using std::floor; +using std::ceil; + +namespace mshadow { +template +inline void PSROIROTATEDPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale_, + const int output_dim_, + const int group_size_) { + // NOT_IMPLEMENTED; + return; +} + +template +inline void PSROIROTATEDPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale_, + const int output_dim_, + const int group_size_) { + // NOT_IMPLEMENTED; + return; +} +} // namespace mshadow + +namespace mxnet { +namespace op { + +template<> +Operator *CreateOp(PSROIROTATEDPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new PSROIROTATEDPoolingOp(param); + }); + return op; +} + +Operator *PSROIROTATEDPoolingProp::CreateOperatorEx(Context ctx, std::vector *in_shape, + std::vector *in_type) const { + std::vector out_shape, aux_shape; + std::vector out_type, aux_type; + CHECK(InferType(in_type, &out_type, &aux_type)); + CHECK(InferShape(in_shape, &out_shape, &aux_shape)); + DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0)); +} + +DMLC_REGISTER_PARAMETER(PSROIROTATEDPoolingParam); + +MXNET_REGISTER_OP_PROPERTY(_contrib_PSROIROTATEDPooling, PSROIROTATEDPoolingProp) +.describe("Performs region-of-interest pooling on inputs. Resize bounding box coordinates by " +"spatial_scale and crop input feature maps accordingly. The cropped feature maps are pooled " +"by max pooling to a fixed size output indicated by pooled_size. batch_size will change to " +"the number of region bounding boxes after PSROIROTATEDPooling") +.add_argument("data", "Symbol", "Input data to the pooling operator, a 4D Feature maps") +.add_argument("rois", "Symbol", "Bounding box coordinates, a 2D array of " +"[[batch_index, x_ctr, y_ctr, w, h, theta]]. " +"of designated region of interest. batch_index indicates the index of corresponding image " +"in the input data") +.add_arguments(PSROIROTATEDPoolingParam::__FIELDS__()); +} // namespace op +} // namespace mxnet diff --git a/fpn/operator_cxx/psroi_rotatedpooling.cu b/fpn/operator_cxx/psroi_rotatedpooling.cu new file mode 100644 index 0000000..fa961cf --- /dev/null +++ b/fpn/operator_cxx/psroi_rotatedpooling.cu @@ -0,0 +1,283 @@ +/*! 
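+ * Forward kernel sketch (mirroring the code below, pseudocode for
+ * illustration): grid positions (w, h) inside each bin are rotated around the
+ * RoI center, snapped to integer pixels, and summed; out-of-bounds samples are
+ * skipped, and the sum is divided by the full bin area:
+ *
+ *     xx = cos(theta) * (w - roi_w / 2) - sin(theta) * (h - roi_h / 2) + roi_xc
+ *     yy = sin(theta) * (w - roi_w / 2) + cos(theta) * (h - roi_h / 2) + roi_yc
+ *     out_sum += feature[c][round(yy)][round(xx)]   # skipped if outside the map
+ *     top[index] = 0 if bin_is_empty else out_sum / bin_area
+ *
+ * Note that here the RoI center and size are rounded before being scaled by
+ * spatial_scale, unlike the align variants.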
+ * Copyright (c) 2017 by Contributors + * Copyright (c) 2017 Microsoft + * Licensed under The Apache-2.0 License [see LICENSE for details] + * \file psroi_pooling.cu + * \brief psroi pooling operator + * \author Yi Li, Tairui Chen, Guodong Zhang, Haozhi Qi, Jifeng Dai + * modified by Jian Ding +*/ +#include "./psroi_rotatedpooling-inl.h" +#include +#include +#include +#include +#include "../../common/cuda_utils.h" +#include "../mxnet_op.h" + +#define PSROIROTATEDPOOLING_CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \ + } while (0) +#define CUDA_KERNEL_LOOP(i, n) \ +for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ + i < (n); \ + i += blockDim.x * gridDim.x) + +namespace mshadow { +namespace cuda { + +template +__global__ void PSROIROTATEDPoolForwardKernel( + const int count, + const DType* bottom_data, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const DType* bottom_rois, + const int output_dim, + const int group_size, + DType* top_data) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 6; + int roi_batch_ind = offset_bottom_rois[0]; + DType roi_xc = static_cast(round(offset_bottom_rois[1])) * spatial_scale; + DType roi_yc = static_cast(round(offset_bottom_rois[2])) * spatial_scale; + DType roi_w = static_cast(round(offset_bottom_rois[3])) * spatial_scale; + DType roi_h = static_cast(round(offset_bottom_rois[4])) * spatial_scale; + DType Theta = static_cast(offset_bottom_rois[5]); + + DType cosTheta = cos(Theta); + DType sinTheta = sin(Theta); + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_w, 1.); // avoid 0 + DType roi_height = max(roi_h, 1.); + + // Compute w and h at bottom + DType bin_size_h = roi_height / static_cast(pooled_height); + DType bin_size_w = roi_width / static_cast(pooled_width); + + int hstart = floor(static_cast(ph) * bin_size_h); + int wstart = floor(static_cast(pw)* bin_size_w); + int hend = ceil(static_cast(ph + 1) * bin_size_h); + int wend = ceil(static_cast(pw + 1) * bin_size_w); + // Add roi offsets and clip to input boundaries + // hstart = min(max(hstart, 0), roi_h); + // hend = min(max(hend, 0), roi_h); + // wstart = min(max(wstart, 0), roi_w); + // wend = min(max(wend, 0), roi_w); + bool is_empty = (hend <= hstart) || (wend <= wstart); + + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + int c = (ctop*group_size + gh)*group_size + gw; + + const DType* offset_bottom_data = bottom_data + (roi_batch_ind * channels + c) * height * width; + DType out_sum = 0; + float half_w = (float)(roi_w)/2.0; + float half_h = (float)(roi_h)/2.0; + for (int h = hstart; h < hend; ++h) { + for (int w = wstart; w < wend; ++w) { + float xx = cosTheta*((float)(w) - half_w) - sinTheta*((float)(h) - half_h) + roi_xc; + float yy = sinTheta*((float)(w) - half_w) + cosTheta*((float)(h) - half_h) + roi_yc; + int xint = 
(int)(round(xx)); + int yint = (int)(round(yy)); + + if (xint >= width || xint < 0 || yint>=height || yint < 0) + continue; + + int bottom_index = yint*width + xint; + out_sum += offset_bottom_data[bottom_index]; + } + } + + DType bin_area = (hend - hstart)*(wend - wstart); + top_data[index] = is_empty? (DType)0. : out_sum/bin_area; + } +} + +template +inline void PSROIROTATEDPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale, + const int output_dim_, + const int group_size_) { + const DType *bottom_data = data.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *top_data = out.dptr_; + const int count = out.shape_.Size(); + const int channels = data.size(1); + const int height = data.size(2); + const int width = data.size(3); + const int pooled_height = out.size(2); + const int pooled_width = out.size(3); + cudaStream_t stream = Stream::GetStream(out.stream_); + PSROIROTATEDPoolForwardKernel << > >( + count, bottom_data, spatial_scale, channels, height, width, + pooled_height, pooled_width, bottom_rois, output_dim_, group_size_, top_data); + PSROIROTATEDPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + + +template +__global__ void PSROIROTATEDPoolBackwardAccKernel( + const int count, + const DType* top_diff, + const int num_rois, + const DType spatial_scale, + const int channels, + const int height, const int width, + const int pooled_height, const int pooled_width, + const int group_size, + const int output_dim, + DType* bottom_diff, + const DType* bottom_rois) { + CUDA_KERNEL_LOOP(index, count) { + // The output is in order (n, ctop, ph, pw) + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int ctop = (index / pooled_width / pooled_height) % output_dim; + int n = index / pooled_width / pooled_height / output_dim; + + // [start, end) interval for spatial sampling + const DType* offset_bottom_rois = bottom_rois + n * 6; + int roi_batch_ind = offset_bottom_rois[0]; + DType roi_xc = static_cast(round(offset_bottom_rois[1])) * spatial_scale; + DType roi_yc = static_cast(round(offset_bottom_rois[2])) * spatial_scale; + DType roi_w = static_cast(round(offset_bottom_rois[3])) * spatial_scale; + DType roi_h = static_cast(round(offset_bottom_rois[4])) * spatial_scale; + DType Theta = static_cast(offset_bottom_rois[5]); + + DType cosTheta = cos(Theta); + DType sinTheta = sin(Theta); + + // Force too small ROIs to be 1x1 + DType roi_width = max(roi_w, 1.0); // avoid 0 + DType roi_height = max(roi_h, 1.0); + + // Compute w and h at bottom + DType bin_size_h = roi_height / static_cast(pooled_height); + DType bin_size_w = roi_width / static_cast(pooled_width); + + int hstart = floor(static_cast(ph)* bin_size_h); + int wstart = floor(static_cast(pw)* bin_size_w); + int hend = ceil(static_cast(ph + 1) * bin_size_h); + int wend = ceil(static_cast(pw + 1) * bin_size_w); + // Add roi offsets and clip to input boundaries + // hstart = min(max(hstart, 0), roi_h); + // hend = min(max(hend, 0), roi_h); + // wstart = min(max(wstart, 0), roi_w); + // wend = min(max(wend, 0), roi_w); + bool is_empty = (hend <= hstart) || (wend <= wstart); + + // Compute c at bottom + int gw = floor(static_cast(pw)* group_size / pooled_width); + int gh = floor(static_cast(ph)* group_size / pooled_height); + gw = min(max(gw, 0), group_size - 1); + gh = min(max(gh, 0), group_size - 1); + int c = (ctop*group_size + gh)*group_size + gw; + DType* offset_bottom_diff = bottom_diff + (roi_batch_ind * channels + c) * height * width; + DType bin_area = 
(hend - hstart)*(wend - wstart); + DType diff_val = is_empty ? (DType)0. : top_diff[index] / bin_area; + + float half_w = (float)(roi_w)/2.0; + float half_h = (float)(roi_h)/2.0; + + for (int h = hstart; h < hend; ++h) { + for (int w = wstart; w < wend; ++w) { + float xx = cosTheta*((float)(w)-half_w) - sinTheta*((float)(h)-half_h) + roi_xc; + float yy = sinTheta*((float)(w)-half_w) + cosTheta*((float)(h)-half_h) + roi_yc; + int xint = (int)(round(xx)); + int yint = (int)(round(yy)); + + if (xint>=width || xint<0 || yint>=height || yint<0) + continue; + + int bottom_index = yint*width + xint; + atomicAdd(offset_bottom_diff + bottom_index, diff_val); + } + } + } +} + + +template +inline void PSROIROTATEDPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale, + const int output_dim_, + const int group_size_) { + // LOG(INFO) << "PSROIROTATEDPoolBackward"; + const DType *top_diff = out_grad.dptr_; + const DType *bottom_rois = bbox.dptr_; + DType *bottom_diff = in_grad.dptr_; + const int count = out_grad.shape_.Size(); + const int num_rois = bbox.size(0); + const int channels = in_grad.size(1); + const int height = in_grad.size(2); + const int width = in_grad.size(3); + const int pooled_height = out_grad.size(2); + const int pooled_width = out_grad.size(3); + cudaStream_t stream = Stream::GetStream(in_grad.stream_); + PSROIROTATEDPoolBackwardAccKernel << > >( + count, top_diff, num_rois, spatial_scale, channels, height, width, + pooled_height, pooled_width, group_size_, output_dim_, bottom_diff, bottom_rois); + PSROIROTATEDPOOLING_CUDA_CHECK(cudaPeekAtLastError()); +} + +} // namespace cuda + +template +inline void PSROIROTATEDPoolForward(const Tensor &out, + const Tensor &data, + const Tensor &bbox, + const float spatial_scale, + const int output_dim_, + const int group_size_) { + cuda::PSROIROTATEDPoolForward(out, data, bbox, spatial_scale, output_dim_, group_size_); +} + +template +inline void PSROIROTATEDPoolBackwardAcc(const Tensor &in_grad, + const Tensor &out_grad, + const Tensor &bbox, + const float spatial_scale, + const int output_dim_, + const int group_size_) { + cuda::PSROIROTATEDPoolBackwardAcc(in_grad, out_grad, bbox, spatial_scale, output_dim_, group_size_); +} + +} // namespace mshadow + + +namespace mxnet { +namespace op { + +template<> +Operator* CreateOp(PSROIROTATEDPoolingParam param, int dtype) { + Operator* op = NULL; + MSHADOW_REAL_TYPE_SWITCH(dtype, DType, { + op = new PSROIROTATEDPoolingOp(param); + }); + return op; +} + +} // namespace op +} // namespace mxnet diff --git a/fpn/operator_cxx/roi_align_rotated-inl.h b/fpn/operator_cxx/roi_align_rotated-inl.h new file mode 100644 index 0000000..25ea3bd --- /dev/null +++ b/fpn/operator_cxx/roi_align_rotated-inl.h @@ -0,0 +1,67 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ +/*! + * Copyright (c) 2018 by Contributors + * \file roi_align-inl.h + * \brief roi align operator and symbol + * \author Hang Zhang + * modified from Caffe2 + * modified by Jian Ding +*/ +#ifndef MXNET_OPERATOR_CONTRIB_ROI_ALIGNROTATED_INL_H_ +#define MXNET_OPERATOR_CONTRIB_ROI_ALIGNROTATED_INL_H_ + +#include +#include +#include "../mshadow_op.h" +#include "../tensor/init_op.h" + + +namespace mxnet { +namespace op { + + +// Declare enumeration of input order to make code more intuitive. +// These enums are only visible within this header +namespace roialignrotated { +enum ROIAlignRotatedOpInputs {kData, kBox}; +enum ROIAlignRotatedOpOutputs {kOut}; +} // roialign + + +struct ROIAlignRotatedParam : public dmlc::Parameter { + TShape pooled_size; + float spatial_scale; + int sample_ratio; + DMLC_DECLARE_PARAMETER(ROIAlignRotatedParam) { + DMLC_DECLARE_FIELD(pooled_size) + .set_expect_ndim(2).enforce_nonzero() + .describe("ROI Align Rotated output roi feature map height and width: (h, w)"); + DMLC_DECLARE_FIELD(spatial_scale).set_range(0.0, 1.0) + .describe("Ratio of input feature map height (or w) to raw image height (or w). " + "Equals the reciprocal of total stride in convolutional layers"); + DMLC_DECLARE_FIELD(sample_ratio).set_default(-1) + .describe("Optional sampling ratio of ROI align Rotated, using adaptive size by default."); + } +}; + +} // namespace op +} // namespace mxnet + +#endif // MXNET_OPERATOR_CONTRIB_ROI_ALIGN_INL_H_ \ No newline at end of file diff --git a/fpn/operator_cxx/roi_align_rotated.cc b/fpn/operator_cxx/roi_align_rotated.cc new file mode 100644 index 0000000..5e75849 --- /dev/null +++ b/fpn/operator_cxx/roi_align_rotated.cc @@ -0,0 +1,408 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +/*! 
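+ * CPU-side registration of the ROIAlignRotated operator: parameter, shape and
+ * type inference plus the NNVM forward/backward bindings. Input rois are
+ * (batch_index, x_ctr, y_ctr, w, h, theta) and the output has shape
+ * (num_rois, channels, pooled_h, pooled_w). The CPU forward/backward bodies in
+ * this file are stubs; the actual computation is provided by the CUDA kernels
+ * in roi_align_rotated.cu.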
+ * Copyright (c) 2018 by Contributors + * \file roi_align.cc + * \brief roi align operator + * \author Hang Zhang + * Adapted from Caffe2 + * modified by Jian Ding +*/ +#include "./roi_align_rotated-inl.h" + + +namespace mxnet { +namespace op { + +template +struct PreCalc { + int pos1; + int pos2; + int pos3; + int pos4; + T w1; + T w2; + T w3; + T w4; +}; + +template +void pre_calc_for_bilinear_interpolate( + const int height, + const int width, + const int pooled_height, + const int pooled_width, + const int iy_upper, + const int ix_upper, + T roi_start_h, + T roi_start_w, + T bin_size_h, + T bin_size_w, + int roi_bin_grid_h, + int roi_bin_grid_w, + std::vector>* pre_calc) { + int pre_calc_index = 0; + for (int ph = 0; ph < pooled_height; ph++) { + for (int pw = 0; pw < pooled_width; pw++) { + for (int iy = 0; iy < iy_upper; iy++) { + const T yy = roi_start_h + ph * bin_size_h + + static_cast(iy + .5f) * bin_size_h / + static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 + for (int ix = 0; ix < ix_upper; ix++) { + const T xx = roi_start_w + pw * bin_size_w + + static_cast(ix + .5f) * bin_size_w / + static_cast(roi_bin_grid_w); + + T x = xx; + T y = yy; + // deal with: inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + PreCalc pc; + pc.pos1 = 0; + pc.pos2 = 0; + pc.pos3 = 0; + pc.pos4 = 0; + pc.w1 = 0; + pc.w2 = 0; + pc.w3 = 0; + pc.w4 = 0; + pre_calc->at(pre_calc_index) = pc; + pre_calc_index += 1; + continue; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + int y_low = static_cast(y); + int x_low = static_cast(x); + int y_high; + int x_high; + + if (y_low >= height - 1) { + y_high = y_low = height - 1; + y = (T)y_low; + } else { + y_high = y_low + 1; + } + + if (x_low >= width - 1) { + x_high = x_low = width - 1; + x = (T)x_low; + } else { + x_high = x_low + 1; + } + + T ly = y - y_low; + T lx = x - x_low; + T hy = 1. - ly, hx = 1. - lx; + T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; + + // save weights and indeces + PreCalc pc; + pc.pos1 = y_low * width + x_low; + pc.pos2 = y_low * width + x_high; + pc.pos3 = y_high * width + x_low; + pc.pos4 = y_high * width + x_high; + pc.w1 = w1; + pc.w2 = w2; + pc.w3 = w3; + pc.w4 = w4; + pre_calc->at(pre_calc_index) = pc; + + pre_calc_index += 1; + } + } + } + } +} + +template +void ROIAlignRotatedForward( + const int nthreads, + const T* bottom_data, + const T& spatial_scale, + const int channels, + const int height, + const int width, + const int pooled_height, + const int pooled_width, + const int sampling_ratio, + const T* bottom_rois, + T* top_data) { + return; +} + + +template +void bilinear_interpolate_gradient( + const int height, + const int width, + T y, + T x, + T* w1, + T* w2, + T* w3, + T* w4, + int* x_low, + int* x_high, + int* y_low, + int* y_high, + const int /*index*/ /* index for debug only*/) { + // deal with cases that inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + *w1 = *w2 = *w3 = *w4 = 0.; + *x_low = *x_high = *y_low = *y_high = -1; + return; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + *y_low = static_cast(y); + *x_low = static_cast(x); + + if (*y_low >= height - 1) { + *y_high = *y_low = height - 1; + y = (T)*y_low; + } else { + *y_high = *y_low + 1; + } + + if (*x_low >= width - 1) { + *x_high = *x_low = width - 1; + x = (T)*x_low; + } else { + *x_high = *x_low + 1; + } + + T ly = y - *y_low; + T lx = x - *x_low; + T hy = 1. 
- ly, hx = 1. - lx; + + *w1 = hy * hx, *w2 = hy * lx, *w3 = ly * hx, *w4 = ly * lx; + + return; +} + +template +inline void add(const T& val, T* address) { + *address += val; +} + +template +void ROIAlignRotatedBackward( + const int nthreads, + const T* top_diff, + const int /*num_rois*/, + const T& spatial_scale, + const int channels, + const int height, + const int width, + const int pooled_height, + const int pooled_width, + const int sampling_ratio, + T* bottom_diff, + const T* bottom_roiss) { + // NOT_IMPLEMENTED; + return; +} // ROIAlignBackward + + +template +void ROIAlignRotatedForwardCompute(const nnvm::NodeAttrs& attrs, + const OpContext& ctx, + const std::vector& in_data, + const std::vector& req, + const std::vector& out_data) { + using namespace mshadow; + size_t expected_in = 2; + size_t expected_out = 1; + CHECK_EQ(in_data.size(), expected_in); + CHECK_EQ(out_data.size(), expected_out); + CHECK_EQ(out_data[roialignrotated::kOut].shape_[0], in_data[roialignrotated::kBox].shape_[0]); + + const ROIAlignRotatedParam& param = nnvm::get(attrs.parsed); + + const int count = out_data[roialignrotated::kOut].Size(); + // const int num_rois = in_data[roialignrotated::kBox].size(0); + const int channels = in_data[roialignrotated::kData].size(1); + const int height = in_data[roialignrotated::kData].size(2); + const int width = in_data[roialignrotated::kData].size(3); + const int pooled_height = out_data[roialignrotated::kOut].size(2); + const int pooled_width = out_data[roialignrotated::kOut].size(3); + + // assume all the data and gradient have the same type + MSHADOW_REAL_TYPE_SWITCH(in_data[0].type_flag_, DType, { + const DType *bottom_data = in_data[roialignrotated::kData].dptr(); + const DType *bottom_rois = in_data[roialignrotated::kBox].dptr(); + DType *top_data = out_data[roialignrotated::kOut].dptr(); + + ROIAlignRotatedForward(count, bottom_data, param.spatial_scale, channels, + height, width, pooled_height, pooled_width, param.sample_ratio, + bottom_rois, top_data); + }) +} + +template +void ROIAlignRotatedBackwardCompute(const nnvm::NodeAttrs& attrs, + const OpContext& ctx, + const std::vector& inputs, + const std::vector& req, + const std::vector& outputs) { + using namespace mshadow; + + CHECK_EQ(inputs.size(), 2); + CHECK_EQ(outputs.size(), 2); + // the order here relates to the order in ROIAlignGrad + std::vector out_grad(1, inputs[0]); + std::vector in_data(1, inputs[1]); + // std::vector out_data(1, inputs[2]); + + CHECK_EQ(out_grad[0].shape_[0], in_data[0].shape_[0]); + CHECK_NE(req[0], kWriteInplace) << + "ROIAlignRotated: Backward doesn't support kWriteInplace."; + CHECK_NE(req[1], kWriteInplace) << + "ROIAlignRotated: Backward doesn't support kWriteInplace."; + + const ROIAlignRotatedParam& param = nnvm::get(attrs.parsed); + + const int count = out_grad[0].Size(); + const int num_rois = in_data[0].size(0); + const int channels = outputs[0].size(1); + const int height = outputs[0].size(2); + const int width = outputs[0].size(3); + const int pooled_height = out_grad[0].size(2); + const int pooled_width = out_grad[0].size(3); + + Stream *s = ctx.get_stream(); + // assume all the data and gradient have the same type + MSHADOW_REAL_TYPE_SWITCH(out_grad[0].type_flag_, DType, { + const DType *top_diff = out_grad[0].dptr(); + const DType *bottom_rois = in_data[0].dptr(); + DType *grad_in = outputs[0].dptr(); + + if (kAddTo == req[roialignrotated::kData] || kWriteTo == req[roialignrotated::kData]) { + if (kWriteTo == req[roialignrotated::kData]) { + Fill(s, outputs[0], 
kWriteTo, static_cast(0)); + } + ROIAlignRotatedBackward(count, top_diff, num_rois, param.spatial_scale, + channels, height, width, pooled_height, pooled_width, + param.sample_ratio, grad_in, bottom_rois); + } + if (kWriteTo == req[roialignrotated::kBox]) { + Fill(s, outputs[1], kWriteTo, static_cast(0)); + } + }) +} + +DMLC_REGISTER_PARAMETER(ROIAlignRotatedParam); + +NNVM_REGISTER_OP(_contrib_ROIAlignRotated) +.describe(R"code( +This operator takes a 4D feature map as an input array and rotated region proposals as `rois`, +then align the feature map over sub-regions of input and produces a fixed-sized output array. +This operator is typically used in Faster R-CNN & Mask R-CNN networks. + +Different from ROI pooling, ROI Align removes the harsh quantization, properly aligning +the extracted features with the input. RoIAlign computes the value of each sampling point +by bilinear interpolation from the nearby grid points on the feature map. No quantization is +performed on any coordinates involved in the RoI, its bins, or the sampling points. +Bilinear interpolation is used to compute the exact values of the +input features at four regularly sampled locations in each RoI bin. +Then the feature map can be aggregated by avgpooling. + + +Reference +--------- + +He, Kaiming, et al. "Mask R-CNN." ICCV, 2017 +)code" ADD_FILELINE) +.set_num_inputs(2) +.set_num_outputs(1) +.set_attr("FListInputNames", + [](const NodeAttrs& attrs) { + return std::vector{"data", "rois"}; +}) +.set_attr("FListOutputNames", + [](const NodeAttrs& attrs) { + return std::vector{"output"}; +}) +.set_attr_parser(ParamParser) +.set_attr("FInferShape", [](const nnvm::NodeAttrs& attrs, + std::vector *in_shape, std::vector *out_shape){ + using namespace mshadow; + const ROIAlignRotatedParam& param = nnvm::get(attrs.parsed); + CHECK_EQ(in_shape->size(), 2) << "Input:[data, rois]"; + // data: [batch_size, c, h, w] + TShape dshape = in_shape->at(roialignrotated::kData); + CHECK_EQ(dshape.ndim(), 4) << "data should be a 4D tensor"; + // bbox: [num_rois, 6] + TShape bshape = in_shape->at(roialignrotated::kBox); + CHECK_EQ(bshape.ndim(), 2) << "bbox should be a 2D tensor of shape [batch, 6]"; + CHECK_EQ(bshape[1], 6) << "bbox should be a 2D tensor of shape [batch, 6]"; + // out: [num_rois, c, pooled_h, pooled_w] + out_shape->clear(); + out_shape->push_back( + Shape4(bshape[0], dshape[1], param.pooled_size[0], param.pooled_size[1])); + return true; +}) +.set_attr("FInferType", [](const nnvm::NodeAttrs& attrs, + std::vector *in_type, std::vector *out_type) { + CHECK_EQ(in_type->size(), 2); + int dtype = (*in_type)[0]; + CHECK_EQ(dtype, (*in_type)[1]); + CHECK_NE(dtype, -1) << "Input must have specified type"; + + out_type->clear(); + out_type->push_back(dtype); + return true; +}) +.set_attr("FCompute", ROIAlignRotatedForwardCompute) +.set_attr("FGradient", + [](const nnvm::NodePtr& n, const std::vector& ograds) { + std::vector heads; + heads.push_back(ograds[roialignrotated::kOut]); + heads.push_back(n->inputs[roialignrotated::kBox]); + return MakeGradNode("_backward_ROIAlignRotated", n, heads, n->attrs.dict); + }) +.add_argument("data", "NDArray-or-Symbol", "Input data to the pooling operator, a 4D Feature maps") +.add_argument("rois", "NDArray-or-Symbol", "Bounding box coordinates, a 2D array") +.add_arguments(ROIAlignRotatedParam::__FIELDS__()); + + +NNVM_REGISTER_OP(_backward_ROIAlignRotated) +.set_num_outputs(2) +.set_attr("TIsBackward", true) +.set_attr_parser(ParamParser) +.set_attr("FCompute", ROIAlignRotatedBackwardCompute); + 
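+// Illustrative Python usage (a hedged sketch: the tensor shapes, ROI values and the
+// gpu context are assumptions, and the call name follows the mx.contrib.nd naming
+// convention used elsewhere in this repo). The CPU FCompute above is a stub, so a
+// GPU build is required:
+//   import mxnet as mx
+//   feat = mx.nd.ones((1, 256, 64, 64), ctx=mx.gpu(0))
+//   # one rotated ROI: (batch_index, x_ctr, y_ctr, w, h, theta), theta in radians
+//   rois = mx.nd.array([[0, 400.0, 320.0, 96.0, 48.0, 0.3]], ctx=mx.gpu(0))
+//   out = mx.contrib.nd.ROIAlignRotated(data=feat, rois=rois, pooled_size=(7, 7),
+//                                       spatial_scale=1.0 / 16, sample_ratio=2)
+//   # out.shape == (1, 256, 7, 7)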
+} // namespace op +} // namespace mxnet + diff --git a/fpn/operator_cxx/roi_align_rotated.cu b/fpn/operator_cxx/roi_align_rotated.cu new file mode 100644 index 0000000..a003b94 --- /dev/null +++ b/fpn/operator_cxx/roi_align_rotated.cu @@ -0,0 +1,491 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +/*! + * Copyright (c) 2018 by Contributors + * \file roi_align.cu + * \brief roi align operator + * \author Hang Zhang + * Adapted from Caffe2 + * modified by Jian Ding +*/ +#include "./roi_align_rotated-inl.h" + + +namespace mxnet { +namespace op { + +#define CUDA_1D_KERNEL_LOOP(i, n) \ + for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < (n); \ + i += blockDim.x * gridDim.x) + +using namespace mshadow::cuda; + +// The maximum number of blocks to use in the default kernel call. +constexpr int ROI_MAXIMUM_NUM_BLOCKS = 4096; + +/** + * @brief Compute the number of blocks needed to run N threads. + */ +inline int ROI_GET_BLOCKS(const int N) { + return std::max( + std::min( + (N + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock, + ROI_MAXIMUM_NUM_BLOCKS), + // Use at least 1 block, since CUDA does not allow empty block + 1); +} + + +template +__device__ T bilinear_interpolate( + const T* bottom_data, + const int height, + const int width, + T y, + T x, + const int index /* index for debug only*/) { + // deal with cases that inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + return 0; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + int y_low = static_cast(y); + int x_low = static_cast(x); + int y_high; + int x_high; + + if (y_low >= height - 1) { + y_high = y_low = height - 1; + y = (T)y_low; + } else { + y_high = y_low + 1; + } + + if (x_low >= width - 1) { + x_high = x_low = width - 1; + x = (T)x_low; + } else { + x_high = x_low + 1; + } + + T ly = y - y_low; + T lx = x - x_low; + T hy = 1. - ly, hx = 1. 
- lx; + // do bilinear interpolation + T v1 = bottom_data[y_low * width + x_low]; + T v2 = bottom_data[y_low * width + x_high]; + T v3 = bottom_data[y_high * width + x_low]; + T v4 = bottom_data[y_high * width + x_high]; + T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; + + T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); + + return val; +} + +template +__global__ void RoIAlignRotatedForwardKernel( + const int nthreads, + const T* bottom_data, + const T spatial_scale, + const int channels, + const int height, + const int width, + const int pooled_height, + const int pooled_width, + const int sampling_ratio, + const T* bottom_rois, + T* top_data) { + CUDA_1D_KERNEL_LOOP(index, nthreads) { + // (n, c, ph, pw) is an element in the pooled output + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int c = (index / pooled_width / pooled_height) % channels; + int n = index / pooled_width / pooled_height / channels; + + const T* offset_bottom_rois = bottom_rois + n * 6; + int roi_batch_ind = offset_bottom_rois[0]; + + // Do not using rounding; this implementation detail is critical + T roi_center_w = offset_bottom_rois[1] * spatial_scale; + T roi_center_h = offset_bottom_rois[2] * spatial_scale; + T roi_width = offset_bottom_rois[3] * spatial_scale; + T roi_height = offset_bottom_rois[4] * spatial_scale; + // T theta = offset_bottom_rois[5] * M_PI / 180.0; + T theta = offset_bottom_rois[5]; + + // Force malformed ROIs to be 1x1 + roi_width = max(roi_width, (T)1.); + roi_height = max(roi_height, (T)1.); + T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); + T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); + + const T* offset_bottom_data = + bottom_data + (roi_batch_ind * channels + c) * height * width; + + // We use roi_bin_grid to sample the grid and mimic integral + int roi_bin_grid_h = (sampling_ratio > 0) + ? sampling_ratio + : ceil(roi_height / pooled_height); // e.g., = 2 + int roi_bin_grid_w = + (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); + + // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). + // Appropriate translation needs to be applied after. + T roi_start_h = -roi_height / 2.0; + T roi_start_w = -roi_width / 2.0; + T cosTheta = cos(theta); + T sinTheta = sin(theta); + + // We do average (integral) pooling inside a bin + const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. 
= 4 + + T output_val = 0.; + for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 + const T yy = roi_start_h + ph * bin_size_h + + static_cast(iy + .5f) * bin_size_h / + static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 + for (int ix = 0; ix < roi_bin_grid_w; ix++) { + const T xx = roi_start_w + pw * bin_size_w + + static_cast(ix + .5f) * bin_size_w / + static_cast(roi_bin_grid_w); + + // Rotate by theta around the center and translate + // T x = xx * cosTheta + yy * sinTheta + roi_center_w; + // T y = yy * cosTheta - xx * sinTheta + roi_center_h; + T x = xx * cosTheta - yy * sinTheta + roi_center_w; + T y = xx * sinTheta + yy * cosTheta + roi_center_h; + + T val = bilinear_interpolate( + offset_bottom_data, height, width, y, x, index); + output_val += val; + } + } + output_val /= count; + + top_data[index] = output_val; + } +} + + +template +__device__ void bilinear_interpolate_gradient( + const int height, + const int width, + T y, + T x, + T* w1, + T* w2, + T* w3, + T* w4, + int* x_low, + int* x_high, + int* y_low, + int* y_high, + const int /*index*/ /* index for debug only*/) { + // deal with cases that inverse elements are out of feature map boundary + if (y < -1.0 || y > height || x < -1.0 || x > width) { + // empty + *w1 = *w2 = *w3 = *w4 = 0.; + *x_low = *x_high = *y_low = *y_high = -1; + return; + } + + if (y <= 0) { + y = 0; + } + if (x <= 0) { + x = 0; + } + + *y_low = static_cast(y); + *x_low = static_cast(x); + + if (*y_low >= height - 1) { + *y_high = *y_low = height - 1; + y = (T)*y_low; + } else { + *y_high = *y_low + 1; + } + + if (*x_low >= width - 1) { + *x_high = *x_low = width - 1; + x = (T)*x_low; + } else { + *x_high = *x_low + 1; + } + + T ly = y - *y_low; + T lx = x - *x_low; + T hy = 1. - ly, hx = 1. - lx; + + *w1 = hy * hx, *w2 = hy * lx, *w3 = ly * hx, *w4 = ly * lx; + + return; +} + +template +__global__ void RoIAlignRotatedBackwardKernel( + const int nthreads, + const T* top_diff, + const int num_rois, + const T spatial_scale, + const int channels, + const int height, + const int width, + const int pooled_height, + const int pooled_width, + const int sampling_ratio, + T* bottom_diff, + const T* bottom_rois) { + CUDA_1D_KERNEL_LOOP(index, nthreads) { + // (n, c, ph, pw) is an element in the pooled output + int pw = index % pooled_width; + int ph = (index / pooled_width) % pooled_height; + int c = (index / pooled_width / pooled_height) % channels; + int n = index / pooled_width / pooled_height / channels; + + const T* offset_bottom_rois = bottom_rois + n * 6; + int roi_batch_ind = offset_bottom_rois[0]; + + // Do not round + T roi_center_w = offset_bottom_rois[1] * spatial_scale; + T roi_center_h = offset_bottom_rois[2] * spatial_scale; + T roi_width = offset_bottom_rois[3] * spatial_scale; + T roi_height = offset_bottom_rois[4] * spatial_scale; + // T theta = offset_bottom_rois[5] * M_PI / 180.0; + T theta = offset_bottom_rois[5]; + + + // Force malformed ROIs to be 1x1 + roi_width = max(roi_width, (T)1.); + roi_height = max(roi_height, (T)1.); + T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); + T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); + + T* offset_bottom_diff = + bottom_diff + (roi_batch_ind * channels + c) * height * width; + + int top_offset = (n * channels + c) * pooled_height * pooled_width; + const T* offset_top_diff = top_diff + top_offset; + const T top_diff_this_bin = offset_top_diff[ph * pooled_width + pw]; + + // We use roi_bin_grid to sample the grid and mimic integral + int roi_bin_grid_h = 
(sampling_ratio > 0) + ? sampling_ratio + : ceil(roi_height / pooled_height); // e.g., = 2 + int roi_bin_grid_w = + (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); + + // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). + // Appropriate translation needs to be applied after. + T roi_start_h = -roi_height / 2.0; + T roi_start_w = -roi_width / 2.0; + T cosTheta = cos(theta); + T sinTheta = sin(theta); + + // We do average (integral) pooling inside a bin + const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4 + + for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 + const T yy = roi_start_h + ph * bin_size_h + + static_cast(iy + .5f) * bin_size_h / + static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 + for (int ix = 0; ix < roi_bin_grid_w; ix++) { + const T xx = roi_start_w + pw * bin_size_w + + static_cast(ix + .5f) * bin_size_w / + static_cast(roi_bin_grid_w); + + // Rotate by theta around the center and translate + // T x = xx * cosTheta + yy * sinTheta + roi_center_w; + // T y = yy * cosTheta - xx * sinTheta + roi_center_h; + T x = xx * cosTheta - yy * sinTheta + roi_center_w; + T y = xx * sinTheta + yy * cosTheta + roi_center_h; + + T w1, w2, w3, w4; + int x_low, x_high, y_low, y_high; + + bilinear_interpolate_gradient( + height, + width, + y, + x, + &w1, + &w2, + &w3, + &w4, + &x_low, + &x_high, + &y_low, + &y_high, + index); + + T g1 = top_diff_this_bin * w1 / count; + T g2 = top_diff_this_bin * w2 / count; + T g3 = top_diff_this_bin * w3 / count; + T g4 = top_diff_this_bin * w4 / count; + + if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { + atomicAdd( + offset_bottom_diff + y_low * width + x_low, static_cast(g1)); + atomicAdd( + offset_bottom_diff + y_low * width + x_high, static_cast(g2)); + atomicAdd( + offset_bottom_diff + y_high * width + x_low, static_cast(g3)); + atomicAdd( + offset_bottom_diff + y_high * width + x_high, static_cast(g4)); + } // if + } // ix + } // iy + } // CUDA_1D_KERNEL_LOOP +} // RoIAlignBackward + +template +void ROIAlignRotatedForwardCompute(const nnvm::NodeAttrs& attrs, + const OpContext& ctx, + const std::vector& in_data, + const std::vector& req, + const std::vector& out_data) { + using namespace mshadow; + size_t expected_in = 2; + size_t expected_out = 1; + CHECK_EQ(in_data.size(), expected_in); + CHECK_EQ(out_data.size(), expected_out); + CHECK_EQ(out_data[roialignrotated::kOut].shape_[0], in_data[roialignrotated::kBox].shape_[0]); + + const ROIAlignRotatedParam param = nnvm::get(attrs.parsed); + + const int count = out_data[roialignrotated::kOut].Size(); + const int num_rois = in_data[roialignrotated::kBox].size(0); + const int channels = in_data[roialignrotated::kData].size(1); + const int height = in_data[roialignrotated::kData].size(2); + const int width = in_data[roialignrotated::kData].size(3); + const int pooled_height = out_data[roialignrotated::kOut].size(2); + const int pooled_width = out_data[roialignrotated::kOut].size(3); + + Stream *s = ctx.get_stream(); + cudaStream_t stream = mshadow::Stream::GetStream(s); + MSHADOW_REAL_TYPE_SWITCH(in_data[0].type_flag_, DType, { + const DType *bottom_data = in_data[roialignrotated::kData].dptr(); + const DType *bottom_rois = in_data[roialignrotated::kBox].dptr(); + DType *top_data = out_data[roialignrotated::kOut].dptr(); + RoIAlignRotatedForwardKernel + <<>>( + count, + bottom_data, + param.spatial_scale, + channels, + height, + width, + pooled_height, + pooled_width, + param.sample_ratio, + bottom_rois, + top_data); + }) +} 
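+// In the kernels above, each sampling point uses bin-local coordinates (xx, yy)
+// measured from the ROI center, rotated by theta and translated before bilinear
+// interpolation:
+//   x = xx * cos(theta) - yy * sin(theta) + x_ctr
+//   y = xx * sin(theta) + yy * cos(theta) + y_ctr
+// The backward launcher below zero-fills the gradient buffers according to `req`,
+// then lets RoIAlignRotatedBackwardKernel distribute each bin's gradient to the
+// four neighbouring pixels of every sampling point with the same bilinear weights,
+// via atomicAdd.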
+ + +template +void ROIAlignRotatedBackwardCompute(const nnvm::NodeAttrs& attrs, + const OpContext& ctx, + const std::vector& inputs, + const std::vector& req, + const std::vector& outputs) { + using namespace mshadow; + + CHECK_EQ(inputs.size(), 2); + CHECK_EQ(outputs.size(), 2); + // the order here relates to the order in ROIAlignGrad + std::vector out_grad(1, inputs[0]); + std::vector in_data(1, inputs[1]); + // std::vector out_data(1, inputs[2]); + + CHECK_EQ(out_grad[0].shape_[0], in_data[0].shape_[0]); + CHECK_NE(req[0], kWriteInplace) << + "ROIAlignRotated: Backward doesn't support kWriteInplace."; + CHECK_NE(req[1], kWriteInplace) << + "ROIAlignRotated: Backward doesn't support kWriteInplace."; + + const ROIAlignRotatedParam param = nnvm::get(attrs.parsed); + + const int count = out_grad[0].Size(); + const int num_rois = in_data[0].size(0); + const int channels = outputs[0].size(1); + const int height = outputs[0].size(2); + const int width = outputs[0].size(3); + const int pooled_height = out_grad[0].size(2); + const int pooled_width = out_grad[0].size(3); + + Stream *s = ctx.get_stream(); + cudaStream_t stream = mshadow::Stream::GetStream(s); + + // assume all the data and gradient have the same type + MSHADOW_REAL_TYPE_SWITCH(out_grad[0].type_flag_, DType, { + const DType *top_diff = out_grad[0].dptr(); + const DType *bottom_rois = in_data[0].dptr(); + DType *grad_in = outputs[0].dptr(); + + if (kWriteTo == req[roialignrotated::kBox]) { + Fill(s, outputs[1], kWriteTo, static_cast(0)); + } + if (kNullOp == req[roialignrotated::kData]) return; + if (kWriteTo == req[roialignrotated::kData]) { + Fill(s, outputs[0], kWriteTo, static_cast(0)); + } + RoIAlignRotatedBackwardKernel + <<>>( + count, + top_diff, + num_rois, + param.spatial_scale, + channels, + height, + width, + pooled_height, + pooled_width, + param.sample_ratio, + grad_in, + bottom_rois); + }) +} + + +NNVM_REGISTER_OP(_contrib_ROIAlignRotated) +.set_attr("FCompute", ROIAlignRotatedForwardCompute); + +NNVM_REGISTER_OP(_backward_ROIAlignRotated) +.set_attr("FCompute", ROIAlignRotatedBackwardCompute); + +} // namespace op +} // namespace mxnet diff --git a/fpn/operator_py/RRoIDecoder.py b/fpn/operator_py/RRoIDecoder.py new file mode 100644 index 0000000..afc94db --- /dev/null +++ b/fpn/operator_py/RRoIDecoder.py @@ -0,0 +1,167 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +import numpy.random as npr +from distutils.util import strtobool + +from bbox.bbox_transform import bbox_pred, clip_boxes +from rpn.generate_anchor import generate_anchors +from nms.nms import gpu_nms_wrapper +# from poly_nms_gpu.poly_nms import poly_gpu_nms +from poly_nms_gpu.nms import poly_gpu_nms_wrapper +from bbox.bbox_transform import dbbox_transform2_inv_warp +from bbox.bbox_transform import clip_polys, RotBox2Polys, polygonToRotRectangle_batch, choose_best_Rroi_batch +import cPickle +import pdb +import copy +DEBUG = False + +## version 2 did not apply nms +class RRoIDecoderOperator(mx.operator.CustomOp): + def __init__(self, pre_nms_top_n, post_nms_top_n, threshold, min_size, cfg): + super(RRoIDecoderOperator, self).__init__() + self._pre_nms_top_n = pre_nms_top_n + self._post_nms_top_n = post_nms_top_n + self._threshold = threshold + self._min_size = min_size + self._cfg = cfg + 
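+    # forward() decodes rotated RoIs from the input RoIs and the predicted
+    # offsets (st_pred), drops boxes whose area is below (min_size * im_scale)^2,
+    # sorts the survivors by score, keeps at most post_nms_top_n of them (padding
+    # by repetition so the output shape stays fixed), converts the polygons back
+    # to (x_ctr, y_ctr, w, h, theta) with the angle normalised, and also emits an
+    # enlarged copy (w * 1.2, h * 1.4) used for feature extraction.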
+ def forward(self, is_train, req, in_data, out_data, aux): + # batch_size = in_data[0].shape[0] + batch_size = in_data[0][0][0] + if batch_size.asnumpy() > 1: + raise ValueError("Sorry, multiple images each device is not implemented") + + rois = in_data[0].asnumpy() + st_pred = in_data[1].asnumpy() + # st_score: shape (n, 2) + st_score = in_data[2].asnumpy()[:, :, 1].reshape(-1, 1) + im_info = in_data[-1].asnumpy()[0, :] + + pre_nms_topN = self._pre_nms_top_n + post_nms_topN = self._post_nms_top_n + min_size = self._min_size + # 1. generate Rrois + cfg = self._cfg + # checked it, yes, the weights is different in training and testing, so the st_pred is different in training and testing + # this is very critical + if is_train: + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + means = np.tile(np.array(cfg.TRAIN.BBOX_MEANS), 2 if cfg.CLASS_AGNOSTIC else cfg.dataset.NUM_CLASSES) + stds = np.tile(np.array(cfg.TRAIN.BBOX_STDS), 2 if cfg.CLASS_AGNOSTIC else cfg.dataset.NUM_CLASSES) + st_pred = st_pred * stds + means + Rrois = dbbox_transform2_inv_warp(rois[:, 1:], st_pred)[:, 5:] + if (len(Rrois) == 0): + pdb.set_trace() + # remove Rrois with either height or width < thredhold + keep = self._filter_boxes_v2(Rrois, min_size * im_info[2] * min_size * im_info[2]) + keep_Rrois = Rrois[keep] + scores = st_score[keep] + + if len(keep_Rrois) == 0: + Rrois[:, 2] = np.maximum(Rrois[:, 2], min_size * im_info[2]) + Rrois[:, 3] = np.maximum(Rrois[:, 3], min_size * im_info[2]) + # if after filter, there are no instances, clip all Rrois' size + keep_Rrois = Rrois + scores = st_score + proposals = RotBox2Polys(keep_Rrois) + + # sort all (proposal, score) pairs by score from highest to lowest + # take top pre_nms_topN (e.g. 6000) + order = scores.ravel().argsort()[::-1] + if pre_nms_topN > 0: + order = order[:pre_nms_topN] + proposals = proposals[order, :] + scores = scores[order] + # take after_nms_topN (e.g. 
300) + # return the top proposals (-> RoIs top) + det = np.hstack((proposals, scores)).astype(np.float32) + + keep = np.arange(len(det)) + if post_nms_topN > 0: + keep = keep[:post_nms_topN] + # pad to ensure output size remains unchanged + if len(keep) < post_nms_topN: + pad = npr.choice(keep, size=post_nms_topN - len(keep)) + keep = np.hstack((keep, pad)) + proposals = proposals[keep, :] + + scores = scores[keep] + # ----------------------------- + # trans polys to rotboxes + proposals = polygonToRotRectangle_batch(proposals) + # range angle in [0, 180] to eliminate ambiguity of orientation agnostic instance regression + proposals = choose_best_Rroi_batch(proposals) + # proposals: (x_ctr, y_ctr, w, h, angle) + # Output rois array + # Our RPN implementation only supports a single input image, so all + # batch inds are 0 + batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32) + blob = np.hstack((batch_inds, proposals.astype(np.float32, copy=False))) + # if is_train: + self.assign(out_data[0], req[0], blob) + + # elarged area for feature extraction + elarge_proposals = copy.deepcopy(proposals) + elarge_proposals[:, 2] = proposals[:, 2] * 1.2 + elarge_proposals[:, 3] = proposals[:, 3] * 1.4 + elarge_blob = np.hstack((batch_inds, elarge_proposals.astype(np.float32, copy=False))) + self.assign(out_data[1], req[1], elarge_blob) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + @staticmethod + def _filter_boxes(boxes, min_size): + """ Remove all boxes with any side smaller than min_size """ + ws = boxes[:, 2] + hs = boxes[:, 3] + keep = np.where((ws >= min_size) & (hs >= min_size))[0] + return keep + + @staticmethod + def _filter_boxes_v2(boxes, area): + """ Remove all boxes with area below 10 * 10 """ + ws = boxes[:, 2] + hs = boxes[:, 3] + # keep = np.where((ws >= min_size) & (hs >= min_size))[0] + keep = np.where(ws * hs >= area)[0] + return keep + +@mx.operator.register("RRoIDecoder") +class RRoIDecoderProp(mx.operator.CustomOpProp): + def __init__(self, cfg, Rroi_pre_nms_top_n='12000', Rroi_post_nms_top_n='2000', threshold='0.5', min_size='10'): + super(RRoIDecoderProp, self).__init__(need_top_grad=False) + self._cfg = cPickle.loads(cfg) + self._Rroi_pre_nms_top_n = int(Rroi_pre_nms_top_n) + self._Rroi_post_nms_top_n = int(Rroi_post_nms_top_n) + self._threshold = float(threshold) + self._min_size = int(min_size) + + def list_arguments(self): + + return ['rois', 'bbox_pred', 'cls_prob', 'im_info'] + + def list_outputs(self): + + # return ['output_rois', 'output_rois_L'] + return ['output', 'output_rois_L'] + + def infer_shape(self, in_shape): + output_shape = (self._Rroi_post_nms_top_n, 6) + + return in_shape, [output_shape, output_shape] + + def create_operator(self, ctx, shapes, dtypes): + return RRoIDecoderOperator(self._Rroi_pre_nms_top_n, self._Rroi_post_nms_top_n, + self._threshold, self._min_size, self._cfg) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/fpn/operator_py/RRoI_target_rotbox_v2.py b/fpn/operator_py/RRoI_target_rotbox_v2.py new file mode 100644 index 0000000..e79052a --- /dev/null +++ b/fpn/operator_py/RRoI_target_rotbox_v2.py @@ -0,0 +1,135 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# 
Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background roi and assigns label, bbox_transform to them. +""" + +import mxnet as mx +import numpy as np +from distutils.util import strtobool +from easydict import EasyDict as edict +import cPickle +from bbox.bbox_transform import bbox_poly2hbb, poly2bbox, polygonToRotRectangle_batch, choose_best_Rroi_batch + +from core.rcnn import sample_Rrois +import copy +import pdb +DEBUG = False + +# v2 is the version with Rroi elarge +class RRoITargetRotBox_v2Operator(mx.operator.CustomOp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction): + super(RRoITargetRotBox_v2Operator, self).__init__() + self._num_classes = num_classes + self._batch_images = batch_images + self._batch_rois = batch_rois + self._cfg = cfg + self._fg_fraction = fg_fraction + + if DEBUG: + self._count = 0 + self._fg_num = 0 + self._bg_num = 0 + + def forward(self, is_train, req, in_data, out_data, aux): + assert self._batch_rois == -1 or self._batch_rois % self._batch_images == 0, \ + 'batchimages {} must devide batch_rois {}'.format(self._batch_images, self._batch_rois) + # pdb.set_trace() + all_rois = in_data[0].asnumpy() + gt_boxes = in_data[1].asnumpy() + + if self._batch_rois == -1: + rois_per_image = all_rois.shape[0] + gt_boxes.shape[0] + fg_rois_per_image = rois_per_image + else: + rois_per_image = self._batch_rois / self._batch_images + fg_rois_per_image = np.round(self._fg_fraction * rois_per_image).astype(int) + + # Include ground-truth boxes in the set of candidate rois + zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype) + # pdb.set_trace() + gt_rotboxes = np.concatenate((polygonToRotRectangle_batch(gt_boxes[:, :-1]), gt_boxes[:, -1][:, np.newaxis]), axis=1).astype(np.float32) + + all_rois = np.vstack((all_rois, np.hstack((zeros, choose_best_Rroi_batch(gt_rotboxes[:, :-1]))))) + # Sanity check: single batch only + assert np.all(all_rois[:, 0] == 0), 'Only single item batches are supported' + gpu_id = in_data[0].context.device_id + rois, labels, bbox_targets, bbox_weights = \ + sample_Rrois(all_rois, fg_rois_per_image, rois_per_image, self._num_classes, self._cfg, gt_boxes=gt_rotboxes, device_id=gpu_id) + + # elarge roi for feature extraction + # rois: (n, 6) (batch, x, y, w, h ,theta) + # pdb.set_trace() + elarge_rois = copy.deepcopy(rois) + elarge_rois[:, 3] = rois[:, 3] * 1.2 + elarge_rois[:, 4] = rois[:, 4] * 1.4 + + if DEBUG: + print "labels=", labels + print 'num fg: {}'.format((labels > 0).sum()) + print 'num bg: {}'.format((labels == 0).sum()) + self._count += 1 + self._fg_num += (labels > 0).sum() + self._bg_num += (labels == 0).sum() + print "self._count=", self._count + print 'num fg avg: {}'.format(self._fg_num / self._count) + print 'num bg avg: {}'.format(self._bg_num / self._count) + print 'ratio: {:.3f}'.format(float(self._fg_num) / float(self._bg_num)) + # pdb.set_trace() + for ind, val in enumerate([rois, elarge_rois, labels, bbox_targets, bbox_weights]): + self.assign(out_data[ind], req[ind], val) + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + +@mx.operator.register('RRoI_target_rotbox_v2') +class RRoITargetRotbox_v2Prop(mx.operator.CustomOpProp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, 
fg_fraction='0.25'): + super(RRoITargetRotbox_v2Prop, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._batch_images = int(batch_images) + self._batch_rois = int(batch_rois) + self._cfg = cPickle.loads(cfg) + self._fg_fraction = float(fg_fraction) + + def list_arguments(self): + return ['Rrois', 'gt_boxes'] + + def list_outputs(self): + return ['Rrois_output', 'Rrois_output_elarge', 'Rlabel', 'Rbbox_target', 'Rbbox_weight'] + + def infer_shape(self, in_shape): + rpn_rois_shape = in_shape[0] + gt_boxes_shape = in_shape[1] + + rois = rpn_rois_shape[0] + gt_boxes_shape[0] if self._batch_rois == -1 else self._batch_rois + # rois = rpn_rois_shape[0] if self._batch_rois == -1 else self._batch_rois + + output_rois_shape = (rois, 6) + label_shape = (rois, ) + bbox_target_shape = (rois, 5 * self._num_classes) + bbox_weight_shape = (rois, 5 * self._num_classes) + + return [rpn_rois_shape, gt_boxes_shape], \ + [output_rois_shape, output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + # return [rpn_rois_shape], \ + # [output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + + def create_operator(self, ctx, shapes, dtypes): + return RRoITargetRotBox_v2Operator(self._num_classes, self._batch_images, self._batch_rois, self._cfg, self._fg_fraction) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/fpn/operator_py/__init__.py b/fpn/operator_py/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/fpn/operator_py/box_annotator_ohem.py b/fpn/operator_py/box_annotator_ohem.py new file mode 100644 index 0000000..3779032 --- /dev/null +++ b/fpn/operator_py/box_annotator_ohem.py @@ -0,0 +1,83 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background roi and assigns label, bbox_transform to them. 
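+In this module the selection is done by Online Hard Example Mining (OHEM): RoIs
+are ranked by the sum of their classification and bounding-box regression losses,
+only the top roi_per_img hardest ones are kept, and the labels / bbox weights of
+the remaining RoIs are masked out (label -1, weight 0).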
+""" + +import mxnet as mx +import numpy as np +from distutils.util import strtobool + + +class BoxAnnotatorOHEMOperator(mx.operator.CustomOp): + def __init__(self, num_classes, num_reg_classes, roi_per_img): + super(BoxAnnotatorOHEMOperator, self).__init__() + self._num_classes = num_classes + self._num_reg_classes = num_reg_classes + self._roi_per_img = roi_per_img + + def forward(self, is_train, req, in_data, out_data, aux): + + cls_score = in_data[0] + bbox_pred = in_data[1] + labels = in_data[2].asnumpy() + bbox_targets = in_data[3] + bbox_weights = in_data[4] + + per_roi_loss_cls = mx.nd.SoftmaxActivation(cls_score) + 1e-14 + per_roi_loss_cls = per_roi_loss_cls.asnumpy() + per_roi_loss_cls = per_roi_loss_cls[np.arange(per_roi_loss_cls.shape[0], dtype='int'), labels.astype('int')] + per_roi_loss_cls = -1 * np.log(per_roi_loss_cls) + per_roi_loss_cls = np.reshape(per_roi_loss_cls, newshape=(-1,)) + + per_roi_loss_bbox = bbox_weights * mx.nd.smooth_l1((bbox_pred - bbox_targets), scalar=1.0) + per_roi_loss_bbox = mx.nd.sum(per_roi_loss_bbox, axis=1).asnumpy() + + top_k_per_roi_loss = np.argsort(per_roi_loss_cls + per_roi_loss_bbox) + labels_ohem = labels + labels_ohem[top_k_per_roi_loss[::-1][self._roi_per_img:]] = -1 + bbox_weights_ohem = bbox_weights.asnumpy() + bbox_weights_ohem[top_k_per_roi_loss[::-1][self._roi_per_img:]] = 0 + + labels_ohem = mx.nd.array(labels_ohem) + bbox_weights_ohem = mx.nd.array(bbox_weights_ohem) + + for ind, val in enumerate([labels_ohem, bbox_weights_ohem]): + self.assign(out_data[ind], req[ind], val) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + +@mx.operator.register('BoxAnnotatorOHEM') +class BoxAnnotatorOHEMProp(mx.operator.CustomOpProp): + def __init__(self, num_classes, num_reg_classes, roi_per_img): + super(BoxAnnotatorOHEMProp, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._num_reg_classes = int(num_reg_classes) + self._roi_per_img = int(roi_per_img) + + def list_arguments(self): + return ['cls_score', 'bbox_pred', 'labels', 'bbox_targets', 'bbox_weights'] + + def list_outputs(self): + return ['labels_ohem', 'bbox_weights_ohem'] + + def infer_shape(self, in_shape): + labels_shape = in_shape[2] + bbox_weights_shape = in_shape[4] + + return in_shape, \ + [labels_shape, bbox_weights_shape] + + def create_operator(self, ctx, shapes, dtypes): + return BoxAnnotatorOHEMOperator(self._num_classes, self._num_reg_classes, self._roi_per_img) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/fpn/operator_py/fpn_psroi_rotatedpooling.py b/fpn/operator_py/fpn_psroi_rotatedpooling.py new file mode 100644 index 0000000..999570a --- /dev/null +++ b/fpn/operator_py/fpn_psroi_rotatedpooling.py @@ -0,0 +1,129 @@ +# -------------------------------------------------------- +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi, Yuwen Xiong +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +from mxnet.contrib import autograd +import gc +import pdb + +class FPNPSROIROTATEDPoolingOperator(mx.operator.CustomOp): + def __init__(self, feat_strides, pooled_height, pooled_width, output_dim): + self.pooled_height = pooled_height + self.pooled_width = pooled_width + self.feat_strides = feat_strides + self.output_dim = output_dim + self.in_grad_hist_list = [] + self.num_strides = 
len(self.feat_strides) + self.roi_pool = [None for _ in range(self.num_strides)] + self.feat_idx = [None for _ in range(self.num_strides)] + + def forward(self, is_train, req, in_data, out_data, aux): + rois = in_data[-1].asnumpy() + # w = rois[:, 3] - rois[:, 1] + 1 + # h = rois[:, 4] - rois[:, 2] + 1 + w = np.maximum(rois[:, 3], 1) + h = np.maximum(rois[:, 4], 1) + # TODO: carefully scale the w, h + feat_id = np.clip(np.floor(2 + np.log2(np.sqrt(w * h) / 224)), 0, len(self.feat_strides) - 1) + pyramid_idx = [] + + rois_p = [None for _ in range(self.num_strides)] + for i in range(self.num_strides): + self.feat_idx[i] = np.where(feat_id == i)[0] + if len(self.feat_idx[i]) == 0: + # padding dummy roi + rois_p[i] = np.zeros((1, 6)) + pyramid_idx.append(-1) + else: + rois_p[i] = rois[self.feat_idx[i]] + pyramid_idx.append(self.feat_idx[i]) + rois_idx = np.argsort(np.hstack(pyramid_idx))[-rois.shape[0]:] + # pdb.set_trace() + if is_train: + for i in range(self.num_strides): + self.in_grad_hist_list.append(mx.nd.zeros_like(in_data[i])) + + + autograd.mark_variables([in_data[i] for i in range(self.num_strides)], self.in_grad_hist_list) + with autograd.train_section(): + for i in range(self.num_strides): + + # self.roi_pool[i] = mx.nd.contrib.PSROIROTATEDPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), group_size=7, pooled_size=7, + # output_dim=10, spatial_scale=1.0 / self.feat_strides[i]) + self.roi_pool[i] = mx.contrib.nd.PSROIROTATEDPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + + roi_pool = mx.nd.concatenate(self.roi_pool, axis=0) + else: + # during testing, there is no need to record variable, thus saving memory + # pdb.set_trace() + roi_pool = [None for _ in range(self.num_strides)] + + + for i in range(self.num_strides): + + roi_pool[i] = mx.contrib.nd.PSROIROTATEDPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + roi_pool = mx.nd.concatenate(roi_pool, axis=0) + + roi_pool = mx.nd.take(roi_pool, mx.nd.array(rois_idx, roi_pool.context)) + self.assign(out_data[0], req[0], roi_pool) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + with autograd.train_section(): + for i in range(self.num_strides): + if len(self.feat_idx[i] > 0): + autograd.compute_gradient([mx.nd.take(out_grad[0], mx.nd.array(self.feat_idx[i], out_grad[0].context)) * self.roi_pool[i]]) + + for i in range(0, self.num_strides): + self.assign(in_grad[i], req[i], self.in_grad_hist_list[i]) + + gc.collect() + + +@mx.operator.register('fpn_psroi_rotatedpooling') +class FPNPSROIROTATEDPoolingProp(mx.operator.CustomOpProp): + def __init__(self, feat_strides='(4,8,16,32)', pooled_height='7', pooled_width='7', output_dim='10'): + super(FPNPSROIROTATEDPoolingProp, self).__init__(need_top_grad=True) + self.pooled_height = int(pooled_height) + self.pooled_width = int(pooled_width) + self.feat_strides = np.fromstring(feat_strides[1:-1], dtype=int, sep=',') + self.output_dim = int(output_dim) + + self.num_strides = len(self.feat_strides) + + def list_arguments(self): + args_list = [] + for i in range(self.num_strides): + args_list.append('data_p{}'.format(2 + i)) + args_list.append('Rrois') + return args_list + + def list_outputs(self): + return ['output'] + + def infer_shape(self, 
in_shape): + # pdb.set_trace() + # output_feat_shape = [in_shape[-1][0], in_shape[0][1], self.pooled_height, self.pooled_width] + output_feat_shape = [in_shape[-1][0], self.output_dim, self.pooled_height, self.pooled_width] + + return in_shape, [output_feat_shape] + + def create_operator(self, ctx, shapes, dtypes): + return FPNPSROIROTATEDPoolingOperator(self.feat_strides, self.pooled_height, self.pooled_width, self.output_dim) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [out_grad[0]] diff --git a/fpn/operator_py/fpn_psroipooling_v2.py b/fpn/operator_py/fpn_psroipooling_v2.py new file mode 100644 index 0000000..df94b3c --- /dev/null +++ b/fpn/operator_py/fpn_psroipooling_v2.py @@ -0,0 +1,198 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi, Yuwen Xiong +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +from mxnet.contrib import autograd +import gc +import pdb + +class FPNPSROIPooling_v2Operator(mx.operator.CustomOp): + def __init__(self, feat_strides, pooled_height, pooled_width, output_dim, pooling_mode): + self.pooled_height = pooled_height + self.pooled_width = pooled_width + self.feat_strides = feat_strides + self.pooling_mode = pooling_mode + self.output_dim = output_dim + self.in_grad_hist_list = [] + self.num_strides = len(self.feat_strides) + self.roi_pool = [None for _ in range(self.num_strides)] + self.feat_idx = [None for _ in range(self.num_strides)] + + def forward(self, is_train, req, in_data, out_data, aux): + rois = in_data[-1].asnumpy() + w = rois[:, 3] - rois[:, 1] + 1 + h = rois[:, 4] - rois[:, 2] + 1 + feat_id = np.clip(np.floor(2 + np.log2(np.sqrt(w * h) / 224)), 0, len(self.feat_strides) - 1) + pyramid_idx = [] + + rois_p = [None for _ in range(self.num_strides)] + for i in range(self.num_strides): + self.feat_idx[i] = np.where(feat_id == i)[0] + if len(self.feat_idx[i]) == 0: + # padding dummy roi + rois_p[i] = np.zeros((1, 5)) + pyramid_idx.append(-1) + else: + rois_p[i] = rois[self.feat_idx[i]] + pyramid_idx.append(self.feat_idx[i]) + rois_idx = np.argsort(np.hstack(pyramid_idx))[-rois.shape[0]:] + # pdb.set_trace() + if is_train: + for i in range(self.num_strides): + self.in_grad_hist_list.append(mx.nd.zeros_like(in_data[i])) + + if self.pooling_mode == 'deform': + for i in range(self.num_strides, self.num_strides * 3): + self.in_grad_hist_list.append(mx.nd.zeros_like(in_data[i])) + autograd.mark_variables([in_data[i] for i in range(self.num_strides * 3)], self.in_grad_hist_list) + + with autograd.train_section(): + for i in range(self.num_strides): + roi_offset_t = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), group_size=7, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, output_dim=10, spatial_scale=1.0 / self.feat_strides[i]) + roi_offset = mx.nd.FullyConnected(data=roi_offset_t, num_hidden=7 * 7 * 2, weight=in_data[i * 2 + self.num_strides], bias=in_data[i * 2 + 1 + self.num_strides]) + roi_offset_reshape = mx.nd.reshape(data=roi_offset, shape=(-1, 2, 7, 7)) + self.roi_pool[i] = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), trans=roi_offset_reshape, + group_size=7, pooled_size=7, sample_per_part=4, no_trans=False, part_size=7, + output_dim=self.output_dim, spatial_scale=1.0 / 
self.feat_strides[i], trans_std=0.1) + elif self.pooling_mode == 'alignave': + # pdb.set_trace() + autograd.mark_variables([in_data[i] for i in range(self.num_strides)], self.in_grad_hist_list) + with autograd.train_section(): + for i in range(self.num_strides): + self.roi_pool[i] = mx.contrib.nd.PSROIALIGNAVEPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, sampling_ratio=4, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + elif self.pooling_mode == 'pooling': + autograd.mark_variables([in_data[i] for i in range(self.num_strides)], self.in_grad_hist_list) + with autograd.train_section(): + for i in range(self.num_strides): + # TODO: finish it, and fix the output_dim hard code here + + # self.roi_pool[i] = mx.nd.contrib.PSROIPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), group_size=7, pooled_size=7, + # output_dim=10, spatial_scale=1.0 / self.feat_strides[i]) + self.roi_pool[i] = mx.contrib.nd.PSROIPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + else: + print 'no such pooling mode' + pdb.set_trace() + roi_pool = mx.nd.concatenate(self.roi_pool, axis=0) + else: + # during testing, there is no need to record variable, thus saving memory + # pdb.set_trace() + roi_pool = [None for _ in range(self.num_strides)] + if self.pooling_mode == 'deform': + for i in range(self.num_strides): + roi_offset_t = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + roi_offset = mx.nd.FullyConnected(data=roi_offset_t, num_hidden=7 * 7 * 2, + weight=in_data[i * 2 + self.num_strides], + bias=in_data[i * 2 + 1 + self.num_strides]) + roi_offset_reshape = mx.nd.reshape(data=roi_offset, shape=(-1, 2, 7, 7)) + roi_pool[i] = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + trans=roi_offset_reshape, + group_size=7, pooled_size=7, sample_per_part=4, + no_trans=False, part_size=7, + output_dim=self.output_dim, + spatial_scale=1.0 / self.feat_strides[i], + trans_std=0.1) + elif self.pooling_mode == 'alignave': + for i in range(self.num_strides): + + roi_pool[i] = mx.contrib.nd.PSROIALIGNAVEPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, sampling_ratio=4, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + elif self.pooling_mode == 'pooling': + for i in range(self.num_strides): + + roi_pool[i] = mx.contrib.nd.PSROIPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + else: + print 'no such pooling mode' + pdb.set_trace() + roi_pool = mx.nd.concatenate(roi_pool, axis=0) + + roi_pool = mx.nd.take(roi_pool, mx.nd.array(rois_idx, roi_pool.context)) + self.assign(out_data[0], req[0], roi_pool) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + with autograd.train_section(): + for i in range(self.num_strides): + if len(self.feat_idx[i] > 0): + autograd.compute_gradient([mx.nd.take(out_grad[0], mx.nd.array(self.feat_idx[i], out_grad[0].context)) * self.roi_pool[i]]) + + if self.pooling_mode 
== 'deform': + for i in range(0, self.num_strides * 3): + self.assign(in_grad[i], req[i], self.in_grad_hist_list[i]) + else: + for i in range(0, self.num_strides): + self.assign(in_grad[i], req[i], self.in_grad_hist_list[i]) + + gc.collect() + + +@mx.operator.register('fpn_psroi_pooling_v2') +class FPNPSROIPooling_v2Prop(mx.operator.CustomOpProp): + def __init__(self, feat_strides='(4,8,16,32)', pooled_height='7', pooled_width='7', pooling_mode='alignave', output_dim='10'): + super(FPNPSROIPooling_v2Prop, self).__init__(need_top_grad=True) + self.pooled_height = int(pooled_height) + self.pooled_width = int(pooled_width) + self.feat_strides = np.fromstring(feat_strides[1:-1], dtype=int, sep=',') + self.pooling_mode = pooling_mode + self.output_dim = int(output_dim) + + self.num_strides = len(self.feat_strides) + + def list_arguments(self): + args_list = [] + for i in range(self.num_strides): + args_list.append('data_p{}'.format(2 + i)) + if self.pooling_mode == 'deform': + for i in range(self.num_strides): + args_list.extend(['offset_weight_p{}'.format(2 + i), 'offset_bias_p{}'.format(2 + i)]) + args_list.append('rois') + return args_list + + def list_outputs(self): + return ['output'] + + def infer_shape(self, in_shape): + # pdb.set_trace() + # output_feat_shape = [in_shape[-1][0], in_shape[0][1], self.pooled_height, self.pooled_width] + output_feat_shape = [in_shape[-1][0], self.output_dim, self.pooled_height, self.pooled_width] + if self.pooling_mode == 'deform': + offset_dim = self.pooled_height * self.pooled_width * 2 + input_dim = self.pooled_height * self.pooled_width * self.output_dim + for i in range(self.num_strides): + in_shape[i * 2 + self.num_strides], in_shape[i * 2 + 1 + self.num_strides] = [offset_dim, input_dim], [ + offset_dim, ] + return in_shape, [output_feat_shape] + + def create_operator(self, ctx, shapes, dtypes): + return FPNPSROIPooling_v2Operator(self.feat_strides, self.pooled_height, self.pooled_width, self.output_dim, self.pooling_mode) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [out_grad[0]] diff --git a/fpn/operator_py/fpn_roi_pooling.py b/fpn/operator_py/fpn_roi_pooling.py new file mode 100644 index 0000000..8ca9d03 --- /dev/null +++ b/fpn/operator_py/fpn_roi_pooling.py @@ -0,0 +1,148 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi, Yuwen Xiong +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +from mxnet.contrib import autograd +import gc +import pdb + +class FPNROIPoolingOperator(mx.operator.CustomOp): + def __init__(self, feat_strides, pooled_height, pooled_width, output_dim, with_deformable): + self.pooled_height = pooled_height + self.pooled_width = pooled_width + self.feat_strides = feat_strides + self.with_deformable = with_deformable + self.output_dim = output_dim + self.in_grad_hist_list = [] + self.num_strides = len(self.feat_strides) + self.roi_pool = [None for _ in range(self.num_strides)] + self.feat_idx = [None for _ in range(self.num_strides)] + + def forward(self, is_train, req, in_data, out_data, aux): + rois = in_data[-1].asnumpy() + w = rois[:, 3] - rois[:, 1] + 1 + h = rois[:, 4] - rois[:, 2] + 1 + feat_id = np.clip(np.floor(2 + np.log2(np.sqrt(w * h) / 224)), 0, len(self.feat_strides) - 1) + pyramid_idx = [] + + rois_p = [None for _ in range(self.num_strides)] + for i in 
range(self.num_strides): + self.feat_idx[i] = np.where(feat_id == i)[0] + if len(self.feat_idx[i]) == 0: + # padding dummy roi + rois_p[i] = np.zeros((1, 5)) + pyramid_idx.append(-1) + else: + rois_p[i] = rois[self.feat_idx[i]] + pyramid_idx.append(self.feat_idx[i]) + rois_idx = np.argsort(np.hstack(pyramid_idx))[-rois.shape[0]:] + + if is_train: + for i in range(self.num_strides): + self.in_grad_hist_list.append(mx.nd.zeros_like(in_data[i])) + + if self.with_deformable: + for i in range(self.num_strides, self.num_strides * 3): + self.in_grad_hist_list.append(mx.nd.zeros_like(in_data[i])) + autograd.mark_variables([in_data[i] for i in range(self.num_strides * 3)], self.in_grad_hist_list) + + with autograd.train_section(): + for i in range(self.num_strides): + roi_offset_t = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), group_size=1, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, output_dim=256, spatial_scale=1.0 / self.feat_strides[i]) + roi_offset = mx.nd.FullyConnected(data=roi_offset_t, num_hidden=7 * 7 * 2, weight=in_data[i * 2 + self.num_strides], bias=in_data[i * 2 + 1 + self.num_strides]) + roi_offset_reshape = mx.nd.reshape(data=roi_offset, shape=(-1, 2, 7, 7)) + self.roi_pool[i] = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), trans=roi_offset_reshape, + group_size=1, pooled_size=7, sample_per_part=4, no_trans=False, part_size=7, + output_dim=self.output_dim, spatial_scale=1.0 / self.feat_strides[i], trans_std=0.1) + else: + autograd.mark_variables([in_data[i] for i in range(self.num_strides)], self.in_grad_hist_list) + with autograd.train_section(): + for i in range(self.num_strides): + self.roi_pool[i] = mx.nd.ROIPooling(in_data[i], mx.nd.array(rois_p[i], in_data[i].context), (7, 7), spatial_scale=1.0 / self.feat_strides[i]) + roi_pool = mx.nd.concatenate(self.roi_pool, axis=0) + else: + # during testing, there is no need to record variable, thus saving memory + # pdb.set_trace() + roi_pool = [None for _ in range(self.num_strides)] + if self.with_deformable: + for i in range(self.num_strides): + roi_offset_t = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), group_size=1, pooled_size=7, + sample_per_part=4, no_trans=True, part_size=7, output_dim=256, spatial_scale=1.0 / self.feat_strides[i]) + roi_offset = mx.nd.FullyConnected(data=roi_offset_t, num_hidden=7 * 7 * 2, weight=in_data[i * 2 + self.num_strides], bias=in_data[i * 2 + 1 + self.num_strides]) + roi_offset_reshape = mx.nd.reshape(data=roi_offset, shape=(-1, 2, 7, 7)) + roi_pool[i] = mx.contrib.nd.DeformablePSROIPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), trans=roi_offset_reshape, + group_size=1, pooled_size=7, sample_per_part=4, no_trans=False, part_size=7, + output_dim=self.output_dim, spatial_scale=1.0 / self.feat_strides[i], trans_std=0.1) + else: + for i in range(self.num_strides): + roi_pool[i] = mx.nd.ROIPooling(in_data[i], mx.nd.array(rois_p[i], in_data[i].context), (7, 7), spatial_scale=1.0 / self.feat_strides[i]) + + roi_pool = mx.nd.concatenate(roi_pool, axis=0) + + roi_pool = mx.nd.take(roi_pool, mx.nd.array(rois_idx, roi_pool.context)) + self.assign(out_data[0], req[0], roi_pool) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + with autograd.train_section(): + for i in range(self.num_strides): + if 
len(self.feat_idx[i]) > 0: + autograd.compute_gradient([mx.nd.take(out_grad[0], mx.nd.array(self.feat_idx[i], out_grad[0].context)) * self.roi_pool[i]]) + + if self.with_deformable: + for i in range(0, self.num_strides * 3): + self.assign(in_grad[i], req[i], self.in_grad_hist_list[i]) + else: + for i in range(0, self.num_strides): + self.assign(in_grad[i], req[i], self.in_grad_hist_list[i]) + + gc.collect() + + +@mx.operator.register('fpn_roi_pooling') +class FPNROIPoolingProp(mx.operator.CustomOpProp): + def __init__(self, feat_strides='(4,8,16,32)', pooled_height='7', pooled_width='7', with_deformable='False', output_dim='256'): + super(FPNROIPoolingProp, self).__init__(need_top_grad=True) + self.pooled_height = int(pooled_height) + self.pooled_width = int(pooled_width) + self.feat_strides = np.fromstring(feat_strides[1:-1], dtype=int, sep=',') + self.with_deformable = with_deformable == 'True' + self.output_dim = int(output_dim) + + self.num_strides = len(self.feat_strides) + + def list_arguments(self): + args_list = [] + for i in range(self.num_strides): + args_list.append('data_p{}'.format(2 + i)) + if self.with_deformable: + for i in range(self.num_strides): + args_list.extend(['offset_weight_p{}'.format(2 + i), 'offset_bias_p{}'.format(2 + i)]) + args_list.append('rois') + return args_list + + def list_outputs(self): + return ['output'] + + def infer_shape(self, in_shape): + output_feat_shape = [in_shape[-1][0], in_shape[0][1], self.pooled_height, self.pooled_width] + if self.with_deformable: + offset_dim = self.pooled_height * self.pooled_width * 2 + input_dim = self.pooled_height * self.pooled_width * self.output_dim + for i in range(self.num_strides): + in_shape[i * 2 + self.num_strides], in_shape[i * 2 + 1 + self.num_strides] = [offset_dim, input_dim], [offset_dim, ] + return in_shape, [output_feat_shape] + + def create_operator(self, ctx, shapes, dtypes): + return FPNROIPoolingOperator(self.feat_strides, self.pooled_height, self.pooled_width, self.output_dim, self.with_deformable) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [out_grad[0]] diff --git a/fpn/operator_py/fpn_rotated_psroialign.py b/fpn/operator_py/fpn_rotated_psroialign.py new file mode 100644 index 0000000..aa3edc4 --- /dev/null +++ b/fpn/operator_py/fpn_rotated_psroialign.py @@ -0,0 +1,129 @@ +# -------------------------------------------------------- +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi, Yuwen Xiong +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +from mxnet.contrib import autograd +import gc +import pdb + +class FPNPSROIROTATEDAlignOperator(mx.operator.CustomOp): + def __init__(self, feat_strides, pooled_height, pooled_width, output_dim): + self.pooled_height = pooled_height + self.pooled_width = pooled_width + self.feat_strides = feat_strides + self.output_dim = output_dim + self.in_grad_hist_list = [] + self.num_strides = len(self.feat_strides) + self.roi_pool = [None for _ in range(self.num_strides)] + self.feat_idx = [None for _ in range(self.num_strides)] + + def forward(self, is_train, req, in_data, out_data, aux): + rois = in_data[-1].asnumpy() + # w = rois[:, 3] - rois[:, 1] + 1 + # h = rois[:, 4] - rois[:, 2] + 1 + w = np.maximum(rois[:, 3], 1) + h = np.maximum(rois[:, 4], 1) + # TODO: carefully scale the w, h + feat_id = np.clip(np.floor(2 + np.log2(np.sqrt(w * h) / 224)), 0, len(self.feat_strides) - 1) + pyramid_idx = [] + + rois_p = 
[None for _ in range(self.num_strides)] + for i in range(self.num_strides): + self.feat_idx[i] = np.where(feat_id == i)[0] + if len(self.feat_idx[i]) == 0: + # padding dummy roi + rois_p[i] = np.zeros((1, 6)) + pyramid_idx.append(-1) + else: + rois_p[i] = rois[self.feat_idx[i]] + pyramid_idx.append(self.feat_idx[i]) + rois_idx = np.argsort(np.hstack(pyramid_idx))[-rois.shape[0]:] + # pdb.set_trace() + if is_train: + for i in range(self.num_strides): + self.in_grad_hist_list.append(mx.nd.zeros_like(in_data[i])) + + + autograd.mark_variables([in_data[i] for i in range(self.num_strides)], self.in_grad_hist_list) + with autograd.train_section(): + for i in range(self.num_strides): + # TODO: finish it, and fix the output_dim hard code here + + # self.roi_pool[i] = mx.nd.contrib.PSROIROTATEDPooling(data=in_data[i], rois=mx.nd.array(rois_p[i], in_data[i].context), group_size=7, pooled_size=7, + # output_dim=10, spatial_scale=1.0 / self.feat_strides[i]) + self.roi_pool[i] = mx.contrib.nd.PSROIALIGNAVERotatedPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, sampling_ratio=2, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + + roi_pool = mx.nd.concatenate(self.roi_pool, axis=0) + else: + # during testing, there is no need to record variable, thus saving memory + roi_pool = [None for _ in range(self.num_strides)] + + + for i in range(self.num_strides): + # TODO: finish it, and fix the output_dim hard code here + roi_pool[i] = mx.contrib.nd.PSROIALIGNAVERotatedPooling(data=in_data[i], + rois=mx.nd.array(rois_p[i], in_data[i].context), + group_size=7, pooled_size=7, sampling_ratio=2, + output_dim=10, + spatial_scale=1.0 / self.feat_strides[i]) + roi_pool = mx.nd.concatenate(roi_pool, axis=0) + + roi_pool = mx.nd.take(roi_pool, mx.nd.array(rois_idx, roi_pool.context)) + self.assign(out_data[0], req[0], roi_pool) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + with autograd.train_section(): + for i in range(self.num_strides): + if len(self.feat_idx[i] > 0): + autograd.compute_gradient([mx.nd.take(out_grad[0], mx.nd.array(self.feat_idx[i], out_grad[0].context)) * self.roi_pool[i]]) + + for i in range(0, self.num_strides): + self.assign(in_grad[i], req[i], self.in_grad_hist_list[i]) + + gc.collect() + + +@mx.operator.register('fpn_psroi_rotatedalign') +class FPNPSROIROTATEDAlignProp(mx.operator.CustomOpProp): + def __init__(self, feat_strides='(4,8,16,32)', pooled_height='7', pooled_width='7', output_dim='10'): + super(FPNPSROIROTATEDAlignProp, self).__init__(need_top_grad=True) + self.pooled_height = int(pooled_height) + self.pooled_width = int(pooled_width) + self.feat_strides = np.fromstring(feat_strides[1:-1], dtype=int, sep=',') + self.output_dim = int(output_dim) + + self.num_strides = len(self.feat_strides) + + def list_arguments(self): + args_list = [] + for i in range(self.num_strides): + args_list.append('data_p{}'.format(2 + i)) + args_list.append('Rrois') + return args_list + + def list_outputs(self): + return ['output'] + + def infer_shape(self, in_shape): + # pdb.set_trace() + # output_feat_shape = [in_shape[-1][0], in_shape[0][1], self.pooled_height, self.pooled_width] + output_feat_shape = [in_shape[-1][0], self.output_dim, self.pooled_height, self.pooled_width] + + return in_shape, [output_feat_shape] + + def create_operator(self, ctx, shapes, dtypes): + return FPNPSROIROTATEDAlignOperator(self.feat_strides, 
self.pooled_height, self.pooled_width, self.output_dim) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [out_grad[0]] diff --git a/fpn/operator_py/fpn_rotated_roialign.py b/fpn/operator_py/fpn_rotated_roialign.py new file mode 100644 index 0000000..8edf48d --- /dev/null +++ b/fpn/operator_py/fpn_rotated_roialign.py @@ -0,0 +1,119 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi, Yuwen Xiong +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +from mxnet.contrib import autograd +import gc +import pdb + +class FPNRotatedROIAlignOperator(mx.operator.CustomOp): + def __init__(self, feat_strides, pooled_height, pooled_width, output_dim): + self.pooled_height = pooled_height + self.pooled_width = pooled_width + self.feat_strides = feat_strides + self.output_dim = output_dim + self.in_grad_hist_list = [] + self.num_strides = len(self.feat_strides) + self.roi_pool = [None for _ in range(self.num_strides)] + self.feat_idx = [None for _ in range(self.num_strides)] + + def forward(self, is_train, req, in_data, out_data, aux): + rois = in_data[-1].asnumpy() + # w = rois[:, 3] - rois[:, 1] + 1 + # h = rois[:, 4] - rois[:, 2] + 1 + w = np.maximum(rois[:, 3], 1) + h = np.maximum(rois[:, 4], 1) + # TODO: carefully scale the w, h + feat_id = np.clip(np.floor(2 + np.log2(np.sqrt(w * h) / 224)), 0, len(self.feat_strides) - 1) + pyramid_idx = [] + + rois_p = [None for _ in range(self.num_strides)] + for i in range(self.num_strides): + self.feat_idx[i] = np.where(feat_id == i)[0] + if len(self.feat_idx[i]) == 0: + # padding dummy roi + rois_p[i] = np.zeros((1, 6)) + pyramid_idx.append(-1) + else: + rois_p[i] = rois[self.feat_idx[i]] + pyramid_idx.append(self.feat_idx[i]) + rois_idx = np.argsort(np.hstack(pyramid_idx))[-rois.shape[0]:] + + if is_train: + for i in range(self.num_strides): + self.in_grad_hist_list.append(mx.nd.zeros_like(in_data[i])) + + + autograd.mark_variables([in_data[i] for i in range(self.num_strides)], self.in_grad_hist_list) + with autograd.train_section(): + for i in range(self.num_strides): + self.roi_pool[i] = mx.nd.contrib.ROIAlignRotated(in_data[i], mx.nd.array(rois_p[i], in_data[i].context), (7, 7), spatial_scale=1.0/self.feat_strides[i], sample_ratio=4) + # self.roi_pool[i] = mx.nd.ROIPooling(in_data[i], mx.nd.array(rois_p[i], in_data[i].context), (7, 7), spatial_scale=1.0 / self.feat_strides[i]) + + roi_pool = mx.nd.concatenate(self.roi_pool, axis=0) + else: + # during testing, there is no need to record variable, thus saving memory + # pdb.set_trace() + roi_pool = [None for _ in range(self.num_strides)] + for i in range(self.num_strides): + # roi_pool[i] = mx.nd.ROIPooling(in_data[i], mx.nd.array(rois_p[i], in_data[i].context), (7, 7), spatial_scale=1.0 / self.feat_strides[i]) + roi_pool[i] = mx.nd.contrib.ROIAlignRotated(in_data[i], mx.nd.array(rois_p[i], in_data[i].context), + (7, 7), spatial_scale=1.0 / self.feat_strides[i], + sample_ratio=4) + roi_pool = mx.nd.concatenate(roi_pool, axis=0) + # pdb.set_trace() + roi_pool = mx.nd.take(roi_pool, mx.nd.array(rois_idx, roi_pool.context)) + self.assign(out_data[0], req[0], roi_pool) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + with autograd.train_section(): + for i in 
range(self.num_strides): + if len(self.feat_idx[i]) > 0: + autograd.compute_gradient([mx.nd.take(out_grad[0], mx.nd.array(self.feat_idx[i], out_grad[0].context)) * self.roi_pool[i]]) + + + for i in range(0, self.num_strides): + self.assign(in_grad[i], req[i], self.in_grad_hist_list[i]) + + gc.collect() + + +@mx.operator.register('fpn_rotated_roialign') +class FPNRotatedROIAlignProp(mx.operator.CustomOpProp): + def __init__(self, feat_strides='(4,8,16,32)', pooled_height='7', pooled_width='7', output_dim='490'): + super(FPNRotatedROIAlignProp, self).__init__(need_top_grad=True) + self.pooled_height = int(pooled_height) + self.pooled_width = int(pooled_width) + self.feat_strides = np.fromstring(feat_strides[1:-1], dtype=int, sep=',') + self.output_dim = int(output_dim) + + self.num_strides = len(self.feat_strides) + + def list_arguments(self): + args_list = [] + for i in range(self.num_strides): + args_list.append('data_p{}'.format(2 + i)) + args_list.append('Rrois') + return args_list + + def list_outputs(self): + return ['rotated_pooled'] + + def infer_shape(self, in_shape): + # print 'num of Rrois:', in_shape[-1][0] + output_feat_shape = [in_shape[-1][0], in_shape[0][1], self.pooled_height, self.pooled_width] + return in_shape, [output_feat_shape] + + def create_operator(self, ctx, shapes, dtypes): + return FPNRotatedROIAlignOperator(self.feat_strides, self.pooled_height, self.pooled_width, self.output_dim) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [out_grad[0]] diff --git a/fpn/operator_py/proposal_target.py b/fpn/operator_py/proposal_target.py new file mode 100644 index 0000000..0b1bdc6 --- /dev/null +++ b/fpn/operator_py/proposal_target.py @@ -0,0 +1,121 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background RoIs and assigns classification labels and bounding-box regression targets to them. 
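+It takes RPN proposals (rois) and ground-truth boxes (gt_boxes), samples batch_rois RoIs per batch (or all proposals plus ground truth when batch_rois is -1) with at most a fg_fraction share of foreground RoIs,
+and outputs rois_output, label, bbox_target and bbox_weight, where bbox_target and bbox_weight hold one 4-d regression target per class.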
+""" + +import mxnet as mx +import numpy as np +from distutils.util import strtobool +from easydict import EasyDict as edict +import cPickle +import pdb + +from core.rcnn import sample_rois + +DEBUG = False + + +class ProposalTargetOperator(mx.operator.CustomOp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction): + super(ProposalTargetOperator, self).__init__() + self._num_classes = num_classes + self._batch_images = batch_images + self._batch_rois = batch_rois + self._cfg = cfg + self._fg_fraction = fg_fraction + + if DEBUG: + self._count = 0 + self._fg_num = 0 + self._bg_num = 0 + + def forward(self, is_train, req, in_data, out_data, aux): + assert self._batch_rois == -1 or self._batch_rois % self._batch_images == 0, \ + 'batchimages {} must devide batch_rois {}'.format(self._batch_images, self._batch_rois) + # pdb.set_trace() + all_rois = in_data[0].asnumpy() + gt_boxes = in_data[1].asnumpy() + + if self._batch_rois == -1: + rois_per_image = all_rois.shape[0] + gt_boxes.shape[0] + fg_rois_per_image = rois_per_image + else: + rois_per_image = self._batch_rois / self._batch_images + fg_rois_per_image = np.round(self._fg_fraction * rois_per_image).astype(int) + + # Include ground-truth boxes in the set of candidate rois + zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype) + all_rois = np.vstack((all_rois, np.hstack((zeros, gt_boxes[:, :-1])))) + # Sanity check: single batch only + assert np.all(all_rois[:, 0] == 0), 'Only single item batches are supported' + + rois, labels, bbox_targets, bbox_weights = \ + sample_rois(all_rois, fg_rois_per_image, rois_per_image, self._num_classes, self._cfg, gt_boxes=gt_boxes) + + if DEBUG: + print "labels=", labels + print 'num fg: {}'.format((labels > 0).sum()) + print 'num bg: {}'.format((labels == 0).sum()) + self._count += 1 + self._fg_num += (labels > 0).sum() + self._bg_num += (labels == 0).sum() + print "self._count=", self._count + print 'num fg avg: {}'.format(self._fg_num / self._count) + print 'num bg avg: {}'.format(self._bg_num / self._count) + print 'ratio: {:.3f}'.format(float(self._fg_num) / float(self._bg_num)) + + for ind, val in enumerate([rois, labels, bbox_targets, bbox_weights]): + self.assign(out_data[ind], req[ind], val) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + +@mx.operator.register('proposal_target') +class ProposalTargetProp(mx.operator.CustomOpProp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction='0.25'): + super(ProposalTargetProp, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._batch_images = int(batch_images) + self._batch_rois = int(batch_rois) + self._cfg = cPickle.loads(cfg) + self._fg_fraction = float(fg_fraction) + + def list_arguments(self): + return ['rois', 'gt_boxes'] + + def list_outputs(self): + return ['rois_output', 'label', 'bbox_target', 'bbox_weight'] + + def infer_shape(self, in_shape): + rpn_rois_shape = in_shape[0] + gt_boxes_shape = in_shape[1] + + rois = rpn_rois_shape[0] + gt_boxes_shape[0] if self._batch_rois == -1 else self._batch_rois + + output_rois_shape = (rois, 5) + label_shape = (rois, ) + bbox_target_shape = (rois, self._num_classes * 4) + bbox_weight_shape = (rois, self._num_classes * 4) + + return [rpn_rois_shape, gt_boxes_shape], \ + [output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + def create_operator(self, ctx, shapes, dtypes): + return 
ProposalTargetOperator(self._num_classes, self._batch_images, self._batch_rois, self._cfg, self._fg_fraction) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/fpn/operator_py/proposal_target_rotbox.py b/fpn/operator_py/proposal_target_rotbox.py new file mode 100644 index 0000000..70043f9 --- /dev/null +++ b/fpn/operator_py/proposal_target_rotbox.py @@ -0,0 +1,123 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +""" +Proposal Target Operator selects foreground and background RoIs and assigns classification labels and bounding-box regression targets to them. +""" + +import mxnet as mx +import numpy as np +import cPickle +from bbox.bbox_transform import poly2bbox + +from core.rcnn import sample_rotbox_rois +import pdb +DEBUG = False + + +class ProposalTargetRotBoxOperator(mx.operator.CustomOp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_fraction, fg_class_agnostic): + super(ProposalTargetRotBoxOperator, self).__init__() + self._num_classes = num_classes + self._batch_images = batch_images + self._batch_rois = batch_rois + self._cfg = cfg + self._fg_fraction = fg_fraction + self._fg_class_agnostic = fg_class_agnostic + + if DEBUG: + self._count = 0 + self._fg_num = 0 + self._bg_num = 0 + + def forward(self, is_train, req, in_data, out_data, aux): + assert self._batch_rois == -1 or self._batch_rois % self._batch_images == 0, \ + 'batch_images {} must divide batch_rois {}'.format(self._batch_images, self._batch_rois) + # pdb.set_trace() + all_rois = in_data[0].asnumpy() + gt_boxes = in_data[1].asnumpy() + + if self._batch_rois == -1: + rois_per_image = all_rois.shape[0] + gt_boxes.shape[0] + fg_rois_per_image = rois_per_image + else: + rois_per_image = self._batch_rois / self._batch_images + fg_rois_per_image = np.round(self._fg_fraction * rois_per_image).astype(int) + + # Include ground-truth boxes in the set of candidate rois + zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype) + # pdb.set_trace() + all_rois = np.vstack((all_rois, np.hstack((zeros, poly2bbox(gt_boxes[:, :-1]))))) + # Sanity check: single batch only + assert np.all(all_rois[:, 0] == 0), 'Only single item batches are supported' + rois, labels, bbox_targets, bbox_weights = \ + sample_rotbox_rois(all_rois, fg_rois_per_image, rois_per_image, self._num_classes, self._cfg, gt_boxes=gt_boxes) + + if self._fg_class_agnostic: + fg_indexes = labels > 0 + labels[fg_indexes] = 1 + # pdb.set_trace() + if DEBUG: + print "labels=", labels + print 'num fg: {}'.format((labels > 0).sum()) + print 'num bg: {}'.format((labels == 0).sum()) + self._count += 1 + self._fg_num += (labels > 0).sum() + self._bg_num += (labels == 0).sum() + print "self._count=", self._count + print 'num fg avg: {}'.format(self._fg_num / self._count) + print 'num bg avg: {}'.format(self._bg_num / self._count) + print 'ratio: {:.3f}'.format(float(self._fg_num) / float(self._bg_num)) + # pdb.set_trace() + for ind, val in enumerate([rois, labels, bbox_targets, bbox_weights]): + self.assign(out_data[ind], req[ind], val) + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in 
range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + +@mx.operator.register('proposal_target_rotbox') +class ProposalTargetRotboxtProp(mx.operator.CustomOpProp): + def __init__(self, num_classes, batch_images, batch_rois, cfg, fg_class_agnostic='False', fg_fraction='0.25'): + super(ProposalTargetRotboxtProp, self).__init__(need_top_grad=False) + self._num_classes = int(num_classes) + self._batch_images = int(batch_images) + self._batch_rois = int(batch_rois) + self._cfg = cPickle.loads(cfg) + self._fg_class_agnostic = fg_class_agnostic == 'True' + self._fg_fraction = float(fg_fraction) + + def list_arguments(self): + return ['rois', 'gt_boxes'] + + def list_outputs(self): + return ['rois_output', 'label', 'bbox_target', 'bbox_weight'] + + def infer_shape(self, in_shape): + rpn_rois_shape = in_shape[0] + gt_boxes_shape = in_shape[1] + + rois = rpn_rois_shape[0] + gt_boxes_shape[0] if self._batch_rois == -1 else self._batch_rois + + output_rois_shape = (rois, 5) + label_shape = (rois, ) + bbox_target_shape = (rois, 5 * self._num_classes) + bbox_weight_shape = (rois, 5 * self._num_classes) + + return [rpn_rois_shape, gt_boxes_shape], \ + [output_rois_shape, label_shape, bbox_target_shape, bbox_weight_shape] + + def create_operator(self, ctx, shapes, dtypes): + return ProposalTargetRotBoxOperator(self._num_classes, self._batch_images, self._batch_rois, self._cfg, self._fg_fraction, self._fg_class_agnostic) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + return [] diff --git a/fpn/operator_py/pyramid_proposal.py b/fpn/operator_py/pyramid_proposal.py new file mode 100644 index 0000000..a3b344d --- /dev/null +++ b/fpn/operator_py/pyramid_proposal.py @@ -0,0 +1,249 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import mxnet as mx +import numpy as np +import numpy.random as npr +from distutils.util import strtobool + +from bbox.bbox_transform import bbox_pred, clip_boxes +from rpn.generate_anchor import generate_anchors +from nms.nms import gpu_nms_wrapper +import pdb +DEBUG = False + + +class PyramidProposalOperator(mx.operator.CustomOp): + def __init__(self, feat_stride, scales, ratios, output_score, + rpn_pre_nms_top_n, rpn_post_nms_top_n, threshold, rpn_min_size): + super(PyramidProposalOperator, self).__init__() + self._feat_stride = np.fromstring(feat_stride[1:-1], dtype=int, sep=',') + self._scales = np.fromstring(scales[1:-1], dtype=float, sep=',') + self._ratios = np.fromstring(ratios[1:-1], dtype=float, sep=',') + self._num_anchors = len(self._scales) * len(self._ratios) + self._output_score = output_score + self._rpn_pre_nms_top_n = rpn_pre_nms_top_n + self._rpn_post_nms_top_n = rpn_post_nms_top_n + self._threshold = threshold + self._rpn_min_size = rpn_min_size + + def forward(self, is_train, req, in_data, out_data, aux): + nms = gpu_nms_wrapper(self._threshold, in_data[0].context.device_id) + + batch_size = in_data[0].shape[0] + if batch_size > 1: + raise ValueError("Sorry, multiple images each device is not implemented") + + # for each (H, W) location i + # generate A anchor boxes centered on cell i + # apply predicted bbox deltas at cell i to each of the A anchors + # clip predicted boxes to image + # remove predicted boxes with either height or width < threshold + # sort all (proposal, score) pairs by score from highest to 
lowest + # take top pre_nms_topN proposals before NMS + # apply NMS with threshold 0.7 to remaining proposals + # take after_nms_topN proposals after NMS + # return the top proposals (-> RoIs top, scores top) + + cls_prob_dict = { + 'stride64': in_data[4], + 'stride32': in_data[3], + 'stride16': in_data[2], + 'stride8': in_data[1], + 'stride4': in_data[0], + } + bbox_pred_dict = { + 'stride64': in_data[9], + 'stride32': in_data[8], + 'stride16': in_data[7], + 'stride8': in_data[6], + 'stride4': in_data[5], + } + + pre_nms_topN = self._rpn_pre_nms_top_n + post_nms_topN = self._rpn_post_nms_top_n + min_size = self._rpn_min_size + + proposal_list = [] + score_list = [] + for s in self._feat_stride: + stride = int(s) + # anchors: (xmin, ymin, xmax, ymax) + sub_anchors = generate_anchors(base_size=stride, scales=self._scales, ratios=self._ratios) + scores = cls_prob_dict['stride' + str(s)].asnumpy()[:, self._num_anchors:, :, :] + bbox_deltas = bbox_pred_dict['stride' + str(s)].asnumpy() + im_info = in_data[-1].asnumpy()[0, :] + # pdb.set_trace() + # 1. Generate proposals from bbox_deltas and shifted anchors + # use real image size instead of padded feature map sizes + height, width = int(im_info[0] / stride), int(im_info[1] / stride) + + # Enumerate all shifts + shift_x = np.arange(0, width) * stride + shift_y = np.arange(0, height) * stride + shift_x, shift_y = np.meshgrid(shift_x, shift_y) + shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose() + + # Enumerate all shifted anchors: + # + # add A anchors (1, A, 4) to + # cell K shifts (K, 1, 4) to get + # shift anchors (K, A, 4) + # reshape to (K*A, 4) shifted anchors + A = self._num_anchors + K = shifts.shape[0] + anchors = sub_anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2)) + anchors = anchors.reshape((K * A, 4)) + + # Transpose and reshape predicted bbox transformations to get them + # into the same order as the anchors: + # + # bbox deltas will be (1, 4 * A, H, W) format + # transpose to (1, H, W, 4 * A) + # reshape to (1 * H * W * A, 4) where rows are ordered by (h, w, a) + # in slowest to fastest order + bbox_deltas = self._clip_pad(bbox_deltas, (height, width)) + bbox_deltas = bbox_deltas.transpose((0, 2, 3, 1)).reshape((-1, 4)) + + # Same story for the scores: + # + # scores are (1, A, H, W) format + # transpose to (1, H, W, A) + # reshape to (1 * H * W * A, 1) where rows are ordered by (h, w, a) + scores = self._clip_pad(scores, (height, width)) + scores = scores.transpose((0, 2, 3, 1)).reshape((-1, 1)) + + # Convert anchors into proposals via bbox transformations + proposals = bbox_pred(anchors, bbox_deltas) + + # 2. clip predicted boxes to image + proposals = clip_boxes(proposals, im_info[:2]) + + # 3. remove predicted boxes with either height or width < threshold + # (NOTE: convert min_size to input image scale stored in im_info[2]) + keep = self._filter_boxes(proposals, min_size * im_info[2]) + proposals = proposals[keep, :] + scores = scores[keep] + + proposal_list.append(proposals) + score_list.append(scores) + + proposals = np.vstack(proposal_list) + scores = np.vstack(score_list) + + # 4. sort all (proposal, score) pairs by score from highest to lowest + # 5. take top pre_nms_topN (e.g. 6000) + order = scores.ravel().argsort()[::-1] + if pre_nms_topN > 0: + order = order[:pre_nms_topN] + proposals = proposals[order, :] + scores = scores[order] + + # 6. apply nms (e.g. threshold = 0.7) + # 7. take after_nms_topN (e.g. 300) + # 8. 
return the top proposals (-> RoIs top) + det = np.hstack((proposals, scores)).astype(np.float32) + keep = nms(det) + if post_nms_topN > 0: + keep = keep[:post_nms_topN] + # pad to ensure output size remains unchanged + if len(keep) < post_nms_topN: + pad = npr.choice(keep, size=post_nms_topN - len(keep)) + keep = np.hstack((keep, pad)) + proposals = proposals[keep, :] + scores = scores[keep] + + # Output rois array + # Our RPN implementation only supports a single input image, so all + # batch inds are 0 + batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32) + blob = np.hstack((batch_inds, proposals.astype(np.float32, copy=False))) + # if is_train: + self.assign(out_data[0], req[0], blob) + if self._output_score: + self.assign(out_data[1], req[1], scores.astype(np.float32, copy=False)) + + def backward(self, req, out_grad, in_data, out_data, in_grad, aux): + for i in range(len(in_grad)): + self.assign(in_grad[i], req[i], 0) + + @staticmethod + def _filter_boxes(boxes, min_size): + """ Remove all boxes with any side smaller than min_size """ + ws = boxes[:, 2] - boxes[:, 0] + 1 + hs = boxes[:, 3] - boxes[:, 1] + 1 + keep = np.where((ws >= min_size) & (hs >= min_size))[0] + return keep + + @staticmethod + def _clip_pad(tensor, pad_shape): + """ + Clip boxes of the pad area. + :param tensor: [n, c, H, W] + :param pad_shape: [h, w] + :return: [n, c, h, w] + """ + H, W = tensor.shape[2:] + h, w = pad_shape + + if h < H or w < W: + tensor = tensor[:, :, :h, :w].copy() + + return tensor + + +@mx.operator.register("pyramid_proposal") +class PyramidProposalProp(mx.operator.CustomOpProp): + def __init__(self, feat_stride='(64, 32, 16, 8, 4)', scales='(8)', ratios='(0.5, 1, 2)', output_score='False', + rpn_pre_nms_top_n='12000', rpn_post_nms_top_n='2000', threshold='0.3', rpn_min_size='16', output_pyramid_rois='False'): + super(PyramidProposalProp, self).__init__(need_top_grad=False) + self._feat_stride = feat_stride + self._scales = scales + self._ratios = ratios + self._output_score = strtobool(output_score) + self._rpn_pre_nms_top_n = int(rpn_pre_nms_top_n) + self._rpn_post_nms_top_n = int(rpn_post_nms_top_n) + self._threshold = float(threshold) + self._rpn_min_size = int(rpn_min_size) + self.output_pyramid_rois = strtobool(output_pyramid_rois) + + def list_arguments(self): + arg_list = [] + for s in np.fromstring(self._feat_stride[1:-1], dtype=int, sep=','): + arg_list.append('rpn_cls_prob_stride' + str(s)) + for s in np.fromstring(self._feat_stride[1:-1], dtype=int, sep=','): + arg_list.append('rpn_bbox_pred_stride' + str(s)) + arg_list.append('im_info') + return arg_list + + def list_outputs(self): + if self.output_pyramid_rois: + return ['output', 'output_p3', 'output_p4', 'output_p5', 'output_idx'] + else: + if self._output_score: + return ['output', 'score'] + else: + return ['output'] + + def infer_shape(self, in_shape): + output_shape = (self._rpn_post_nms_top_n, 5) + score_shape = (self._rpn_post_nms_top_n, 1) + + if self.output_pyramid_rois: + return in_shape, [output_shape, output_shape, output_shape, output_shape, (self._rpn_post_nms_top_n,)] + else: + if self._output_score: + return in_shape, [output_shape, score_shape] + else: + return in_shape, [output_shape] + + def create_operator(self, ctx, shapes, dtypes): + return PyramidProposalOperator(self._feat_stride, self._scales, self._ratios, self._output_score, + self._rpn_pre_nms_top_n, self._rpn_post_nms_top_n, self._threshold, self._rpn_min_size) + + def declare_backward_dependency(self, out_grad, in_data, out_data): + 
return [] diff --git a/fpn/symbols/__init__.py b/fpn/symbols/__init__.py new file mode 100644 index 0000000..697173c --- /dev/null +++ b/fpn/symbols/__init__.py @@ -0,0 +1,3 @@ +import resnet_v1_101_fpn_rcnn +import resnet_v1_101_fpn_rcnn_rotbox_light_head +import resnet_v1_101_fpn_rcnn_rotbox_light_head_RoITransformer \ No newline at end of file diff --git a/fpn/symbols/resnet_v1_101_fpn_rcnn.py b/fpn/symbols/resnet_v1_101_fpn_rcnn.py new file mode 100644 index 0000000..4278d99 --- /dev/null +++ b/fpn/symbols/resnet_v1_101_fpn_rcnn.py @@ -0,0 +1,969 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.pyramid_proposal import * +from operator_py.proposal_target import * +from operator_py.fpn_roi_pooling import * +from operator_py.box_annotator_ohem import * + + +class resnet_v1_101_fpn_rcnn(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + self.shared_param_list = ['rpn_conv', 'rpn_cls_score', 'rpn_bbox_pred'] + self.shared_param_dict = {} + for name in self.shared_param_list: + self.shared_param_dict[name + '_weight'] = mx.sym.Variable(name + '_weight') + self.shared_param_dict[name + '_bias'] = mx.sym.Variable(name + '_bias') + + def get_resnet_backbone(self, data, with_dilated=False, with_dconv=False, with_dpyramid=False, eps=1e-5): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, eps=eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + 
fix_gamma=False, eps=eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, 
use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu = mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', 
data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + if with_dpyramid: + res3b3_branch2b_offset = mx.symbol.Convolution(name='res3b3_branch2b_offset', data=res3b3_branch2a_relu, + num_filter=72, pad=(1, 1), kernel=(3, 3), stride=(1, 1)) + res3b3_branch2b = mx.contrib.symbol.DeformableConvolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, + offset=res3b3_branch2b_offset, + num_filter=128, pad=(1, 1), kernel=(3, 3), + num_deformable_group=4, + stride=(1, 1), no_bias=True) + else: + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = 
mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', 
data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = 
mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', 
data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + 
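# 1x1 conv expands back to 1024 channels to close the res4b10 bottleneck; blocks res4b11-res4b21 below repeat the same 1x1 / 3x3 / 1x1 pattern with identity shortcuts
+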
res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + 
kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, 
eps=eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = 
mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + 
pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + if with_dpyramid: + res4b22_branch2b_offset = mx.symbol.Convolution(name='res4b22_branch2b_offset', data=res4b22_branch2a_relu, + num_filter=72, pad=(1, 1), kernel=(3, 3), stride=(1, 1)) + res4b22_branch2b = mx.contrib.symbol.DeformableConvolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, + offset=res4b22_branch2b_offset, + num_filter=256, pad=(1, 1), kernel=(3, 3), + num_deformable_group=4, + stride=(1, 1), no_bias=True) + else: + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + 
scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + + if with_dilated: + res5_stride = (1, 1) + res5_dilate = (2, 2) + else: + res5_stride = (2, 2) + res5_dilate = (1, 1) + + # res5a-bottleneck + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=res4b22_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=res5_stride, no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + + if with_dconv: + res5a_branch2b_offset = mx.symbol.Convolution(name='res5a_branch2b_offset', data=res5a_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5a_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5a_branch2b', data=res5a_branch2a_relu, offset=res5a_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, stride=(1, 1), dilate=res5_dilate, no_bias=True) + else: + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2c = bn5a_branch2c + # res5a-shortcut + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=res4b22_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=res5_stride, no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch1 = bn5a_branch1 + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + + # res5b-bottleneck + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + if with_dconv: + res5b_branch2b_offset = 
mx.symbol.Convolution(name='res5b_branch2b_offset', data=res5b_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5b_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5b_branch2b', data=res5b_branch2a_relu, offset=res5b_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, dilate=res5_dilate, no_bias=True) + else: + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2c = bn5b_branch2c + # res5b-shortcut + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + + # res5c-bottleneck + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + if with_dconv: + res5c_branch2b_offset = mx.symbol.Convolution(name='res5c_branch2b_offset', data=res5c_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5c_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5c_branch2b', data=res5c_branch2a_relu, offset=res5c_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, dilate=res5_dilate, no_bias=True) + else: + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', data=res5c_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5c_branch2c = bn5c_branch2c + # res5c-shortcut + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + + return res2c_relu, res3b3_relu, res4b22_relu, res5c_relu + + def get_fpn_feature(self, c2, c3, c4, c5, feature_dim=256): + + # lateral connection + fpn_p5_1x1 = mx.symbol.Convolution(data=c5, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p5_1x1') + fpn_p4_1x1 = 
mx.symbol.Convolution(data=c4, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p4_1x1') + fpn_p3_1x1 = mx.symbol.Convolution(data=c3, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p3_1x1') + fpn_p2_1x1 = mx.symbol.Convolution(data=c2, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p2_1x1') + # top-down connection + fpn_p5_upsample = mx.symbol.UpSampling(fpn_p5_1x1, scale=2, sample_type='nearest', name='fpn_p5_upsample') + fpn_p4_plus = mx.sym.ElementWiseSum(*[fpn_p5_upsample, fpn_p4_1x1], name='fpn_p4_sum') + fpn_p4_upsample = mx.symbol.UpSampling(fpn_p4_plus, scale=2, sample_type='nearest', name='fpn_p4_upsample') + fpn_p3_plus = mx.sym.ElementWiseSum(*[fpn_p4_upsample, fpn_p3_1x1], name='fpn_p3_sum') + fpn_p3_upsample = mx.symbol.UpSampling(fpn_p3_plus, scale=2, sample_type='nearest', name='fpn_p3_upsample') + fpn_p2_plus = mx.sym.ElementWiseSum(*[fpn_p3_upsample, fpn_p2_1x1], name='fpn_p2_sum') + # FPN feature + fpn_p6 = mx.sym.Convolution(data=c5, kernel=(3, 3), pad=(1, 1), stride=(2, 2), num_filter=feature_dim, name='fpn_p6') + fpn_p5 = mx.symbol.Convolution(data=fpn_p5_1x1, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p5') + fpn_p4 = mx.symbol.Convolution(data=fpn_p4_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p4') + fpn_p3 = mx.symbol.Convolution(data=fpn_p3_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p3') + fpn_p2 = mx.symbol.Convolution(data=fpn_p2_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p2') + + return fpn_p2, fpn_p3, fpn_p4, fpn_p5, fpn_p6 + + def get_rpn_subnet(self, data, num_anchors, suffix): + rpn_conv = mx.sym.Convolution(data=data, kernel=(3, 3), pad=(1, 1), num_filter=512, name='rpn_conv_' + suffix, + weight=self.shared_param_dict['rpn_conv_weight'], bias=self.shared_param_dict['rpn_conv_bias']) + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type='relu', name='rpn_relu_' + suffix) + rpn_cls_score = mx.sym.Convolution(data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name='rpn_cls_score_' + suffix, + weight=self.shared_param_dict['rpn_cls_score_weight'], bias=self.shared_param_dict['rpn_cls_score_bias']) + rpn_bbox_pred = mx.sym.Convolution(data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name='rpn_bbox_pred_' + suffix, + weight=self.shared_param_dict['rpn_bbox_pred_weight'], bias=self.shared_param_dict['rpn_bbox_pred_bias']) + + # n x (2*A) x H x W => n x 2 x (A*H*W) + rpn_cls_score_t1 = mx.sym.Reshape(data=rpn_cls_score, shape=(0, 2, -1, 0), name='rpn_cls_score_t1_' + suffix) + rpn_cls_score_t2 = mx.sym.Reshape(data=rpn_cls_score_t1, shape=(0, 2, -1), name='rpn_cls_score_t2_' + suffix) + rpn_cls_prob = mx.sym.SoftmaxActivation(data=rpn_cls_score_t1, mode='channel', name='rpn_cls_prob_' + suffix) + rpn_cls_prob_t = mx.sym.Reshape(data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_t_' + suffix) + rpn_bbox_pred_t = mx.sym.Reshape(data=rpn_bbox_pred, shape=(0, 0, -1), name='rpn_bbox_pred_t_' + suffix) + return rpn_cls_score_t2, rpn_cls_prob_t, rpn_bbox_pred_t, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + res2, 
res3, res4, res5 = self.get_resnet_backbone(data) + fpn_p2, fpn_p3, fpn_p4, fpn_p5, fpn_p6 = self.get_fpn_feature(res2, res3, res4, res5) + + rpn_cls_score_p2, rpn_prob_p2, rpn_bbox_loss_p2, rpn_bbox_pred_p2 = self.get_rpn_subnet(fpn_p2, cfg.network.NUM_ANCHORS, 'p2') + rpn_cls_score_p3, rpn_prob_p3, rpn_bbox_loss_p3, rpn_bbox_pred_p3 = self.get_rpn_subnet(fpn_p3, cfg.network.NUM_ANCHORS, 'p3') + rpn_cls_score_p4, rpn_prob_p4, rpn_bbox_loss_p4, rpn_bbox_pred_p4 = self.get_rpn_subnet(fpn_p4, cfg.network.NUM_ANCHORS, 'p4') + rpn_cls_score_p5, rpn_prob_p5, rpn_bbox_loss_p5, rpn_bbox_pred_p5 = self.get_rpn_subnet(fpn_p5, cfg.network.NUM_ANCHORS, 'p5') + rpn_cls_score_p6, rpn_prob_p6, rpn_bbox_loss_p6, rpn_bbox_pred_p6 = self.get_rpn_subnet(fpn_p6, cfg.network.NUM_ANCHORS, 'p6') + + rpn_cls_prob_dict = { + 'rpn_cls_prob_stride64': rpn_prob_p6, + 'rpn_cls_prob_stride32': rpn_prob_p5, + 'rpn_cls_prob_stride16': rpn_prob_p4, + 'rpn_cls_prob_stride8': rpn_prob_p3, + 'rpn_cls_prob_stride4': rpn_prob_p2, + } + rpn_bbox_pred_dict = { + 'rpn_bbox_pred_stride64': rpn_bbox_pred_p6, + 'rpn_bbox_pred_stride32': rpn_bbox_pred_p5, + 'rpn_bbox_pred_stride16': rpn_bbox_pred_p4, + 'rpn_bbox_pred_stride8': rpn_bbox_pred_p3, + 'rpn_bbox_pred_stride4': rpn_bbox_pred_p2, + } + arg_dict = dict(rpn_cls_prob_dict.items() + rpn_bbox_pred_dict.items()) + + if is_train: + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + gt_boxes = mx.sym.Variable(name="gt_boxes") + + rpn_cls_score = mx.sym.Concat(rpn_cls_score_p2, rpn_cls_score_p3, rpn_cls_score_p4, rpn_cls_score_p5, rpn_cls_score_p6, dim=2) + rpn_bbox_loss = mx.sym.Concat(rpn_bbox_loss_p2, rpn_bbox_loss_p3, rpn_bbox_loss_p4, rpn_bbox_loss_p5, rpn_bbox_loss_p6, dim=2) + # RPN classification loss + rpn_cls_output = mx.sym.SoftmaxOutput(data=rpn_cls_score, label=rpn_label, multi_output=True, normalization='valid', + use_ignore=True, ignore_label=-1, name='rpn_cls_prob') + # bounding box regression + rpn_bbox_loss = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_l1', scalar=3.0, data=(rpn_bbox_loss - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss, grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + aux_dict = { + 'op_type': 'pyramid_proposal', 'name': 'rois', + 'im_info': im_info, 'feat_stride': tuple(cfg.network.RPN_FEAT_STRIDE), + 'scales': tuple(cfg.network.ANCHOR_SCALES), 'ratios': tuple(cfg.network.ANCHOR_RATIOS), + 'rpn_pre_nms_top_n': cfg.TRAIN.RPN_PRE_NMS_TOP_N, 'rpn_post_nms_top_n': cfg.TRAIN.RPN_POST_NMS_TOP_N, + 'threshold': cfg.TRAIN.RPN_NMS_THRESH, 'rpn_min_size': cfg.TRAIN.RPN_MIN_SIZE + } + + # ROI proposal + rois = mx.sym.Custom(**dict(arg_dict.items() + aux_dict.items())) + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 5), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight \ + = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, op_type='proposal_target', num_classes=num_reg_classes, batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, cfg=cPickle.dumps(cfg), fg_fraction=cfg.TRAIN.FG_FRACTION) + else: + aux_dict = { + 'op_type': 'pyramid_proposal', 'name': 'rois', + 'im_info': im_info, 'feat_stride': tuple(cfg.network.RPN_FEAT_STRIDE), + 'scales': tuple(cfg.network.ANCHOR_SCALES), 'ratios': tuple(cfg.network.ANCHOR_RATIOS), + 'rpn_pre_nms_top_n': cfg.TEST.RPN_PRE_NMS_TOP_N, 'rpn_post_nms_top_n': cfg.TEST.RPN_POST_NMS_TOP_N, + 
'threshold': cfg.TEST.RPN_NMS_THRESH, 'rpn_min_size': cfg.TEST.RPN_MIN_SIZE + } + # ROI proposal + rois = mx.sym.Custom(**dict(arg_dict.items() + aux_dict.items())) + + roi_pool = mx.symbol.Custom(data_p2=fpn_p2, data_p3=fpn_p3, data_p4=fpn_p4, data_p5=fpn_p5, + rois=rois, op_type='fpn_roi_pooling', name='fpn_roi_pooling') + + # 2 fc + fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + + fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 4) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 4 * num_reg_classes), name='bbox_loss_reshape') + # group = mx.sym.Group([rpn_cls_output, rpn_bbox_loss, mx.sym.BlockGrad(cls_prob), mx.sym.BlockGrad(bbox_loss), mx.sym.BlockGrad(rcnn_label)]) + group = mx.sym.Group([rpn_cls_output, rpn_bbox_loss, cls_prob, bbox_loss, mx.sym.BlockGrad(rcnn_label)]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 4 * num_reg_classes), name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + + self.sym = group + return group + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['fc_new_2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_2_weight']) + arg_params['fc_new_2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_2_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, 
shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + def init_weight_fpn(self, cfg, arg_params, aux_params): + arg_params['fpn_p6_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p6_weight']) + arg_params['fpn_p6_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p6_bias']) + arg_params['fpn_p5_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p5_weight']) + arg_params['fpn_p5_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p5_bias']) + arg_params['fpn_p4_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p4_weight']) + arg_params['fpn_p4_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p4_bias']) + arg_params['fpn_p3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p3_weight']) + arg_params['fpn_p3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p3_bias']) + arg_params['fpn_p2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p2_weight']) + arg_params['fpn_p2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p2_bias']) + + arg_params['fpn_p5_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p5_1x1_weight']) + arg_params['fpn_p5_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p5_1x1_bias']) + arg_params['fpn_p4_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p4_1x1_weight']) + arg_params['fpn_p4_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p4_1x1_bias']) + arg_params['fpn_p3_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p3_1x1_weight']) + arg_params['fpn_p3_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p3_1x1_bias']) + arg_params['fpn_p2_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p2_1x1_weight']) + arg_params['fpn_p2_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p2_1x1_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + for name in self.shared_param_list: + arg_params[name + '_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict[name + '_weight']) + arg_params[name + '_bias'] = mx.nd.zeros(shape=self.arg_shape_dict[name + '_bias']) + self.init_weight_rcnn(cfg, arg_params, aux_params) + self.init_weight_fpn(cfg, arg_params, aux_params) diff --git a/fpn/symbols/resnet_v1_101_fpn_rcnn_rotbox_light_head.py b/fpn/symbols/resnet_v1_101_fpn_rcnn_rotbox_light_head.py new file mode 100644 index 0000000..c8f9d42 --- /dev/null +++ b/fpn/symbols/resnet_v1_101_fpn_rcnn_rotbox_light_head.py @@ -0,0 +1,1015 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.pyramid_proposal import * +from operator_py.proposal_target import * +from operator_py.proposal_target_rotbox import * +from operator_py.fpn_roi_pooling import * +from operator_py.fpn_psroipooling_v2 import * +from operator_py.box_annotator_ohem import * + + +class resnet_v1_101_fpn_rcnn_rotbox_light_head(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + 
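# one weight/bias Variable is created per shared name in the loop below, so the corresponding layers reuse the same parameters
+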
self.shared_param_list = ['rpn_conv', 'rpn_cls_score', 'rpn_bbox_pred', 'conv_new_1', 'conv_new_2', 'conv_new_3', 'conv_new_4'] + self.shared_param_dict = {} + for name in self.shared_param_list: + self.shared_param_dict[name + '_weight'] = mx.sym.Variable(name + '_weight') + self.shared_param_dict[name + '_bias'] = mx.sym.Variable(name + '_bias') + + def get_resnet_backbone(self, data, with_dilated=False, with_dconv=False, with_dpyramid=False, eps=1e-5): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, eps=eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = 
mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu = mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') 
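+ # stage-3 identity blocks res3b1-res3b3; res3b3's 3x3 conv becomes a deformable convolution when with_dpyramid is set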
+ res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + if with_dpyramid: + res3b3_branch2b_offset = mx.symbol.Convolution(name='res3b3_branch2b_offset', data=res3b3_branch2a_relu, + num_filter=72, pad=(1, 1), kernel=(3, 3), stride=(1, 1)) + res3b3_branch2b = mx.contrib.symbol.DeformableConvolution(name='res3b3_branch2b', 
data=res3b3_branch2a_relu, + offset=res3b3_branch2b_offset, + num_filter=128, pad=(1, 1), kernel=(3, 3), + num_deformable_group=4, + stride=(1, 1), no_bias=True) + else: + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, 
use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + 
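# conv4 stage: res4b4 through res4b21 below repeat the same identity bottleneck
+ # pattern (1x1 256 -> 3x3 256 -> 1x1 1024 with frozen BatchNorm and a
+ # broadcast_add shortcut); only res4b22 differs, optionally using a deformable
+ # 3x3 conv when with_dpyramid is set. +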
res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, 
+ fix_gamma=False, eps=eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = 
mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', 
data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', 
*[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + act_type='relu') + res4b16_branch2b = 
mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = 
mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + 
scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + if with_dpyramid: + res4b22_branch2b_offset = mx.symbol.Convolution(name='res4b22_branch2b_offset', data=res4b22_branch2a_relu, + num_filter=72, pad=(1, 1), kernel=(3, 3), stride=(1, 1)) + res4b22_branch2b = mx.contrib.symbol.DeformableConvolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, + offset=res4b22_branch2b_offset, + num_filter=256, pad=(1, 1), kernel=(3, 3), + num_deformable_group=4, + stride=(1, 1), no_bias=True) + else: + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + + if with_dilated: + res5_stride = (1, 1) + res5_dilate = (2, 2) + else: + res5_stride = (2, 2) + res5_dilate = (1, 1) + + # res5a-bottleneck + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=res4b22_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=res5_stride, no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = 
mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + + if with_dconv: + res5a_branch2b_offset = mx.symbol.Convolution(name='res5a_branch2b_offset', data=res5a_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5a_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5a_branch2b', data=res5a_branch2a_relu, offset=res5a_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, stride=(1, 1), dilate=res5_dilate, no_bias=True) + else: + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2c = bn5a_branch2c + # res5a-shortcut + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=res4b22_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=res5_stride, no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch1 = bn5a_branch1 + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + + # res5b-bottleneck + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + if with_dconv: + res5b_branch2b_offset = mx.symbol.Convolution(name='res5b_branch2b_offset', data=res5b_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5b_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5b_branch2b', data=res5b_branch2a_relu, offset=res5b_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, dilate=res5_dilate, no_bias=True) + else: + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2c = bn5b_branch2c + # 
res5b-shortcut + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + + # res5c-bottleneck + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + if with_dconv: + res5c_branch2b_offset = mx.symbol.Convolution(name='res5c_branch2b_offset', data=res5c_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5c_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5c_branch2b', data=res5c_branch2a_relu, offset=res5c_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, dilate=res5_dilate, no_bias=True) + else: + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', data=res5c_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5c_branch2c = bn5c_branch2c + # res5c-shortcut + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + + return res2c_relu, res3b3_relu, res4b22_relu, res5c_relu + + def get_fpn_feature(self, c2, c3, c4, c5, feature_dim=256): + + # lateral connection + fpn_p5_1x1 = mx.symbol.Convolution(data=c5, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p5_1x1') + fpn_p4_1x1 = mx.symbol.Convolution(data=c4, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p4_1x1') + fpn_p3_1x1 = mx.symbol.Convolution(data=c3, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p3_1x1') + fpn_p2_1x1 = mx.symbol.Convolution(data=c2, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p2_1x1') + # top-down connection + fpn_p5_upsample = mx.symbol.UpSampling(fpn_p5_1x1, scale=2, sample_type='nearest', name='fpn_p5_upsample') + fpn_p4_plus = mx.sym.ElementWiseSum(*[fpn_p5_upsample, fpn_p4_1x1], name='fpn_p4_sum') + fpn_p4_upsample = mx.symbol.UpSampling(fpn_p4_plus, scale=2, sample_type='nearest', name='fpn_p4_upsample') + fpn_p3_plus = mx.sym.ElementWiseSum(*[fpn_p4_upsample, fpn_p3_1x1], name='fpn_p3_sum') + fpn_p3_upsample = mx.symbol.UpSampling(fpn_p3_plus, scale=2, sample_type='nearest', name='fpn_p3_upsample') + fpn_p2_plus = mx.sym.ElementWiseSum(*[fpn_p3_upsample, fpn_p2_1x1], name='fpn_p2_sum') + # FPN feature + fpn_p6 = mx.sym.Convolution(data=c5, kernel=(3, 3), pad=(1, 1), stride=(2, 2), num_filter=feature_dim, name='fpn_p6') + fpn_p5 = mx.symbol.Convolution(data=fpn_p5_1x1, kernel=(3, 3), 
pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p5') + fpn_p4 = mx.symbol.Convolution(data=fpn_p4_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p4') + fpn_p3 = mx.symbol.Convolution(data=fpn_p3_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p3') + fpn_p2 = mx.symbol.Convolution(data=fpn_p2_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p2') + + return fpn_p2, fpn_p3, fpn_p4, fpn_p5, fpn_p6 + + def get_light_head(self, data, suffix): + # mid_num_filter=256 + mid_num_filter=64 # for s + conv_new_1 = mx.sym.Convolution(data=data, kernel=(15, 1), pad=(7, 0), num_filter=mid_num_filter, name="conv_new_1" + suffix, + weight=self.shared_param_dict['conv_new_1_weight'], bias=self.shared_param_dict['conv_new_1_bias'], lr_mult=3.0) + + relu_new_1 = mx.sym.Activation(data=conv_new_1, act_type='relu', name='relu1' + suffix) + conv_new_2 = mx.sym.Convolution(data=relu_new_1, kernel=(1, 15), pad=(0, 7), num_filter=10 * 7 * 7, name="conv_new_2" + suffix, + weight=self.shared_param_dict['conv_new_2_weight'], bias=self.shared_param_dict['conv_new_2_bias'], + lr_mult=3.0) + relu_new_2 = mx.sym.Activation(data=conv_new_2, act_type='relu', name='relu2' + suffix) + conv_new_3 = mx.sym.Convolution(data=data, kernel=(1, 15), pad=(0, 7), num_filter=mid_num_filter, name="conv_new_3" + suffix, + weight=self.shared_param_dict['conv_new_3_weight'], bias=self.shared_param_dict['conv_new_3_bias'], + lr_mult=3.0) + relu_new_3 = mx.sym.Activation(data=conv_new_3, act_type='relu', name='relu3' + suffix) + conv_new_4 = mx.sym.Convolution(data=relu_new_3, kernel=(15, 1), pad=(7, 0), num_filter=10 * 7 * 7, name="conv_new_4" + suffix, + weight=self.shared_param_dict['conv_new_4_weight'], bias=self.shared_param_dict['conv_new_4_bias'], + lr_mult=3.0) + relu_new_4 = mx.sym.Activation(data=conv_new_4, act_type='relu', name='relu4' + suffix) + light_head = mx.symbol.broadcast_add(name='light_head', *[relu_new_2, relu_new_4]) + return light_head + def get_rpn_subnet(self, data, num_anchors, suffix): + rpn_conv = mx.sym.Convolution(data=data, kernel=(3, 3), pad=(1, 1), num_filter=512, name='rpn_conv_' + suffix, + weight=self.shared_param_dict['rpn_conv_weight'], bias=self.shared_param_dict['rpn_conv_bias']) + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type='relu', name='rpn_relu_' + suffix) + rpn_cls_score = mx.sym.Convolution(data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name='rpn_cls_score_' + suffix, + weight=self.shared_param_dict['rpn_cls_score_weight'], bias=self.shared_param_dict['rpn_cls_score_bias']) + rpn_bbox_pred = mx.sym.Convolution(data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name='rpn_bbox_pred_' + suffix, + weight=self.shared_param_dict['rpn_bbox_pred_weight'], bias=self.shared_param_dict['rpn_bbox_pred_bias']) + + # n x (2*A) x H x W => n x 2 x (A*H*W) + rpn_cls_score_t1 = mx.sym.Reshape(data=rpn_cls_score, shape=(0, 2, -1, 0), name='rpn_cls_score_t1_' + suffix) + rpn_cls_score_t2 = mx.sym.Reshape(data=rpn_cls_score_t1, shape=(0, 2, -1), name='rpn_cls_score_t2_' + suffix) + rpn_cls_prob = mx.sym.SoftmaxActivation(data=rpn_cls_score_t1, mode='channel', name='rpn_cls_prob_' + suffix) + rpn_cls_prob_t = mx.sym.Reshape(data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_t_' + suffix) + rpn_bbox_pred_t = mx.sym.Reshape(data=rpn_bbox_pred, shape=(0, 0, -1), name='rpn_bbox_pred_t_' + suffix) + return rpn_cls_score_t2, 
rpn_cls_prob_t, rpn_bbox_pred_t, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + res2, res3, res4, res5 = self.get_resnet_backbone(data) + fpn_p2, fpn_p3, fpn_p4, fpn_p5, fpn_p6 = self.get_fpn_feature(res2, res3, res4, res5) + # if cfg.network.RPN_THIN_FEATRUE == True: + # # large separable + # fpn_p2_new = self.get_light_head(fpn_p2, 'p2') + # fpn_p3_new = self.get_light_head(fpn_p3, 'p3') + # fpn_p4_new = self.get_light_head(fpn_p4, 'p4') + # fpn_p5_new = self.get_light_head(fpn_p5, 'p5') + # fpn_p6_new = self.get_light_head(fpn_p6, 'p6') + # + # rpn_cls_score_p2, rpn_prob_p2, rpn_bbox_loss_p2, rpn_bbox_pred_p2 = self.get_rpn_subnet(fpn_p2_new, cfg.network.NUM_ANCHORS, 'p2') + # rpn_cls_score_p3, rpn_prob_p3, rpn_bbox_loss_p3, rpn_bbox_pred_p3 = self.get_rpn_subnet(fpn_p3_new, cfg.network.NUM_ANCHORS, 'p3') + # rpn_cls_score_p4, rpn_prob_p4, rpn_bbox_loss_p4, rpn_bbox_pred_p4 = self.get_rpn_subnet(fpn_p4_new, cfg.network.NUM_ANCHORS, 'p4') + # rpn_cls_score_p5, rpn_prob_p5, rpn_bbox_loss_p5, rpn_bbox_pred_p5 = self.get_rpn_subnet(fpn_p5_new, cfg.network.NUM_ANCHORS, 'p5') + # rpn_cls_score_p6, rpn_prob_p6, rpn_bbox_loss_p6, rpn_bbox_pred_p6 = self.get_rpn_subnet(fpn_p6_new, cfg.network.NUM_ANCHORS, 'p6') + # else: + # large separable + fpn_p2_new = self.get_light_head(fpn_p2, 'p2') + fpn_p3_new = self.get_light_head(fpn_p3, 'p3') + fpn_p4_new = self.get_light_head(fpn_p4, 'p4') + fpn_p5_new = self.get_light_head(fpn_p5, 'p5') + + rpn_cls_score_p2, rpn_prob_p2, rpn_bbox_loss_p2, rpn_bbox_pred_p2 = self.get_rpn_subnet(fpn_p2, cfg.network.NUM_ANCHORS, 'p2') + rpn_cls_score_p3, rpn_prob_p3, rpn_bbox_loss_p3, rpn_bbox_pred_p3 = self.get_rpn_subnet(fpn_p3, cfg.network.NUM_ANCHORS, 'p3') + rpn_cls_score_p4, rpn_prob_p4, rpn_bbox_loss_p4, rpn_bbox_pred_p4 = self.get_rpn_subnet(fpn_p4, cfg.network.NUM_ANCHORS, 'p4') + rpn_cls_score_p5, rpn_prob_p5, rpn_bbox_loss_p5, rpn_bbox_pred_p5 = self.get_rpn_subnet(fpn_p5, cfg.network.NUM_ANCHORS, 'p5') + rpn_cls_score_p6, rpn_prob_p6, rpn_bbox_loss_p6, rpn_bbox_pred_p6 = self.get_rpn_subnet(fpn_p6, cfg.network.NUM_ANCHORS, 'p6') + + rpn_cls_prob_dict = { + 'rpn_cls_prob_stride64': rpn_prob_p6, + 'rpn_cls_prob_stride32': rpn_prob_p5, + 'rpn_cls_prob_stride16': rpn_prob_p4, + 'rpn_cls_prob_stride8': rpn_prob_p3, + 'rpn_cls_prob_stride4': rpn_prob_p2, + } + rpn_bbox_pred_dict = { + 'rpn_bbox_pred_stride64': rpn_bbox_pred_p6, + 'rpn_bbox_pred_stride32': rpn_bbox_pred_p5, + 'rpn_bbox_pred_stride16': rpn_bbox_pred_p4, + 'rpn_bbox_pred_stride8': rpn_bbox_pred_p3, + 'rpn_bbox_pred_stride4': rpn_bbox_pred_p2, + } + arg_dict = dict(rpn_cls_prob_dict.items() + rpn_bbox_pred_dict.items()) + + if is_train: + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + gt_boxes = mx.sym.Variable(name="gt_boxes") + + rpn_cls_score = mx.sym.Concat(rpn_cls_score_p2, rpn_cls_score_p3, rpn_cls_score_p4, rpn_cls_score_p5, rpn_cls_score_p6, dim=2) + rpn_bbox_loss = mx.sym.Concat(rpn_bbox_loss_p2, rpn_bbox_loss_p3, rpn_bbox_loss_p4, rpn_bbox_loss_p5, rpn_bbox_loss_p6, dim=2) + # RPN classification loss + rpn_cls_output = mx.sym.SoftmaxOutput(data=rpn_cls_score, label=rpn_label, multi_output=True, normalization='valid', + 
use_ignore=True, ignore_label=-1, name='rpn_cls_prob') + # bounding box regression + rpn_bbox_loss = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_l1', scalar=3.0, data=(rpn_bbox_loss - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss, grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + aux_dict = { + 'op_type': 'pyramid_proposal', 'name': 'rois', + 'im_info': im_info, 'feat_stride': tuple(cfg.network.RPN_FEAT_STRIDE), + 'scales': tuple(cfg.network.ANCHOR_SCALES), 'ratios': tuple(cfg.network.ANCHOR_RATIOS), + 'rpn_pre_nms_top_n': cfg.TRAIN.RPN_PRE_NMS_TOP_N, 'rpn_post_nms_top_n': cfg.TRAIN.RPN_POST_NMS_TOP_N, + 'threshold': cfg.TRAIN.RPN_NMS_THRESH, 'rpn_min_size': cfg.TRAIN.RPN_MIN_SIZE + } + + # ROI proposal + rois = mx.sym.Custom(**dict(arg_dict.items() + aux_dict.items())) + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 9), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight \ + = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, op_type='proposal_target_rotbox', num_classes=num_reg_classes, batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, cfg=cPickle.dumps(cfg), fg_fraction=cfg.TRAIN.FG_FRACTION) + else: + aux_dict = { + 'op_type': 'pyramid_proposal', 'name': 'rois', + 'im_info': im_info, 'feat_stride': tuple(cfg.network.RPN_FEAT_STRIDE), + 'scales': tuple(cfg.network.ANCHOR_SCALES), 'ratios': tuple(cfg.network.ANCHOR_RATIOS), + 'rpn_pre_nms_top_n': cfg.TEST.RPN_PRE_NMS_TOP_N, 'rpn_post_nms_top_n': cfg.TEST.RPN_POST_NMS_TOP_N, + 'threshold': cfg.TEST.RPN_NMS_THRESH, 'rpn_min_size': cfg.TEST.RPN_MIN_SIZE + } + # ROI proposal + rois = mx.sym.Custom(**dict(arg_dict.items() + aux_dict.items())) + + roi_pool = mx.symbol.Custom(data_p2=fpn_p2_new, data_p3=fpn_p3_new, data_p4=fpn_p4_new, data_p5=fpn_p5_new, + rois=rois, op_type='fpn_psroi_pooling_v2', name='fpn_psroi_pooling_v2', pooling_mode=cfg.network.POOLING_MODE) + + # light head + fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=roi_pool, num_hidden=2048) + fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # 2 fc + # fc_new_1 = mx.symbol.FullyConnected(name='fc_new_1', data=roi_pool, num_hidden=1024) + # fc_new_1_relu = mx.sym.Activation(data=fc_new_1, act_type='relu', name='fc_new_1_relu') + # + # fc_new_2 = mx.symbol.FullyConnected(name='fc_new_2', data=fc_new_1_relu, num_hidden=1024) + # fc_new_2_relu = mx.sym.Activation(data=fc_new_2, act_type='relu', name='fc_new_2_relu') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=fc_new_2_relu, num_hidden=num_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=fc_new_2_relu, num_hidden=num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob = mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=labels_ohem, normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob = 
mx.sym.SoftmaxOutput(name='cls_prob', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), name='cls_prob_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), name='bbox_loss_reshape') + # group = mx.sym.Group([rpn_cls_output, rpn_bbox_loss, mx.sym.BlockGrad(cls_prob), mx.sym.BlockGrad(bbox_loss), mx.sym.BlockGrad(rcnn_label)]) + group = mx.sym.Group([rpn_cls_output, rpn_bbox_loss, cls_prob, bbox_loss, mx.sym.BlockGrad(rcnn_label)]) + else: + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), name='cls_prob_reshape') + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + + self.sym = group + return group + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + # arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_1_weight']) + # arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + arg_params['fc_new_2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_2_weight']) + arg_params['fc_new_2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_2_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + def init_weight_fpn(self, cfg, arg_params, aux_params): + arg_params['fpn_p6_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p6_weight']) + arg_params['fpn_p6_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p6_bias']) + arg_params['fpn_p5_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p5_weight']) + arg_params['fpn_p5_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p5_bias']) + arg_params['fpn_p4_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p4_weight']) + arg_params['fpn_p4_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p4_bias']) + arg_params['fpn_p3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p3_weight']) + arg_params['fpn_p3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p3_bias']) + arg_params['fpn_p2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p2_weight']) + arg_params['fpn_p2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p2_bias']) + + arg_params['fpn_p5_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p5_1x1_weight']) + arg_params['fpn_p5_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p5_1x1_bias']) + arg_params['fpn_p4_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p4_1x1_weight']) + arg_params['fpn_p4_1x1_bias'] = 
mx.nd.zeros(shape=self.arg_shape_dict['fpn_p4_1x1_bias']) + arg_params['fpn_p3_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p3_1x1_weight']) + arg_params['fpn_p3_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p3_1x1_bias']) + arg_params['fpn_p2_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p2_1x1_weight']) + arg_params['fpn_p2_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p2_1x1_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + for name in self.shared_param_list: + arg_params[name + '_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict[name + '_weight']) + arg_params[name + '_bias'] = mx.nd.zeros(shape=self.arg_shape_dict[name + '_bias']) + self.init_weight_rcnn(cfg, arg_params, aux_params) + self.init_weight_fpn(cfg, arg_params, aux_params) diff --git a/fpn/symbols/resnet_v1_101_fpn_rcnn_rotbox_light_head_RoITransformer.py b/fpn/symbols/resnet_v1_101_fpn_rcnn_rotbox_light_head_RoITransformer.py new file mode 100644 index 0000000..4c908a4 --- /dev/null +++ b/fpn/symbols/resnet_v1_101_fpn_rcnn_rotbox_light_head_RoITransformer.py @@ -0,0 +1,1082 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi +# -------------------------------------------------------- + +import cPickle +import mxnet as mx +from utils.symbol import Symbol +from operator_py.pyramid_proposal import * +from operator_py.proposal_target import * +from operator_py.proposal_target_rotbox import * +from operator_py.fpn_roi_pooling import * +from operator_py.box_annotator_ohem import * +from operator_py.fpn_rotated_psroialign import * +from operator_py.fpn_psroipooling_v2 import * +from operator_py.fpn_rotated_roialign import * +from operator_py.RRoI_target_rotbox_v2 import * +from operator_py.RRoIDecoder import * + +DEBUG = False + +class resnet_v1_101_fpn_rcnn_rotbox_light_head_RoITransformer(Symbol): + def __init__(self): + """ + Use __init__ to define parameter network needs + """ + self.shared_param_list = ['rpn_conv', 'rpn_cls_score', 'rpn_bbox_pred', 'conv_new_1', 'conv_new_2', 'conv_new_3', 'conv_new_4'] + self.shared_param_dict = {} + for name in self.shared_param_list: + self.shared_param_dict[name + '_weight'] = mx.sym.Variable(name + '_weight') + self.shared_param_dict[name + '_bias'] = mx.sym.Variable(name + '_bias') + + def get_resnet_backbone(self, data, with_dilated=False, with_dconv=False, with_dpyramid=False, eps=1e-5): + conv1 = mx.symbol.Convolution(name='conv1', data=data, num_filter=64, pad=(3, 3), kernel=(7, 7), stride=(2, 2), no_bias=True) + bn_conv1 = mx.symbol.BatchNorm(name='bn_conv1', data=conv1, use_global_stats=True, fix_gamma=False, eps=eps) + scale_conv1 = bn_conv1 + conv1_relu = mx.symbol.Activation(name='conv1_relu', data=scale_conv1, act_type='relu') + pool1 = mx.symbol.Pooling(name='pool1', data=conv1_relu, pooling_convention='full', pad=(0, 0), kernel=(3, 3), stride=(2, 2), pool_type='max') + res2a_branch1 = mx.symbol.Convolution(name='res2a_branch1', data=pool1, num_filter=256, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch1 = mx.symbol.BatchNorm(name='bn2a_branch1', data=res2a_branch1, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch1 = bn2a_branch1 + res2a_branch2a = mx.symbol.Convolution(name='res2a_branch2a', data=pool1, num_filter=64, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + 
bn2a_branch2a = mx.symbol.BatchNorm(name='bn2a_branch2a', data=res2a_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch2a = bn2a_branch2a + res2a_branch2a_relu = mx.symbol.Activation(name='res2a_branch2a_relu', data=scale2a_branch2a, act_type='relu') + res2a_branch2b = mx.symbol.Convolution(name='res2a_branch2b', data=res2a_branch2a_relu, num_filter=64, pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2a_branch2b = mx.symbol.BatchNorm(name='bn2a_branch2b', data=res2a_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale2a_branch2b = bn2a_branch2b + res2a_branch2b_relu = mx.symbol.Activation(name='res2a_branch2b_relu', data=scale2a_branch2b, act_type='relu') + res2a_branch2c = mx.symbol.Convolution(name='res2a_branch2c', data=res2a_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2a_branch2c = mx.symbol.BatchNorm(name='bn2a_branch2c', data=res2a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2a_branch2c = bn2a_branch2c + res2a = mx.symbol.broadcast_add(name='res2a', *[scale2a_branch1, scale2a_branch2c]) + res2a_relu = mx.symbol.Activation(name='res2a_relu', data=res2a, act_type='relu') + res2b_branch2a = mx.symbol.Convolution(name='res2b_branch2a', data=res2a_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2a = mx.symbol.BatchNorm(name='bn2b_branch2a', data=res2b_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2a = bn2b_branch2a + res2b_branch2a_relu = mx.symbol.Activation(name='res2b_branch2a_relu', data=scale2b_branch2a, act_type='relu') + res2b_branch2b = mx.symbol.Convolution(name='res2b_branch2b', data=res2b_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2b_branch2b = mx.symbol.BatchNorm(name='bn2b_branch2b', data=res2b_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2b = bn2b_branch2b + res2b_branch2b_relu = mx.symbol.Activation(name='res2b_branch2b_relu', data=scale2b_branch2b, act_type='relu') + res2b_branch2c = mx.symbol.Convolution(name='res2b_branch2c', data=res2b_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2b_branch2c = mx.symbol.BatchNorm(name='bn2b_branch2c', data=res2b_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2b_branch2c = bn2b_branch2c + res2b = mx.symbol.broadcast_add(name='res2b', *[res2a_relu, scale2b_branch2c]) + res2b_relu = mx.symbol.Activation(name='res2b_relu', data=res2b, act_type='relu') + res2c_branch2a = mx.symbol.Convolution(name='res2c_branch2a', data=res2b_relu, num_filter=64, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2a = mx.symbol.BatchNorm(name='bn2c_branch2a', data=res2c_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2a = bn2c_branch2a + res2c_branch2a_relu = mx.symbol.Activation(name='res2c_branch2a_relu', data=scale2c_branch2a, act_type='relu') + res2c_branch2b = mx.symbol.Convolution(name='res2c_branch2b', data=res2c_branch2a_relu, num_filter=64, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn2c_branch2b = mx.symbol.BatchNorm(name='bn2c_branch2b', data=res2c_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2b = bn2c_branch2b + res2c_branch2b_relu = mx.symbol.Activation(name='res2c_branch2b_relu', data=scale2c_branch2b, act_type='relu') + res2c_branch2c = mx.symbol.Convolution(name='res2c_branch2c', 
data=res2c_branch2b_relu, num_filter=256, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn2c_branch2c = mx.symbol.BatchNorm(name='bn2c_branch2c', data=res2c_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale2c_branch2c = bn2c_branch2c + res2c = mx.symbol.broadcast_add(name='res2c', *[res2b_relu, scale2c_branch2c]) + res2c_relu = mx.symbol.Activation(name='res2c_relu', data=res2c, act_type='relu') + res3a_branch1 = mx.symbol.Convolution(name='res3a_branch1', data=res2c_relu, num_filter=512, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch1 = mx.symbol.BatchNorm(name='bn3a_branch1', data=res3a_branch1, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch1 = bn3a_branch1 + res3a_branch2a = mx.symbol.Convolution(name='res3a_branch2a', data=res2c_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn3a_branch2a = mx.symbol.BatchNorm(name='bn3a_branch2a', data=res3a_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2a = bn3a_branch2a + res3a_branch2a_relu = mx.symbol.Activation(name='res3a_branch2a_relu', data=scale3a_branch2a, act_type='relu') + res3a_branch2b = mx.symbol.Convolution(name='res3a_branch2b', data=res3a_branch2a_relu, num_filter=128, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3a_branch2b = mx.symbol.BatchNorm(name='bn3a_branch2b', data=res3a_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2b = bn3a_branch2b + res3a_branch2b_relu = mx.symbol.Activation(name='res3a_branch2b_relu', data=scale3a_branch2b, act_type='relu') + res3a_branch2c = mx.symbol.Convolution(name='res3a_branch2c', data=res3a_branch2b_relu, num_filter=512, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3a_branch2c = mx.symbol.BatchNorm(name='bn3a_branch2c', data=res3a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3a_branch2c = bn3a_branch2c + res3a = mx.symbol.broadcast_add(name='res3a', *[scale3a_branch1, scale3a_branch2c]) + res3a_relu = mx.symbol.Activation(name='res3a_relu', data=res3a, act_type='relu') + res3b1_branch2a = mx.symbol.Convolution(name='res3b1_branch2a', data=res3a_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2a = mx.symbol.BatchNorm(name='bn3b1_branch2a', data=res3b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2a = bn3b1_branch2a + res3b1_branch2a_relu = mx.symbol.Activation(name='res3b1_branch2a_relu', data=scale3b1_branch2a, + act_type='relu') + res3b1_branch2b = mx.symbol.Convolution(name='res3b1_branch2b', data=res3b1_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b1_branch2b = mx.symbol.BatchNorm(name='bn3b1_branch2b', data=res3b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2b = bn3b1_branch2b + res3b1_branch2b_relu = mx.symbol.Activation(name='res3b1_branch2b_relu', data=scale3b1_branch2b, + act_type='relu') + res3b1_branch2c = mx.symbol.Convolution(name='res3b1_branch2c', data=res3b1_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b1_branch2c = mx.symbol.BatchNorm(name='bn3b1_branch2c', data=res3b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b1_branch2c = bn3b1_branch2c + res3b1 = mx.symbol.broadcast_add(name='res3b1', *[res3a_relu, scale3b1_branch2c]) + res3b1_relu = mx.symbol.Activation(name='res3b1_relu', data=res3b1, act_type='relu') + 
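# The res3 stage continues with res3b2 and res3b3; res3b3 can replace its 3x3 conv
+ # with a deformable convolution when with_dpyramid is True (offsets are predicted by
+ # an extra 3x3 conv with 72 = 4 deformable groups x 2 x 3 x 3 output channels).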
res3b2_branch2a = mx.symbol.Convolution(name='res3b2_branch2a', data=res3b1_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2a = mx.symbol.BatchNorm(name='bn3b2_branch2a', data=res3b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2a = bn3b2_branch2a + res3b2_branch2a_relu = mx.symbol.Activation(name='res3b2_branch2a_relu', data=scale3b2_branch2a, + act_type='relu') + res3b2_branch2b = mx.symbol.Convolution(name='res3b2_branch2b', data=res3b2_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b2_branch2b = mx.symbol.BatchNorm(name='bn3b2_branch2b', data=res3b2_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2b = bn3b2_branch2b + res3b2_branch2b_relu = mx.symbol.Activation(name='res3b2_branch2b_relu', data=scale3b2_branch2b, + act_type='relu') + res3b2_branch2c = mx.symbol.Convolution(name='res3b2_branch2c', data=res3b2_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b2_branch2c = mx.symbol.BatchNorm(name='bn3b2_branch2c', data=res3b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b2_branch2c = bn3b2_branch2c + res3b2 = mx.symbol.broadcast_add(name='res3b2', *[res3b1_relu, scale3b2_branch2c]) + res3b2_relu = mx.symbol.Activation(name='res3b2_relu', data=res3b2, act_type='relu') + res3b3_branch2a = mx.symbol.Convolution(name='res3b3_branch2a', data=res3b2_relu, num_filter=128, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2a = mx.symbol.BatchNorm(name='bn3b3_branch2a', data=res3b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2a = bn3b3_branch2a + res3b3_branch2a_relu = mx.symbol.Activation(name='res3b3_branch2a_relu', data=scale3b3_branch2a, + act_type='relu') + if with_dpyramid: + res3b3_branch2b_offset = mx.symbol.Convolution(name='res3b3_branch2b_offset', data=res3b3_branch2a_relu, + num_filter=72, pad=(1, 1), kernel=(3, 3), stride=(1, 1)) + res3b3_branch2b = mx.contrib.symbol.DeformableConvolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, + offset=res3b3_branch2b_offset, + num_filter=128, pad=(1, 1), kernel=(3, 3), + num_deformable_group=4, + stride=(1, 1), no_bias=True) + else: + res3b3_branch2b = mx.symbol.Convolution(name='res3b3_branch2b', data=res3b3_branch2a_relu, num_filter=128, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn3b3_branch2b = mx.symbol.BatchNorm(name='bn3b3_branch2b', data=res3b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2b = bn3b3_branch2b + res3b3_branch2b_relu = mx.symbol.Activation(name='res3b3_branch2b_relu', data=scale3b3_branch2b, + act_type='relu') + res3b3_branch2c = mx.symbol.Convolution(name='res3b3_branch2c', data=res3b3_branch2b_relu, num_filter=512, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn3b3_branch2c = mx.symbol.BatchNorm(name='bn3b3_branch2c', data=res3b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale3b3_branch2c = bn3b3_branch2c + res3b3 = mx.symbol.broadcast_add(name='res3b3', *[res3b2_relu, scale3b3_branch2c]) + res3b3_relu = mx.symbol.Activation(name='res3b3_relu', data=res3b3, act_type='relu') + res4a_branch1 = mx.symbol.Convolution(name='res4a_branch1', data=res3b3_relu, num_filter=1024, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch1 = mx.symbol.BatchNorm(name='bn4a_branch1', data=res4a_branch1, use_global_stats=True, + fix_gamma=False, eps=eps) 
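+ # res4 stage (output stride 16): res4a projects its shortcut with a strided 1x1 conv,
+ # and res4b1 .. res4b22 repeat the identity bottleneck with 256/256/1024 filters.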
+ scale4a_branch1 = bn4a_branch1 + res4a_branch2a = mx.symbol.Convolution(name='res4a_branch2a', data=res3b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(2, 2), no_bias=True) + bn4a_branch2a = mx.symbol.BatchNorm(name='bn4a_branch2a', data=res4a_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2a = bn4a_branch2a + res4a_branch2a_relu = mx.symbol.Activation(name='res4a_branch2a_relu', data=scale4a_branch2a, act_type='relu') + res4a_branch2b = mx.symbol.Convolution(name='res4a_branch2b', data=res4a_branch2a_relu, num_filter=256, + pad=(1, 1), + kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4a_branch2b = mx.symbol.BatchNorm(name='bn4a_branch2b', data=res4a_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2b = bn4a_branch2b + res4a_branch2b_relu = mx.symbol.Activation(name='res4a_branch2b_relu', data=scale4a_branch2b, act_type='relu') + res4a_branch2c = mx.symbol.Convolution(name='res4a_branch2c', data=res4a_branch2b_relu, num_filter=1024, + pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4a_branch2c = mx.symbol.BatchNorm(name='bn4a_branch2c', data=res4a_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4a_branch2c = bn4a_branch2c + res4a = mx.symbol.broadcast_add(name='res4a', *[scale4a_branch1, scale4a_branch2c]) + res4a_relu = mx.symbol.Activation(name='res4a_relu', data=res4a, act_type='relu') + res4b1_branch2a = mx.symbol.Convolution(name='res4b1_branch2a', data=res4a_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2a = mx.symbol.BatchNorm(name='bn4b1_branch2a', data=res4b1_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2a = bn4b1_branch2a + res4b1_branch2a_relu = mx.symbol.Activation(name='res4b1_branch2a_relu', data=scale4b1_branch2a, + act_type='relu') + res4b1_branch2b = mx.symbol.Convolution(name='res4b1_branch2b', data=res4b1_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b1_branch2b = mx.symbol.BatchNorm(name='bn4b1_branch2b', data=res4b1_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2b = bn4b1_branch2b + res4b1_branch2b_relu = mx.symbol.Activation(name='res4b1_branch2b_relu', data=scale4b1_branch2b, + act_type='relu') + res4b1_branch2c = mx.symbol.Convolution(name='res4b1_branch2c', data=res4b1_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b1_branch2c = mx.symbol.BatchNorm(name='bn4b1_branch2c', data=res4b1_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b1_branch2c = bn4b1_branch2c + res4b1 = mx.symbol.broadcast_add(name='res4b1', *[res4a_relu, scale4b1_branch2c]) + res4b1_relu = mx.symbol.Activation(name='res4b1_relu', data=res4b1, act_type='relu') + res4b2_branch2a = mx.symbol.Convolution(name='res4b2_branch2a', data=res4b1_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2a = mx.symbol.BatchNorm(name='bn4b2_branch2a', data=res4b2_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2a = bn4b2_branch2a + res4b2_branch2a_relu = mx.symbol.Activation(name='res4b2_branch2a_relu', data=scale4b2_branch2a, + act_type='relu') + res4b2_branch2b = mx.symbol.Convolution(name='res4b2_branch2b', data=res4b2_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b2_branch2b = mx.symbol.BatchNorm(name='bn4b2_branch2b', data=res4b2_branch2b, 
use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2b = bn4b2_branch2b + res4b2_branch2b_relu = mx.symbol.Activation(name='res4b2_branch2b_relu', data=scale4b2_branch2b, + act_type='relu') + res4b2_branch2c = mx.symbol.Convolution(name='res4b2_branch2c', data=res4b2_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b2_branch2c = mx.symbol.BatchNorm(name='bn4b2_branch2c', data=res4b2_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b2_branch2c = bn4b2_branch2c + res4b2 = mx.symbol.broadcast_add(name='res4b2', *[res4b1_relu, scale4b2_branch2c]) + res4b2_relu = mx.symbol.Activation(name='res4b2_relu', data=res4b2, act_type='relu') + res4b3_branch2a = mx.symbol.Convolution(name='res4b3_branch2a', data=res4b2_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2a = mx.symbol.BatchNorm(name='bn4b3_branch2a', data=res4b3_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2a = bn4b3_branch2a + res4b3_branch2a_relu = mx.symbol.Activation(name='res4b3_branch2a_relu', data=scale4b3_branch2a, + act_type='relu') + res4b3_branch2b = mx.symbol.Convolution(name='res4b3_branch2b', data=res4b3_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b3_branch2b = mx.symbol.BatchNorm(name='bn4b3_branch2b', data=res4b3_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2b = bn4b3_branch2b + res4b3_branch2b_relu = mx.symbol.Activation(name='res4b3_branch2b_relu', data=scale4b3_branch2b, + act_type='relu') + res4b3_branch2c = mx.symbol.Convolution(name='res4b3_branch2c', data=res4b3_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b3_branch2c = mx.symbol.BatchNorm(name='bn4b3_branch2c', data=res4b3_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b3_branch2c = bn4b3_branch2c + res4b3 = mx.symbol.broadcast_add(name='res4b3', *[res4b2_relu, scale4b3_branch2c]) + res4b3_relu = mx.symbol.Activation(name='res4b3_relu', data=res4b3, act_type='relu') + res4b4_branch2a = mx.symbol.Convolution(name='res4b4_branch2a', data=res4b3_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2a = mx.symbol.BatchNorm(name='bn4b4_branch2a', data=res4b4_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2a = bn4b4_branch2a + res4b4_branch2a_relu = mx.symbol.Activation(name='res4b4_branch2a_relu', data=scale4b4_branch2a, + act_type='relu') + res4b4_branch2b = mx.symbol.Convolution(name='res4b4_branch2b', data=res4b4_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b4_branch2b = mx.symbol.BatchNorm(name='bn4b4_branch2b', data=res4b4_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2b = bn4b4_branch2b + res4b4_branch2b_relu = mx.symbol.Activation(name='res4b4_branch2b_relu', data=scale4b4_branch2b, + act_type='relu') + res4b4_branch2c = mx.symbol.Convolution(name='res4b4_branch2c', data=res4b4_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b4_branch2c = mx.symbol.BatchNorm(name='bn4b4_branch2c', data=res4b4_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b4_branch2c = bn4b4_branch2c + res4b4 = mx.symbol.broadcast_add(name='res4b4', *[res4b3_relu, scale4b4_branch2c]) + res4b4_relu = mx.symbol.Activation(name='res4b4_relu', data=res4b4, act_type='relu') + 
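# res4b5 .. res4b21 below are identical identity bottlenecks; only res4b22, the last
+ # unit of the stage, differs in that it again supports the optional deformable 3x3
+ # conv, and its output (res4b22_relu) is returned as the c4 input to the FPN.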
res4b5_branch2a = mx.symbol.Convolution(name='res4b5_branch2a', data=res4b4_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2a = mx.symbol.BatchNorm(name='bn4b5_branch2a', data=res4b5_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2a = bn4b5_branch2a + res4b5_branch2a_relu = mx.symbol.Activation(name='res4b5_branch2a_relu', data=scale4b5_branch2a, + act_type='relu') + res4b5_branch2b = mx.symbol.Convolution(name='res4b5_branch2b', data=res4b5_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b5_branch2b = mx.symbol.BatchNorm(name='bn4b5_branch2b', data=res4b5_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2b = bn4b5_branch2b + res4b5_branch2b_relu = mx.symbol.Activation(name='res4b5_branch2b_relu', data=scale4b5_branch2b, + act_type='relu') + res4b5_branch2c = mx.symbol.Convolution(name='res4b5_branch2c', data=res4b5_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b5_branch2c = mx.symbol.BatchNorm(name='bn4b5_branch2c', data=res4b5_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b5_branch2c = bn4b5_branch2c + res4b5 = mx.symbol.broadcast_add(name='res4b5', *[res4b4_relu, scale4b5_branch2c]) + res4b5_relu = mx.symbol.Activation(name='res4b5_relu', data=res4b5, act_type='relu') + res4b6_branch2a = mx.symbol.Convolution(name='res4b6_branch2a', data=res4b5_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2a = mx.symbol.BatchNorm(name='bn4b6_branch2a', data=res4b6_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2a = bn4b6_branch2a + res4b6_branch2a_relu = mx.symbol.Activation(name='res4b6_branch2a_relu', data=scale4b6_branch2a, + act_type='relu') + res4b6_branch2b = mx.symbol.Convolution(name='res4b6_branch2b', data=res4b6_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b6_branch2b = mx.symbol.BatchNorm(name='bn4b6_branch2b', data=res4b6_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2b = bn4b6_branch2b + res4b6_branch2b_relu = mx.symbol.Activation(name='res4b6_branch2b_relu', data=scale4b6_branch2b, + act_type='relu') + res4b6_branch2c = mx.symbol.Convolution(name='res4b6_branch2c', data=res4b6_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b6_branch2c = mx.symbol.BatchNorm(name='bn4b6_branch2c', data=res4b6_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b6_branch2c = bn4b6_branch2c + res4b6 = mx.symbol.broadcast_add(name='res4b6', *[res4b5_relu, scale4b6_branch2c]) + res4b6_relu = mx.symbol.Activation(name='res4b6_relu', data=res4b6, act_type='relu') + res4b7_branch2a = mx.symbol.Convolution(name='res4b7_branch2a', data=res4b6_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2a = mx.symbol.BatchNorm(name='bn4b7_branch2a', data=res4b7_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2a = bn4b7_branch2a + res4b7_branch2a_relu = mx.symbol.Activation(name='res4b7_branch2a_relu', data=scale4b7_branch2a, + act_type='relu') + res4b7_branch2b = mx.symbol.Convolution(name='res4b7_branch2b', data=res4b7_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b7_branch2b = mx.symbol.BatchNorm(name='bn4b7_branch2b', data=res4b7_branch2b, use_global_stats=True, 
+ fix_gamma=False, eps=eps) + scale4b7_branch2b = bn4b7_branch2b + res4b7_branch2b_relu = mx.symbol.Activation(name='res4b7_branch2b_relu', data=scale4b7_branch2b, + act_type='relu') + res4b7_branch2c = mx.symbol.Convolution(name='res4b7_branch2c', data=res4b7_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b7_branch2c = mx.symbol.BatchNorm(name='bn4b7_branch2c', data=res4b7_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b7_branch2c = bn4b7_branch2c + res4b7 = mx.symbol.broadcast_add(name='res4b7', *[res4b6_relu, scale4b7_branch2c]) + res4b7_relu = mx.symbol.Activation(name='res4b7_relu', data=res4b7, act_type='relu') + res4b8_branch2a = mx.symbol.Convolution(name='res4b8_branch2a', data=res4b7_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2a = mx.symbol.BatchNorm(name='bn4b8_branch2a', data=res4b8_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2a = bn4b8_branch2a + res4b8_branch2a_relu = mx.symbol.Activation(name='res4b8_branch2a_relu', data=scale4b8_branch2a, + act_type='relu') + res4b8_branch2b = mx.symbol.Convolution(name='res4b8_branch2b', data=res4b8_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b8_branch2b = mx.symbol.BatchNorm(name='bn4b8_branch2b', data=res4b8_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2b = bn4b8_branch2b + res4b8_branch2b_relu = mx.symbol.Activation(name='res4b8_branch2b_relu', data=scale4b8_branch2b, + act_type='relu') + res4b8_branch2c = mx.symbol.Convolution(name='res4b8_branch2c', data=res4b8_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b8_branch2c = mx.symbol.BatchNorm(name='bn4b8_branch2c', data=res4b8_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b8_branch2c = bn4b8_branch2c + res4b8 = mx.symbol.broadcast_add(name='res4b8', *[res4b7_relu, scale4b8_branch2c]) + res4b8_relu = mx.symbol.Activation(name='res4b8_relu', data=res4b8, act_type='relu') + res4b9_branch2a = mx.symbol.Convolution(name='res4b9_branch2a', data=res4b8_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2a = mx.symbol.BatchNorm(name='bn4b9_branch2a', data=res4b9_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2a = bn4b9_branch2a + res4b9_branch2a_relu = mx.symbol.Activation(name='res4b9_branch2a_relu', data=scale4b9_branch2a, + act_type='relu') + res4b9_branch2b = mx.symbol.Convolution(name='res4b9_branch2b', data=res4b9_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b9_branch2b = mx.symbol.BatchNorm(name='bn4b9_branch2b', data=res4b9_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2b = bn4b9_branch2b + res4b9_branch2b_relu = mx.symbol.Activation(name='res4b9_branch2b_relu', data=scale4b9_branch2b, + act_type='relu') + res4b9_branch2c = mx.symbol.Convolution(name='res4b9_branch2c', data=res4b9_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b9_branch2c = mx.symbol.BatchNorm(name='bn4b9_branch2c', data=res4b9_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b9_branch2c = bn4b9_branch2c + res4b9 = mx.symbol.broadcast_add(name='res4b9', *[res4b8_relu, scale4b9_branch2c]) + res4b9_relu = mx.symbol.Activation(name='res4b9_relu', data=res4b9, act_type='relu') + res4b10_branch2a = 
mx.symbol.Convolution(name='res4b10_branch2a', data=res4b9_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2a = mx.symbol.BatchNorm(name='bn4b10_branch2a', data=res4b10_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2a = bn4b10_branch2a + res4b10_branch2a_relu = mx.symbol.Activation(name='res4b10_branch2a_relu', data=scale4b10_branch2a, + act_type='relu') + res4b10_branch2b = mx.symbol.Convolution(name='res4b10_branch2b', data=res4b10_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b10_branch2b = mx.symbol.BatchNorm(name='bn4b10_branch2b', data=res4b10_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2b = bn4b10_branch2b + res4b10_branch2b_relu = mx.symbol.Activation(name='res4b10_branch2b_relu', data=scale4b10_branch2b, + act_type='relu') + res4b10_branch2c = mx.symbol.Convolution(name='res4b10_branch2c', data=res4b10_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b10_branch2c = mx.symbol.BatchNorm(name='bn4b10_branch2c', data=res4b10_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b10_branch2c = bn4b10_branch2c + res4b10 = mx.symbol.broadcast_add(name='res4b10', *[res4b9_relu, scale4b10_branch2c]) + res4b10_relu = mx.symbol.Activation(name='res4b10_relu', data=res4b10, act_type='relu') + res4b11_branch2a = mx.symbol.Convolution(name='res4b11_branch2a', data=res4b10_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2a = mx.symbol.BatchNorm(name='bn4b11_branch2a', data=res4b11_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2a = bn4b11_branch2a + res4b11_branch2a_relu = mx.symbol.Activation(name='res4b11_branch2a_relu', data=scale4b11_branch2a, + act_type='relu') + res4b11_branch2b = mx.symbol.Convolution(name='res4b11_branch2b', data=res4b11_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b11_branch2b = mx.symbol.BatchNorm(name='bn4b11_branch2b', data=res4b11_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2b = bn4b11_branch2b + res4b11_branch2b_relu = mx.symbol.Activation(name='res4b11_branch2b_relu', data=scale4b11_branch2b, + act_type='relu') + res4b11_branch2c = mx.symbol.Convolution(name='res4b11_branch2c', data=res4b11_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b11_branch2c = mx.symbol.BatchNorm(name='bn4b11_branch2c', data=res4b11_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b11_branch2c = bn4b11_branch2c + res4b11 = mx.symbol.broadcast_add(name='res4b11', *[res4b10_relu, scale4b11_branch2c]) + res4b11_relu = mx.symbol.Activation(name='res4b11_relu', data=res4b11, act_type='relu') + res4b12_branch2a = mx.symbol.Convolution(name='res4b12_branch2a', data=res4b11_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2a = mx.symbol.BatchNorm(name='bn4b12_branch2a', data=res4b12_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2a = bn4b12_branch2a + res4b12_branch2a_relu = mx.symbol.Activation(name='res4b12_branch2a_relu', data=scale4b12_branch2a, + act_type='relu') + res4b12_branch2b = mx.symbol.Convolution(name='res4b12_branch2b', data=res4b12_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b12_branch2b = 
mx.symbol.BatchNorm(name='bn4b12_branch2b', data=res4b12_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2b = bn4b12_branch2b + res4b12_branch2b_relu = mx.symbol.Activation(name='res4b12_branch2b_relu', data=scale4b12_branch2b, + act_type='relu') + res4b12_branch2c = mx.symbol.Convolution(name='res4b12_branch2c', data=res4b12_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b12_branch2c = mx.symbol.BatchNorm(name='bn4b12_branch2c', data=res4b12_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b12_branch2c = bn4b12_branch2c + res4b12 = mx.symbol.broadcast_add(name='res4b12', *[res4b11_relu, scale4b12_branch2c]) + res4b12_relu = mx.symbol.Activation(name='res4b12_relu', data=res4b12, act_type='relu') + res4b13_branch2a = mx.symbol.Convolution(name='res4b13_branch2a', data=res4b12_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2a = mx.symbol.BatchNorm(name='bn4b13_branch2a', data=res4b13_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2a = bn4b13_branch2a + res4b13_branch2a_relu = mx.symbol.Activation(name='res4b13_branch2a_relu', data=scale4b13_branch2a, + act_type='relu') + res4b13_branch2b = mx.symbol.Convolution(name='res4b13_branch2b', data=res4b13_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b13_branch2b = mx.symbol.BatchNorm(name='bn4b13_branch2b', data=res4b13_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2b = bn4b13_branch2b + res4b13_branch2b_relu = mx.symbol.Activation(name='res4b13_branch2b_relu', data=scale4b13_branch2b, + act_type='relu') + res4b13_branch2c = mx.symbol.Convolution(name='res4b13_branch2c', data=res4b13_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b13_branch2c = mx.symbol.BatchNorm(name='bn4b13_branch2c', data=res4b13_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b13_branch2c = bn4b13_branch2c + res4b13 = mx.symbol.broadcast_add(name='res4b13', *[res4b12_relu, scale4b13_branch2c]) + res4b13_relu = mx.symbol.Activation(name='res4b13_relu', data=res4b13, act_type='relu') + res4b14_branch2a = mx.symbol.Convolution(name='res4b14_branch2a', data=res4b13_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2a = mx.symbol.BatchNorm(name='bn4b14_branch2a', data=res4b14_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2a = bn4b14_branch2a + res4b14_branch2a_relu = mx.symbol.Activation(name='res4b14_branch2a_relu', data=scale4b14_branch2a, + act_type='relu') + res4b14_branch2b = mx.symbol.Convolution(name='res4b14_branch2b', data=res4b14_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b14_branch2b = mx.symbol.BatchNorm(name='bn4b14_branch2b', data=res4b14_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2b = bn4b14_branch2b + res4b14_branch2b_relu = mx.symbol.Activation(name='res4b14_branch2b_relu', data=scale4b14_branch2b, + act_type='relu') + res4b14_branch2c = mx.symbol.Convolution(name='res4b14_branch2c', data=res4b14_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b14_branch2c = mx.symbol.BatchNorm(name='bn4b14_branch2c', data=res4b14_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b14_branch2c = bn4b14_branch2c + res4b14 = 
mx.symbol.broadcast_add(name='res4b14', *[res4b13_relu, scale4b14_branch2c]) + res4b14_relu = mx.symbol.Activation(name='res4b14_relu', data=res4b14, act_type='relu') + res4b15_branch2a = mx.symbol.Convolution(name='res4b15_branch2a', data=res4b14_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2a = mx.symbol.BatchNorm(name='bn4b15_branch2a', data=res4b15_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2a = bn4b15_branch2a + res4b15_branch2a_relu = mx.symbol.Activation(name='res4b15_branch2a_relu', data=scale4b15_branch2a, + act_type='relu') + res4b15_branch2b = mx.symbol.Convolution(name='res4b15_branch2b', data=res4b15_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b15_branch2b = mx.symbol.BatchNorm(name='bn4b15_branch2b', data=res4b15_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2b = bn4b15_branch2b + res4b15_branch2b_relu = mx.symbol.Activation(name='res4b15_branch2b_relu', data=scale4b15_branch2b, + act_type='relu') + res4b15_branch2c = mx.symbol.Convolution(name='res4b15_branch2c', data=res4b15_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b15_branch2c = mx.symbol.BatchNorm(name='bn4b15_branch2c', data=res4b15_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b15_branch2c = bn4b15_branch2c + res4b15 = mx.symbol.broadcast_add(name='res4b15', *[res4b14_relu, scale4b15_branch2c]) + res4b15_relu = mx.symbol.Activation(name='res4b15_relu', data=res4b15, act_type='relu') + res4b16_branch2a = mx.symbol.Convolution(name='res4b16_branch2a', data=res4b15_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2a = mx.symbol.BatchNorm(name='bn4b16_branch2a', data=res4b16_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2a = bn4b16_branch2a + res4b16_branch2a_relu = mx.symbol.Activation(name='res4b16_branch2a_relu', data=scale4b16_branch2a, + act_type='relu') + res4b16_branch2b = mx.symbol.Convolution(name='res4b16_branch2b', data=res4b16_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b16_branch2b = mx.symbol.BatchNorm(name='bn4b16_branch2b', data=res4b16_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2b = bn4b16_branch2b + res4b16_branch2b_relu = mx.symbol.Activation(name='res4b16_branch2b_relu', data=scale4b16_branch2b, + act_type='relu') + res4b16_branch2c = mx.symbol.Convolution(name='res4b16_branch2c', data=res4b16_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b16_branch2c = mx.symbol.BatchNorm(name='bn4b16_branch2c', data=res4b16_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b16_branch2c = bn4b16_branch2c + res4b16 = mx.symbol.broadcast_add(name='res4b16', *[res4b15_relu, scale4b16_branch2c]) + res4b16_relu = mx.symbol.Activation(name='res4b16_relu', data=res4b16, act_type='relu') + res4b17_branch2a = mx.symbol.Convolution(name='res4b17_branch2a', data=res4b16_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2a = mx.symbol.BatchNorm(name='bn4b17_branch2a', data=res4b17_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2a = bn4b17_branch2a + res4b17_branch2a_relu = mx.symbol.Activation(name='res4b17_branch2a_relu', data=scale4b17_branch2a, + act_type='relu') + res4b17_branch2b = 
mx.symbol.Convolution(name='res4b17_branch2b', data=res4b17_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b17_branch2b = mx.symbol.BatchNorm(name='bn4b17_branch2b', data=res4b17_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2b = bn4b17_branch2b + res4b17_branch2b_relu = mx.symbol.Activation(name='res4b17_branch2b_relu', data=scale4b17_branch2b, + act_type='relu') + res4b17_branch2c = mx.symbol.Convolution(name='res4b17_branch2c', data=res4b17_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b17_branch2c = mx.symbol.BatchNorm(name='bn4b17_branch2c', data=res4b17_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b17_branch2c = bn4b17_branch2c + res4b17 = mx.symbol.broadcast_add(name='res4b17', *[res4b16_relu, scale4b17_branch2c]) + res4b17_relu = mx.symbol.Activation(name='res4b17_relu', data=res4b17, act_type='relu') + res4b18_branch2a = mx.symbol.Convolution(name='res4b18_branch2a', data=res4b17_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2a = mx.symbol.BatchNorm(name='bn4b18_branch2a', data=res4b18_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2a = bn4b18_branch2a + res4b18_branch2a_relu = mx.symbol.Activation(name='res4b18_branch2a_relu', data=scale4b18_branch2a, + act_type='relu') + res4b18_branch2b = mx.symbol.Convolution(name='res4b18_branch2b', data=res4b18_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b18_branch2b = mx.symbol.BatchNorm(name='bn4b18_branch2b', data=res4b18_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2b = bn4b18_branch2b + res4b18_branch2b_relu = mx.symbol.Activation(name='res4b18_branch2b_relu', data=scale4b18_branch2b, + act_type='relu') + res4b18_branch2c = mx.symbol.Convolution(name='res4b18_branch2c', data=res4b18_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b18_branch2c = mx.symbol.BatchNorm(name='bn4b18_branch2c', data=res4b18_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b18_branch2c = bn4b18_branch2c + res4b18 = mx.symbol.broadcast_add(name='res4b18', *[res4b17_relu, scale4b18_branch2c]) + res4b18_relu = mx.symbol.Activation(name='res4b18_relu', data=res4b18, act_type='relu') + res4b19_branch2a = mx.symbol.Convolution(name='res4b19_branch2a', data=res4b18_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2a = mx.symbol.BatchNorm(name='bn4b19_branch2a', data=res4b19_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2a = bn4b19_branch2a + res4b19_branch2a_relu = mx.symbol.Activation(name='res4b19_branch2a_relu', data=scale4b19_branch2a, + act_type='relu') + res4b19_branch2b = mx.symbol.Convolution(name='res4b19_branch2b', data=res4b19_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b19_branch2b = mx.symbol.BatchNorm(name='bn4b19_branch2b', data=res4b19_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2b = bn4b19_branch2b + res4b19_branch2b_relu = mx.symbol.Activation(name='res4b19_branch2b_relu', data=scale4b19_branch2b, + act_type='relu') + res4b19_branch2c = mx.symbol.Convolution(name='res4b19_branch2c', data=res4b19_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b19_branch2c = 
mx.symbol.BatchNorm(name='bn4b19_branch2c', data=res4b19_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b19_branch2c = bn4b19_branch2c + res4b19 = mx.symbol.broadcast_add(name='res4b19', *[res4b18_relu, scale4b19_branch2c]) + res4b19_relu = mx.symbol.Activation(name='res4b19_relu', data=res4b19, act_type='relu') + res4b20_branch2a = mx.symbol.Convolution(name='res4b20_branch2a', data=res4b19_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2a = mx.symbol.BatchNorm(name='bn4b20_branch2a', data=res4b20_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2a = bn4b20_branch2a + res4b20_branch2a_relu = mx.symbol.Activation(name='res4b20_branch2a_relu', data=scale4b20_branch2a, + act_type='relu') + res4b20_branch2b = mx.symbol.Convolution(name='res4b20_branch2b', data=res4b20_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b20_branch2b = mx.symbol.BatchNorm(name='bn4b20_branch2b', data=res4b20_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2b = bn4b20_branch2b + res4b20_branch2b_relu = mx.symbol.Activation(name='res4b20_branch2b_relu', data=scale4b20_branch2b, + act_type='relu') + res4b20_branch2c = mx.symbol.Convolution(name='res4b20_branch2c', data=res4b20_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b20_branch2c = mx.symbol.BatchNorm(name='bn4b20_branch2c', data=res4b20_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b20_branch2c = bn4b20_branch2c + res4b20 = mx.symbol.broadcast_add(name='res4b20', *[res4b19_relu, scale4b20_branch2c]) + res4b20_relu = mx.symbol.Activation(name='res4b20_relu', data=res4b20, act_type='relu') + res4b21_branch2a = mx.symbol.Convolution(name='res4b21_branch2a', data=res4b20_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2a = mx.symbol.BatchNorm(name='bn4b21_branch2a', data=res4b21_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2a = bn4b21_branch2a + res4b21_branch2a_relu = mx.symbol.Activation(name='res4b21_branch2a_relu', data=scale4b21_branch2a, + act_type='relu') + res4b21_branch2b = mx.symbol.Convolution(name='res4b21_branch2b', data=res4b21_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b21_branch2b = mx.symbol.BatchNorm(name='bn4b21_branch2b', data=res4b21_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2b = bn4b21_branch2b + res4b21_branch2b_relu = mx.symbol.Activation(name='res4b21_branch2b_relu', data=scale4b21_branch2b, + act_type='relu') + res4b21_branch2c = mx.symbol.Convolution(name='res4b21_branch2c', data=res4b21_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b21_branch2c = mx.symbol.BatchNorm(name='bn4b21_branch2c', data=res4b21_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b21_branch2c = bn4b21_branch2c + res4b21 = mx.symbol.broadcast_add(name='res4b21', *[res4b20_relu, scale4b21_branch2c]) + res4b21_relu = mx.symbol.Activation(name='res4b21_relu', data=res4b21, act_type='relu') + res4b22_branch2a = mx.symbol.Convolution(name='res4b22_branch2a', data=res4b21_relu, num_filter=256, pad=(0, 0), + kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2a = mx.symbol.BatchNorm(name='bn4b22_branch2a', data=res4b22_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + 
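# After res4b22 the res5 stage follows: with_dilated keeps it at stride 16 by using
+ # dilation 2 instead of stride 2, and with_dconv replaces the 3x3 convs of res5a/b/c
+ # with deformable convolutions. The backbone returns the res2c/res3b3/res4b22/res5c ReLUs.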
scale4b22_branch2a = bn4b22_branch2a + res4b22_branch2a_relu = mx.symbol.Activation(name='res4b22_branch2a_relu', data=scale4b22_branch2a, + act_type='relu') + if with_dpyramid: + res4b22_branch2b_offset = mx.symbol.Convolution(name='res4b22_branch2b_offset', data=res4b22_branch2a_relu, + num_filter=72, pad=(1, 1), kernel=(3, 3), stride=(1, 1)) + res4b22_branch2b = mx.contrib.symbol.DeformableConvolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, + offset=res4b22_branch2b_offset, + num_filter=256, pad=(1, 1), kernel=(3, 3), + num_deformable_group=4, + stride=(1, 1), no_bias=True) + else: + res4b22_branch2b = mx.symbol.Convolution(name='res4b22_branch2b', data=res4b22_branch2a_relu, num_filter=256, + pad=(1, 1), kernel=(3, 3), stride=(1, 1), no_bias=True) + bn4b22_branch2b = mx.symbol.BatchNorm(name='bn4b22_branch2b', data=res4b22_branch2b, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b22_branch2b = bn4b22_branch2b + res4b22_branch2b_relu = mx.symbol.Activation(name='res4b22_branch2b_relu', data=scale4b22_branch2b, + act_type='relu') + res4b22_branch2c = mx.symbol.Convolution(name='res4b22_branch2c', data=res4b22_branch2b_relu, num_filter=1024, + pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn4b22_branch2c = mx.symbol.BatchNorm(name='bn4b22_branch2c', data=res4b22_branch2c, use_global_stats=True, + fix_gamma=False, eps=eps) + scale4b22_branch2c = bn4b22_branch2c + res4b22 = mx.symbol.broadcast_add(name='res4b22', *[res4b21_relu, scale4b22_branch2c]) + res4b22_relu = mx.symbol.Activation(name='res4b22_relu', data=res4b22, act_type='relu') + + if with_dilated: + res5_stride = (1, 1) + res5_dilate = (2, 2) + else: + res5_stride = (2, 2) + res5_dilate = (1, 1) + + # res5a-bottleneck + res5a_branch2a = mx.symbol.Convolution(name='res5a_branch2a', data=res4b22_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=res5_stride, no_bias=True) + bn5a_branch2a = mx.symbol.BatchNorm(name='bn5a_branch2a', data=res5a_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2a = bn5a_branch2a + res5a_branch2a_relu = mx.symbol.Activation(name='res5a_branch2a_relu', data=scale5a_branch2a, act_type='relu') + + if with_dconv: + res5a_branch2b_offset = mx.symbol.Convolution(name='res5a_branch2b_offset', data=res5a_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5a_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5a_branch2b', data=res5a_branch2a_relu, offset=res5a_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, stride=(1, 1), dilate=res5_dilate, no_bias=True) + else: + res5a_branch2b = mx.symbol.Convolution(name='res5a_branch2b', data=res5a_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + + bn5a_branch2b = mx.symbol.BatchNorm(name='bn5a_branch2b', data=res5a_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2b = bn5a_branch2b + res5a_branch2b_relu = mx.symbol.Activation(name='res5a_branch2b_relu', data=scale5a_branch2b, act_type='relu') + res5a_branch2c = mx.symbol.Convolution(name='res5a_branch2c', data=res5a_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5a_branch2c = mx.symbol.BatchNorm(name='bn5a_branch2c', data=res5a_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch2c = bn5a_branch2c + # res5a-shortcut + res5a_branch1 = mx.symbol.Convolution(name='res5a_branch1', data=res4b22_relu, num_filter=2048, 
pad=(0, 0), kernel=(1, 1), stride=res5_stride, no_bias=True) + bn5a_branch1 = mx.symbol.BatchNorm(name='bn5a_branch1', data=res5a_branch1, use_global_stats=True, fix_gamma=False, eps=eps) + scale5a_branch1 = bn5a_branch1 + res5a = mx.symbol.broadcast_add(name='res5a', *[scale5a_branch1, scale5a_branch2c]) + res5a_relu = mx.symbol.Activation(name='res5a_relu', data=res5a, act_type='relu') + + # res5b-bottleneck + res5b_branch2a = mx.symbol.Convolution(name='res5b_branch2a', data=res5a_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2a = mx.symbol.BatchNorm(name='bn5b_branch2a', data=res5b_branch2a, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2a = bn5b_branch2a + res5b_branch2a_relu = mx.symbol.Activation(name='res5b_branch2a_relu', data=scale5b_branch2a, act_type='relu') + if with_dconv: + res5b_branch2b_offset = mx.symbol.Convolution(name='res5b_branch2b_offset', data=res5b_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5b_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5b_branch2b', data=res5b_branch2a_relu, offset=res5b_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, dilate=res5_dilate, no_bias=True) + else: + res5b_branch2b = mx.symbol.Convolution(name='res5b_branch2b', data=res5b_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + bn5b_branch2b = mx.symbol.BatchNorm(name='bn5b_branch2b', data=res5b_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2b = bn5b_branch2b + res5b_branch2b_relu = mx.symbol.Activation(name='res5b_branch2b_relu', data=scale5b_branch2b, act_type='relu') + res5b_branch2c = mx.symbol.Convolution(name='res5b_branch2c', data=res5b_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5b_branch2c = mx.symbol.BatchNorm(name='bn5b_branch2c', data=res5b_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5b_branch2c = bn5b_branch2c + # res5b-shortcut + res5b = mx.symbol.broadcast_add(name='res5b', *[res5a_relu, scale5b_branch2c]) + res5b_relu = mx.symbol.Activation(name='res5b_relu', data=res5b, act_type='relu') + + # res5c-bottleneck + res5c_branch2a = mx.symbol.Convolution(name='res5c_branch2a', data=res5b_relu, num_filter=512, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2a = mx.symbol.BatchNorm(name='bn5c_branch2a', data=res5c_branch2a, use_global_stats=True, + fix_gamma=False, eps=eps) + scale5c_branch2a = bn5c_branch2a + res5c_branch2a_relu = mx.symbol.Activation(name='res5c_branch2a_relu', data=scale5c_branch2a, act_type='relu') + if with_dconv: + res5c_branch2b_offset = mx.symbol.Convolution(name='res5c_branch2b_offset', data=res5c_branch2a_relu, num_filter=72, pad=res5_dilate, kernel=(3, 3), dilate=res5_dilate) + res5c_branch2b = mx.contrib.symbol.DeformableConvolution(name='res5c_branch2b', data=res5c_branch2a_relu, offset=res5c_branch2b_offset, num_filter=512, + pad=res5_dilate, kernel=(3, 3), num_deformable_group=4, dilate=res5_dilate, no_bias=True) + else: + res5c_branch2b = mx.symbol.Convolution(name='res5c_branch2b', data=res5c_branch2a_relu, num_filter=512, pad=res5_dilate, + kernel=(3, 3), stride=(1, 1), dilate=res5_dilate, no_bias=True) + bn5c_branch2b = mx.symbol.BatchNorm(name='bn5c_branch2b', data=res5c_branch2b, use_global_stats=True, fix_gamma=False, eps=eps) + scale5c_branch2b = bn5c_branch2b + res5c_branch2b_relu = 
mx.symbol.Activation(name='res5c_branch2b_relu', data=scale5c_branch2b, act_type='relu') + res5c_branch2c = mx.symbol.Convolution(name='res5c_branch2c', data=res5c_branch2b_relu, num_filter=2048, pad=(0, 0), kernel=(1, 1), stride=(1, 1), no_bias=True) + bn5c_branch2c = mx.symbol.BatchNorm(name='bn5c_branch2c', data=res5c_branch2c, use_global_stats=True, fix_gamma=False, eps=eps) + scale5c_branch2c = bn5c_branch2c + # res5c-shortcut + res5c = mx.symbol.broadcast_add(name='res5c', *[res5b_relu, scale5c_branch2c]) + res5c_relu = mx.symbol.Activation(name='res5c_relu', data=res5c, act_type='relu') + + return res2c_relu, res3b3_relu, res4b22_relu, res5c_relu + + def get_fpn_feature(self, c2, c3, c4, c5, feature_dim=256): + + # lateral connection + fpn_p5_1x1 = mx.symbol.Convolution(data=c5, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p5_1x1') + fpn_p4_1x1 = mx.symbol.Convolution(data=c4, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p4_1x1') + fpn_p3_1x1 = mx.symbol.Convolution(data=c3, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p3_1x1') + fpn_p2_1x1 = mx.symbol.Convolution(data=c2, kernel=(1, 1), pad=(0, 0), stride=(1, 1), num_filter=feature_dim, name='fpn_p2_1x1') + # top-down connection + fpn_p5_upsample = mx.symbol.UpSampling(fpn_p5_1x1, scale=2, sample_type='nearest', name='fpn_p5_upsample') + fpn_p4_plus = mx.sym.ElementWiseSum(*[fpn_p5_upsample, fpn_p4_1x1], name='fpn_p4_sum') + fpn_p4_upsample = mx.symbol.UpSampling(fpn_p4_plus, scale=2, sample_type='nearest', name='fpn_p4_upsample') + fpn_p3_plus = mx.sym.ElementWiseSum(*[fpn_p4_upsample, fpn_p3_1x1], name='fpn_p3_sum') + fpn_p3_upsample = mx.symbol.UpSampling(fpn_p3_plus, scale=2, sample_type='nearest', name='fpn_p3_upsample') + fpn_p2_plus = mx.sym.ElementWiseSum(*[fpn_p3_upsample, fpn_p2_1x1], name='fpn_p2_sum') + # FPN feature + fpn_p6 = mx.sym.Convolution(data=c5, kernel=(3, 3), pad=(1, 1), stride=(2, 2), num_filter=feature_dim, name='fpn_p6') + fpn_p5 = mx.symbol.Convolution(data=fpn_p5_1x1, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p5') + fpn_p4 = mx.symbol.Convolution(data=fpn_p4_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p4') + fpn_p3 = mx.symbol.Convolution(data=fpn_p3_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p3') + fpn_p2 = mx.symbol.Convolution(data=fpn_p2_plus, kernel=(3, 3), pad=(1, 1), stride=(1, 1), num_filter=feature_dim, name='fpn_p2') + + return fpn_p2, fpn_p3, fpn_p4, fpn_p5, fpn_p6 + + def get_light_head(self, data, suffix): + # mid_num_filter=256 + mid_num_filter=64 # for s + conv_new_1 = mx.sym.Convolution(data=data, kernel=(15, 1), pad=(7, 0), num_filter=mid_num_filter, name="conv_new_1" + suffix, + weight=self.shared_param_dict['conv_new_1_weight'], bias=self.shared_param_dict['conv_new_1_bias'], lr_mult=3.0) + + relu_new_1 = mx.sym.Activation(data=conv_new_1, act_type='relu', name='relu1' + suffix) + conv_new_2 = mx.sym.Convolution(data=relu_new_1, kernel=(1, 15), pad=(0, 7), num_filter=10 * 7 * 7, name="conv_new_2" + suffix, + weight=self.shared_param_dict['conv_new_2_weight'], bias=self.shared_param_dict['conv_new_2_bias'], + lr_mult=3.0) + relu_new_2 = mx.sym.Activation(data=conv_new_2, act_type='relu', name='relu2' + suffix) + conv_new_3 = mx.sym.Convolution(data=data, kernel=(1, 15), pad=(0, 7), num_filter=mid_num_filter, name="conv_new_3" + suffix, + 
weight=self.shared_param_dict['conv_new_3_weight'], bias=self.shared_param_dict['conv_new_3_bias'], + lr_mult=3.0) + relu_new_3 = mx.sym.Activation(data=conv_new_3, act_type='relu', name='relu3' + suffix) + conv_new_4 = mx.sym.Convolution(data=relu_new_3, kernel=(15, 1), pad=(7, 0), num_filter=10 * 7 * 7, name="conv_new_4" + suffix, + weight=self.shared_param_dict['conv_new_4_weight'], bias=self.shared_param_dict['conv_new_4_bias'], + lr_mult=3.0) + relu_new_4 = mx.sym.Activation(data=conv_new_4, act_type='relu', name='relu4' + suffix) + light_head = mx.symbol.broadcast_add(name='light_head' + suffix, *[relu_new_2, relu_new_4]) + return light_head + def get_rpn_subnet(self, data, num_anchors, suffix): + rpn_conv = mx.sym.Convolution(data=data, kernel=(3, 3), pad=(1, 1), num_filter=512, name='rpn_conv_' + suffix, + weight=self.shared_param_dict['rpn_conv_weight'], bias=self.shared_param_dict['rpn_conv_bias']) + rpn_relu = mx.sym.Activation(data=rpn_conv, act_type='relu', name='rpn_relu_' + suffix) + rpn_cls_score = mx.sym.Convolution(data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=2 * num_anchors, name='rpn_cls_score_' + suffix, + weight=self.shared_param_dict['rpn_cls_score_weight'], bias=self.shared_param_dict['rpn_cls_score_bias']) + rpn_bbox_pred = mx.sym.Convolution(data=rpn_relu, kernel=(1, 1), pad=(0, 0), num_filter=4 * num_anchors, name='rpn_bbox_pred_' + suffix, + weight=self.shared_param_dict['rpn_bbox_pred_weight'], bias=self.shared_param_dict['rpn_bbox_pred_bias']) + + # n x (2*A) x H x W => n x 2 x (A*H*W) + rpn_cls_score_t1 = mx.sym.Reshape(data=rpn_cls_score, shape=(0, 2, -1, 0), name='rpn_cls_score_t1_' + suffix) + rpn_cls_score_t2 = mx.sym.Reshape(data=rpn_cls_score_t1, shape=(0, 2, -1), name='rpn_cls_score_t2_' + suffix) + rpn_cls_prob = mx.sym.SoftmaxActivation(data=rpn_cls_score_t1, mode='channel', name='rpn_cls_prob_' + suffix) + rpn_cls_prob_t = mx.sym.Reshape(data=rpn_cls_prob, shape=(0, 2 * num_anchors, -1, 0), name='rpn_cls_prob_t_' + suffix) + rpn_bbox_pred_t = mx.sym.Reshape(data=rpn_bbox_pred, shape=(0, 0, -1), name='rpn_bbox_pred_t_' + suffix) + return rpn_cls_score_t2, rpn_cls_prob_t, rpn_bbox_pred_t, rpn_bbox_pred + + def get_symbol(self, cfg, is_train=True): + + # config alias for convenient + num_classes = cfg.dataset.NUM_CLASSES + num_reg_classes = (2 if cfg.CLASS_AGNOSTIC else num_classes) + Rroi_num_reg_classes = (2 if cfg.network.RRoI_CLASS_AGNOSTIC else num_classes) + data = mx.sym.Variable(name="data") + im_info = mx.sym.Variable(name="im_info") + + # shared convolutional layers + res2, res3, res4, res5 = self.get_resnet_backbone(data) + fpn_p2, fpn_p3, fpn_p4, fpn_p5, fpn_p6 = self.get_fpn_feature(res2, res3, res4, res5) + + # large separable + fpn_p2_new = self.get_light_head(fpn_p2, 'p2') + fpn_p3_new = self.get_light_head(fpn_p3, 'p3') + fpn_p4_new = self.get_light_head(fpn_p4, 'p4') + fpn_p5_new = self.get_light_head(fpn_p5, 'p5') + + rpn_cls_score_p2, rpn_prob_p2, rpn_bbox_loss_p2, rpn_bbox_pred_p2 = self.get_rpn_subnet(fpn_p2, cfg.network.NUM_ANCHORS, 'p2') + rpn_cls_score_p3, rpn_prob_p3, rpn_bbox_loss_p3, rpn_bbox_pred_p3 = self.get_rpn_subnet(fpn_p3, cfg.network.NUM_ANCHORS, 'p3') + rpn_cls_score_p4, rpn_prob_p4, rpn_bbox_loss_p4, rpn_bbox_pred_p4 = self.get_rpn_subnet(fpn_p4, cfg.network.NUM_ANCHORS, 'p4') + rpn_cls_score_p5, rpn_prob_p5, rpn_bbox_loss_p5, rpn_bbox_pred_p5 = self.get_rpn_subnet(fpn_p5, cfg.network.NUM_ANCHORS, 'p5') + rpn_cls_score_p6, rpn_prob_p6, rpn_bbox_loss_p6, rpn_bbox_pred_p6 = self.get_rpn_subnet(fpn_p6, 
cfg.network.NUM_ANCHORS, 'p6') + + rpn_cls_prob_dict = { + 'rpn_cls_prob_stride64': rpn_prob_p6, + 'rpn_cls_prob_stride32': rpn_prob_p5, + 'rpn_cls_prob_stride16': rpn_prob_p4, + 'rpn_cls_prob_stride8': rpn_prob_p3, + 'rpn_cls_prob_stride4': rpn_prob_p2, + } + rpn_bbox_pred_dict = { + 'rpn_bbox_pred_stride64': rpn_bbox_pred_p6, + 'rpn_bbox_pred_stride32': rpn_bbox_pred_p5, + 'rpn_bbox_pred_stride16': rpn_bbox_pred_p4, + 'rpn_bbox_pred_stride8': rpn_bbox_pred_p3, + 'rpn_bbox_pred_stride4': rpn_bbox_pred_p2, + } + arg_dict = dict(rpn_cls_prob_dict.items() + rpn_bbox_pred_dict.items()) + + if is_train: + rpn_label = mx.sym.Variable(name='label') + rpn_bbox_target = mx.sym.Variable(name='bbox_target') + rpn_bbox_weight = mx.sym.Variable(name='bbox_weight') + gt_boxes = mx.sym.Variable(name="gt_boxes") + + rpn_cls_score = mx.sym.Concat(rpn_cls_score_p2, rpn_cls_score_p3, rpn_cls_score_p4, rpn_cls_score_p5, rpn_cls_score_p6, dim=2) + rpn_bbox_loss = mx.sym.Concat(rpn_bbox_loss_p2, rpn_bbox_loss_p3, rpn_bbox_loss_p4, rpn_bbox_loss_p5, rpn_bbox_loss_p6, dim=2) + # RPN classification loss + rpn_cls_output = mx.sym.SoftmaxOutput(data=rpn_cls_score, label=rpn_label, multi_output=True, normalization='valid', + use_ignore=True, ignore_label=-1, name='rpn_cls_prob') + # bounding box regression + rpn_bbox_loss = rpn_bbox_weight * mx.sym.smooth_l1(name='rpn_bbox_loss_l1', scalar=3.0, data=(rpn_bbox_loss - rpn_bbox_target)) + rpn_bbox_loss = mx.sym.MakeLoss(name='rpn_bbox_loss', data=rpn_bbox_loss, grad_scale=1.0 / cfg.TRAIN.RPN_BATCH_SIZE) + + aux_dict = { + 'op_type': 'pyramid_proposal', 'name': 'rois', + 'im_info': im_info, 'feat_stride': tuple(cfg.network.RPN_FEAT_STRIDE), + 'scales': tuple(cfg.network.ANCHOR_SCALES), 'ratios': tuple(cfg.network.ANCHOR_RATIOS), + 'rpn_pre_nms_top_n': cfg.TRAIN.RPN_PRE_NMS_TOP_N, 'rpn_post_nms_top_n': cfg.TRAIN.RPN_POST_NMS_TOP_N, + 'threshold': cfg.TRAIN.RPN_NMS_THRESH, 'rpn_min_size': cfg.TRAIN.RPN_MIN_SIZE + } + + # ROI proposal + rois = mx.sym.Custom(**dict(arg_dict.items() + aux_dict.items())) + # ROI proposal target + gt_boxes_reshape = mx.sym.Reshape(data=gt_boxes, shape=(-1, 9), name='gt_boxes_reshape') + rois, label, bbox_target, bbox_weight \ + = mx.sym.Custom(rois=rois, gt_boxes=gt_boxes_reshape, op_type='proposal_target_rotbox', num_classes=num_reg_classes, batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.BATCH_ROIS, cfg=cPickle.dumps(cfg), fg_class_agnostic=True, fg_fraction=cfg.TRAIN.FG_FRACTION) + else: + aux_dict = { + 'op_type': 'pyramid_proposal', 'name': 'rois', + 'im_info': im_info, 'feat_stride': tuple(cfg.network.RPN_FEAT_STRIDE), + 'scales': tuple(cfg.network.ANCHOR_SCALES), 'ratios': tuple(cfg.network.ANCHOR_RATIOS), + 'rpn_pre_nms_top_n': cfg.TEST.RPN_PRE_NMS_TOP_N, 'rpn_post_nms_top_n': cfg.TEST.RPN_POST_NMS_TOP_N, + 'threshold': cfg.TEST.RPN_NMS_THRESH, 'rpn_min_size': cfg.TEST.RPN_MIN_SIZE + } + # ROI proposal + rois = mx.sym.Custom(**dict(arg_dict.items() + aux_dict.items())) + + roi_pool = mx.symbol.Custom(data_p2=fpn_p2_new, data_p3=fpn_p3_new, data_p4=fpn_p4_new, data_p5=fpn_p5_new, + rois=rois, op_type='fpn_psroi_pooling_v2', name='fpn_psroi_pooling_v2', pooling_mode='alignave') + + # cls_score/bbox_pred + cls_score = mx.symbol.FullyConnected(name='cls_score', data=roi_pool, num_hidden=num_reg_classes) + bbox_pred = mx.symbol.FullyConnected(name='bbox_pred', data=roi_pool, num_hidden=num_reg_classes * 5) + + #------------------------------------------------------------------------------------------------------------------ + # 
TODO: 1. use output from softmaxcativation for score + # 2. carefully set shape for bbox_pred, cls_prob + cls_prob = mx.sym.SoftmaxActivation(name='cls_prob', data=cls_score) + cls_prob = mx.sym.Reshape(data=cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_reg_classes), name='cls_prob_reshape') + + if is_train: + if cfg.TRAIN.ENABLE_OHEM: + labels_ohem, bbox_weights_ohem = mx.sym.Custom(name='ohem', op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=num_reg_classes, roi_per_img=cfg.TRAIN.BATCH_ROIS_OHEM, + cls_score=cls_score, bbox_pred=bbox_pred, labels=label, + bbox_targets=bbox_target, bbox_weights=bbox_weight) + cls_prob_loss = mx.sym.SoftmaxOutput(name='cls_prob_loss', data=cls_score, label=labels_ohem, normalization='valid', use_ignore=True, ignore_label=-1) + bbox_loss_ = bbox_weights_ohem * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS_OHEM) + rcnn_label = labels_ohem + else: + cls_prob_loss = mx.sym.SoftmaxOutput(name='cls_prob_loss', data=cls_score, label=label, normalization='valid') + bbox_loss_ = bbox_weight * mx.sym.smooth_l1(name='bbox_loss_', scalar=1.0, data=(bbox_pred - bbox_target)) + bbox_loss = mx.sym.MakeLoss(name='bbox_loss', data=bbox_loss_, grad_scale=1.0 / cfg.TRAIN.BATCH_ROIS) + rcnn_label = label + + # reshape output + rcnn_label = mx.sym.Reshape(data=rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='label_reshape') + cls_prob_loss = mx.sym.Reshape(data=cls_prob_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_reg_classes), name='cls_prob_loss_reshape') + bbox_loss = mx.sym.Reshape(data=bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * num_reg_classes), name='bbox_loss_reshape') + + # ----------------------------------------------------------------------------------------------------------------------------------- + # + # shape of bbox_pred (n, 2 * 5), shape of cls_prob (n, 2) + Rroi_arg_dict = {'rois': rois, 'bbox_pred': bbox_pred, 'cls_prob': cls_prob} + if is_train: + Rroi_aux_dict = { + 'op_type': 'RRoIDecoder', 'name': 'Rrois','im_info': im_info, + 'Rroi_pre_nms_top_n': cfg.TRAIN.RRoI_PRE_NMS_TOP_N, + 'Rroi_post_nms_top_n': cfg.TRAIN.RRoI_POST_NMS_TOP_N, + 'threshold': cfg.TRAIN.RRoI_NMS_THRESH, 'min_size': cfg.TRAIN.RRoI_MIN_SIZE, 'cfg': cPickle.dumps(cfg) + } + Rrois, Rrois_elarge = mx.symbol.Custom(**dict(Rroi_arg_dict.items() + Rroi_aux_dict.items())) + + # rotated proposal target + Rrois, Rrois_elarge_gt_ag, Rroi_label, Rroi_bbox_target, Rroi_bbox_weight = mx.symbol.Custom(Rrois=Rrois, gt_boxes=gt_boxes_reshape, op_type='RRoI_target_rotbox_v2', + num_classes=Rroi_num_reg_classes, batch_images=cfg.TRAIN.BATCH_IMAGES, + batch_rois=cfg.TRAIN.RRoI_BATCH_ROIS, cfg=cPickle.dumps(cfg), + fg_fraction=cfg.TRAIN.RRoI_FG_FRACTION) + else: + Rroi_aux_dict = { + 'op_type': 'RRoIDecoder', 'name': 'Rrois','im_info': im_info, + 'Rroi_pre_nms_top_n': cfg.TEST.RRoI_PRE_NMS_TOP_N, + 'Rroi_post_nms_top_n': cfg.TEST.RRoI_POST_NMS_TOP_N, + 'threshold': cfg.TEST.RRoI_NMS_THRESH, 'min_size': cfg.TEST.RRoI_MIN_SIZE, 'cfg': cPickle.dumps(cfg) + } + Rrois, Rrois_elarge = mx.symbol.Custom(**dict(Rroi_arg_dict.items() + Rroi_aux_dict.items())) + if is_train: + rotated_pool = mx.symbol.Custom(data_p2=fpn_p2_new, data_p3=fpn_p3_new, data_p4=fpn_p4_new, data_p5=fpn_p5_new, + Rrois=Rrois_elarge_gt_ag, op_type='fpn_psroi_rotatedalign', name='fpn_psroi_rotatedalign') + else: + rotated_pool = mx.symbol.Custom(data_p2=fpn_p2_new, 
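+                                        # NOTE (assumed behaviour): 'fpn_psroi_rotatedalign' pools a
+                                        # fixed-size feature from each rotated RoI.  It receives all
+                                        # four pyramid levels (data_p2 .. data_p5) and selects a level
+                                        # per RoI internally; FPN implementations commonly use
+                                        #     k = floor(k0 + log2(sqrt(w * h) / 224)), k0 = 4,
+                                        # clipped to [2, 5], but the exact rule lives in the custom op.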
data_p3=fpn_p3_new, data_p4=fpn_p4_new, data_p5=fpn_p5_new, + Rrois=Rrois_elarge, op_type='fpn_psroi_rotatedalign', name='fpn_psroi_rotatedalign') + + + fc_new_3 = mx.symbol.FullyConnected(name='fc_new_3', data=rotated_pool, num_hidden=2048) + fc_new_3_relu = mx.sym.Activation(data=fc_new_3, act_type='relu', name='fc_new_3_relu') + + # print 'num_classes: ', num_classes + # pdb.set_trace() + # 2 fc + # cls_score/bbox_pred + Rroi_cls_score = mx.symbol.FullyConnected(name='Rroi_cls_score', data=fc_new_3_relu, num_hidden=num_classes) + Rroi_bbox_pred = mx.symbol.FullyConnected(name='Rroi_bbox_pred', data=fc_new_3_relu, num_hidden=Rroi_num_reg_classes * 5) + + if is_train: + if cfg.TRAIN.RRoI_ENABLE_OHEM: + # turn off ohem for current design + Rroi_labels_ohem, Rroi_bbox_weights_ohem = mx.sym.Custom(name='Rroi_ohem', op_type='BoxAnnotatorOHEM', num_classes=num_classes, + num_reg_classes=Rroi_num_reg_classes, roi_per_img=cfg.TRAIN.RRoI_BATCH_ROIS_OHEM, + cls_score=Rroi_cls_score, bbox_pred=Rroi_bbox_pred, labels=Rroi_label, + bbox_targets=Rroi_bbox_target, bbox_weights=Rroi_bbox_weight) + Rroi_cls_prob = mx.sym.SoftmaxOutput(name='Rroi_cls_prob', data=Rroi_cls_score, label=Rroi_labels_ohem, normalization='valid', use_ignore=True, ignore_label=-1) + Rroi_bbox_loss_ = Rroi_bbox_weights_ohem * mx.sym.smooth_l1(name='Rroi_bbox_loss_', scalar=1.0, data=(Rroi_bbox_pred - Rroi_bbox_target)) + Rroi_bbox_loss = mx.sym.MakeLoss(name='Rroi_bbox_loss', data=Rroi_bbox_loss_, grad_scale=1.0 / cfg.TRAIN.RRoI_BATCH_ROIS_OHEM) + Rroi_rcnn_label = Rroi_labels_ohem + else: + Rroi_cls_prob = mx.sym.SoftmaxOutput(name='Rroi_cls_prob', data=Rroi_cls_score, label=Rroi_label, normalization='valid') + Rroi_bbox_loss_ = Rroi_bbox_weight * mx.sym.smooth_l1(name='Rroi_bbox_loss_', scalar=1.0, data=(Rroi_bbox_pred - Rroi_bbox_target)) + Rroi_bbox_loss = mx.sym.MakeLoss(name='Rroi_bbox_loss', data=Rroi_bbox_loss_, grad_scale=1.0 / cfg.TRAIN.RRoI_BATCH_ROIS) + Rroi_rcnn_label = Rroi_label + + # reshape output + Rroi_rcnn_label = mx.sym.Reshape(data=Rroi_rcnn_label, shape=(cfg.TRAIN.BATCH_IMAGES, -1), name='Rroi_label_reshape') + Rroi_cls_prob = mx.sym.Reshape(data=Rroi_cls_prob, shape=(cfg.TRAIN.BATCH_IMAGES, -1, num_classes), name='Rroi_cls_prob_reshape') + Rroi_bbox_loss = mx.sym.Reshape(data=Rroi_bbox_loss, shape=(cfg.TRAIN.BATCH_IMAGES, -1, 5 * Rroi_num_reg_classes), name='Rroi_bbox_loss_reshape') + # group = mx.sym.Group([rpn_cls_output, rpn_bbox_loss, mx.sym.BlockGrad(cls_prob), mx.sym.BlockGrad(bbox_loss), mx.sym.BlockGrad(rcnn_label)]) + group = mx.sym.Group([rpn_cls_output, rpn_bbox_loss, cls_prob_loss, bbox_loss, mx.sym.BlockGrad(rcnn_label), Rroi_cls_prob, Rroi_bbox_loss, mx.sym.BlockGrad(Rroi_rcnn_label)]) + else: + Rroi_cls_prob = mx.sym.SoftmaxActivation(name='Rroi_cls_prob', data=Rroi_cls_score) + Rroi_cls_prob = mx.sym.Reshape(data=Rroi_cls_prob, shape=(cfg.TEST.BATCH_IMAGES, -1, num_classes), name='Rroi_cls_prob_reshape') + Rroi_bbox_pred = mx.sym.Reshape(data=Rroi_bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * Rroi_num_reg_classes), name='Rroi_bbox_pred_reshape') + + if DEBUG: + bbox_pred = mx.sym.Reshape(data=bbox_pred, shape=(cfg.TEST.BATCH_IMAGES, -1, 5 * num_reg_classes), + name='bbox_pred_reshape') + group = mx.sym.Group([rois, cls_prob, bbox_pred]) + else: + group = mx.sym.Group([Rrois, Rroi_cls_prob, Rroi_bbox_pred]) + + self.sym = group + return group + + def init_weight_rcnn(self, cfg, arg_params, aux_params): + # arg_params['fc_new_1_weight'] = mx.random.normal(0, 0.01, 
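+        # NOTE: only the newly added heads are (re)initialised in this method,
+        # with weights ~ Normal(0, 0.01) and zero biases; every other parameter
+        # is expected to come from the pretrained checkpoint, loaded in the
+        # training scripts roughly as:
+        #     arg_params, aux_params = load_param(pretrained, epoch, convert=True)
+        #     sym_instance.init_weight(config, arg_params, aux_params)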
shape=self.arg_shape_dict['fc_new_1_weight']) + # arg_params['fc_new_1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_1_bias']) + # arg_params['fc_new_2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_2_weight']) + # arg_params['fc_new_2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_2_bias']) + arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['cls_score_weight']) + arg_params['cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['cls_score_bias']) + arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['bbox_pred_weight']) + arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['bbox_pred_bias']) + + # Rroi params + arg_params['fc_new_3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fc_new_3_weight']) + arg_params['fc_new_3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fc_new_3_bias']) + arg_params['Rroi_cls_score_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['Rroi_cls_score_weight']) + arg_params['Rroi_cls_score_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['Rroi_cls_score_bias']) + arg_params['Rroi_bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['Rroi_bbox_pred_weight']) + arg_params['Rroi_bbox_pred_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['Rroi_bbox_pred_bias']) + + def init_weight_fpn(self, cfg, arg_params, aux_params): + arg_params['fpn_p6_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p6_weight']) + arg_params['fpn_p6_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p6_bias']) + arg_params['fpn_p5_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p5_weight']) + arg_params['fpn_p5_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p5_bias']) + arg_params['fpn_p4_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p4_weight']) + arg_params['fpn_p4_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p4_bias']) + arg_params['fpn_p3_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p3_weight']) + arg_params['fpn_p3_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p3_bias']) + arg_params['fpn_p2_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p2_weight']) + arg_params['fpn_p2_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p2_bias']) + + arg_params['fpn_p5_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p5_1x1_weight']) + arg_params['fpn_p5_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p5_1x1_bias']) + arg_params['fpn_p4_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p4_1x1_weight']) + arg_params['fpn_p4_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p4_1x1_bias']) + arg_params['fpn_p3_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p3_1x1_weight']) + arg_params['fpn_p3_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p3_1x1_bias']) + arg_params['fpn_p2_1x1_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict['fpn_p2_1x1_weight']) + arg_params['fpn_p2_1x1_bias'] = mx.nd.zeros(shape=self.arg_shape_dict['fpn_p2_1x1_bias']) + + def init_weight(self, cfg, arg_params, aux_params): + for name in self.shared_param_list: + arg_params[name + '_weight'] = mx.random.normal(0, 0.01, shape=self.arg_shape_dict[name + '_weight']) + arg_params[name + '_bias'] = mx.nd.zeros(shape=self.arg_shape_dict[name + '_bias']) + self.init_weight_rcnn(cfg, arg_params, aux_params) + self.init_weight_fpn(cfg, arg_params, 
aux_params) diff --git a/fpn/test.py b/fpn/test.py new file mode 100644 index 0000000..63ebdb9 --- /dev/null +++ b/fpn/test.py @@ -0,0 +1,60 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import os +import sys +from config.config import config, update_config + + +def parse_args(): + parser = argparse.ArgumentParser(description='Test a Faster R-CNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + update_config(args.cfg) + + # rcnn + parser.add_argument('--vis', help='turn on visualization', action='store_true') + parser.add_argument('--ignore_cache', help='ignore cached results boxes', action='store_true') + parser.add_argument('--thresh', help='valid detection threshold', default=1e-3, type=float) + parser.add_argument('--shuffle', help='shuffle data on visualization', action='store_true') + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import mxnet as mx +from function.test_rcnn import test_rcnn +from utils.create_logger import create_logger + + +def main(): + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + print args + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.test_image_set) + # test_rcnn(config, config.dataset.dataset, config.dataset.test_image_set, config.dataset.root_path, config.dataset.dataset_path, + # ctx, os.path.join(final_output_path, '..', '_'.join([iset for iset in config.dataset.image_set.split('+')]), config.TRAIN.model_prefix), config.TEST.test_epoch, + # args.vis, args.ignore_cache, args.shuffle, config.TEST.HAS_RPN, config.dataset.proposal, args.thresh, logger=logger, output_path=final_output_path) + test_rcnn(config, config.dataset.dataset, config.dataset.test_image_set, config.dataset.root_path, config.dataset.dataset_path, + ctx, os.path.join(final_output_path, '..', '_'.join([iset for iset in config.dataset.image_set.split('+')]), config.TRAIN.model_prefix), config.TEST.test_epoch, + args.vis, True, args.shuffle, config.TEST.HAS_RPN, config.dataset.proposal, args.thresh, logger=logger, output_path=final_output_path) + +if __name__ == '__main__': + main() diff --git a/fpn/test_poly.py b/fpn/test_poly.py new file mode 100644 index 0000000..b91b80e --- /dev/null +++ b/fpn/test_poly.py @@ -0,0 +1,61 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import os +import sys +from config.config import 
config, update_config + + +def parse_args(): + parser = argparse.ArgumentParser(description='Test a Faster R-CNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + update_config(args.cfg) + + # rcnn + parser.add_argument('--vis', help='turn on visualization', action='store_true') + parser.add_argument('--ignore_cache', help='ignore cached results boxes', action='store_true') + parser.add_argument('--thresh', help='valid detection threshold', default=1e-3, type=float) + parser.add_argument('--shuffle', help='shuffle data on visualization', action='store_true') + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import mxnet as mx +from function.test_rcnn import test_rcnn +from function.test_rcnn_poly import test_rcnn_poly +from utils.create_logger import create_logger + + +def main(): + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + print args + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.test_image_set) + # test_rcnn(config, config.dataset.dataset, config.dataset.test_image_set, config.dataset.root_path, config.dataset.dataset_path, + # ctx, os.path.join(final_output_path, '..', '_'.join([iset for iset in config.dataset.image_set.split('+')]), config.TRAIN.model_prefix), config.TEST.test_epoch, + # args.vis, args.ignore_cache, args.shuffle, config.TEST.HAS_RPN, config.dataset.proposal, args.thresh, logger=logger, output_path=final_output_path) + test_rcnn_poly(config, config.dataset.dataset, config.dataset.test_image_set, config.dataset.root_path, config.dataset.dataset_path, + ctx, os.path.join(final_output_path, '..', '_'.join([iset for iset in config.dataset.image_set.split('+')]), config.TRAIN.model_prefix), config.TEST.test_epoch, + args.vis, args.ignore_cache, args.shuffle, config.TEST.HAS_RPN, config.dataset.proposal, args.thresh, logger=logger, output_path=final_output_path) + +if __name__ == '__main__': + main() diff --git a/fpn/train_end2end.py b/fpn/train_end2end.py new file mode 100644 index 0000000..36e35f5 --- /dev/null +++ b/fpn/train_end2end.py @@ -0,0 +1,183 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import pprint +import os +import sys +from config.config import config, update_config + + +def parse_args(): + + parser = argparse.ArgumentParser(description='Train Faster-RCNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + # update config + update_config(args.cfg) + + # training + parser.add_argument('--frequent', help='frequency of logging', default=config.default.frequent, type=int) + args = parser.parse_args() + return args + +print 'before parse args' +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, 
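+# NOTE: the config is parsed before `import mxnet` on purpose, so that this
+# sys.path entry can point at the MXNet build selected by config.MXNET_VERSION
+# under ./external/mxnet/.  A typical invocation (the cfg path is illustrative
+# only) would be:
+#     python fpn/test_poly.py --cfg experiments/<your_experiment>/cfgs/<your_cfg>.yaml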
os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import shutil +import numpy as np +import mxnet as mx + +from symbols import * +from core.loader import PyramidAnchorIterator +from core import callback, metric +from core.module import MutableModule +from utils.create_logger import create_logger +from utils.load_data import load_gt_roidb, merge_roidb, filter_roidb +from utils.load_model import load_param +from utils.PrefetchingIter import PrefetchingIter +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_net(args, ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, lr, lr_step): + mx.random.seed(3) + np.random.seed(3) + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.image_set) + prefix = os.path.join(final_output_path, prefix) + + # load symbol + shutil.copy2(os.path.join(curr_path, 'symbols', config.symbol + '.py'), final_output_path) + sym_instance = eval(config.symbol + '.' + config.symbol)() + sym = sym_instance.get_symbol(config, is_train=True) + + feat_pyramid_level = np.log2(config.network.RPN_FEAT_STRIDE).astype(int) + feat_sym = [sym.get_internals()['rpn_cls_score_p' + str(x) + '_output'] for x in feat_pyramid_level] + print('load symbol END') + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size + + # print config + pprint.pprint(config) + logger.info('training config:{}\n'.format(pprint.pformat(config))) + + # load dataset and prepare imdb for training + print('Start load dataset and prepare imdb for training') + image_sets = [iset for iset in config.dataset.image_set.split('+')] + roidbs = [load_gt_roidb(config.dataset.dataset, image_set, config.dataset.root_path, config.dataset.dataset_path, + flip=config.TRAIN.FLIP) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, config) + print('Start load training data') + # load training data + + train_data = PyramidAnchorIterator( feat_sym, roidb, config, batch_size=input_batch_size, shuffle=config.TRAIN.SHUFFLE, + ctx=ctx, feat_strides=config.network.RPN_FEAT_STRIDE, anchor_scales=config.network.ANCHOR_SCALES, + anchor_ratios=config.network.ANCHOR_RATIOS, aspect_grouping=config.TRAIN.ASPECT_GROUPING, + allowed_border=np.inf) + + # infer max shape + max_data_shape = [('data', (config.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))] + max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape) + max_data_shape.append(('gt_boxes', (config.TRAIN.BATCH_IMAGES, 100, 5))) + print 'providing maximum shape', max_data_shape, max_label_shape + + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + pprint.pprint(data_shape_dict) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if config.TRAIN.RESUME: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight(config, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # create solver + fixed_param_prefix = config.network.FIXED_PARAMS + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, 
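+                        # NOTE: MutableModule is sized by the max_data_shapes /
+                        # max_label_shapes inferred above (data padded to the
+                        # largest entry of config.SCALES, gt_boxes padded to
+                        # (BATCH_IMAGES, 100, 5)), so batches of varying image
+                        # size fit into the pre-allocated buffers.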
context=ctx, max_data_shapes=[max_data_shape for _ in range(batch_size)], + max_label_shapes=[max_label_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if config.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + # decide training params + # metric + rpn_eval_metric = metric.RPNAccMetric() + rpn_cls_metric = metric.RPNLogLossMetric() + rpn_bbox_metric = metric.RPNL1LossMetric() + rpn_fg_metric = metric.RPNFGFraction(config) + eval_metric = metric.RCNNAccMetric(config) + eval_fg_metric = metric.RCNNFGAccuracy(config) + cls_metric = metric.RCNNLogLossMetric(config) + bbox_metric = metric.RCNNL1LossMetric(config) + eval_metrics = mx.metric.CompositeEvalMetric() + # rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, eval_metric, cls_metric, bbox_metric + for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, rpn_fg_metric, eval_fg_metric, eval_metric, cls_metric, bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=args.frequent) + means = np.tile(np.array(config.TRAIN.BBOX_MEANS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + stds = np.tile(np.array(config.TRAIN.BBOX_STDS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), callback.do_checkpoint(prefix, means, stds)] + + # decide learning rate + base_lr = lr + lr_factor = config.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, config.TRAIN.warmup, config.TRAIN.warmup_lr, config.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': config.TRAIN.momentum, + 'wd': config.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'clip_gradient': None} + # + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + # train + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=config.default.kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + + +def main(): + print('Called with argument:', args) + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + train_net( args, ctx, config.network.pretrained, config.network.pretrained_epoch, config.TRAIN.model_prefix, + config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step) + +if __name__ == '__main__': + main() diff --git a/fpn/train_end2end_poly.py b/fpn/train_end2end_poly.py new file mode 100644 index 0000000..b0f320d --- /dev/null +++ b/fpn/train_end2end_poly.py @@ -0,0 +1,182 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License 
+# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import pprint +import os +import sys +from config.config import config, update_config + + +def parse_args(): + parser = argparse.ArgumentParser(description='Train Faster-RCNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + # update config + update_config(args.cfg) + + # training + parser.add_argument('--frequent', help='frequency of logging', default=config.default.frequent, type=int) + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import shutil +import numpy as np +import mxnet as mx + +from symbols import * +from core.loader import PyramidAnchorIterator_poly +from core import callback, metric +from core.module import MutableModule +from utils.create_logger import create_logger +from utils.load_data import load_gt_roidb, merge_roidb, filter_roidb, load_gt_roidb_poly +from utils.load_model import load_param +from utils.PrefetchingIter import PrefetchingIter +from utils.lr_scheduler import WarmupMultiFactorScheduler + + +def train_net(args, ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, lr, lr_step): + mx.random.seed(3) + np.random.seed(3) + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.image_set) + prefix = os.path.join(final_output_path, prefix) + + # load symbol + shutil.copy2(os.path.join(curr_path, 'symbols', config.symbol + '.py'), final_output_path) + sym_instance = eval(config.symbol + '.' 
+ config.symbol)() + sym = sym_instance.get_symbol(config, is_train=True) + + feat_pyramid_level = np.log2(config.network.RPN_FEAT_STRIDE).astype(int) + feat_sym = [sym.get_internals()['rpn_cls_score_p' + str(x) + '_output'] for x in feat_pyramid_level] + print('load symbol END') + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size + + # print config + pprint.pprint(config) + logger.info('training config:{}\n'.format(pprint.pformat(config))) + + # load dataset and prepare imdb for training + print('Start load dataset and prepare imdb for training') + image_sets = [iset for iset in config.dataset.image_set.split('+')] + roidbs = [load_gt_roidb_poly(config.dataset.dataset, image_set, config.dataset.root_path, config.dataset.dataset_path, + flip=config.TRAIN.FLIP) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, config) + print('Start load training data') + # load training data + + train_data = PyramidAnchorIterator_poly( feat_sym, roidb, config, batch_size=input_batch_size, shuffle=config.TRAIN.SHUFFLE, + ctx=ctx, feat_strides=config.network.RPN_FEAT_STRIDE, anchor_scales=config.network.ANCHOR_SCALES, + anchor_ratios=config.network.ANCHOR_RATIOS, aspect_grouping=config.TRAIN.ASPECT_GROUPING, + allowed_border=np.inf) + + # infer max shape + max_data_shape = [('data', (config.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))] + max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape) + max_data_shape.append(('gt_boxes', (config.TRAIN.BATCH_IMAGES, 300, 9))) + print 'providing maximum shape', max_data_shape, max_label_shape + + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + pprint.pprint(data_shape_dict) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if config.TRAIN.RESUME: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight(config, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # create solver + fixed_param_prefix = config.network.FIXED_PARAMS + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, max_data_shapes=[max_data_shape for _ in range(batch_size)], + max_label_shapes=[max_label_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if config.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + # decide training params + # metric + rpn_eval_metric = metric.RPNAccMetric() + rpn_cls_metric = metric.RPNLogLossMetric() + rpn_bbox_metric = metric.RPNL1LossMetric() + rpn_fg_metric = metric.RPNFGFraction(config) + eval_metric = metric.RCNNAccMetric(config) + eval_fg_metric = metric.RCNNFGAccuracy(config) + cls_metric = metric.RCNNLogLossMetric(config) + bbox_metric = metric.RCNNL1LossMetric(config) + + eval_metrics = mx.metric.CompositeEvalMetric() + # rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, eval_metric, cls_metric, bbox_metric + for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, rpn_fg_metric, eval_fg_metric, eval_metric, cls_metric, bbox_metric]: 
+ eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=args.frequent) + means = np.tile(np.array(config.TRAIN.BBOX_MEANS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + stds = np.tile(np.array(config.TRAIN.BBOX_STDS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), callback.do_checkpoint(prefix, means, stds)] + + # decide learning rate + base_lr = lr + lr_factor = config.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, config.TRAIN.warmup, config.TRAIN.warmup_lr, config.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': config.TRAIN.momentum, + 'wd': config.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'clip_gradient': None} + # + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + # train + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=config.default.kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + + +def main(): + print('Called with argument:', args) + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + train_net( args, ctx, config.network.pretrained, config.network.pretrained_epoch, config.TRAIN.model_prefix, + config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step) + +if __name__ == '__main__': + main() diff --git a/fpn/train_end2end_rotbox_RoITransformer.py b/fpn/train_end2end_rotbox_RoITransformer.py new file mode 100644 index 0000000..c6f92db --- /dev/null +++ b/fpn/train_end2end_rotbox_RoITransformer.py @@ -0,0 +1,191 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import _init_paths + +import cv2 +import argparse +import pprint +import os +import sys +from config.config import config, update_config + + +def parse_args(): + parser = argparse.ArgumentParser(description='Train Faster-RCNN network') + # general + parser.add_argument('--cfg', help='experiment configure file name', required=True, type=str) + + args, rest = parser.parse_known_args() + # update config + update_config(args.cfg) + + # training + parser.add_argument('--frequent', help='frequency of logging', default=config.default.frequent, type=int) + args = parser.parse_args() + return args + +args = parse_args() +curr_path = os.path.abspath(os.path.dirname(__file__)) +sys.path.insert(0, os.path.join(curr_path, '../external/mxnet', config.MXNET_VERSION)) + +import shutil +import numpy as np +import 
mxnet as mx + +from symbols import * +from core.loader import PyramidAnchorIterator_poly +from core import callback, metric +from core.module import MutableModule +from utils.create_logger import create_logger +from utils.load_data import merge_roidb, filter_roidb, load_gt_roidb_poly +from utils.load_model import load_param +from utils.PrefetchingIter import PrefetchingIter +from utils.lr_scheduler import WarmupMultiFactorScheduler +import pdb + +def train_net(args, ctx, pretrained, epoch, prefix, begin_epoch, end_epoch, lr, lr_step): + mx.random.seed(3) + np.random.seed(3) + logger, final_output_path = create_logger(config.output_path, args.cfg, config.dataset.image_set) + prefix = os.path.join(final_output_path, prefix) + + # load symbol + shutil.copy2(os.path.join(curr_path, 'symbols', config.symbol + '.py'), final_output_path) + sym_instance = eval(config.symbol + '.' + config.symbol)() + sym = sym_instance.get_symbol(config, is_train=True) + + feat_pyramid_level = np.log2(config.network.RPN_FEAT_STRIDE).astype(int) + feat_sym = [sym.get_internals()['rpn_cls_score_p' + str(x) + '_output'] for x in feat_pyramid_level] + print('load symbol END') + # setup multi-gpu + batch_size = len(ctx) + input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size + + # print config + pprint.pprint(config) + logger.info('training config:{}\n'.format(pprint.pformat(config))) + + # load dataset and prepare imdb for training + print('Start load dataset and prepare imdb for training') + image_sets = [iset for iset in config.dataset.image_set.split('+')] + roidbs = [load_gt_roidb_poly(config.dataset.dataset, image_set, config.dataset.root_path, config.dataset.dataset_path, + flip=config.TRAIN.FLIP) + for image_set in image_sets] + roidb = merge_roidb(roidbs) + roidb = filter_roidb(roidb, config) + print('Start load training data') + # load training data + + train_data = PyramidAnchorIterator_poly( feat_sym, roidb, config, batch_size=input_batch_size, shuffle=config.TRAIN.SHUFFLE, + ctx=ctx, feat_strides=config.network.RPN_FEAT_STRIDE, anchor_scales=config.network.ANCHOR_SCALES, + anchor_ratios=config.network.ANCHOR_RATIOS, aspect_grouping=config.TRAIN.ASPECT_GROUPING, + allowed_border=np.inf) + + # infer max shape + max_data_shape = [('data', (config.TRAIN.BATCH_IMAGES, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))] + max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape) + max_data_shape.append(('gt_boxes', (config.TRAIN.BATCH_IMAGES, 300, 9))) + print 'providing maximum shape', max_data_shape, max_label_shape + + data_shape_dict = dict(train_data.provide_data_single + train_data.provide_label_single) + pprint.pprint(data_shape_dict) + sym_instance.infer_shape(data_shape_dict) + + # load and initialize params + if config.TRAIN.RESUME: + print('continue training from ', begin_epoch) + arg_params, aux_params = load_param(prefix, begin_epoch, convert=True) + else: + arg_params, aux_params = load_param(pretrained, epoch, convert=True) + sym_instance.init_weight(config, arg_params, aux_params) + + # check parameter shapes + sym_instance.check_parameter_shapes(arg_params, aux_params, data_shape_dict) + + # create solver + fixed_param_prefix = config.network.FIXED_PARAMS + data_names = [k[0] for k in train_data.provide_data_single] + label_names = [k[0] for k in train_data.provide_label_single] + + mod = MutableModule(sym, data_names=data_names, label_names=label_names, + logger=logger, context=ctx, max_data_shapes=[max_data_shape for _ in range(batch_size)], + 
max_label_shapes=[max_label_shape for _ in range(batch_size)], fixed_param_prefix=fixed_param_prefix) + + if config.TRAIN.RESUME: + mod._preload_opt_states = '%s-%04d.states'%(prefix, begin_epoch) + + # decide training params + # # metric + rpn_eval_metric = metric.RPNAccMetric() + rpn_cls_metric = metric.RPNLogLossMetric() + rpn_bbox_metric = metric.RPNL1LossMetric() + rpn_fg_metric = metric.RPNFGFraction(config) + eval_fg_metric = metric.RCNNFGAccuracy(config) + eval_metric = metric.RCNNAccMetric(config) + cls_metric = metric.RCNNLogLossMetric(config) + bbox_metric = metric.RCNNL1LossMetric(config) + # add Rroi loss here + RCNN_proposal_fraction_metric = metric.RCNNFGFraction(config) + Rroi_fg_accuracy = metric.RRoIRCNNFGAccuracy(config) + Rroi_accuracy = metric.RRoIAccMetric(config) + Rroi_cls_metric = metric.RRoIRCNNLogLossMetric(config) + Rroi_bbox_metric = metric.RRoIRCNNL1LossMetric(config) + eval_metrics = mx.metric.CompositeEvalMetric() + # rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, eval_metric, cls_metric, bbox_metric + for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric, rpn_fg_metric, eval_fg_metric, eval_metric, cls_metric, + bbox_metric, RCNN_proposal_fraction_metric, Rroi_fg_accuracy, Rroi_accuracy, Rroi_cls_metric, Rroi_bbox_metric]: + eval_metrics.add(child_metric) + # callback + batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=args.frequent) + means = np.tile(np.array(config.TRAIN.BBOX_MEANS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + stds = np.tile(np.array(config.TRAIN.BBOX_STDS), 2 if config.CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + Rroi_means = np.tile(np.array(config.TRAIN.RRoI_BBOX_MEANS), 2 if config.network.RRoI_CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + Rroi_stds = np.tile(np.array(config.TRAIN.RRoI_BBOX_STDS), 2 if config.network.RRoI_CLASS_AGNOSTIC else config.dataset.NUM_CLASSES) + + epoch_end_callback = [mx.callback.module_checkpoint(mod, prefix, period=1, save_optimizer_states=True), callback.do_checkpoint_Rroi(prefix, means, stds, Rroi_means, Rroi_stds)] + + # decide learning rate + base_lr = lr + lr_factor = config.TRAIN.lr_factor + lr_epoch = [float(epoch) for epoch in lr_step.split(',')] + lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch] + lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff))) + lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff] + print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters) + lr_scheduler = WarmupMultiFactorScheduler(lr_iters, lr_factor, config.TRAIN.warmup, config.TRAIN.warmup_lr, config.TRAIN.warmup_step) + # optimizer + optimizer_params = {'momentum': config.TRAIN.momentum, + 'wd': config.TRAIN.wd, + 'learning_rate': lr, + 'lr_scheduler': lr_scheduler, + 'clip_gradient': None} + # + if not isinstance(train_data, PrefetchingIter): + train_data = PrefetchingIter(train_data) + + # train + mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback, + batch_end_callback=batch_end_callback, kvstore=config.default.kvstore, + optimizer='sgd', optimizer_params=optimizer_params, + arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch) + + +def main(): + print('Called with argument:', args) + ctx = [mx.gpu(int(i)) for i in config.gpus.split(',')] + train_net( args, ctx, config.network.pretrained, config.network.pretrained_epoch, config.TRAIN.model_prefix, + config.TRAIN.begin_epoch, 
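+               # NOTE, a worked example of the warm-up schedule built in
+               # train_net() above (the numbers are illustrative, not taken
+               # from any shipped config): with lr_step = '5.33,8',
+               # begin_epoch = 0, 4 GPUs (batch_size = 4) and len(roidb) = 20000,
+               #     lr_epoch_diff = [5.33, 8.0]
+               #     lr_iters      = [int(5.33 * 20000 / 4), int(8 * 20000 / 4)]
+               #                   = [26650, 40000]
+               # so the learning rate is multiplied by lr_factor at those two
+               # iterations, after the warm-up phase handled by
+               # WarmupMultiFactorScheduler.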
config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step) + +if __name__ == '__main__': + main() diff --git a/init.bat b/init.bat new file mode 100644 index 0000000..4538107 --- /dev/null +++ b/init.bat @@ -0,0 +1,15 @@ +cd /d %~dp0 +mkdir .\data +mkdir .\output +mkdir .\external\mxnet +mkdir .\model\pretrained_model +pause +cd lib\bbox +python setup_windows.py build_ext --inplace +cd ..\dataset\pycocotools +python setup_windows.py build_ext --inplace +cd ..\..\nms +python setup_windows.py build_ext --inplace +python setup_windows_cuda.py build_ext --inplace +cd ..\.. +pause diff --git a/init.sh b/init.sh new file mode 100644 index 0000000..4a39423 --- /dev/null +++ b/init.sh @@ -0,0 +1,14 @@ +#!/bin/bash + +mkdir -p ./data +mkdir -p ./output +mkdir -p ./external/mxnet +mkdir -p ./model/pretrained_model + +cd lib/bbox +python setup_linux.py build_ext --inplace +cd ../dataset/pycocotools +python setup_linux.py build_ext --inplace +cd ../../nms +python setup_linux.py build_ext --inplace +cd ../.. diff --git a/lib/Makefile b/lib/Makefile new file mode 100644 index 0000000..070f04a --- /dev/null +++ b/lib/Makefile @@ -0,0 +1,8 @@ +all: + cd nms/; python setup.py build_ext --inplace; rm -rf build; cd ../../ + cd bbox/; python setup.py build_ext --inplace; rm -rf build; cd ../../ + cd dataset/pycocotools/; python setup.py build_ext --inplace; rm -rf build; cd ../../ +clean: + cd nms/; rm *.so *.c *.cpp; cd ../../ + cd bbox/; rm *.so *.c *.cpp; cd ../../ + cd dataset/pycocotools/; rm *.so; cd ../../ diff --git a/lib/__init__.py b/lib/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/bbox/.gitignore b/lib/bbox/.gitignore new file mode 100644 index 0000000..f23d24e --- /dev/null +++ b/lib/bbox/.gitignore @@ -0,0 +1,2 @@ +*.c +*.cpp \ No newline at end of file diff --git a/lib/bbox/__init__.py b/lib/bbox/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/bbox/bbox.pyx b/lib/bbox/bbox.pyx new file mode 100644 index 0000000..ac603b6 --- /dev/null +++ b/lib/bbox/bbox.pyx @@ -0,0 +1,57 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2016 by Contributors +# Copyright (c) 2017 Microsoft +# Licensed under The Apache-2.0 License [see LICENSE for details] +# Written by Sergey Karayev +# Modified by Yuwen Xiong, from from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +cimport cython +import numpy as np +cimport numpy as np + +DTYPE = np.float +ctypedef np.float_t DTYPE_t + +def bbox_overlaps_cython( + np.ndarray[DTYPE_t, ndim=2] boxes, + np.ndarray[DTYPE_t, ndim=2] query_boxes): + """ + Parameters + ---------- + boxes: (N, 4) ndarray of float + query_boxes: (K, 4) ndarray of float + Returns + ------- + overlaps: (N, K) ndarray of overlap between boxes and query_boxes + """ + cdef unsigned int N = boxes.shape[0] + cdef unsigned int K = query_boxes.shape[0] + cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE) + cdef DTYPE_t iw, ih, box_area + cdef DTYPE_t ua + cdef unsigned int k, n + for k in range(K): + box_area = ( + (query_boxes[k, 2] - query_boxes[k, 0] + 1) * + (query_boxes[k, 3] - query_boxes[k, 1] + 1) + ) + for n in range(N): + iw = ( + min(boxes[n, 2], query_boxes[k, 2]) - + max(boxes[n, 0], query_boxes[k, 0]) + 1 + ) + if iw > 0: + ih = ( + min(boxes[n, 3], query_boxes[k, 3]) - + max(boxes[n, 1], query_boxes[k, 1]) + 1 + ) + if ih > 0: + ua = float( + (boxes[n, 2] - boxes[n, 0] + 1) * 
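+                    # NOTE: this is the standard IoU with the inclusive +1 pixel
+                    # convention used throughout this code base:
+                    #     IoU(b, q) = iw * ih / (area(b) + area(q) - iw * ih)
+                    # where ua is the union area being assembled here.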
+ (boxes[n, 3] - boxes[n, 1] + 1) + + box_area - iw * ih + ) + overlaps[n, k] = iw * ih / ua + return overlaps diff --git a/lib/bbox/bbox_regression.py b/lib/bbox/bbox_regression.py new file mode 100644 index 0000000..c33645f --- /dev/null +++ b/lib/bbox/bbox_regression.py @@ -0,0 +1,200 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# py-faster-rcnn +# Copyright (c) 2016 by Contributors +# Licence under The MIT License +# py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +""" +This file has functions about generating bounding box regression targets +""" + +import numpy as np +import mxnet as mx + +from bbox_transform import bbox_overlaps, bbox_transform +import pdb + +def compute_bbox_regression_targets(rois, overlaps, labels, cfg): + """ + given rois, overlaps, gt labels, compute bounding box regression targets + :param rois: roidb[i]['boxes'] k * 4 + :param overlaps: roidb[i]['max_overlaps'] k * 1 + :param labels: roidb[i]['max_classes'] k * 1 + :return: targets[i][class, dx, dy, dw, dh] k * 5 + """ + # Ensure ROIs are floats + rois = rois.astype(np.float, copy=False) + + # Sanity check + if len(rois) != len(overlaps): + print 'bbox regression: this should not happen' + + # Indices of ground-truth ROIs + gt_inds = np.where(overlaps == 1)[0] + if len(gt_inds) == 0: + print 'something wrong : zero ground truth rois' + # Indices of examples for which we try to make predictions + ex_inds = np.where(overlaps >= cfg.TRAIN.BBOX_REGRESSION_THRESH)[0] + + # Get IoU overlap between each ex ROI and gt ROI + ex_gt_overlaps = bbox_overlaps(rois[ex_inds, :], rois[gt_inds, :]) + + # Find which gt ROI each ex ROI has max overlap with: + # this will be the ex ROI's gt target + gt_assignment = ex_gt_overlaps.argmax(axis=1) + gt_rois = rois[gt_inds[gt_assignment], :] + ex_rois = rois[ex_inds, :] + + targets = np.zeros((rois.shape[0], 5), dtype=np.float32) + targets[ex_inds, 0] = labels[ex_inds] + targets[ex_inds, 1:] = bbox_transform(ex_rois, gt_rois) + return targets + + +def add_bbox_regression_targets(roidb, cfg): + """ + given roidb, add ['bbox_targets'] and normalize bounding box regression targets + :param roidb: roidb to be processed. 
must have gone through imdb.prepare_roidb + :return: means, std variances of targets + """ + print 'add bounding box regression targets' + assert len(roidb) > 0 + assert 'max_classes' in roidb[0] + pdb.set_trace() + num_images = len(roidb) + num_classes = 2 if cfg.CLASS_AGNOSTIC else roidb[0]['gt_overlaps'].shape[1] + + for im_i in range(num_images): + rois = roidb[im_i]['boxes'] + max_overlaps = roidb[im_i]['max_overlaps'] + max_classes = roidb[im_i]['max_classes'] + roidb[im_i]['bbox_targets'] = compute_bbox_regression_targets(rois, max_overlaps, max_classes, cfg) + + if cfg.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED: + # use fixed / precomputed means and stds instead of empirical values + means = np.tile(np.array(cfg.TRAIN.BBOX_MEANS), (num_classes, 1)) + stds = np.tile(np.array(cfg.TRAIN.BBOX_STDS), (num_classes, 1)) + else: + # compute mean, std values + class_counts = np.zeros((num_classes, 1)) + 1e-14 + sums = np.zeros((num_classes, 4)) + squared_sums = np.zeros((num_classes, 4)) + for im_i in range(num_images): + targets = roidb[im_i]['bbox_targets'] + for cls in range(1, num_classes): + cls_indexes = np.where(targets[:, 0] > 0)[0] if cfg.CLASS_AGNOSTIC else np.where(targets[:, 0] == cls)[0] + if cls_indexes.size > 0: + class_counts[cls] += cls_indexes.size + sums[cls, :] += targets[cls_indexes, 1:].sum(axis=0) + squared_sums[cls, :] += (targets[cls_indexes, 1:] ** 2).sum(axis=0) + + means = sums / class_counts + # var(x) = E(x^2) - E(x)^2 + stds = np.sqrt(squared_sums / class_counts - means ** 2) + + print 'bbox target means:' + print means + print means[1:, :].mean(axis=0) # ignore bg class + print 'bbox target stdevs:' + print stds + print stds[1:, :].mean(axis=0) # ignore bg class + + + # normalized targets + for im_i in range(num_images): + targets = roidb[im_i]['bbox_targets'] + for cls in range(1, num_classes): + cls_indexes = np.where(targets[:, 0] > 0) if cfg.CLASS_AGNOSTIC else np.where(targets[:, 0] == cls)[0] + roidb[im_i]['bbox_targets'][cls_indexes, 1:] -= means[cls, :] + roidb[im_i]['bbox_targets'][cls_indexes, 1:] /= stds[cls, :] + + return means.ravel(), stds.ravel() + + +def expand_bbox_regression_targets(bbox_targets_data, num_classes, cfg): + """ + expand from 5 to 4 * num_classes; only the right class has non-zero bbox regression targets + :param bbox_targets_data: [k * 5] + :param num_classes: number of classes + :return: bbox target processed [k * 4 num_classes] + bbox_weights ! only foreground boxes have bbox regression computation! + """ + classes = bbox_targets_data[:, 0] + if cfg.CLASS_AGNOSTIC: + num_classes = 2 + bbox_targets = np.zeros((classes.size, 4 * num_classes), dtype=np.float32) + bbox_weights = np.zeros(bbox_targets.shape, dtype=np.float32) + indexes = np.where(classes > 0)[0] + for index in indexes: + cls = classes[index] + start = int(4 * 1 if cls > 0 else 0) if cfg.CLASS_AGNOSTIC else int(4 * cls) + end = start + 4 + # pdb.set_trace() + bbox_targets[index, start:end] = bbox_targets_data[index, 1:] + bbox_weights[index, start:end] = cfg.TRAIN.BBOX_WEIGHTS + return bbox_targets, bbox_weights + + + +def expand_bbox_regression_targets_base(bbox_targets_data, num_classes, cfg): + """ + expand from p + 1 to p * num_classes; only the right class has non-zero bbox regression targets + :param bbox_targets_data: [k * (p + 1)] + :param num_classes: number of classes + :param cfg: + :return: bbox target processed [k * p * num_classes] + bbox_weights ! 
only foreground boxes have bbox regression computation + """ + classes = bbox_targets_data[:, 0] + if cfg.CLASS_AGNOSTIC: + num_classes = 2 + coord_len = len(bbox_targets_data[0]) - 1 + bbox_targets = np.zeros((classes.size, coord_len * num_classes), dtype=np.float32) + bbox_weights = np.zeros(bbox_targets.shape, dtype=np.float32) + indexes = np.where(classes > 0)[0] + # pdb.set_trace() + for index in indexes: + cls = classes[index] + start = int(coord_len * 1 if cls > 0 else 0) if cfg.CLASS_AGNOSTIC else int(coord_len * cls) + end = start + coord_len + # pdb.set_trace() + bbox_targets[index, start:end] = bbox_targets_data[index, 1:] + # bbox_weights[index, start:end] = cfg.TRAIN.BBOX_WEIGHTS + bbox_weights[index, start: end] = np.ones(coord_len) + return bbox_targets, bbox_weights + + +def expand_bbox_regression_targets_base_new(bbox_targets_data, num_classes, class_agnostic): + """ + expand from p + 1 to p * num_classes; only the right class has non-zero bbox regression targets + :param bbox_targets_data: [k * (p + 1)] + :param num_classes: number of classes + :param cfg: + :return: bbox target processed [k * p * num_classes] + bbox_weights ! only foreground boxes have bbox regression computation + """ + classes = bbox_targets_data[:, 0] + if class_agnostic: + num_classes = 2 + coord_len = len(bbox_targets_data[0]) - 1 + bbox_targets = np.zeros((classes.size, coord_len * num_classes), dtype=np.float32) + bbox_weights = np.zeros(bbox_targets.shape, dtype=np.float32) + indexes = np.where(classes > 0)[0] + # pdb.set_trace() + for index in indexes: + cls = classes[index] + start = int(coord_len * 1 if cls > 0 else 0) if class_agnostic else int(coord_len * cls) + end = start + coord_len + # pdb.set_trace() + bbox_targets[index, start:end] = bbox_targets_data[index, 1:] + # bbox_weights[index, start:end] = cfg.TRAIN.BBOX_WEIGHTS + bbox_weights[index, start: end] = np.ones(coord_len) + return bbox_targets, bbox_weights + diff --git a/lib/bbox/bbox_regression_test.py b/lib/bbox/bbox_regression_test.py new file mode 100644 index 0000000..f8cb8ba --- /dev/null +++ b/lib/bbox/bbox_regression_test.py @@ -0,0 +1,57 @@ +import unittest +import numpy as np +from bbox_regression import * +import sys +sys.path.insert(0, '../../fpn') +from config.config import config as cfg + + + +class Testbbox_regression(unittest.TestCase): + + @classmethod + def setUpClass(cls): + cfg.CLASS_AGNOSTIC = False + cfg.TRAIN.BBOX_WEIGHTS = np.array([1.0, 1.0, 1.0, 1.0]) + def test_expand_bbox_regression_targets(self): + pass + + def test_expand_bbox_regression_targets_base_4(self): + # [k * (p + 1)] --> [k * p * num_classes] + # [k * 5] --> [k * 4 * num_classes] + num_classes = 3 + bbox_targets_data = np.array([[0, 3, 4, 10, 20], + [2, 7, 11, 23, 2]]) + + expected_targets = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 7, 11, 23, 2]]) + expected_weights = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]]) + + calc_targets, calc_weights = expand_bbox_regression_targets(bbox_targets_data, num_classes, cfg) + calc_targets2, calc_weights2 = expand_bbox_regression_targets_base(bbox_targets_data, num_classes, cfg) + + np.testing.assert_array_almost_equal(calc_targets, expected_targets) + np.testing.assert_array_almost_equal(calc_weights, expected_weights) + + np.testing.assert_array_almost_equal(calc_targets2, expected_targets) + np.testing.assert_array_almost_equal(calc_weights2, expected_weights) + + def test_expand_bbox_regression_targets_base_8(self): + # [k * 
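+        # NOTE on the functions under test: for a row with label c > 0 and p
+        # regression coordinates, expand_bbox_regression_targets_base writes
+        # the p targets into columns [p * c, p * (c + 1)) and sets the matching
+        # weights to 1; with cfg.CLASS_AGNOSTIC the effective number of classes
+        # is 2, so every foreground row uses columns [p, 2 * p).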
(p + 1)] --> [k * p * num_classes] + # [k * 9] --> [k * 8 * num_classes] + num_classes = 3 + # (dx1, dy1, ... dx4, dy4) + bbox_targets_data = np.array([[1, 3, 4, 2, 2, 2, 1, 9, 1], + [0, 2, 2, 3, 1, 3, 6, 3, 1]]) + expected_targets = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 2, 2, 2, 1, 9, 1, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]]) + expected_weights = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) + calc_targets, calc_weights = expand_bbox_regression_targets_base(bbox_targets_data, num_classes, cfg) + + np.testing.assert_array_almost_equal(calc_targets, expected_targets) + np.testing.assert_array_almost_equal(calc_weights, expected_weights) + +if __name__ == '__main__': + unittest.main() \ No newline at end of file diff --git a/lib/bbox/bbox_transform.py b/lib/bbox/bbox_transform.py new file mode 100644 index 0000000..334ee38 --- /dev/null +++ b/lib/bbox/bbox_transform.py @@ -0,0 +1,1388 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# py-faster-rcnn +# Copyright (c) 2016 by Contributors +# Licence under The MIT License +# py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +import numpy as np +from bbox import bbox_overlaps_cython +import math +import copy +import mxnet as mx +import pdb + +def bbox_overlaps(boxes, query_boxes): + return bbox_overlaps_cython(boxes, query_boxes) + +def bbox_poly2hbb(boxes): + """ + with label + :param boxes: (x1, y1, ... x4, y4, cls) [n, 9] + :return: hbb: (xmin, ymin, xmax, ymax, cls) [n, 5] + """ + n = boxes.shape[0] + hbbs = np.zeros((n, 4)) + + xs = np.reshape(boxes[:, : -1], (n, 4, 2))[:, :, 0] + ys = np.reshape(boxes[:, : -1], (n, 4, 2))[:, :, 1] + # pdb.set_trace() + hbbs[:, 0] = np.min(xs, axis=1) + hbbs[:, 1] = np.min(ys, axis=1) + hbbs[:, 2] = np.max(xs, axis=1) + hbbs[:, 3] = np.max(ys, axis=1) + hbbs = np.hstack((hbbs, boxes[:, -1, np.newaxis])) + return hbbs +def bbox_poly2hbb_nd(boxes): + """ + with label. this is a mxnet version + :param boxes: (x1, y1, ..., x4, y4, cls) [n, 9] + :return: hbb: (xmin, ymin, xmax, ymax, cls) [n, 5] + """ + n = boxes.shape[0] + hbbs = mx.nd.zeros((n, 4)) + + xs = boxes[:, : -1].reshape((n, 4, 2))[:, :, 0] + ys = boxes[:, : -1].reshape((n, 4, 2))[:, :, 1] + + hbbs[:, 0] = mx.nd.min(xs, axis=1) + hbbs[:, 1] = mx.nd.min(ys, axis=1) + hbbs[:, 2] = mx.nd.max(xs, axis=1) + hbbs[:, 3] = mx.nd.max(ys, axis=1) + hbbs = mx.nd.concat(hbbs, mx.nd.expand_dims(boxes[:, -1], 1), dim=1) + + return hbbs +def box2poly(boxes): + """ + :param boxes: (x, y, w, h) [n, 4] + :return: (x1, y1, ... 
x4, y4) [n, 8] + """ + xs = boxes[:, 0] + ys = boxes[:, 1] + ws = boxes[:, 2] + hs = boxes[:, 3] + n = len(xs) + polys = np.zeros((n, 8)) + polys[:, 0] = xs - ws/2.0 + polys[:, 1] = ys - hs/2.0 + polys[:, 2] = xs + ws/2.0 + polys[:, 3] = ys - hs/2.0 + polys[:, 4] = xs + ws/2.0 + polys[:, 5] = ys + hs/2.0 + polys[:, 6] = xs - ws/2.0 + polys[:, 7] = ys + hs/2.0 + + return polys + +def xy2wh(boxes): + """ + :param boxes: (xmin, ymin, xmax, ymax) (n,4) + :return: out_boxes: (x_ctr, y_ctr, w, h) (n, 4) + """ + num_boxes = boxes.shape[0] + + ex_widths = boxes[:, 2] - boxes[:, 0] + 1.0 + ex_heights = boxes[:, 3] - boxes[:, 1] + 1.0 + ex_ctr_x = boxes[:, 0] + 0.5 * (ex_widths - 1.0) + ex_ctr_y = boxes[:, 1] + 0.5 * (ex_heights - 1.0) + + return np.concatenate((ex_ctr_x[:, np.newaxis], ex_ctr_y[:, np.newaxis], ex_widths[:, np.newaxis], ex_heights[:, np.newaxis]), axis=1) + +def xy2wh_nd(boxes): + """ + + :param boxes: (xmin, ymin, xmax, ymax) (n, 4) + :return: out_boxes: (x_ctr, y_ctr, w, h) (n, 4) + """ + num_boxes = boxes.shape[0] + + ex_widths = boxes[:, 2] - boxes[:, 0] + 1.0 + ex_heights = boxes[:, 3] - boxes[:, 1] + 1.0 + ex_ctr_x = boxes[:, 0] + 0.5 * (ex_widths - 1.0) + ex_ctr_y = boxes[:, 1] + 0.5 * (ex_heights - 1.0) + + return mx.nd.concat(ex_ctr_x.expand_dims(1), + ex_ctr_y.expand_dims(1), + ex_widths.expand_dims(1), + ex_heights.expand_dims(1), dim=1) + +def wh2xy(boxes): + """ + + :param boxes: (x_ctr, y_ctr, w, h) (n, 4) + :return: out_boxes: (xmin, ymin, xmax, ymax) (n, 4) + """ + num_boxes = boxes.shape[0] + + xmin = boxes[:, 0] - (boxes[:, 2] - 1) / 2.0 + ymin = boxes[:, 1] - (boxes[:, 3] - 1) / 2.0 + xmax = boxes[:, 0] + (boxes[:, 2] - 1) / 2.0 + ymax = boxes[:, 1] + (boxes[:, 3] - 1) / 2.0 + + return np.concatenate((xmin[:, np.newaxis], ymin[:, np.newaxis], xmax[:, np.newaxis], ymax[:, np.newaxis]), axis=1) +def poly2bbox(polys): + """ + without label + :param polys: (x1, y1, ..., x4, y4) (n, 8) + :return: boxes: (xmin, ymin, xmax, ymax) (n, 4) + """ + n = polys.shape[0] + xs = np.reshape(polys, (n, 4, 2))[:, :, 0] + ys = np.reshape(polys, (n, 4, 2))[:, :, 1] + + xmin = np.min(xs, axis=1) + ymin = np.min(ys, axis=1) + xmax = np.max(xs, axis=1) + ymax = np.max(ys, axis=1) + + xmin = xmin[:, np.newaxis] + ymin = ymin[:, np.newaxis] + xmax = xmax[:, np.newaxis] + ymax = ymax[:, np.newaxis] + + return np.concatenate((xmin, ymin, xmax, ymax), 1) + +def poly2bbox_nd(polys): + """ + this is a mx.nd version + :param polys: (x1, y1, ..., x4, y4) (n, 8) + :return: boxes: (xmin, ymin, xmax, ymax) (n, 4) + """ + n = polys.shape[0] + xs = polys.reshape(n, 4, 2)[:, :, 0] + ys = polys.reshape(n, 4, 2)[:, :, 1] + + xmin = mx.nd.min(xs, axis=1) + ymin = mx.nd.min(ys, axis=1) + xmax = mx.nd.max(xs, axis=1) + ymax = mx.nd.max(ys, axis=1) + + xmin = xmin.expand_dims(1) + ymin = ymin.expand_dims(1) + xmax = xmax.expand_dims(1) + ymax = ymax.expand_dims(1) + + return mx.nd.concat(xmin, ymin, xmax, ymax, dim=1) +# +# def dbbox_transform(ex_rois, gt_rois): +# """ +# :param ex_rois: predicted rois from rpn (x, y, w, h) +# shape [n, 4] +# :param gt_rois: ground truth rois (x1, y1, x2, y2, x3, y3, x4, y4) +# shape [n, 8] +# :return: dbbox target [n, 8] +# """ +# roi_polys = box2poly(ex_rois) +# ws = ex_rois[:, 2] +# hs = ex_rois[:, 3] +# n = len(ws) +# targets = np.zeros((n, 8)) +# for i in range(8): +# if i%2 == 0: +# # dx +# targets[:, i] = (gt_rois[:, i] - roi_polys[:, i]) / ws +# else: +# # dy +# targets[:, i] = (gt_rois[:, i] - roi_polys[:, i]) / hs +# return targets + +# def rotbox_norm(rotboxes): 
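+# NOTE: dbbox_transform below encodes a ground-truth quadrangle against the
+# four corners of a horizontal RoI (taken as top-left, top-right, bottom-right,
+# bottom-left), normalised by the RoI width and height:
+#     t_x_i = (gt_x_i - roi_x_i) / w,   t_y_i = (gt_y_i - roi_y_i) / h
+# dbbox_pred applies the inverse mapping at test time.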
+# """ +# if the rot +# :param rotboxes: +# :return: +# """ + + +def dbbox_transform(ex_rois, gt_rois): + """ + :param ex_rois: predicted rois from rpn (xmin, ymin, xmax, ymax) + shape [n, 4] + :param gt_rois: ground truth rois (x1, y1, x2, y2, x3, y3, x4, y4) + shape [n, 8] + :return: dbbox target [n, 8] + """ + ws = ex_rois[:, 2] - ex_rois[:, 0] + 1.0 + hs = ex_rois[:, 3] - ex_rois[:, 1] + 1.0 + xmin, ymin, xmax, ymax = ex_rois[:, 0], ex_rois[:, 1], ex_rois[:, 2], ex_rois[:, 3] + roi_polys = np.concatenate((xmin[:, np.newaxis], + ymin[:, np.newaxis], + xmax[:, np.newaxis], + ymin[:, np.newaxis], + xmax[:, np.newaxis], + ymax[:, np.newaxis], + xmin[:, np.newaxis], + ymax[:, np.newaxis]), 1) + + n = len(ws) + targets = np.zeros((n, 8)) + for i in range(8): + if i%2 == 0: + # dx + targets[:, i] = (gt_rois[:, i] - roi_polys[:, i]) / ws + else: + # dy + targets[:, i] = (gt_rois[:, i] - roi_polys[:, i]) / hs + return targets + +def dbbox_pred(boxes, box_deltas): + """ + Transform the set of calss-agnostic boxes into class-specific boxes + by applying the predicted offsets (box_deltas) + :param boxes: rois (xmin, ymin, xmax, ymax) [n, 4] + :param box_deltas: (dx1, dy1, ..., dx4, dy4) [n, 8] + :return: box_pred: (x1, y1, ..., x4, y4) [n, 8] + x_pred = dx * w + x + """ + if boxes.shape[0] == 0: + return np.zeros((0, box_deltas.shape[1])) + + boxes = boxes.astype(np.float, copy=False) + widths = boxes[:, 2] - boxes[:, 0] + 1.0 + heights = boxes[:, 3] - boxes[:, 1] + 1.0 + # ctr_x = boxes[:, 0] + 0.5 * (widths - 1.0) + # ctr_y = boxes[:, 1] + 0.5 * (heights - 1.0) + + xmin, ymin, xmax, ymax = boxes[:, 0], boxes[:, 1], boxes[: , 2], boxes[:, 3] + rois = np.concatenate((xmin[:, np.newaxis], ymin[:, np.newaxis], + xmax[:, np.newaxis], ymin[:, np.newaxis], + xmax[:, np.newaxis], ymax[:, np.newaxis], + xmin[:, np.newaxis], ymax[:, np.newaxis]), axis=1) + box_pred = np.zeros(box_deltas.shape) + for i in range(8): + if i %2 == 0: + # for x + # pdb.set_trace() + box_pred[:, i::8] = box_deltas[:, i::8] * widths[:, np.newaxis] + rois[:, i, np.newaxis] + else: + # for y + box_pred[:, i::8] = box_deltas[:, i::8] * heights[:, np.newaxis] + rois[:, i, np.newaxis] + return box_pred + +def polygonToRotRectangle_batch(bbox): + """ + :param bbox: The polygon stored in format [x1, y1, x2, y2, x3, y3, x4, y4] + shape [num_boxes, 8] + :return: Rotated Rectangle in format [cx, cy, w, h, theta] + shape [num_rot_recs, 5] + """ + # print('bbox: ', bbox) + bbox = np.array(bbox,dtype=np.float32) + bbox = np.reshape(bbox,newshape=(-1, 2, 4),order='F') + # angle = math.atan2(-(bbox[0,1]-bbox[0,0]),bbox[1,1]-bbox[1,0]) + # print('bbox: ', bbox) + angle = np.arctan2(-(bbox[:, 0,1]-bbox[:, 0,0]),bbox[:, 1,1]-bbox[:, 1,0]) + # angle = np.arctan2(-(bbox[:, 0,1]-bbox[:, 0,0]),bbox[:, 1,1]-bbox[:, 1,0]) + # center = [[0],[0]] ## shape [2, 1] + # print('angle: ', angle) + center = np.zeros((bbox.shape[0], 2, 1)) + for i in range(4): + center[:, 0, 0] += bbox[:, 0,i] + center[:, 1, 0] += bbox[:, 1,i] + + center = np.array(center,dtype=np.float32)/4.0 + + # R = np.array([[math.cos(angle), -math.sin(angle)], [math.sin(angle), math.cos(angle)]], dtype=np.float32) + R = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]], dtype=np.float32) + + normalized = np.matmul(R.transpose((2, 1, 0)),bbox-center) + + + xmin = np.min(normalized[:, 0, :], axis=1) + # print('diff: ', (xmin - normalized[:, 0, 3])) + # assert sum((abs(xmin - normalized[:, 0, 3])) > eps) == 0 + xmax = np.max(normalized[:, 0, :], axis=1) + # assert 
sum(abs(xmax - normalized[:, 0, 1]) > eps) == 0 + # print('diff2: ', xmax - normalized[:, 0, 1]) + ymin = np.min(normalized[:, 1, :], axis=1) + # assert sum(abs(ymin - normalized[:, 1, 3]) > eps) == 0 + # print('diff3: ', ymin - normalized[:, 1, 3]) + ymax = np.max(normalized[:, 1, :], axis=1) + # assert sum(abs(ymax - normalized[:, 1, 1]) > eps) == 0 + # print('diff4: ', ymax - normalized[:, 1, 1]) + + w = xmax - xmin + 1 + h = ymax - ymin + 1 + + w = w[:, np.newaxis] + h = h[:, np.newaxis] + angle = angle[:, np.newaxis] % ( 2 * np.pi) + + dboxes = np.concatenate((center[:, 0].astype(np.float), center[:, 1].astype(np.float), w, h, angle), axis=1) + return dboxes + +# def polygonToRectangle_nd(bbox): +# """ +# :param bbox: The polygon stored in format [x1, y1, x2, y2, x3, y3, x4, y4] +# shape [num_boxes, 8] +# :return: Rotated Rectangle in format [cx, cy, w, h, theta] +# shape [num_rot_recs, 5] +# """ +# +# bbox_numpy = mx.nd.array(bbox, dtype='float32') +# + + +def RotBox2Polys(dboxes): + """ + :param dboxes: (x_ctr, y_ctr, w, h, angle) + (numboxes, 5) + :return: quadranlges: + (numboxes, 8) + """ + cs = np.cos(dboxes[:, 4]) + ss = np.sin(dboxes[:, 4]) + w = dboxes[:, 2] - 1 + h = dboxes[:, 3] - 1 + + ## change the order to be the initial definition + x_ctr = dboxes[:, 0] + y_ctr = dboxes[:, 1] + x1 = x_ctr + cs * (w / 2.0) - ss * (-h / 2.0) + x2 = x_ctr + cs * (w / 2.0) - ss * (h / 2.0) + x3 = x_ctr + cs * (-w / 2.0) - ss * (h / 2.0) + x4 = x_ctr + cs * (-w / 2.0) - ss * (-h / 2.0) + + y1 = y_ctr + ss * (w / 2.0) + cs * (-h / 2.0) + y2 = y_ctr + ss * (w / 2.0) + cs * (h / 2.0) + y3 = y_ctr + ss * (-w / 2.0) + cs * (h / 2.0) + y4 = y_ctr + ss * (-w / 2.0) + cs * (-h / 2.0) + + x1 = x1[:, np.newaxis] + y1 = y1[:, np.newaxis] + x2 = x2[:, np.newaxis] + y2 = y2[:, np.newaxis] + x3 = x3[:, np.newaxis] + y3 = y3[:, np.newaxis] + x4 = x4[:, np.newaxis] + y4 = y4[:, np.newaxis] + + + + polys = np.concatenate((x1, y1, x2, y2, x3, y3, x4, y4), axis=1) + return polys + +def xyhs2polys(xyhs): + """ + + :param xyhs: (x1, y1, x2, y2, h) (x1, y1) and (x2, y2) are the first and second point, the h is the height of boudning box + shape: (num_rois, 5) + :return: polys: (x1, y1, x2, y2, x3, y3, x4, y4) + shape: (num_rois, 8) + """ + x1 = xyhs[:, 0] + y1 = xyhs[:, 1] + x2 = xyhs[:, 2] + y2 = xyhs[:, 3] + h = xyhs[:, 4] + + A = -(y2 - y1) + B = (x2 - x1) + x3 = x2 + A/(np.sqrt(A * A + B * B)) * h + y3 = y2 + B/(np.sqrt(A * A + B * B)) * h + x4 = x1 + A/(np.sqrt(A * A + B * B)) * h + y4 = y1 + B/(np.sqrt(A * A + B * B)) * h + + return np.concatenate((x1[:, np.newaxis], + y1[:, np.newaxis], + x2[:, np.newaxis], + y2[:, np.newaxis], + x3[:, np.newaxis], + y3[:, np.newaxis], + x4[:, np.newaxis], + y4[:, np.newaxis]), axis=1) + +def xyhs2polys_muli_class(xyhs): + """ + :param xyhs: (x1, y1, x2, y2, h) + (numboxes, 5 * num_classes) + :return: quadrangles: + (numboxes, 8 * num_classes) + """ + num_boxes = xyhs.shape[0] + numclasses = int(xyhs.shape[1]/5) + quadrangles = np.zeros((num_boxes, 8 * numclasses)) + x1 = xyhs[:, 0::5] + y1 = xyhs[:, 1::5] + x2 = xyhs[:, 2::5] + y2 = xyhs[:, 3::5] + h = xyhs[:, 4::5] + + A = -(y2 - y1) + B = (x2 - x1) + x3 = x2 + A/(np.sqrt(A * A + B * B)) * h + y3 = y2 + B/(np.sqrt(A * A + B * B)) * h + x4 = x1 + A/(np.sqrt(A * A + B * B)) * h + y4 = y1 + B/(np.sqrt(A * A + B * B)) * h + + quadrangles[:, 0::8] = x1 + quadrangles[:, 1::8] = y1 + quadrangles[:, 2::8] = x2 + quadrangles[:, 3::8] = y2 + quadrangles[:, 4::8] = x3 + quadrangles[:, 5::8] = y3 + quadrangles[:, 6::8] = x4 + 
quadrangles[:, 7::8] = y4 + + return quadrangles + +def polys2xyhs(polys): + """ + Transform polys to xyhs, note! it is reversible to function xyhs2polys if only if the polys are rectangle. + :param polys: + :return: + """ + rotboxes = polygonToRotRectangle_batch(polys) + polys = RotBox2Polys(rotboxes) + x1 = polys[:, 0] + y1 = polys[:, 1] + x2 = polys[:, 2] + y2 = polys[:, 3] + h = np.sqrt((polys[:, 2] - polys[:, 4])**2 + (polys[:, 3] - polys[:, 5])**2) + + return np.concatenate((x1[:, np.newaxis], + y1[:, np.newaxis], + x2[:, np.newaxis], + y2[:, np.newaxis], + h[:, np.newaxis]), axis=1) + +def polys2xyhs_nd(polys): + """ + + :param polys: + :return: + """ + x1 = polys[:, 0] + y1 = polys[:, 1] + x2 = polys[:, 2] + y2 = polys[:, 3] + h = np.sqrt((polys[:, 2] - polys[:, 4])**2 + (polys[:, 3] - polys[:, 5])**2) + + return mx.nd.concat((mx.nd.expand_dims(x1, 1), + mx.nd.expand_dims(y1, 1), + mx.nd.expand_dims(x2, 1), + mx.nd.expand_dims(y2, 1), + mx.nd.expand_dims(h, 1)), 1) + +def dbboxtransform3_warp(ex_rois, gts): + """ + + :param ex_rois: (xmin, ymin, xmax, ymax) + shape: (num_rois, 5) + :param gts: (x1, y1, ..., x4, y4) + shape: (num_rois, 8) + :return: targets: (dx1, dy1, dx2, dy2, dh) + """ + boxes_x1s = ex_rois[:, 0] + boxes_y1s = ex_rois[:, 1] + boxes_x2s = ex_rois[:, 2] + boxes_y2s = ex_rois[:, 1] + hs = ex_rois[:, 3] - ex_rois[:, 1] + + boxes_xyhs = np.concatenate((boxes_x1s[:, np.newaxis], + boxes_y1s[:, np.newaxis], + boxes_x2s[:, np.newaxis], + boxes_y2s[:, np.newaxis], + hs[:, np.newaxis]), axis=1) + + gts_xyhs = polys2xyhs(gts) + + targets = dbboxtransform3(boxes_xyhs, gts_xyhs) + + return targets + +def dbbox_transform3_warp_nd(ex_rois, gts): + """ + this is a mxnet version + :param ex_rois: + :param gts: + :return: + """ + boxes_x1s = ex_rois[:, 0] + boxes_y1s = ex_rois[:, 1] + boxes_x2s = ex_rois[:, 2] + boxes_y2s = ex_rois[:, 1] + hs = ex_rois[:, 3] - ex_rois[:, 1] + + # boxes_xyhs = np.concatenate((boxes_x1s[:, np.newaxis], + # boxes_y1s[:, np.newaxis], + # boxes_x2s[:, np.newaxis], + # boxes_y2s[:, np.newaxis], + # hs[:, np.newaxis]), axis=1) + + boxes_xyhs = mx.nd.concat((mx.nd.expand_dims(boxes_x1s, 1), + mx.nd.expand_dims(boxes_y1s, 1), + mx.nd.expand_dims(boxes_x2s, 1), + mx.nd.expand_dims(boxes_y2s, 1), + mx.nd.expand_dims(hs, 1)), dim=1) + + gts_xyhs = polys2xyhs_nd(gts) + + targets = dbboxtransform3_nd(boxes_xyhs, gts_xyhs) + + return targets + +def dbboxtransform3_inv_warp(ex_rois, deltas): + """ + + :param ex_rois: (xmin, ymin, xmax, ymax) + shape: (num_rois, 4) + :param deltas: (dx1, dy1, dx2, dy2, dh) + shape: (num_rois, 5 * num_classes) + :return: pred_boxes: (x1, y1, x2, y2, h) + """ + boxes_x1s = ex_rois[:, 0] + boxes_y1s = ex_rois[:, 1] + boxes_x2s = ex_rois[:, 2] + boxes_y2s = ex_rois[:, 1] + hs = ex_rois[:, 3] - ex_rois[:, 1] + + boxes_xyhs = np.concatenate((boxes_x1s[:, np.newaxis], + boxes_y1s[:, np.newaxis], + boxes_x2s[:, np.newaxis], + boxes_y2s[:, np.newaxis], + hs[:, np.newaxis]), axis=1) + + pred_boxes = dbboxtransform3_inv(boxes_xyhs, deltas) + + return pred_boxes + +def rotation_translation_trans(boxes, thetas, translations): + """ + apply rotation and translation transform on boxes + :param boxes: (x1, y1, x2, y2, h) + shape: (n, 5) + :return: output boxes: (x1, y1, x2, y2, h) + shape: (n, 5) + """ + + boxes[:, 0] = boxes[:, 0] - translations[:, 0] + boxes[:, 1] = boxes[:, 1] - translations[:, 1] + xs, ys = copy.deepcopy(boxes[:, 0]), copy.deepcopy(boxes[:, 1]) + boxes[:, 0] = np.cos(thetas) * xs - np.sin(thetas) * ys + boxes[:, 1] = 
np.sin(thetas) * xs + np.cos(thetas) * ys + + boxes[:, 2] = boxes[:, 2] - translations[:, 0] + boxes[:, 3] = boxes[:, 3] - translations[:, 1] + xs2, ys2 = copy.deepcopy(boxes[:, 2]), copy.deepcopy(boxes[:, 3]) + boxes[:, 2] = np.cos(thetas) * xs2 - np.sin(thetas) * ys2 + boxes[:, 3] = np.sin(thetas) * xs2 + np.cos(thetas) * ys2 + + return boxes +def rotation_translation_trans_multi_class(boxes, thetas, translations): + """ + apply rotation and translation transform on boxes + :param boxes: (x1, y1) + :param thetas: + :param translations: + :return: + """ + boxes[:, 0::5] = boxes[:, 0::5] - translations[:, 0][:, np.newaxis] + boxes[:, 1::5] = boxes[:, 1::5] - translations[:, 1][:, np.newaxis] + xs, ys = boxes[:, 0::5], boxes[:, 1::5] + boxes[:, 0::5] = np.cos(thetas)[:, np.newaxis] * xs - np.sin(thetas)[:, np.newaxis] * ys + boxes[:, 1::5] = np.sin(thetas)[:, np.newaxis] * xs + np.cos(thetas)[:, np.newaxis] * ys + + boxes[:, 2::5] = boxes[:, 2::5] - translations[:, 0][:, np.newaxis] + boxes[:, 3::5] = boxes[:, 3::5] - translations[:, 1][:, np.newaxis] + xs2, ys2 = boxes[:, 2::5], boxes[:, 3::5] + boxes[:, 2::5] = np.cos(thetas)[:, np.newaxis] * xs2 - np.sin(thetas)[:, np.newaxis] * ys2 + boxes[:, 3::5] = np.sin(thetas)[:, np.newaxis] * xs2 + np.cos(thetas)[:, np.newaxis] * ys2 + + return boxes + +def dbboxtransform3(boxes, gts): + """ + boxes are rotated RoIs, + :param boxes: (x1, y1, x2, y2, h) + :param gts: (x1, y1, x2, y2, h) + :return: targets: (dx1, dy1, dx2, dy2, dh) + """ + # pdb.set_trace() + thetas = -np.arctan2((boxes[:, 3] - boxes[:, 1]), (boxes[:, 2] - boxes[:, 0])) + ext_widhts = np.sqrt((boxes[:, 0] - boxes[:, 2])**2 + (boxes[:, 1] - boxes[:, 3])**2) + hs = boxes[:, 4] + # transform the coords of gts to the coordinates of boxes + # combination of rotation and translation + # pdb.set_trace() + transformed_gts = copy.deepcopy(gts) + + transformed_gts = rotation_translation_trans(transformed_gts, thetas, boxes[:, 0:2]) + + transformed_boxes = copy.deepcopy(boxes) + transformed_boxes[:, 0] = 0 + transformed_boxes[:, 1] = 0 + transformed_boxes[:, 2] = ext_widhts + transformed_boxes[:, 3] = 0 + transformed_boxes[:, 4] = hs + + dx1 = (transformed_gts[:, 0] - transformed_boxes[:, 0]) / ext_widhts + dy1 = (transformed_gts[:, 1] - transformed_boxes[:, 1]) / hs + dx2 = (transformed_gts[:, 2] - transformed_boxes[:, 2]) / ext_widhts + dy2 = (transformed_gts[:, 3] - transformed_boxes[:, 3]) / hs + dh = np.log(transformed_gts[:, 4] / hs) + + return np.concatenate((dx1[:, np.newaxis], dy1[:, np.newaxis], dx2[:, np.newaxis], dy2[:, np.newaxis], dh[:, np.newaxis]), axis=1) + +def dbboxtransform3_nd(boxes, gts): + """ + this is a mxnet version + boxes are rotated RoIs, + :param boxes: (x1, y1, x2, y2, h) + :param gts: (x1, y1, x2, y2, h) + :return: targets: (dx1, dy1, dx2, dy2, dh) + """ + # thetas = np. 
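+    # The mxnet version is left unimplemented here; for reference, the numpy version
+    # (dbboxtransform3 above) works as follows:
+    #   1. rotate/translate the gt (x1, y1, x2, y2, h) into the RoI's local frame, in which the
+    #      RoI's first edge lies along +x with (x1, y1) at the origin (rotation_translation_trans);
+    #   2. normalise the x offsets by the RoI edge length, the y offsets by its height, and take
+    #      dh = log(gt_h / roi_h), giving (dx1, dy1, dx2, dy2, dh).
+    # dbboxtransform3_inv below applies the inverse mapping; the round trip is checked by
+    # test_dbbox_transform3_warp_encode_decode in bbox_transform_test.py.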
+ +def dbboxtransform3_inv(boxes, targets): + """ + + :param boxes: (x1, y1, x2, y2, h) + shape: (num_rois, 5) + :param targets: (dx1, dy1, dx2, dy2, dh) + shape: (num_rois, 5 * num_classes) + :return: decoded predicts (x1, y1, x2, y2, h) + shape: (num_rois, 5 * num_classes) + """ + # TODO: handle corner cases + thetas = -np.arctan2((boxes[:, 3] - boxes[:, 1]), (boxes[:, 2] - boxes[:, 0])) + ext_widhts = np.sqrt((boxes[:, 0] - boxes[:, 2])**2 + (boxes[:, 1] - boxes[:, 3])**2) + hs = boxes[:, 4] + # calculate the coords in the coordinates bind to the boxes + transformed_boxes = copy.deepcopy(boxes) + transformed_boxes[:, 0] = 0 + transformed_boxes[:, 1] = 0 + transformed_boxes[:, 2] = ext_widhts + transformed_boxes[:, 3] = 0 + transformed_boxes[:, 4] = hs + + transformed_gts = copy.deepcopy(targets) + transformed_gts[:, 0::5] = targets[:, 0::5] * ext_widhts[:, np.newaxis] + transformed_boxes[:, 0][:, np.newaxis] + transformed_gts[:, 1::5] = targets[:, 1::5] * hs[:, np.newaxis] + transformed_boxes[:, 1][:, np.newaxis] + transformed_gts[:, 2::5] = targets[:, 2::5] * ext_widhts[:, np.newaxis] + transformed_boxes[:, 2][:, np.newaxis] + transformed_gts[:, 3::5] = targets[:, 3::5] * hs[:, np.newaxis] + transformed_boxes[:, 3][:, np.newaxis] + transformed_gts[:, 4::5] = np.exp(targets[:, 4::5]) * hs[:, np.newaxis] + + # transform from the coordinates bind to the boxes to the coordinates bind to the images + pred_boxes = rotation_translation_trans_multi_class(transformed_gts, -thetas, -boxes[:, 0:2]) + # + # + # transformed_gts[:, 0::5] = transformed_gts[:, 0::5] + boxes[:, 0][:, np.newaxis] + # transformed_gts[:, 1::5] = transformed_gts[:, 1::5] + boxes[:, 1][:, np.newaxis] + # xs, ys = transformed_gts[:, 0::5], transformed_gts[:, 1::5] + # transformed_gts[:, 0::5] = np.cos(-thetas[:, np.newaxis]) * xs - np.sin(-thetas[:, np.newaxis]) * ys + # transformed_gts[:, 1::5] = np.sin(-thetas[:, np.newaxis]) * xs + np.cos(-thetas[:, np.newaxis]) * ys + # + # transformed_gts[:, 2::5] = transformed_gts[:, 2::5] + boxes[:, 0][:, np.newaxis] + # transformed_gts[:, 3::5] = transformed_gts[:, 3::5] + boxes[:, 1][:, np.newaxis] + # xs2, ys2 = transformed_gts[:, 2::5], transformed_gts[:, 3::5] + # transformed_gts[:, 2::5] = np.cos(-thetas[:, np.newaxis]) * xs2 - np.sin(-thetas[:, np.newaxis]) * ys2 + # transformed_gts[:, 3::5] = np.sin(-thetas[:, np.newaxis]) * xs2 + np.cos(-thetas[:, np.newaxis]) * ys2 + + return pred_boxes + +def RotBox2Polys_multi_class(dboxes): + """ + :param dboxes: (x_ctr, y_ctr, w, h, angle) + (numboxes, 5 * num_classes) + :return: quadranlges: + (numboxes, 8 * num_classes) + """ + num_boxes = dboxes.shape[0] + numclasses = int(dboxes.shape[1]/5) + quadrangles = np.zeros((num_boxes, 8 * numclasses)) + cs = np.cos(dboxes[:, 4::5]) + ss = np.sin(dboxes[:, 4::5]) + w = dboxes[:, 2::5] - 1 + h = dboxes[:, 3::5] - 1 + + ## change the order to be the initial definition + x_ctr = dboxes[:, 0::5] + y_ctr = dboxes[:, 1::5] + x1 = x_ctr + cs * (w / 2.0) - ss * (-h / 2.0) + x2 = x_ctr + cs * (w / 2.0) - ss * (h / 2.0) + x3 = x_ctr + cs * (-w / 2.0) - ss * (h / 2.0) + x4 = x_ctr + cs * (-w / 2.0) - ss * (-h / 2.0) + + y1 = y_ctr + ss * (w / 2.0) + cs * (-h / 2.0) + y2 = y_ctr + ss * (w / 2.0) + cs * (h / 2.0) + y3 = y_ctr + ss * (-w / 2.0) + cs * (h / 2.0) + y4 = y_ctr + ss * (-w / 2.0) + cs * (-h / 2.0) + + quadrangles[:, 0::8] = x1 + quadrangles[:, 1::8] = y1 + quadrangles[:, 2::8] = x2 + quadrangles[:, 3::8] = y2 + quadrangles[:, 4::8] = x3 + quadrangles[:, 5::8] = y3 + quadrangles[:, 6::8] = x4 + 
quadrangles[:, 7::8] = y4 + + return quadrangles + + +# def encodebox(ex_rois, gt_rois): +# """ +# first pred (dx, dy, dw, dh) then predict the middle point of each side +# :param ex_rois: (xmin, ymin, xmax, ymax) +# shape: (n, 4) +# :param gt_rois: (x1, y1, x2, y2, ... x4, y4) +# shape: (n, 8) +# first transfer the gt_rois to (x, y, w, h, theta) +# :return: (dx, dy, dw, dh, ) +# """ +# gt_rois_t = RotBox2Polys(polygonToRotRectangle_batch(gt_rois)) # (x1, y1, x2, y2, ..., x4, y4) +# ## find left top point +# xs = gt_rois_t.reshape(-1, 4, 2)[:, :, 0] +# ys = gt_rois_t.reshape(-1, 4, 2)[:, :, 1] +# +# top_index = np.argmin(ys, axis=1) +# +# hbbs = poly2bbox(gt_rois_t) +# +# targets1 = nonlinear_transform(ex_rois, hbbs) +# top = xs[top_index] - + + +def dbbox_transform2_warp(ex_rois, gt_rois): + """ + used to change the interface + :param ex_rois: (xmin, ymin, xmax, ymax) + shape (n, 4) + :param gt_rois: (x1, y1, ... x4, y4) + shape (n, 8) + :return: encoded targets: shape (n, 5) + """ + num_rois = ex_rois.shape[0] + # TODO: carefully set it + initial_angles = -np.ones((num_rois, 1)) * np.pi / 2. + ex_rois = xy2wh(ex_rois) + ex_rois = np.hstack((ex_rois, initial_angles)) + # pdb.set_trace() + gt_rotboxes = polygonToRotRectangle_batch(gt_rois) + targets = dbbox_transform2(ex_rois, gt_rotboxes) + + return targets + +def dbbox_transform2_warp_nd(ex_rois, gt_rois): + """ + usde to change the interface, this is a mx.nd version + :param ex_rois: (xmin, ymin, xmax, ymax) + shape (n, 4) + :param gt_rois: (x1, y1, ..., x4, y4) + shape (n, 8) + :return: encoded targets: shape (n,5) + """ + num_rois = ex_rois.shape[0] + # TODO: carefully set it + initial_angles = -mx.nd.ones((num_rois, 1)) * np.pi/2 + ex_rois = xy2wh_nd(ex_rois) + # pdb.set_trace() + ex_rois = mx.nd.concat(ex_rois, initial_angles, dim=1) + + gt_rotboxes = mx.nd.array(polygonToRotRectangle_batch(gt_rois.asnumpy())) + targets = dbbox_transform2_nd(ex_rois, gt_rotboxes) + + return targets + +def dbbox_transform2_inv_warp(ex_rois, deltas): + """ + used to change the interface + :param ex_rois: (xmin, ymin, xmax, ymax) + shape (n, 4) + :param deltas: (dx, dy, dw, dh, dtheta) + shape (n, 5 * num_classes) + :return: decoded rotboxs: shape (n, 5 * num_classes) + """ + num_rois = ex_rois.shape[0] + initial_angles = -np.ones((num_rois, 1)) * np.pi / 2. 
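+    # The horizontal proposal is treated as a rotated box with a fixed initial angle of -pi/2,
+    # matching the convention used on the encoding side in dbbox_transform2_warp above, so that
+    # encoding followed by decoding recovers the original rotated box. A typical round trip
+    # (following test_dbbox_transform2_encode_decode in bbox_transform_test.py):
+    #   polys   = RotBox2Polys(rotboxes)              # (n, 8) quadrilaterals
+    #   ex_rois = poly2bbox(polys)                    # (n, 4) horizontal proposals
+    #   targets = dbbox_transform2_warp(ex_rois, polys)
+    #   dbbox_transform2_inv_warp(ex_rois, targets)   # ~= rotboxes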
+ ex_rois = xy2wh(ex_rois) + ex_rois = np.hstack((ex_rois, initial_angles)) + pred_rotboxes = dbbox_transform2_inv(ex_rois, deltas) + + return pred_rotboxes + +# def rotboxdecodewarp(ex_drois, deltas): +# """ +# +# :param ex_drois: (xmin, ymin, xmax, ymax) +# shape (n, 4) +# :param deltas: (dx, dy, dw, dh, dtheta) +# shape (n, 5 * num_classes) +# :return: decoded polys: shape (n, 8 * num_classes) +# """ +# pred_rotboxes = dbbox_transform2_inv_warp(ex_drois, deltas) + + +def dbbox_transform2(ex_drois, gt_rois): + """ + :param poly_rois: (x_ctr, y_ctr, w, h, angle) + shape (n, 5) + :param gt_rois: (x_ctr, y_ctr, w, h, angle) + shape (n, 5) + :return: encoded targets: shape (n, 5) + """ + # pdb.set_trace() + gt_widths = gt_rois[:, 2] + gt_heights = gt_rois[:, 3] + gt_angle = gt_rois[:, 4] + + ex_widths = ex_drois[:, 2] + ex_heights = ex_drois[:, 3] + ex_angle = ex_drois[:, 4] + + coord = gt_rois[:, 0: 2] - ex_drois[:, 0:2] + targets_dx = (np.cos(ex_drois[:, 4]) * coord[:, 0] + np.sin(ex_drois[:, 4]) * coord[:, 1]) / ex_widths + targets_dy = (-np.sin(ex_drois[:, 4]) * coord[:, 0] + np.cos(ex_drois[:, 4]) * coord[:, 1]) / ex_heights + targets_dw = np.log(gt_widths / ex_widths) + targets_dh = np.log(gt_heights / ex_heights) + targets_dangle = (gt_angle - ex_angle) % (2 * np.pi) / (2 * np.pi) + targets = np.stack((targets_dx, targets_dy, targets_dw, targets_dh, targets_dangle), 1) + + return targets + +def dbbox_transform2_new(ex_drois, gt_rois): + """ + :param poly_rois: (x_ctr, y_ctr, w, h, angle) + shape (n, 5) + :param gt_rois: (x_ctr, y_ctr, w, h, angle) + shape (n, 5) + :return: encoded targets: shape (n, 5) + """ + # pdb.set_trace() + gt_widths = gt_rois[:, 2] + gt_heights = gt_rois[:, 3] + gt_angle = gt_rois[:, 4] + + ex_widths = ex_drois[:, 2] + ex_heights = ex_drois[:, 3] + ex_angle = ex_drois[:, 4] + + coord = gt_rois[:, 0: 2] - ex_drois[:, 0:2] + targets_dx = (np.cos(ex_drois[:, 4]) * coord[:, 0] + np.sin(ex_drois[:, 4]) * coord[:, 1]) / ex_widths + targets_dy = (-np.sin(ex_drois[:, 4]) * coord[:, 0] + np.cos(ex_drois[:, 4]) * coord[:, 1]) / ex_heights + targets_dw = np.log(gt_widths / ex_widths) + targets_dh = np.log(gt_heights / ex_heights) + targets_dangle = (gt_angle - ex_angle) + dist = targets_dangle % (2 * np.pi) + dist = np.minimum(dist, np.pi * 2 - dist) + try: + assert np.all(dist <= (np.pi/2. + 0.001)) + except: + pdb.set_trace() + # check clockwise or anti-clockwise, if sin(dtheta) < 0, clockwise + mask = np.sin(targets_dangle) < 0 + dist[mask] = -dist[mask] + # TODO: change the norm value + dist = dist / (np.pi / 2.) + targets = np.stack((targets_dx, targets_dy, targets_dw, targets_dh, dist), 1) + + return targets + +def dbbox_transform2_inv_new(ex_drois, deltas, norm_angle): + """ + inspired from light-head rcnn, different classes share the same bbox regression + :param ex_rois: (x, y, w, h, theta) shape (n, 5) + :param deltas: (dx, dy, dw, dh, dtheta, dx ...) 
+            (n, 5 * numclasses)
+    :return:
+    """
+    widths = ex_drois[:, 2]
+    heights = ex_drois[:, 3]
+    angles = ex_drois[:, 4]
+    ctr_x = ex_drois[:, 0]
+    ctr_y = ex_drois[:, 1]
+
+    dx = deltas[:, 0::5]
+    dy = deltas[:, 1::5]
+    dw = deltas[:, 2::5]
+    dh = deltas[:, 3::5]
+
+    # optional clipping of dw / dh (note: the clip must use np.minimum, not np.maximum)
+    # dw = np.minimum(dw, 4)
+    # dh = np.minimum(dh, 4)
+
+    dangle = deltas[:, 4::5]
+    pred_ctr_x = dx * widths[:, np.newaxis] * np.cos(angles[:, np.newaxis]) \
+                 - dy * heights[:, np.newaxis] * np.sin(angles[:, np.newaxis]) + ctr_x[:, np.newaxis]
+    pred_ctr_y = dx * widths[:, np.newaxis] * np.sin(angles[:, np.newaxis]) + \
+                 dy * heights[:, np.newaxis] * np.cos(angles[:, np.newaxis]) + ctr_y[:, np.newaxis]
+    pred_w = np.exp(dw) * widths[:, np.newaxis]
+    pred_h = np.exp(dh) * heights[:, np.newaxis]
+
+    # TODO: handle the hard-coded normalization here
+    pred_angle = (norm_angle) * dangle + angles[:, np.newaxis]
+    # pred_angle = pred_angle % (2 * np.pi)
+
+    pred_dboxes = np.ones_like(deltas)
+
+    pred_dboxes[:, 0::5] = pred_ctr_x
+    pred_dboxes[:, 1::5] = pred_ctr_y
+    pred_dboxes[:, 2::5] = pred_w
+    pred_dboxes[:, 3::5] = pred_h
+    pred_dboxes[:, 4::5] = pred_angle
+
+    return pred_dboxes
+
+def choose_best_match(Rroi, gt_roi):
+    """
+    choose the best-matching representation of gt_roi for a given Rroi
+    :param Rroi: (x_ctr, y_ctr, w, h, angle)
+    :param gt_roi: (x_ctr, y_ctr, w, h, angle)
+    :return: gt_roi_new: gt_roi with the new representation
+    """
+    # TODO: finish choose best match; this is used to establish the map: rotated region feature --> offset
+    assert Rroi[4] <= np.pi
+    assert Rroi[4] >= 0
+    assert gt_roi[4] <= (2 * np.pi)
+    assert gt_roi[4] >= 0
+
+    Rroi_angle = Rroi[4]
+
+    gt_x, gt_y, gt_w, gt_h, gt_angle = gt_roi
+
+    gt_roi_extent = np.array([[gt_x, gt_y, gt_w, gt_h, gt_angle], [gt_x, gt_y, gt_h, gt_w, gt_angle + np.pi/2.],
+                              [gt_x, gt_y, gt_w, gt_h, gt_angle + np.pi], [gt_x, gt_y, gt_h, gt_w, gt_angle + np.pi * 3/2.]])
+
+    gt_angle_extent = np.array([gt_angle, gt_angle + np.pi/2., gt_angle + np.pi, gt_angle + np.pi * 3/2.])
+    dist = (gt_angle_extent - Rroi_angle) % (2 * np.pi)
+    # element-wise np.minimum (np.min would treat the second argument as an axis)
+    dist = np.minimum(dist, np.pi * 2 - dist)
+    min_index = np.argmin(dist)
+
+    gt_roi_new = gt_roi_extent[min_index]
+
+    return gt_roi_new
+
+def choose_best_match_batch(Rrois, gt_rois):
+    """
+    choose the best-matching representation of gt_rois for the given Rrois
+    :param Rrois: (x_ctr, y_ctr, w, h, angle)
+            shape: (n, 5)
+    :param gt_rois: (x_ctr, y_ctr, w, h, angle)
+            shape: (n, 5)
+    :return: gt_rois_new: gt_rois with the new representation
+            shape: (n, 5)
+    """
+    # assert np.all(Rrois[:, 4] <= np.pi)
+    # assert np.all(Rrois[:, 4] >= 0)
+    # assert np.all(gt_rois[:, 4] <= (2 * np.pi))
+    # assert np.all(gt_rois[:, 4] >= 0)
+
+    # shape: (n, 1)
+    Rroi_angles = Rrois[:, 4][:, np.newaxis]
+
+    gt_xs, gt_ys, gt_ws, gt_hs, gt_angles = copy.deepcopy(gt_rois[:, 0]), copy.deepcopy(gt_rois[:, 1]), \
+                                            copy.deepcopy(gt_rois[:, 2]), copy.deepcopy(gt_rois[:, 3]), \
+                                            copy.deepcopy(gt_rois[:, 4])
+    # shape: (n, 4)
+    gt_angle_extent = np.concatenate((gt_angles[:, np.newaxis], (gt_angles + np.pi/2.)[:, np.newaxis],
+                                      (gt_angles + np.pi)[:, np.newaxis], (gt_angles + np.pi * 3/2.)[:, np.newaxis]), axis=1)
+    dist = (Rroi_angles - gt_angle_extent) % (2 * np.pi)
+    dist = np.minimum(dist, np.pi * 2 - dist)
+    min_index = np.argmin(dist, axis=1)
+    # selected_index = np.concatenate((np.arange(len(min_index)).reshape(len(min_index), 1), min_index), 1)
+    gt_rois_extent0 = copy.deepcopy(gt_rois)
+    gt_rois_extent1 = np.hstack((gt_xs[:, np.newaxis],
gt_ys[:, np.newaxis], \ + gt_hs[:, np.newaxis], gt_ws[:, np.newaxis], gt_angles[:, np.newaxis] + np.pi/2.)) + gt_rois_extent2 = np.hstack((gt_xs[:, np.newaxis], gt_ys[:, np.newaxis], \ + gt_ws[:, np.newaxis], gt_hs[:, np.newaxis], gt_angles[:, np.newaxis] + np.pi)) + gt_rois_extent3 = np.hstack((gt_xs[:, np.newaxis], gt_ys[:, np.newaxis], \ + gt_hs[:, np.newaxis], gt_ws[:, np.newaxis], gt_angles[:, np.newaxis] + np.pi * 3/2.)) + + gt_rois_extent = np.concatenate((gt_rois_extent0[:, np.newaxis, :], + gt_rois_extent1[:, np.newaxis, :], + gt_rois_extent2[:, np.newaxis, :], + gt_rois_extent3[:, np.newaxis, :]), axis=1) + + gt_rois_new = np.zeros_like(gt_rois) + # pdb.set_trace() + # TODO: add pool.map here + for curiter, index in enumerate(min_index): + gt_rois_new[curiter, :] = gt_rois_extent[curiter, index, :] + + gt_rois_new[:, 4] = gt_rois_new[:, 4] % (2 * np.pi) + + return gt_rois_new + +def choose_bset_Rroi_grad_batch(Rroi): + """ + grad version + :param Rroi: + :return: + """ + x_ctr, y_ctr, w, h, angle = copy.deepcopy(Rroi[:, 0]), copy.deepcopy(Rroi[:, 1]),\ + copy.deepcopy(Rroi[:, 2]), copy.deepcopy(Rroi[:, 3]), copy.deepcopy(Rroi[:, 4]) + indexes = w < h + + Rroi[indexes, 2] = h[indexes] + Rroi[indexes, 3] = w[indexes] + Rroi[indexes, 4] = Rroi[indexes, 4] + np.pi/2. + Rroi[:, 4] = Rroi[:, 4] % np.pi + + return Rroi, indexes + + +def choose_best_Rroi_batch(Rroi): + """ + There are many instances with large aspect ratio, so we choose the point, previous is long side, after is short side, so it makes sure h < w + then angle % 180, + :param Rroi: (x_ctr, y_ctr, w, h, angle) + shape: (n, 5) + :return: Rroi_new: Rroi with new representation + """ + x_ctr, y_ctr, w, h, angle = copy.deepcopy(Rroi[:, 0]), copy.deepcopy(Rroi[:, 1]),\ + copy.deepcopy(Rroi[:, 2]), copy.deepcopy(Rroi[:, 3]), copy.deepcopy(Rroi[:, 4]) + indexes = w < h + + Rroi[indexes, 2] = h[indexes] + Rroi[indexes, 3] = w[indexes] + Rroi[indexes, 4] = Rroi[indexes, 4] + np.pi/2. 
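+    # A rotated box has several equivalent (w, h, angle) representations; the swap above makes the
+    # long side w (so that w >= h), and the angle is wrapped into [0, pi) just below. For example,
+    # (x, y, w, h, angle) = (3, 4, 2, 10, pi/6) becomes (3, 4, 10, 2, pi/6 + pi/2)
+    # (see test_choose_best_Rroi_batch in bbox_transform_test.py).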
+ Rroi[:, 4] = Rroi[:, 4] % np.pi + + return Rroi + +def dbbox_transform2_best_match_warp(Rrois, gt_boxes): + """ + :param Rrois: (x_ctr, y_ctr, w, h, angle) angle range in [0, 180] + (n, 5) + :param gt_boxes: (x_ctr, y_ctr, w, h, angle) angle range in [0, 360] + (n, 5) + :return: + """ + + # This is a simplified version + # TODO: preprocess angle of gt_boxes according to the catogeries + # Here, use a choose beste match angle, similar to choose best point instead + gt_boxes_new = choose_best_match_batch(Rrois, gt_boxes) + try: + assert np.all(Rrois[:, 4] <= (np.pi + 0.001 )) + except: + pdb.set_trace() + bbox_targets = dbbox_transform2_new(Rrois, gt_boxes_new) + + return bbox_targets + +def dbbox_transform2_nd(ex_drois, gt_rois): + """ + + :param ex_drois: + :param gt_rois: + :return: + """ + gt_widths = gt_rois[:, 2] + gt_heights = gt_rois[:, 3] + gt_angle = gt_rois[:, 4] + + ex_widths = ex_drois[:, 2] + ex_heights = ex_drois[:, 3] + ex_angle = ex_drois[:, 4] + + coord = gt_rois[:, 0: 2] - ex_drois[:, 0:2] + targets_dx = (mx.nd.cos(ex_drois[:, 4]) * coord[:, 0] + mx.nd.sin(ex_drois[:, 4]) * coord[:, 1]) / ex_widths + targets_dy = (-mx.nd.sin(ex_drois[:, 4]) * coord[:, 0] + mx.nd.cos(ex_drois[:, 4]) * coord[:, 1]) / ex_heights + targets_dw = mx.nd.log(gt_widths / ex_widths) + targets_dh = mx.nd.log(gt_heights / ex_heights) + targets_dangle = (gt_angle - ex_angle) % (2 * np.pi) / (2 * np.pi) + targets = mx.nd.concat(targets_dx.expand_dims(1), + targets_dy.expand_dims(1), + targets_dw.expand_dims(1), + targets_dh.expand_dims(1), + targets_dangle.expand_dims(1), dim=1) + # targets = mx.nd.stack(targets_dx, targets_dy, targets_dw, targets_dh, targets_dangle, axis=1) + + return targets + +def dbbox_transform2_inv(ex_drois, deltas): + """ + inspired from light-head rcnn, different classes share the same bbox regression + :param ex_rois: (x, y, w, h, theta) shape (n, 5) + :param deltas: (dx, dy, dw, dh, dtheta, dx ...) 
+            (n, 5 * numclasses)
+    :return:
+    """
+    widths = ex_drois[:, 2]
+    heights = ex_drois[:, 3]
+    angles = ex_drois[:, 4]
+    ctr_x = ex_drois[:, 0]
+    ctr_y = ex_drois[:, 1]
+
+    dx = deltas[:, 0::5]
+    dy = deltas[:, 1::5]
+    dw = deltas[:, 2::5]
+    dh = deltas[:, 3::5]
+
+    # clip dw / dh (note: np.minimum is correct here; an earlier revision mistakenly used np.maximum)
+    dw = np.minimum(dw, 4)
+    dh = np.minimum(dh, 4)
+
+    dangle = deltas[:, 4::5]
+    pred_ctr_x = dx * widths[:, np.newaxis] * np.cos(angles[:, np.newaxis]) \
+                 - dy * heights[:, np.newaxis] * np.sin(angles[:, np.newaxis]) + ctr_x[:, np.newaxis]
+    pred_ctr_y = dx * widths[:, np.newaxis] * np.sin(angles[:, np.newaxis]) + \
+                 dy * heights[:, np.newaxis] * np.cos(angles[:, np.newaxis]) + ctr_y[:, np.newaxis]
+    pred_w = np.exp(dw) * widths[:, np.newaxis]
+    pred_h = np.exp(dh) * heights[:, np.newaxis]
+
+    # TODO: handle the hard-coded 2 * pi normalization here
+    pred_angle = (2 * np.pi) * dangle + angles[:, np.newaxis]
+    pred_angle = pred_angle % (2 * np.pi)
+
+    pred_dboxes = np.ones_like(deltas)
+
+    pred_dboxes[:, 0::5] = pred_ctr_x
+    pred_dboxes[:, 1::5] = pred_ctr_y
+    pred_dboxes[:, 2::5] = pred_w
+    pred_dboxes[:, 3::5] = pred_h
+    pred_dboxes[:, 4::5] = pred_angle
+
+    return pred_dboxes
+
+def bbox_overlaps_py(boxes, query_boxes):
+    """
+    determine overlaps between boxes and query_boxes
+    :param boxes: n * 4 bounding boxes
+    :param query_boxes: k * 4 bounding boxes
+    :return: overlaps: n * k overlaps
+    """
+    n_ = boxes.shape[0]
+    k_ = query_boxes.shape[0]
+    overlaps = np.zeros((n_, k_), dtype=np.float)
+    for k in range(k_):
+        query_box_area = (query_boxes[k, 2] - query_boxes[k, 0] + 1) * (query_boxes[k, 3] - query_boxes[k, 1] + 1)
+        for n in range(n_):
+            iw = min(boxes[n, 2], query_boxes[k, 2]) - max(boxes[n, 0], query_boxes[k, 0]) + 1
+            if iw > 0:
+                ih = min(boxes[n, 3], query_boxes[k, 3]) - max(boxes[n, 1], query_boxes[k, 1]) + 1
+                if ih > 0:
+                    box_area = (boxes[n, 2] - boxes[n, 0] + 1) * (boxes[n, 3] - boxes[n, 1] + 1)
+                    all_area = float(box_area + query_box_area - iw * ih)
+                    overlaps[n, k] = iw * ih / all_area
+    return overlaps
+
+def clip_polys(polys, im_shape):
+    """
+    clip polygons to image boundaries
+    :param polys: [N, 8 * num_classes]
+    :param im_shape: tuple of 2 (height, width)
+    :return: [N, 8 * num_classes]
+    """
+    def clip_wh(coords, wh):
+        return np.maximum(np.minimum(coords, wh - 1), 0)
+
+    # clip every x coordinate to [0, im_shape[1] - 1] and every y coordinate to [0, im_shape[0] - 1]
+    for i in range(0, 8, 2):
+        polys[:, i::8] = clip_wh(polys[:, i::8], im_shape[1])
+        polys[:, i + 1::8] = clip_wh(polys[:, i + 1::8], im_shape[0])
+
+    return polys
+
+def clip_boxes(boxes, im_shape):
+    """
+    Clip boxes to image boundaries.
+ :param boxes: [N, 4* num_classes] + :param im_shape: tuple of 2 + :return: [N, 4* num_classes] + """ + # x1 >= 0 + boxes[:, 0::4] = np.maximum(np.minimum(boxes[:, 0::4], im_shape[1] - 1), 0) + # y1 >= 0 + boxes[:, 1::4] = np.maximum(np.minimum(boxes[:, 1::4], im_shape[0] - 1), 0) + # x2 < im_shape[1] + boxes[:, 2::4] = np.maximum(np.minimum(boxes[:, 2::4], im_shape[1] - 1), 0) + # y2 < im_shape[0] + boxes[:, 3::4] = np.maximum(np.minimum(boxes[:, 3::4], im_shape[0] - 1), 0) + return boxes + +def filter_boxes(boxes, min_size): + """ + filter small boxes. + :param boxes: [N, 4* num_classes] + :param min_size: + :return: keep: + """ + ws = boxes[:, 2] - boxes[:, 0] + 1 + hs = boxes[:, 3] - boxes[:, 1] + 1 + keep = np.where((ws >= min_size) & (hs >= min_size))[0] + return keep + +def nonlinear_transform(ex_rois, gt_rois): + """ + compute bounding box regression targets from ex_rois to gt_rois + :param ex_rois: [N, 4] + :param gt_rois: [N, 4] + :return: [N, 4] + """ + assert ex_rois.shape[0] == gt_rois.shape[0], 'inconsistent rois number' + + ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0 + ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0 + ex_ctr_x = ex_rois[:, 0] + 0.5 * (ex_widths - 1.0) + ex_ctr_y = ex_rois[:, 1] + 0.5 * (ex_heights - 1.0) + + gt_widths = gt_rois[:, 2] - gt_rois[:, 0] + 1.0 + gt_heights = gt_rois[:, 3] - gt_rois[:, 1] + 1.0 + gt_ctr_x = gt_rois[:, 0] + 0.5 * (gt_widths - 1.0) + gt_ctr_y = gt_rois[:, 1] + 0.5 * (gt_heights - 1.0) + # print 'in nonlinear_transform' + # pdb.set_trace() + targets_dx = (gt_ctr_x - ex_ctr_x) / (ex_widths + 1e-14) + targets_dy = (gt_ctr_y - ex_ctr_y) / (ex_heights + 1e-14) + targets_dw = np.log(gt_widths / ex_widths) + targets_dh = np.log(gt_heights / ex_heights) + + targets = np.vstack( + (targets_dx, targets_dy, targets_dw, targets_dh)).transpose() + return targets + +def cal_line_length(point1, point2): + return math.sqrt( math.pow(point1[0] - point2[0], 2) + math.pow(point1[1] - point2[1], 2)) + +def get_best_begin_point_wrapp(coordinate): + coordinate = np.array(coordinate).reshape(4, 2) + output = get_best_begin_point(coordinate) + output = np.array(output).reshape(8) + return output +def get_best_begin_point(coordinate): + x1 = coordinate[0][0] + y1 = coordinate[0][1] + x2 = coordinate[1][0] + y2 = coordinate[1][1] + x3 = coordinate[2][0] + y3 = coordinate[2][1] + x4 = coordinate[3][0] + y4 = coordinate[3][1] + xmin = min(x1, x2, x3, x4) + ymin = min(y1, y2, y3, y4) + xmax = max(x1, x2, x3, x4) + ymax = max(y1, y2, y3, y4) + combinate = [[[x1, y1], [x2, y2], [x3, y3], [x4, y4]], [[x2, y2], [x3, y3], [x4, y4], [x1, y1]], + [[x3, y3], [x4, y4], [x1, y1], [x2, y2]], [[x4, y4], [x1, y1], [x2, y2], [x3, y3]]] + dst_coordinate = [[xmin, ymin], [xmax, ymin], [xmax, ymax], [xmin, ymax]] + force = 100000000.0 + force_flag = 0 + for i in range(4): + temp_force = cal_line_length(combinate[i][0], dst_coordinate[0]) + cal_line_length(combinate[i][1], + dst_coordinate[ + 1]) + cal_line_length( + combinate[i][2], dst_coordinate[2]) + cal_line_length(combinate[i][3], dst_coordinate[3]) + if temp_force < force: + force = temp_force + force_flag = i + if force_flag != 0: + print("choose one direction!") + return combinate[force_flag] + +def nonlinear_pred(boxes, box_deltas): + """ + Transform the set of class-agnostic boxes into class-specific boxes + by applying the predicted offsets (box_deltas) + :param boxes: !important [N 4] + :param box_deltas: [N, 4 * num_classes] + :return: [N 4 * num_classes] + """ + if boxes.shape[0] == 0: + return np.zeros((0, 
box_deltas.shape[1])) + + boxes = boxes.astype(np.float, copy=False) + widths = boxes[:, 2] - boxes[:, 0] + 1.0 + heights = boxes[:, 3] - boxes[:, 1] + 1.0 + ctr_x = boxes[:, 0] + 0.5 * (widths - 1.0) + ctr_y = boxes[:, 1] + 0.5 * (heights - 1.0) + + dx = box_deltas[:, 0::4] + dy = box_deltas[:, 1::4] + dw = box_deltas[:, 2::4] + dh = box_deltas[:, 3::4] + + pred_ctr_x = dx * widths[:, np.newaxis] + ctr_x[:, np.newaxis] + pred_ctr_y = dy * heights[:, np.newaxis] + ctr_y[:, np.newaxis] + + pred_w = np.clip(np.exp(dw), -60, 60) * widths[:, np.newaxis] + pred_h = np.clip(np.exp(dh), -60, 60) * heights[:, np.newaxis] + + pred_boxes = np.zeros(box_deltas.shape) + # x1 + pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * (pred_w - 1.0) + # y1 + pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * (pred_h - 1.0) + # x2 + pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * (pred_w - 1.0) + # y2 + pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * (pred_h - 1.0) + + return pred_boxes + + +def iou_transform(ex_rois, gt_rois): + """ return bbox targets, IoU loss uses gt_rois as gt """ + assert ex_rois.shape[0] == gt_rois.shape[0], 'inconsistent rois number' + return gt_rois + + +def iou_pred(boxes, box_deltas): + """ + Transform the set of class-agnostic boxes into class-specific boxes + by applying the predicted offsets (box_deltas) + :param boxes: !important [N 4] + :param box_deltas: [N, 4 * num_classes] + :return: [N 4 * num_classes] + """ + if boxes.shape[0] == 0: + return np.zeros((0, box_deltas.shape[1])) + + boxes = boxes.astype(np.float, copy=False) + x1 = boxes[:, 0] + y1 = boxes[:, 1] + x2 = boxes[:, 2] + y2 = boxes[:, 3] + + dx1 = box_deltas[:, 0::4] + dy1 = box_deltas[:, 1::4] + dx2 = box_deltas[:, 2::4] + dy2 = box_deltas[:, 3::4] + + pred_boxes = np.zeros(box_deltas.shape) + # x1 + pred_boxes[:, 0::4] = dx1 + x1[:, np.newaxis] + # y1 + pred_boxes[:, 1::4] = dy1 + y1[:, np.newaxis] + # x2 + pred_boxes[:, 2::4] = dx2 + x2[:, np.newaxis] + # y2 + pred_boxes[:, 3::4] = dy2 + y2[:, np.newaxis] + + return pred_boxes + + +# define bbox_transform and bbox_pred +bbox_transform = nonlinear_transform +bbox_pred = nonlinear_pred diff --git a/lib/bbox/bbox_transform_nd_test.py b/lib/bbox/bbox_transform_nd_test.py new file mode 100644 index 0000000..0a8ca1c --- /dev/null +++ b/lib/bbox/bbox_transform_nd_test.py @@ -0,0 +1,81 @@ +import unittest +import numpy as np +# from bbox_transform_nd import * +from bbox_transform import * +import mxnet as mx +import copy + +class Testbbox_transform(unittest.TestCase): + + + def test_bbox_poly2hbb(self): + polys = np.array([[1, 3, 21, 3, 21, 83, 1, 83, 1], + [50, 100, 65, 100, 65, 145, 50, 145, 3]]) + # expected = np.array([[11, 43, 20, 80], + # [115/2.0, 245/2.0, 15, 45]]) + expected = np.array([[1, 3, 21, 83, 1], + [50, 100, 65, 145, 3]]) + polys = mx.nd.array(polys) + expected = mx.nd.array(expected) + output = bbox_poly2hbb_nd(polys) + np.testing.assert_array_almost_equal(output.asnumpy(), expected.asnumpy()) + + def test_dbbox_transform2_nd(self): + """ + encoding format similar to RRPN, except the angle was restricted to [0, 2 pi], dangle was restricted to [0, 1] + + Must Test corner cases + :return: + """ + # TODO: the bbox_means need to be changed + boxlist1 = np.array([[1, 1, 10, 5, 0], + [1, 1, 10, 5, np.pi/10], + [1, 1, 10, 5, 0], + [30, 100, 60, 34, np.pi/2] + ]) + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + [1, 1, 5, 8, np.pi/16 + np.pi/10], + [1, 1, 10, 5, 0], + [30, 90, 12, 45, np.pi/10] + ]) + # TODO: check the case by hand + expected_targets = np.array([[0.0000, 0.0000, -0.6931, 
0.4700, 0.0312], + [0.0000, 0.0000, -0.6931, 0.4700, 0.0313], + [0.0000, 0.0000, 0.0000, 0.0000, 0.0000], + [-0.1667, 0.0000, -1.6094, 0.2803, 0.8]] ) + output = dbbox_transform2(boxlist1, boxlist2) + np.testing.assert_almost_equal(expected_targets, output, decimal=4) + expected_targets_nd = dbbox_transform2_nd(mx.nd.array(boxlist1, dtype='float32'), mx.nd.array(boxlist2, dtype='float32')) + + np.testing.assert_almost_equal(expected_targets, expected_targets_nd.asnumpy(), decimal=4) + + @unittest.skip("The test need to be reconstruct") + def test_dbbox_transform2_warp(self): + # boxlist1 = np.array([[1, 1, 10, 5, 0], + # [1, 1, 10, 5, np.pi/10], + # [1, 1, 10, 5, 0], + # [30, 100, 60, 34, np.pi/2] + # ]) + # TODO: add corner cases + boxlist1 = np.array([[-1, -2.5, 3, 4.5], + [24.5, 68.0, 35.5, 112.0], + # [2, 4, 24.6, 8], + [-9.8, 0.5, 13.8, 7.5] + ]) # (xmin, ymin, xmax, ymax) + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + # [1, 1, 5, 8, np.pi/16 + np.pi/10] + # [1, 1, 10, 5, 0], + [30, 90, 12, 45, np.pi/10], + [5, 4, 26, 8.2, np.pi/2 + np.pi/10.] + ]) + polys2 = RotBox2Polys(boxlist2) + targets = dbbox_transform2_warp(boxlist1, polys2) + expected_targets = np.array([[0, 0, 0, 0, 0.78125], + [0, 0, 0, 0, 0.8], + [0., -3/8., np.log(26/24.6), np.log(8.2/8.), np.pi/10./(2 * np.pi)]]) + np.testing.assert_almost_equal(expected_targets, targets, decimal=4) + + targets_nd = dbbox_transform2_warp_nd(mx.nd.array(boxlist1), mx.nd.array(polys2)) + np.testing.assert_almost_equal(expected_targets, targets_nd.asnumpy(), decimal=4) +if __name__ == '__main__': + unittest.main() \ No newline at end of file diff --git a/lib/bbox/bbox_transform_test.py b/lib/bbox/bbox_transform_test.py new file mode 100644 index 0000000..4afda22 --- /dev/null +++ b/lib/bbox/bbox_transform_test.py @@ -0,0 +1,570 @@ +import unittest +import numpy as np +from bbox_transform import * +import copy + +class Testbbox_transform(unittest.TestCase): + + def test_bbox_overlaps(self): + pass + + def test_bbox_poly2hbb(self): + polys = np.array([[1, 3, 21, 3, 21, 83, 1, 83, 1], + [50, 100, 65, 100, 65, 145, 50, 145, 3]]) + # expected = np.array([[11, 43, 20, 80], + # [115/2.0, 245/2.0, 15, 45]]) + expected = np.array([[1, 3, 21, 83, 1], + [50, 100, 65, 145, 3]]) + output = bbox_poly2hbb(polys) + np.testing.assert_array_almost_equal(output, expected) + + def test_poly2bbox(self): + polys = np.array([[1, 3, 21, 3, 21, 83, 1, 83], + [50, 100, 65, 100, 65, 145, 50, 145]]) + # expected = np.array([[11, 43, 20, 80], + # [115/2.0, 245/2.0, 15, 45]]) + expected = np.array([[1, 3, 21, 83], + [50, 100, 65, 145]]) + output = poly2bbox(polys) + np.testing.assert_array_almost_equal(output, expected) + + def test_poly2bbox_nd(self): + polys = np.array([[1, 3, 21, 3, 21, 83, 1, 83], + [50, 100, 65, 100, 65, 145, 50, 145]]) + # expected = np.array([[11, 43, 20, 80], + # [115/2.0, 245/2.0, 15, 45]]) + expected = np.array([[1, 3, 21, 83], + [50, 100, 65, 145]]) + polys_nd = mx.nd.array(polys) + + output = poly2bbox_nd(polys_nd) + np.testing.assert_array_almost_equal(output.asnumpy(), expected) + + def test_box2poly(self): + """ + """ + ext_rois = np.array([[11, 43, 20, 80], + [115/2.0, 245/2.0, 15, 45] + ]) + expected = np.array([[1, 3, 21, 3, 21, 83, 1, 83], + [50, 100, 65, 100, 65, 145, 50, 145]]) + calculated = box2poly(ext_rois) + np.testing.assert_array_almost_equal(calculated, expected) + def test_dbbox_transform(self): + """ + ext_rois: (x, y, w, h) + gt_boxes: (x1, y1, x2, y2, x3, y3, x4, y4) + """ + # ext_rois = np.array([[11, 43, 20, 80], + # 
[115/2.0, 245/2.0, 15, 45] + # ]) + ext_rois = np.array([[1, 3, 21, 83], + [50, 100, 65, 145], + # [36., 50., 46., 60.] + ]) + gt_boxes = np.array([[1, 3, 22, 4, 21, 84, 0, 83], + [50, 100, 60, 90, 65, 145, 54, 120], + # [40., 50., 46., 57., 43., 60., 36., 55.] + ]) + targets = dbbox_transform(ext_rois, gt_boxes) + expected = np.array([[0, 0, 1/21.0, 1/81.0, 0, 1/81.0, -1/21.0, 0], + [0, 0, -5/16.0, -10/46.0, 0, 0, 4/16.0, -25/46.0], + ], dtype=np.float) + np.testing.assert_array_almost_equal(targets, expected) + + def test_box_pred_multiclass(self): + ext_rois = np.array([[1, 3, 21, 83], + [50, 100, 65, 145]]) + expect_results = np.array([[1, 3, 21, 83, 4, 5,39, 30, 1, 3, 21, 83], + [50, 100, 65, 145, 50, 100, 65, 145, 40, 105, 50, 120]]) + + # targets = bbox_transform(ext_rois, gt_boxes) + targets = np.array([[0., 0., 0., 0., 0.5 , -0.31481481, 0.5389965 , -1.13635262, 0., 0., 0., 0.], + [0., 0., 0., 0., 0., 0., 0., 0., -0.78125 , -0.2173913 , -0.37469345, -1.05605267]]) + outputs = bbox_pred(ext_rois, targets) + np.testing.assert_array_almost_equal(expect_results, outputs) + def test_dbbox_pred_multiclass(self): + + ext_rois = np.array([[1, 3, 21, 83], + [50, 100, 65, 145] + ]) + expect_results = np.array([[1., 3., 21., 3., 21., 83., 1., 83., 1, 3, 22, 4, 21, 84, 0, 83, 1., 3., 21., 3., 21., 83., 1., 83.], + [50., 100., 65., 100., 65., 145., 50., 145., 50., 100., 65., 100., 65., 145., 50., 145., 50, 100, 60, 90, 65, 145, 54, 120]]) + targets = np.array([ + [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.04761905, 0.01234568, 0., 0.01234568, -0.04761905, 0., 0., 0., 0., 0., 0., 0., 0., 0.], + [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., -0.3125, -0.2173913, 0., 0., 0.25, -0.54347826] + ]) + + + outputs = dbbox_pred(ext_rois, targets) + np.testing.assert_array_almost_equal(expect_results, outputs) + def test_clip_polys(self): + + polys = np.array([[-1, 3, 4, 7, 102, 150, 50, 205], + [3, 10, 30, 50, 70, 90, 80, 90], + [0,0, 0, 200, 100, 200, 100, 0]]) + + # y, x + im_shape = (200, 100) + + expected_outputs = np.array([[0, 3, 4, 7, 99, 150, 50, 199], + [3, 10, 30, 50, 70, 90, 80, 90], + [0, 0, 0, 199, 99, 199, 99, 0]]) + + outputs = clip_polys(polys, im_shape) + np.testing.assert_array_almost_equal(outputs, expected_outputs) + def test_clip(self): + x1, y1, x2, y2, x3, y3, x4, y4 = np.array([1, 3, 21, 3, 21, 83, 1, 83], + # [50, 100, 65, 100, 65, 145, 50, 145] + ) + w, h = np.array([100, 70]) + + expected = np.array([1, 3, 21, 69] + # [50, 100, 65, 145] + ) + + xmin = min(max(min(x1, x2, x3, x4), 0), w - 1) + xmax = min(max(max(x1, x2, x3, x4), 0), w - 1) + ymin = min(max(min(y1, y2, y3, y4), 0), h - 1) + ymax = min(max(max(y1, y2, y3, y4), 0), h - 1) + + results = np.array([xmin, ymin, xmax, ymax]) + + np.testing.assert_array_almost_equal(expected, results) + def test_clip2(self): + + bbox = np.array([-1, 3, 21, 3, 21, 83, 1, 83]) + w, h = np.array([100, 70]) + + expected = np.array([0, 3, 21, 3, 21, 69, 1, 69]) + x1 = min(max(float(bbox[0]), 0), w - 1) + y1 = min(max(float(bbox[1]), 0), h - 1) + x2 = min(max(float(bbox[2]), 0), w - 1) + y2 = min(max(float(bbox[3]), 0), h - 1) + x3 = min(max(float(bbox[4]), 0), w - 1) + y3 = min(max(float(bbox[5]), 0), h - 1) + x4 = min(max(float(bbox[6]), 0), w - 1) + y4 = min(max(float(bbox[7]), 0), h - 1) + + results = np.array([x1, y1, x2, y2, x3, y3, x4, y4]) + + ## The three functions seems like the same + # np.testing.assert_array_almost_equal(expected, results) + # np.allclose(expected, results) + 
np.testing.assert_allclose(expected, results) + def test_filter_shape(self): + """ + TODO + filter unpossible shapes + :return: + """ + def test_xy2wh(self): + boxes = np.array([[1, 3, 45, 10], + [24.4, 3., 44.5, 52.2]]) + outputs = xy2wh(boxes) + expected_outputs = np.array([[23, 6.5, 45, 8], + [34.45, 27.6, 21.1, 50.2]]) + np.testing.assert_almost_equal(expected_outputs, outputs) + def test_wh2xy(self): + boxes = np.array([[1, 3, 45, 10], + [24.4, 3., 44.5, 52.2]]) + outputs = xy2wh(boxes) + outputs = wh2xy(outputs) + np.testing.assert_almost_equal(boxes, outputs) + + def test_dbbox_transform2(self): + """ + encoding format similar to RRPN, except the angle was restricted to [0, 2 pi], dangle was restricted to [0, 1] + + Must Test corner cases + :return: + """ + boxlist1 = np.array([[1, 1, 10, 5, 0], + [1, 1, 10, 5, np.pi/10], + [1, 1, 10, 5, 0], + [30, 100, 60, 34, np.pi/2] + ]) + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + [1, 1, 5, 8, np.pi/16 + np.pi/10], + [1, 1, 10, 5, 0], + [30, 90, 12, 45, np.pi/10] + ]) + expected_targets = np.array([[0.0000, 0.0000, -0.6931, 0.4700, 0.0312], + [0.0000, 0.0000, -0.6931, 0.4700, 0.0313], + [0.0000, 0.0000, 0.0000, 0.0000, 0.0000], + [-0.1667, 0.0000, -1.6094, 0.2803, 0.8]] ) + output = dbbox_transform2(boxlist1, boxlist2) + print 'output:', output + np.testing.assert_almost_equal(expected_targets, output, decimal=4) + + def test_dbbox_transform2_inv(self): + """ + similar to light-head rcnn, different classes share the same bbox regression now + :return: + """ + + boxlist1 = np.array([[1, 1, 10, 5, 0], + [1, 1, 10, 5, np.pi/10], + [1, 1, 10, 5, 0], + [30, 100, 60, 34, np.pi/2] + ]) + # the boxlist2(ground truths) are restricted to (0, 2 * pi) + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + [1, 1, 5, 8, np.pi/16 + np.pi/10], + [1, 1, 10, 5, 0], + [30, 90, 12, 45, np.pi/10] + ]) + expected_targets = dbbox_transform2(boxlist1, boxlist2) + expected_boxlist2 = dbbox_transform2_inv(boxlist1, expected_targets) + + np.testing.assert_almost_equal(expected_boxlist2, boxlist2) + + def test_rotation_invariant_encoding(self): + boxlist1 = np.array([[1, 1, 10, 5, 0], + [1, 1, 10, 5, np.pi/10], + [1, 1, 10, 5, 0], + [30, 90.8, 60, 34, np.pi/2] + ]) + # the boxlist2(ground truths) are restricted to (0, 2 * pi) + # TODO: add corner case + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + [1, 1, 5, 8, np.pi/16 + np.pi/10], + [1, 1, 10, 5, 0], + [30, 90.8, 12, 45, np.pi/10] + ]) + boxlist3 = copy.deepcopy(boxlist1) + boxlist3[:, 4] = boxlist1[:, 4] + np.pi/10. + boxlist4 = copy.deepcopy(boxlist2) + boxlist4[:, 4] = boxlist2[:, 4] + np.pi/10. 
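+        # dbbox_transform2 expresses (dx, dy) in the RoI's own rotated frame and encodes the angle
+        # target relative to the RoI angle, so shifting the RoIs and the ground truths by the same
+        # angle must leave the targets unchanged. This test shifts only the angles;
+        # test_rotation_invariant_encoding2 below additionally rotates the centres about the origin.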
+ targets1 = dbbox_transform2(boxlist1, boxlist2) + targets2 = dbbox_transform2(boxlist3, boxlist4) + np.testing.assert_almost_equal(targets1, targets2, decimal=4) + + def test_rotation_invariant_encoding2(self): + + boxlist1 = np.array([[1, 1, 10, 5, 0], + [2, 4, 7, 8, np.pi/10.]]) + + boxlist2 = np.array([[3, 4, 9.2, 4.8, np.pi/6.0], + [2.3, 4.3, 8, 9, np.pi/9.0]]) + + + boxlist3 = copy.deepcopy(boxlist1) + boxlist4 = copy.deepcopy(boxlist2) + + angle = np.random.rand() + boxlist3[:, 4] = boxlist1[:, 4] + angle + boxlist4[:, 4] = boxlist4[:, 4] + angle + + # trans_matrix = np.array([[np.cos(angle), -np.sin(angle)], + # [np.sin(angle), np.cos(angle)]]) + + boxlist3[:, 0] = np.cos(angle) * boxlist1[:, 0] - np.sin(angle) * boxlist1[:, 1] + boxlist3[:, 1] = np.sin(angle) * boxlist1[:, 0] + np.cos(angle) * boxlist1[:, 1] + + boxlist4[:, 0] = np.cos(angle) * boxlist2[:, 0] - np.sin(angle) * boxlist2[:, 1] + boxlist4[:, 1] = np.sin(angle) * boxlist2[:, 0] + np.cos(angle) * boxlist2[:, 1] + + targets1 = dbbox_transform2(boxlist1, boxlist2) + targets2 = dbbox_transform2(boxlist3, boxlist4) + + np.testing.assert_almost_equal(targets1, targets2, decimal=4) + @unittest.skip("The test need to be reconstruct") + def test_dbbox_transform2_warp(self): + # boxlist1 = np.array([[1, 1, 10, 5, 0], + # [1, 1, 10, 5, np.pi/10], + # [1, 1, 10, 5, 0], + # [30, 100, 60, 34, np.pi/2] + # ]) + # TODO: add corner cases + boxlist1 = np.array([[-1, -2.5, 3, 4.5], + [24.5, 68.0, 35.5, 112.0], + # [2, 4, 24.6, 8], + [-9.8, 0.5, 13.8, 7.5] + ]) # (xmin, ymin, xmax, ymax) + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + # [1, 1, 5, 8, np.pi/16 + np.pi/10] + # [1, 1, 10, 5, 0], + [30, 90, 12, 45, np.pi/10], + [5, 4, 26, 8.2, np.pi/2 + np.pi/10.] + ]) + polys2 = RotBox2Polys(boxlist2) + targets = dbbox_transform2_warp(boxlist1, polys2) + expected_targets = np.array([[0, 0, 0, 0, 0.78125], + [0, 0, 0, 0, 0.8], + [0., -3/8., np.log(26/24.6), np.log(8.2/8.), np.pi/10./(2 * np.pi)]]) + np.testing.assert_almost_equal(expected_targets, targets, decimal=4) + + @unittest.skip("The test need to be reconstruct") + def test_dbbox_transform2_inv_warp_multiclass(self): + # test 2 classes here + ext_rois = np.array([[-1, -2.5, 3, 4.5], + [24.5, 68.0, 35.5, 112.0], + # [2, 4, 24.6, 8], + [-9.8, 0.5, 13.8, 7.5] + ]) # (xmin, ymin, xmax, ymax) + expected_results = np.array([[1, 1, 5, 8, np.pi/2, 1, 1, 5, 8, np.pi/16], # (x_ctr, y_ctr, w, h, theta) + [30, 90, 12, 45, np.pi/10, 30, 90, 12, 45, np.pi/2], + # [2, 4, 24.6, 8], + [2, 4, 24.6, 8, np.pi/2, 5, 4, 26, 8.2, np.pi/2 + np.pi/10.] 
+ ]) + targets = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0.78125], + [0, 0, 0, 0, 0.8, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, -3/8., np.log(26/24.6), np.log(8.2/8.), np.pi/10./(2 * np.pi)]]) + outputs = dbbox_transform2_inv_warp(ext_rois, targets) + # print 'outputs:', outputs + np.testing.assert_almost_equal(outputs, expected_results, decimal=4) + + + def test_dbbox_transform2_encode_decode(self): + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + [1, 1, 5, 8, np.pi/16 + np.pi/10], + [1, 1, 10, 5, 0], + [30, 90, 12, 45, np.pi/10] + ]) + + polys2 = RotBox2Polys(boxlist2) + ex_rois = poly2bbox(polys2) + targets = dbbox_transform2_warp(ex_rois, polys2) + outputs = dbbox_transform2_inv_warp(ex_rois, targets) + np.testing.assert_almost_equal(outputs, boxlist2, decimal=5) + + def test_RotBox2Polys(self): + rotboxes = np.array([[1, 1, 5, 8, np.pi/2, 1, 1, 5, 8, np.pi/16], # (x_ctr, y_ctr, w, h, theta) + [30, 90, 12, 45, np.pi/10, 30, 90, 12, 45, np.pi/2], + # [2, 4, 24.6, 8], + [2, 4, 24.6, 8, np.pi/2, 5, 4, 26, 8.2, np.pi/2 + np.pi/10.] + ]) + expected_polys = np.concatenate((RotBox2Polys(rotboxes[:, 0:5]), RotBox2Polys(rotboxes[:, 5:10])), axis=1) + polys = RotBox2Polys_multi_class(rotboxes) + # print 'polys:', polys + + self.assertTrue(polys.shape == (3, 16)) + np.testing.assert_almost_equal(expected_polys, polys) + + def test_polys2xyhs(self): + # polys = + pass + @unittest.skip("The test can not be passed") + def test_xyhs2polys(self): + # xyh format: (x1, y1, x2, y2, h), x1, y1 is the first point, x2, y2 is the second point. h is the height of a bounding box + xyhs = np.array([[2, 1, 6, 3, 3], + [1.4, 8, 4.2, 6.3, 7.4]]) + + polys = xyhs2polys(xyhs) + inverse_xyhs = polys2xyhs(polys) + inverse_polys = xyhs2polys(inverse_xyhs) + np.testing.assert_almost_equal(xyhs, inverse_xyhs, decimal=6) + np.testing.assert_almost_equal(polys, inverse_polys, decimal=6) + + def test_dbbox_transform3(self): + + boxlist1 = np.array([[np.sqrt(3), 1, 2 * np.sqrt(3), 2, 2], + [np.sqrt(3), 1, 1 + np.sqrt(3), 1 + np.sqrt(3), 2]]) + boxlist2 = np.array([[(3 * np.sqrt(3)-1)/2., (3 + np.sqrt(3))/2., (4 * np.sqrt(3) - 1)/2., (4 + np.sqrt(3))/2., 1], + [(np.sqrt(3) + 1)/2., (np.sqrt(3) + 3)/2., (np.sqrt(3) + 2)/2., (3 + 2 * np.sqrt(3))/2., 1]]) + + targets = dbboxtransform3(boxlist1, boxlist2) + + trans_boxlist1 = np.array([[0, 0, 2, 0, 2]]) + + expected_targets = np.array([[0.5, 0.5, 0, 0.5, np.log(1/2.)], + [0.5, 0.5, 0, 0.5, np.log(1 / 2.)]]) + np.testing.assert_almost_equal(expected_targets, targets) + + def test_dbbox_transform3_inv_warp(self): + ext_rois = np.array([[2, 5, 6, 10.3]]) + targets = np.array([[1/4., 0.2/5.3, 0, 0.2/5.3, np.log(5.1/5.3)]]) + outputs = dbboxtransform3_inv_warp(ext_rois, targets) + expected_results = np.array([[3, 5.2, 6, 5.2, 5.1]]) + # pdb.set_trace() + np.testing.assert_almost_equal(outputs, expected_results) + def test_dbbox_transform3_warp_encode_decode(self): + boxlist1 = np.array([[-1, -2.5, 3, 4.5], + [24.5, 68.0, 35.5, 112.0], + # [2, 4, 24.6, 8], + [-9.8, 0.5, 13.8, 7.5] + ]) # (xmin, ymin, xmax, ymax) + boxlist2 = np.array([[1, 1, 5, 8, np.pi/16], + # [1, 1, 5, 8, np.pi/16 + np.pi/10] + # [1, 1, 10, 5, 0], + [30, 90, 12, 45, np.pi/10], + [5, 4, 26, 8.2, np.pi/2 + np.pi/10.] 
+ ]) + polys2 = RotBox2Polys(boxlist2) + gt_xyhs = polys2xyhs(polys2) + targets = dbboxtransform3_warp(boxlist1, polys2) + # expected_targets = np.array([[0, 0, 0, 0, 0.78125], + # [0, 0, 0, 0, 0.8], + # [0., -3/8., np.log(26/24.6), np.log(8.2/8.), np.pi/10./(2 * np.pi)]]) + targets_inverse = dbboxtransform3_inv_warp(boxlist1, targets) + np.testing.assert_almost_equal(gt_xyhs, targets_inverse) + + def test_dbbox_transform3_rotation_invariant(self): + boxlist1 = np.array([[1000, 1000.8, 8000.767, 12500, np.pi/6.], + [24.5, 68.0, 23, 89.2, np.pi], + # [2, 4, 24.6, 8], + # [-9.8, 0.5, 13.8, 7.5, -np.pi/10.] + ]) # (xmin, ymin, xmax, ymax) + boxlist2 = np.array([[1000, 1000.8, 5000.767, 8000, np.pi/16], + # [1, 1, 5, 8, np.pi/16 + np.pi/10] + # [1, 1, 10, 5, 0], + [24.5, 68.0, 12, 45.5, np.pi/10], + # [5, 4, 26, 8.2, np.pi/2 + np.pi/10.] + ]) + polys1 = RotBox2Polys(boxlist1) + polys2 = RotBox2Polys(boxlist2) + xyhs1 = polys2xyhs(polys1) + xyhs2 = polys2xyhs(polys2) + + randangle = np.random.rand() + boxlist3 = copy.deepcopy(boxlist1) + boxlist3[:, 4] = boxlist3[:, 4] + randangle + polys3 = RotBox2Polys(boxlist3) + xyhs3 = polys2xyhs(polys3) + + boxlist4 = copy.deepcopy(boxlist2) + boxlist4[:, 4] = boxlist4[:, 4] + randangle + polys4 = RotBox2Polys(boxlist4) + xyhs4 = polys2xyhs(polys4) + + targets1 = dbboxtransform3(xyhs1, xyhs2) + targets2 = dbboxtransform3(xyhs3, xyhs4) + + np.testing.assert_almost_equal(targets1, targets2, decimal=6) + def test_dbbox_transform3_inv_multi_class(self): + pass + def test_dbbox_transform3_inv_warp_multi_class(self): + """ + This is a multi-class test + :return: + """ + ext_rois = np.array([[2, 5, 6, 10.3], + ]) + targets = np.array([[0, 0, 0, 0, 0, 1/4., 0.2/5.3, 0, 0.2/5.3, np.log(5.1/5.3), 0, 0, 0, 0, 0] + ]) + outputs = dbboxtransform3_inv_warp(ext_rois, targets) + expected_results = np.array([[2, 5, 6, 5, 5.3, 3, 5.2, 6, 5.2, 5.1, 2, 5, 6, 5, 5.3]]) + np.testing.assert_almost_equal(outputs, expected_results) + + def test_bbox_transformxyh(self): + ext_rois = np.array([[-1, -2.5, 3, 4.5], + [24.5, 68.0, 35.5, 112.0], + [-9.8, 0.5, 13.8, 7.5]]) + + + def test_polygonToRotRectangle_batch(self): + polygons = np.array([[0, 0, 3, 0, 3, 3, 0, 3]]) + rotboxs = polygonToRotRectangle_batch(polygons) + print 'rotboxs:', rotboxs + def test_get_best_begin_point_wrapp(self): + print 'test get best begin point' + input = [7, 5, 3, 6, 1, 2, 5, 1] + expected_output = [1, 2, 5, 1, 7, 5, 3, 6] + output = get_best_begin_point_wrapp(input) + np.testing.assert_almost_equal(np.array(output), np.array(expected_output)) + + def test_xyhs2polys_muli_class(self): + xyhs = np.array([[0, 0, 2, 0, 3, 3, 4.3, 6, 7, 8.4], + [2, 0, 2, 3, 2, 4.4, 5.5, 7.6, 8.2, 9]]) + polys = xyhs2polys_muli_class(xyhs) + expected_polys = np.concatenate((xyhs2polys(xyhs[:, 0:5]), xyhs2polys(xyhs[:, 5:10])), axis=1) + self.assertTrue(polys.shape == (2, 16)) + + np.testing.assert_almost_equal(polys, expected_polys) + + def test_choose_best_Rroi_batch(self): + # (x_ctr, y_ctr, w, h, angle) + Rrois = np.array([[3, 4, 2, 10, np.pi/6.], + [3, 4, 10, 2, np.pi/6. + np.pi/2.], + [3, 4, 2, 10, np.pi/6. + np.pi], + [3, 4, 10, 2, np.pi/6. + np.pi + np.pi/2.]]) + + results = choose_best_Rroi_batch(Rrois) + expected_results = np.array([[3, 4, 10, 2, np.pi/6. + np.pi/2.], + [3, 4, 10, 2, np.pi / 6. + np.pi / 2.], + [3, 4, 10, 2, np.pi / 6. + np.pi / 2.], + [3, 4, 10, 2, np.pi / 6. 
+ np.pi / 2.]]) + + np.testing.assert_almost_equal(results, expected_results, decimal=6) + + def test_choose_best_match_batch(self): + # (x_ctr, y_ctr, w, h, angle) + Rrois = np.array([[3, 4, 2, 10, np.pi/6.], + [3, 4, 10, 2, np.pi/6. + np.pi/2.], + [3, 4, 2, 10, np.pi / 6.], + [3, 4, 10, 2, np.pi / 6. + np.pi / 2.] + ]) + + gt_rois = np.array([[3, 4, 10, 2, np.pi/6. + np.pi/2.], + [3, 4, 10, 2, np.pi / 6. + np.pi / 2.], + [3, 4, 2, 10, np.pi/6. + np.pi], + [3, 4, 10, 2, np.pi/6. + np.pi * 3 / 2.] + ]) + results = choose_best_match_batch(Rrois, gt_rois) + expected_results = np.array([[3, 4, 2, 10, np.pi/6.], + [3, 4, 10, 2, np.pi/6. + np.pi/2.], + [3, 4, 2, 10, np.pi / 6.], + [3, 4, 10, 2, np.pi / 6. + np.pi / 2.] + ]) + + np.testing.assert_almost_equal(results, expected_results, decimal=6) + + def test_dbbox_transform2_new(self): + boxlist1 = np.array([[1, 1, 10, 5, 0], + [1, 1, 10, 5, 0], + [1, 1, 10, 5, np.pi - np.pi/10.], + [1, 1, 10, 5, np.pi - np.pi/10.] + ]) + boxlist2 = np.array([[1, 1, 10, 5, -np.pi/10.], + [1, 1, 10, 5, np.pi/10], + [1, 1, 10, 5, np.pi - np.pi/10. - np.pi/20.], + [1, 1, 10, 5, np.pi - np.pi/10. - np.pi/20. + 10 * np.pi] + ]) + norm = np.pi / 2. + expected_results = np.array([[0, 0, 0, 0, -np.pi/10./norm], + [0, 0, 0, 0, np.pi/10./norm], + [0, 0, 0, 0, -np.pi/20./norm], + [0, 0, 0, 0, -np.pi/20./norm]]) + + results = dbbox_transform2_new(boxlist1, boxlist2) + np.testing.assert_almost_equal(results, expected_results) + + def test_dbbox_transform2_best_match_warp(self): + boxlist1 = np.array([[1, 1, 10, 5, 0], + [1, 1, 10, 5, 0], + [1, 1, 10, 5, np.pi - np.pi/10.], + [1, 1, 10, 5, np.pi - np.pi/10.] + ]) + boxlist2 = np.array([[1, 1, 5, 10, -np.pi/10. + np.pi/2.], + [1, 1, 10, 5, np.pi/10 + np.pi], + [1, 1, 5, 10, np.pi - np.pi/10. - np.pi/20. - np.pi/2.], + [1, 1, 10, 5, np.pi - np.pi/10. - np.pi/20. + 10 * np.pi] + ]) + norm = np.pi / 2. 
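+        # Worked example (my reading of the fixtures here, not part of the
+        # original test): with norm = pi/2, the gt box
+        # [1, 1, 5, 10, -pi/10 + pi/2] is first re-expressed in the
+        # proposal's [1, 1, 10, 5, 0] representation by swapping w/h and
+        # subtracting pi/2, giving [1, 1, 10, 5, -pi/10]; the encoded angle
+        # target is then (-pi/10 - 0) / (pi/2) = -0.2, which matches row 0
+        # of expected_results below.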
+ expected_results = np.array([[0, 0, 0, 0, -np.pi/10./norm], + [0, 0, 0, 0, np.pi/10./norm], + [0, 0, 0, 0, -np.pi/20./norm], + [0, 0, 0, 0, -np.pi/20./norm]]) + + results = dbbox_transform2_best_match_warp(boxlist1, boxlist2) + print 'results: ', results + + old_resutls = dbbox_transform2(boxlist1, boxlist2) + print 'old_resutls: ', old_resutls + + np.testing.assert_almost_equal(results, expected_results) + + print 'test decode' + predict1 = dbbox_transform2_inv_new(boxlist1, results, norm) + + predict2 = dbbox_transform2_inv(boxlist1, old_resutls) + + diff = dbbox_transform2_best_match_warp(predict1, predict2) + print 'predict1:', predict1 + print 'predict2:', predict2 + print 'diff:', diff + # self.assertTrue(np.all(diff == 0)) + +if __name__ == '__main__': + unittest.main() \ No newline at end of file diff --git a/lib/bbox/setup_linux.py b/lib/bbox/setup_linux.py new file mode 100644 index 0000000..f1cd677 --- /dev/null +++ b/lib/bbox/setup_linux.py @@ -0,0 +1,86 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# py-faster-rcnn +# Copyright (c) 2016 by Contributors +# Licence under The MIT License +# py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +import os +from os.path import join as pjoin +from setuptools import setup +from distutils.extension import Extension +from Cython.Distutils import build_ext +import numpy as np + +# Obtain the numpy include directory. This logic works across numpy versions. +try: + numpy_include = np.get_include() +except AttributeError: + numpy_include = np.get_numpy_include() + + +def customize_compiler_for_nvcc(self): + """inject deep into distutils to customize how the dispatch + to gcc/nvcc works. + If you subclass UnixCCompiler, it's not trivial to get your subclass + injected in, and still have the right customizations (i.e. + distutils.sysconfig.customize_compiler) run on it. So instead of going + the OO route, I have this. Note, it's kindof like a wierd functional + subclassing going on.""" + + # tell the compiler it can processes .cu + self.src_extensions.append('.cu') + + # save references to the default compiler_so and _comple methods + default_compiler_so = self.compiler_so + super = self._compile + + # now redefine the _compile method. This gets executed for each + # object but distutils doesn't have the ability to change compilers + # based on source extension: we add it. 
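+    # Illustrative sketch (module names and flags below are assumptions, not
+    # taken from this file): when an Extension mixes host and .cu sources,
+    # extra_compile_args is expected to be a dict keyed by compiler, which
+    # the patched _compile defined next reads as extra_postargs['nvcc'] /
+    # extra_postargs['gcc'], e.g.
+    #
+    #   Extension('gpu_module', ['wrapper.pyx', 'kernel.cu'],
+    #             extra_compile_args={'gcc': ['-Wno-unused-function'],
+    #                                 'nvcc': ['-arch=sm_52', '-c',
+    #                                          '--compiler-options', "'-fPIC'"]},
+    #             include_dirs=[numpy_include])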
+ def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts): + if os.path.splitext(src)[1] == '.cu': + # use the cuda for .cu files + self.set_executable('compiler_so', CUDA['nvcc']) + # use only a subset of the extra_postargs, which are 1-1 translated + # from the extra_compile_args in the Extension class + postargs = extra_postargs['nvcc'] + else: + postargs = extra_postargs['gcc'] + + super(obj, src, ext, cc_args, postargs, pp_opts) + # reset the default compiler_so, which we might have changed for cuda + self.compiler_so = default_compiler_so + + # inject our redefined _compile method into the class + self._compile = _compile + + +# run the customize_compiler +class custom_build_ext(build_ext): + def build_extensions(self): + customize_compiler_for_nvcc(self.compiler) + build_ext.build_extensions(self) + + +ext_modules = [ + Extension( + "bbox", + ["bbox.pyx"], + extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]}, + include_dirs=[numpy_include] + ), +] + +setup( + name='bbox_cython', + ext_modules=ext_modules, + # inject our custom trigger + cmdclass={'build_ext': custom_build_ext}, +) diff --git a/lib/bbox/setup_windows.py b/lib/bbox/setup_windows.py new file mode 100644 index 0000000..8e85ab2 --- /dev/null +++ b/lib/bbox/setup_windows.py @@ -0,0 +1,51 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# py-faster-rcnn +# Copyright (c) 2016 by Contributors +# Licence under The MIT License +# py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +import numpy as np +import os +from os.path import join as pjoin +#from distutils.core import setup +from setuptools import setup +from distutils.extension import Extension +from Cython.Distutils import build_ext +import subprocess + +#change for windows, by MrX +nvcc_bin = 'nvcc.exe' +lib_dir = 'lib/x64' + +import distutils.msvc9compiler +distutils.msvc9compiler.VERSION = 14.0 + +# Obtain the numpy include directory. This logic works across numpy versions. +try: + numpy_include = np.get_include() +except AttributeError: + numpy_include = np.get_numpy_include() + +ext_modules = [ + # unix _compile: obj, src, ext, cc_args, extra_postargs, pp_opts + Extension( + "bbox", + sources=["bbox.pyx"], + extra_compile_args={}, + include_dirs = [numpy_include] + ), +] + +setup( + name='fast_rcnn', + ext_modules=ext_modules, + # inject our custom trigger + cmdclass={'build_ext': build_ext}, +) diff --git a/lib/dataset/DOTA.py b/lib/dataset/DOTA.py new file mode 100644 index 0000000..74a3bca --- /dev/null +++ b/lib/dataset/DOTA.py @@ -0,0 +1,1021 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The Apache-2.0 License [see LICENSE for details] +# Modified by Haozhi Qi, from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +""" +Pascal VOC database +This class loads ground truth notations from standard Pascal VOC XML data formats +and transform them into IMDB format. Selective search is used for proposals, see roidb +function. Results are written as the Pascal VOC format. Evaluation is based on mAP +criterion. 
+""" + +import cPickle +import os +import numpy as np + +from imdb import IMDB +import cv2 +import zipfile +from bbox.bbox_transform import bbox_overlaps, bbox_transform, get_best_begin_point_wrapp +from PIL import Image +import codecs +import sys +# TODO: change it +sys.path.insert(0, r'../../') +# this_dir = os.path.dirname(__file__) +# sys.path.insert(0, os.path.join(this_dir, '..', '..', 'fpn')) +import pdb + +# pdb.set_trace() +# from dota_kit.ResultMerge import * +from dota_kit.ResultMerge_multi_process import * + +# the target of this class is to get DOTA roidb +class DOTA(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train, test etc. + :param root_path: 'selective_search_data' and 'cache' + :param data_path: data and results + :return: imdb object + """ + self.image_set = image_set + super(DOTA, self).__init__('DOTA', self.image_set, root_path, data_path, result_path) # set self.name + + self.root_path = root_path + self.data_path = data_path + + self.classes = ['__background__', # always index 0 + 'plane', 'baseball-diamond', + 'bridge', 'ground-track-field', + 'small-vehicle', 'large-vehicle', + 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'harbor', 'swimming-pool', + 'helicopter'] + self.num_classes = len(self.classes) + ## index changed to be basename + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file, 'r') as f: + lines = f.readlines() + image_lists = [line.strip() for line in lines] + #image_lists = [os.path.join(self.data_path, 'images', line.strip() + '.jpg') for line in lines] + return image_lists + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param image_name: image name in the data dir + :return: full path of this image + """ + # hint: self.image_set means 'train' or 'test' + # TODO: when data ready, the entrance here should be changed + # Now, it has been changed + # image_file = os.path.join(self.data_path, self.image_set, index) + image_file = os.path.join(self.data_path, 'images', index + '.png') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def gt_roidb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + gt_roidb = [self.load_annotation(index) for index in self.image_set_index] + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def load_annotation(self, 
index): + """ + for a given index, load image and bounding boxes info from XML file + :param image_name: image name in the data dir + :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + # import xml.etree.ElementTree as ET + roi_rec = dict() + roi_rec['image'] = self.image_path_from_index(index) + # roi_rec['image_name'] = 'img_' + index + '.jpg' + + # filename = os.path.join(self.data_path, 'labelTxt', os.path.splitext(os.path.basename(index))[0] + '.txt') + img_path = self.image_path_from_index(index) + w, h = Image.open(img_path).size + roi_rec['height'] = float(h) + roi_rec['width'] = float(w) + + #f = codecs.open(filename, 'r', 'utf-16') + if self.image_set == 'train': + filename = os.path.join(self.data_path, 'labelTxt', index + '.txt') + f = codecs.open(filename, 'r') + objs = f.readlines() + objs = [obj.strip().split(' ') for obj in objs] + # objs = tree.findall('object') + if not self.config['use_diff']: + non_diff_objs = [obj for obj in objs if obj[9] != '1'] + objs = non_diff_objs + num_objs = len(objs) + + boxes = np.zeros((num_objs, 4), dtype=np.int16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + + class_to_index = dict(zip(self.classes, range(self.num_classes))) + # Load object bounding boxes into a data frame. + for ix, obj in enumerate(objs): + bbox = obj + # Make pixel indexes 0-based + x1 = float(bbox[0]) - 1 + y1 = float(bbox[1]) - 1 + x2 = float(bbox[2]) - 1 + y2 = float(bbox[3]) - 1 + x3 = float(bbox[4]) - 1 + y3 = float(bbox[5]) - 1 + x4 = float(bbox[6]) - 1 + y4 = float(bbox[7]) - 1 + xmin = max(min(x1, x2, x3, x4), 0) + xmax = max(x1, x2, x3, x4) + ymin = max(min(y1, y2, y3, y4), 0) + ymax = max(y1, y2, y3, y4) + + + ## restric to (0, w) (0, h) + xmin = min(max(xmin, 0), w - 1) + xmax = min(max(xmax, 0), w - 1) + ymin = min(max(ymin, 0), h - 1) + ymax = min(max(ymax, 0), h - 1) + cls = class_to_index[obj[8].lower().strip()] + boxes[ix, :] = [xmin, ymin, xmax, ymax] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def evaluate_detections(self, detections): + """ + :param detections: [cls][image] = N x [x1, y1, x2, y2, x3, y3, x4, y4, score] + :return: + """ + detection_results_path = os.path.join(self.result_path, 'test_results') + info = '' + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + self.write_DOTA_results(detections, threshold=0.0) + return info + + def write_DOTA_results(self, all_boxes, threshold=0.2): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + path = os.path.join(self.result_path, 'test_results') + if os.path.isdir(path): + print "delete original test results files!" 
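+            # Note: the cleanup below shells out to `rm -r`, which assumes a
+            # Unix environment; a portable alternative (not used here) would
+            # be shutil.rmtree(path, ignore_errors=True) after
+            # `import shutil`.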
+ os.system("rm -r {}".format(path)) + os.mkdir(path) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + for im_ind, index in enumerate(self.image_set_index): + dets = all_boxes[cls_ind][im_ind] + # if dets.shape[0] == 0: + # print "no detection results in {}".format(index) + # f = open(os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + f = open(os.path.join(self.result_path, 'test_results', '{}'.format(index + '.txt')), 'a') + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + if dets[k, 4] <= threshold: + continue + f.write('{} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]),dets[k, 4],self.classes[cls_ind])) + # f.write('{} {} {} {} {} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), + # int(dets[k, 2]), int(dets[k, 1]), + # int(dets[k, 2]), int(dets[k, 3]), + # int(dets[k, 0]), int(dets[k, 3]), + # dets[k, 4], self.classes[cls_ind])) + +# DOTA_oriented contains 8 coordinates, so we have to do data dealing +class DOTA_oriented(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train, test etc. + :param root_path: 'selective_search_data' and 'cache' + :param data_path: data and results + :return: imdb object + """ + self.image_set = image_set + super(DOTA_oriented, self).__init__('DOTA_oriented', self.image_set, root_path, data_path, result_path) # set self.name + + self.root_path = root_path + self.data_path = data_path + + self.classes = ['__background__', # always index 0 + 'plane', 'baseball-diamond', + 'bridge', 'ground-track-field', + 'small-vehicle', 'large-vehicle', + 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'harbor', 'swimming-pool', + 'helicopter'] + ## check it, if it is better for baseball-diamond + self.angle_agnostic_classes = ['bridge', + 'ground-track-field', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'swimming-pool'] + self.num_classes = len(self.classes) + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file, 'r') as f: + lines = f.readlines() + image_lists = [line.strip() for line in lines] + #image_lists = [os.path.join(self.data_path, 'images', line.strip() + '.jpg') for line in lines] + return image_lists + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param image_name: image name in the data dir + :return: full path of this image + """ + # hint: self.image_set means 'train' or 'test' + # TODO: when data ready, the entrance here should be changed + # image_file = os.path.join(self.data_path, self.image_set, index) + image_file = os.path.join(self.data_path, 'images', index + '.png') + assert os.path.exists(image_file), 'Path does not 
exist: {}'.format(image_file) + return image_file + + def gt_roidb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + # gt_roidb = [self.load_annotation(index) for index in self.image_set_index] + + # TODO: for debug + gt_roidb = [] + count = 0 + for index in self.image_set_index: + count += 1 + print count, '/', len(self.image_set_index) + gt_roidb.append(self.load_annotation(index)) + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def load_annotation(self, index): + """ + for a given index, load image and bounding boxes info from XML file + :param image_name: image name in the data dir + :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + # import xml.etree.ElementTree as ET + roi_rec = dict() + roi_rec['image'] = self.image_path_from_index(index) + # roi_rec['image_name'] = 'img_' + index + '.jpg' + + # filename = os.path.join(self.data_path, 'labelTxt', os.path.splitext(os.path.basename(index))[0] + '.txt') + # tree = ET.parse(filename) + img_path = self.image_path_from_index(index) + w, h = Image.open(img_path).size + # size = tree.find('size') + roi_rec['height'] = float(h) + roi_rec['width'] = float(w) + + valid_objs = [] + # f = codecs.open(filename, 'r', 'utf-16') + if self.image_set == 'train': + filename = os.path.join(self.data_path, 'labelTxt', index + '.txt') + f = codecs.open(filename, 'r') + objs = f.readlines() + objs = [obj.strip().split(' ') for obj in objs] + # objs = tree.findall('object') + # if not self.config['use_diff']: + # non_diff_objs = [obj for obj in objs if obj[9] != '1'] + # objs = non_diff_objs + if not self.config['use_diff']: + non_diff_objs = [obj for obj in objs if obj[9] == '0'] + objs = non_diff_objs + # Load object bounding boxes into a data frame. 
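+            # Annotation layout as consumed below: each labelTxt line holds
+            # the four polygon corners, then the category name and a
+            # difficulty flag, i.e.
+            #   x1 y1 x2 y2 x3 y3 x4 y4 category difficult
+            # (example with made-up values:
+            #  "377.0 1032.0 453.0 1029.0 456.0 1059.0 379.0 1061.0 small-vehicle 0")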
+ for ix, obj in enumerate(objs): + bbox = obj + # Make pixel indexes 0-based + # x1 = float(bbox[0]) - 1 + # y1 = float(bbox[1]) - 1 + # x2 = float(bbox[2]) - 1 + # y2 = float(bbox[3]) - 1 + # x3 = float(bbox[4]) - 1 + # y3 = float(bbox[5]) - 1 + # x4 = float(bbox[6]) - 1 + # y4 = float(bbox[7]) - 1 + + x1 = min(max(float(bbox[0]), 0), w - 1) + y1 = min(max(float(bbox[1]), 0), h - 1) + x2 = min(max(float(bbox[2]), 0), w - 1) + y2 = min(max(float(bbox[3]), 0), h - 1) + x3 = min(max(float(bbox[4]), 0), w - 1) + y3 = min(max(float(bbox[5]), 0), h - 1) + x4 = min(max(float(bbox[6]), 0), w - 1) + y4 = min(max(float(bbox[7]), 0), h - 1) + # xmin = min(x1, x2, x3, x4) + # xmax = max(x1, x2, x3, x4) + # ymin = min(y1, y2, y3, y4) + # ymax = max(y1, y2, y3, y4) + + # TODO: filter small instances + xmin = max(min(x1, x2, x3, x4), 0) + xmax = max(x1, x2, x3, x4) + ymin = max(min(y1, y2, y3, y4), 0) + ymax = max(y1, y2, y3, y4) + + # if xmax > xmin and ymax > ymin: + # obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + # valid_objs.append(obj) + + if ((xmax - xmin) > 10) and ((ymax - ymin) > 10): + obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + valid_objs.append(obj) + + objs = valid_objs + num_objs = len(objs) + boxes = np.zeros((num_objs, 8), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + class_to_index = dict(zip(self.classes, range(self.num_classes))) + # TODO: test it + for ix, obj in enumerate(objs): + cls = class_to_index[obj[8].lower().strip()] + if obj[8].lower().strip() in self.angle_agnostic_classes: + # if angle_agnostic, use choose_best_point, + # TODO: make the long side and short side check, choose the short side's top left as the first point + boxes[ix, :] = get_best_begin_point_wrapp(obj[:8]) + else: + boxes[ix, :] = obj[:8] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def evaluate_detections(self, detections, ignore_cache): + """ + :param detections: [cls][image] = N x [x1, y1, x2, y2, x3, y3, x4, y4, score] + :return: + """ + detection_results_path = os.path.join(self.result_path, 'test_results') + info = '' + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + + if ignore_cache: + self.write_DOTA_results(detections, threshold=0.001) + # pdb.set_trace() + self.write_DOTA_results_comp4(detections, threshold=0.001) + + return info + + def draw_gt_and_detections(self, detections, thresh=0.2): + # gt_folder = os.path.join(self.result_path, 'gt_on_image') + det_folder = os.path.join(self.result_path, 'det_on_image') + # if not os.path.isdir(gt_folder): + # os.mkdir(gt_folder) + self.write_DOTA_results(detections, threshold=0.1) + if not os.path.isdir(det_folder): + os.mkdir(det_folder) + for im_ind, index in enumerate(self.image_set_index): + img_path = self.image_path_from_index(index) + gt_db = self.load_annotation(index) + gt_boxes = gt_db['boxes'] + det_path = os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')) + f = open(det_path, 'r') + det_boxes_results = f.readlines() + det_boxes = [] + for result in det_boxes_results: + result = result.strip().split(',') + det_boxes.append([int(result[0]), int(result[1]), 
int(result[2]),int(result[3]),int(result[4]),int(result[5]),int(result[6]),int(result[7]), + float(result[8]),result[9]]) + # det_boxes = detections[cls_ind][im_ind] + det_boxes = np.array(det_boxes) + img = cv2.imread(img_path) + img_height, img_width = img.shape[0], img.shape[1] + # original_img = img.copy() + for k in range(gt_boxes.shape[0]): + bbox = gt_boxes[k, :8] + bbox = map(int, bbox) + color = (0, 255, 0) + xmax = max(bbox[0], bbox[2], bbox[4], bbox[6]) + ymax = max(bbox[1], bbox[3], bbox[5], bbox[7]) + if xmax > img_width: + print "extreme xmax", xmax + if ymax > img_height: + print "extreme ymax", ymax + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + # cv2.imwrite(os.path.join(gt_folder, 'img_{}.jpg'.format(index)), img) + # img = original_img + for k in range(det_boxes.shape[0]): + bbox = det_boxes[k, :8] + score = det_boxes[k, 8] + cls = det_boxes[k, 9] + if score < thresh: + continue + bbox = map(int, bbox) + color = (0, 255, 255) + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + cv2.putText(img, '{} {}'.format(cls, score), (bbox[0], bbox[1] + 10), + color=(255, 255, 255), fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + print os.path.join(det_folder, os.path.basename(index)) + cv2.imwrite(os.path.join(det_folder, os.path.basename(index)), img) + + + def validate_clockwise_points(self, points): + """ + Validates that the points that the 4 points that dlimite a polygon are in clockwise order. + """ + + if len(points) != 8: + raise Exception("Points list not valid." + str(len(points))) + + point = [ + [int(points[0]), int(points[1])], + [int(points[2]), int(points[3])], + [int(points[4]), int(points[5])], + [int(points[6]), int(points[7])] + ] + edge = [ + (point[1][0] - point[0][0]) * (point[1][1] + point[0][1]), + (point[2][0] - point[1][0]) * (point[2][1] + point[1][1]), + (point[3][0] - point[2][0]) * (point[3][1] + point[2][1]), + (point[0][0] - point[3][0]) * (point[0][1] + point[3][1]) + ] + + summatory = edge[0] + edge[1] + edge[2] + edge[3]; + if summatory > 0: + return False + else: + return True + # TODO: test it + def write_DOTA_results_comp4(self, all_boxes, threshold=0.002): + """ + write results file in comp4 format + :param all_boxes: boxes to be processed [bbox, confidence] + :param threshold: None + :return: + """ + path = os.path.join(self.result_path, 'Task1_results') + if os.path.isdir(path): + print "delete original test results files!" 
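+            # Each Task1_<class>.txt line emitted by the write further below
+            # follows the layout
+            #   <image_id> <score> <x1> <y1> <x2> <y2> <x3> <y3> <x4> <y4>
+            # i.e. score first, then the four (clockwise) polygon corners.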
+ os.system("rm -rf {}".format(path)) + os.mkdir(path) + # pdb.set_trace() + for cls_ind, cls in enumerate(self.classes): + # pdb.set_trace() + if cls == '__background__': + continue + if not os.path.exists(path): + os.mkdir(path) + with open(os.path.join(path, 'Task1_' + cls + '.txt'), 'w') as f_out: + for im_ind, index in enumerate(self.image_set_index): + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + xmin = min(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + xmax = max(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + ymin = min(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + ymax = max(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + w = xmax - xmin + h = ymax - ymin + if (w * h < 10 * 10): + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f_out.write('{} {} {} {} {} {} {} {} {} {}\n'.format(index, dets[k, 8], + int(dets[k, 0]), int(dets[k, 1]), + int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), + int(dets[k, 6]), + int(dets[k, 7]) + )) + else: + # print 'A detected box is anti-clockwise! Index:{}'.format(index) + # print dets[k, 0:8] + pass + # pdb.set_trace() + # TODO: change the hard code here + nms_path = path + '_0.1_nms' + if not os.path.exists(nms_path): + os.mkdir(nms_path) + mergebypoly(path, nms_path) + def write_DOTA_results(self, all_boxes, threshold=0.02): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + path = os.path.join(self.result_path, 'test_results') + if os.path.isdir(path): + print "delete original test results files!" + os.system("rm -r {}".format(path)) + os.mkdir(path) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + for im_ind, index in enumerate(self.image_set_index): + # dets = all_boxes[cls_ind][im_ind] + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + # if dets.shape[0] == 0: + # print "no detection results in {}".format(index) + if not os.path.exists(os.path.join(self.result_path, 'test_results')): + os.mkdir(os.path.join(self.result_path, 'test_results')) + # f = open(os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + f = open(os.path.join(self.result_path, 'test_results', '{}'.format(index + '.txt')), 'a') + + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f.write('{} {} {} {} {} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), int(dets[k, 6]), + int(dets[k, 7]), dets[k, 8], + self.classes[cls_ind])) + else: + # print 'A detected box is anti-clockwise! Index:{}'.format(index) + # print dets[k, 0:8] + pass + +class DOTA_oriented_v2(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train, test etc. 
+ :param root_path: 'selective_search_data' and 'cache' + :param data_path: data and results + :return: imdb object + """ + self.image_set = image_set + super(DOTA_oriented_v2, self).__init__('DOTA_oriented_v2', self.image_set, root_path, data_path, result_path) # set self.name + + self.root_path = root_path + self.data_path = data_path + + self.classes = ['__background__', # always index 0 + 'plane', 'baseball-diamond', + 'bridge', 'ground-track-field', + 'small-vehicle', 'large-vehicle', + 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'harbor', 'swimming-pool', + 'helicopter'] + ## check it, if it is better for baseball-diamond + self.angle_agnostic_classes = [ 'plane', 'baseball-diamond', + 'bridge', 'ground-track-field', + 'small-vehicle', 'large-vehicle', + 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'harbor', 'swimming-pool', + 'helicopter'] + self.num_classes = len(self.classes) + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file, 'r') as f: + lines = f.readlines() + image_lists = [line.strip() for line in lines] + #image_lists = [os.path.join(self.data_path, 'images', line.strip() + '.jpg') for line in lines] + return image_lists + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param image_name: image name in the data dir + :return: full path of this image + """ + # hint: self.image_set means 'train' or 'test' + # TODO: when data ready, the entrance here should be changed + # image_file = os.path.join(self.data_path, self.image_set, index) + image_file = os.path.join(self.data_path, 'images', index + '.png') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def gt_roidb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + # gt_roidb = [self.load_annotation(index) for index in self.image_set_index] + + # TODO: for debug + gt_roidb = [] + count = 0 + for index in self.image_set_index: + count += 1 + print count, '/', len(self.image_set_index) + gt_roidb.append(self.load_annotation(index)) + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def load_annotation(self, index): + """ + for a given index, load image and bounding boxes info from XML file + :param image_name: image name in the data dir + :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + # import xml.etree.ElementTree as ET + roi_rec = dict() + roi_rec['image'] = 
self.image_path_from_index(index) + # roi_rec['image_name'] = 'img_' + index + '.jpg' + + # filename = os.path.join(self.data_path, 'labelTxt', os.path.splitext(os.path.basename(index))[0] + '.txt') + # tree = ET.parse(filename) + img_path = self.image_path_from_index(index) + w, h = Image.open(img_path).size + # size = tree.find('size') + roi_rec['height'] = float(h) + roi_rec['width'] = float(w) + + valid_objs = [] + # f = codecs.open(filename, 'r', 'utf-16') + if self.image_set == 'train': + filename = os.path.join(self.data_path, 'labelTxt', index + '.txt') + f = codecs.open(filename, 'r') + objs = f.readlines() + objs = [obj.strip().split(' ') for obj in objs] + # objs = tree.findall('object') + # if not self.config['use_diff']: + # non_diff_objs = [obj for obj in objs if obj[9] != '1'] + # objs = non_diff_objs + if not self.config['use_diff']: + non_diff_objs = [obj for obj in objs if obj[9] == '0'] + objs = non_diff_objs + # Load object bounding boxes into a data frame. + for ix, obj in enumerate(objs): + bbox = obj + + + x1 = min(max(float(bbox[0]), 0), w - 1) + y1 = min(max(float(bbox[1]), 0), h - 1) + x2 = min(max(float(bbox[2]), 0), w - 1) + y2 = min(max(float(bbox[3]), 0), h - 1) + x3 = min(max(float(bbox[4]), 0), w - 1) + y3 = min(max(float(bbox[5]), 0), h - 1) + x4 = min(max(float(bbox[6]), 0), w - 1) + y4 = min(max(float(bbox[7]), 0), h - 1) + + + # TODO: filter small instances + xmin = max(min(x1, x2, x3, x4), 0) + xmax = max(x1, x2, x3, x4) + ymin = max(min(y1, y2, y3, y4), 0) + ymax = max(y1, y2, y3, y4) + + # if xmax > xmin and ymax > ymin: + # obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + # valid_objs.append(obj) + + if ((xmax - xmin) > 10) and ((ymax - ymin) > 10): + obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + valid_objs.append(obj) + + objs = valid_objs + num_objs = len(objs) + boxes = np.zeros((num_objs, 8), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + class_to_index = dict(zip(self.classes, range(self.num_classes))) + # TODO: test it + for ix, obj in enumerate(objs): + cls = class_to_index[obj[8].lower().strip()] + if obj[8].lower().strip() in self.angle_agnostic_classes: + # if angle_agnostic, use choose_best_point, + # TODO: make the long side and short side check, choose the short side's top left as the first point + boxes[ix, :] = get_best_begin_point_wrapp(obj[:8]) + else: + boxes[ix, :] = obj[:8] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def evaluate_detections(self, detections, ignore_cache): + """ + :param detections: [cls][image] = N x [x1, y1, x2, y2, x3, y3, x4, y4, score] + :return: + """ + detection_results_path = os.path.join(self.result_path, 'test_results') + info = '' + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + + # if ignore_cache: + self.write_DOTA_results(detections, threshold=0.001) + # pdb.set_trace() + self.write_DOTA_results_comp4(detections, threshold=0.001) + + return info + + def draw_gt_and_detections(self, detections, thresh=0.2): + # gt_folder = os.path.join(self.result_path, 'gt_on_image') + det_folder = os.path.join(self.result_path, 'det_on_image') + # if not os.path.isdir(gt_folder): + # os.mkdir(gt_folder) + self.write_DOTA_results(detections, threshold=0.1) + if not 
os.path.isdir(det_folder): + os.mkdir(det_folder) + for im_ind, index in enumerate(self.image_set_index): + img_path = self.image_path_from_index(index) + gt_db = self.load_annotation(index) + gt_boxes = gt_db['boxes'] + det_path = os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')) + f = open(det_path, 'r') + det_boxes_results = f.readlines() + det_boxes = [] + for result in det_boxes_results: + result = result.strip().split(',') + det_boxes.append([int(result[0]), int(result[1]), int(result[2]),int(result[3]),int(result[4]),int(result[5]),int(result[6]),int(result[7]), + float(result[8]),result[9]]) + # det_boxes = detections[cls_ind][im_ind] + det_boxes = np.array(det_boxes) + img = cv2.imread(img_path) + img_height, img_width = img.shape[0], img.shape[1] + # original_img = img.copy() + for k in range(gt_boxes.shape[0]): + bbox = gt_boxes[k, :8] + bbox = map(int, bbox) + color = (0, 255, 0) + xmax = max(bbox[0], bbox[2], bbox[4], bbox[6]) + ymax = max(bbox[1], bbox[3], bbox[5], bbox[7]) + if xmax > img_width: + print "extreme xmax", xmax + if ymax > img_height: + print "extreme ymax", ymax + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + # cv2.imwrite(os.path.join(gt_folder, 'img_{}.jpg'.format(index)), img) + # img = original_img + for k in range(det_boxes.shape[0]): + bbox = det_boxes[k, :8] + score = det_boxes[k, 8] + cls = det_boxes[k, 9] + if score < thresh: + continue + bbox = map(int, bbox) + color = (0, 255, 255) + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + cv2.putText(img, '{} {}'.format(cls, score), (bbox[0], bbox[1] + 10), + color=(255, 255, 255), fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + print os.path.join(det_folder, os.path.basename(index)) + cv2.imwrite(os.path.join(det_folder, os.path.basename(index)), img) + + def validate_clockwise_points(self, points): + """ + Validates that the points that the 4 points that dlimite a polygon are in clockwise order. + """ + + if len(points) != 8: + raise Exception("Points list not valid." + str(len(points))) + + point = [ + [int(points[0]), int(points[1])], + [int(points[2]), int(points[3])], + [int(points[4]), int(points[5])], + [int(points[6]), int(points[7])] + ] + edge = [ + (point[1][0] - point[0][0]) * (point[1][1] + point[0][1]), + (point[2][0] - point[1][0]) * (point[2][1] + point[1][1]), + (point[3][0] - point[2][0]) * (point[3][1] + point[2][1]), + (point[0][0] - point[3][0]) * (point[0][1] + point[3][1]) + ] + + summatory = edge[0] + edge[1] + edge[2] + edge[3]; + if summatory > 0: + return False + else: + return True + # TODO: test it + def write_DOTA_results_comp4(self, all_boxes, threshold=0.002): + """ + write results file in comp4 format + :param all_boxes: boxes to be processed [bbox, confidence] + :param threshold: None + :return: + """ + path = os.path.join(self.result_path, 'Task1_results') + if os.path.isdir(path): + print "delete original test results files!" 
+ os.system("rm -rf {}".format(path)) + os.mkdir(path) + # pdb.set_trace() + for cls_ind, cls in enumerate(self.classes): + # pdb.set_trace() + if cls == '__background__': + continue + if not os.path.exists(path): + os.mkdir(path) + with open(os.path.join(path, 'Task1_' + cls + '.txt'), 'w') as f_out: + for im_ind, index in enumerate(self.image_set_index): + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + xmin = min(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + xmax = max(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + ymin = min(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + ymax = max(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + w = xmax - xmin + h = ymax - ymin + if (w * h < 10 * 10): + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f_out.write('{} {} {} {} {} {} {} {} {} {}\n'.format(index, dets[k, 8], + int(dets[k, 0]), int(dets[k, 1]), + int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), + int(dets[k, 6]), + int(dets[k, 7]) + )) + else: + # print 'A detected box is anti-clockwise! Index:{}'.format(index) + # print dets[k, 0:8] + pass + # pdb.set_trace() + # TODO: change the hard code here + nms_path = path + '_0.1_nms' + if not os.path.exists(nms_path): + os.mkdir(nms_path) + mergebypoly(path, nms_path) + def write_DOTA_results(self, all_boxes, threshold=0.02): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + path = os.path.join(self.result_path, 'test_results') + if os.path.isdir(path): + print "delete original test results files!" + os.system("rm -r {}".format(path)) + os.mkdir(path) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + for im_ind, index in enumerate(self.image_set_index): + # dets = all_boxes[cls_ind][im_ind] + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + # if dets.shape[0] == 0: + # print "no detection results in {}".format(index) + if not os.path.exists(os.path.join(self.result_path, 'test_results')): + os.mkdir(os.path.join(self.result_path, 'test_results')) + # f = open(os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + f = open(os.path.join(self.result_path, 'test_results', '{}'.format(index + '.txt')), 'a') + + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f.write('{} {} {} {} {} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), int(dets[k, 6]), + int(dets[k, 7]), dets[k, 8], + self.classes[cls_ind])) + else: + # print 'A detected box is anti-clockwise! 
Index:{}'.format(index) + # print dets[k, 0:8] + pass \ No newline at end of file diff --git a/lib/dataset/DOTA_old.py b/lib/dataset/DOTA_old.py new file mode 100644 index 0000000..535ea6d --- /dev/null +++ b/lib/dataset/DOTA_old.py @@ -0,0 +1,585 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The Apache-2.0 License [see LICENSE for details] +# Modified by Haozhi Qi, from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +""" +Pascal VOC database +This class loads ground truth notations from standard Pascal VOC XML data formats +and transform them into IMDB format. Selective search is used for proposals, see roidb +function. Results are written as the Pascal VOC format. Evaluation is based on mAP +criterion. +""" + +import cPickle +import os +import numpy as np + +from imdb import IMDB +import cv2 +import zipfile +from bbox.bbox_transform import bbox_overlaps, bbox_transform +from PIL import Image +import codecs + + +# the target of this class is to get DOTA roidb +class DOTA(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train, test etc. + :param root_path: 'selective_search_data' and 'cache' + :param data_path: data and results + :return: imdb object + """ + self.image_set = image_set + super(DOTA, self).__init__('DOTA', self.image_set, root_path, data_path, result_path) # set self.name + + self.root_path = root_path + self.data_path = data_path + + self.classes = ['__background__', # always index 0 + 'plane', 'baseball-diamond', + 'bridge', 'ground-track-field', + 'small-vehicle', 'large-vehicle', + 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'harbor', 'swimming-pool', + 'helicopter'] + self.num_classes = len(self.classes) + ## index changed to be basename + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file, 'r') as f: + lines = f.readlines() + image_lists = [line.strip() for line in lines] + #image_lists = [os.path.join(self.data_path, 'images', line.strip() + '.jpg') for line in lines] + return image_lists + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param image_name: image name in the data dir + :return: full path of this image + """ + # hint: self.image_set means 'train' or 'test' + # TODO: when data ready, the entrance here should be changed + # Now, it has been changed + # image_file = os.path.join(self.data_path, self.image_set, index) + image_file = os.path.join(self.data_path, 'images', index + '.png') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def gt_roidb(self): + """ + return ground truth image regions database + :return: 
imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + gt_roidb = [self.load_annotation(index) for index in self.image_set_index] + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def load_annotation(self, index): + """ + for a given index, load image and bounding boxes info from XML file + :param image_name: image name in the data dir + :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + # import xml.etree.ElementTree as ET + roi_rec = dict() + roi_rec['image'] = self.image_path_from_index(index) + # roi_rec['image_name'] = 'img_' + index + '.jpg' + + # filename = os.path.join(self.data_path, 'labelTxt', os.path.splitext(os.path.basename(index))[0] + '.txt') + img_path = self.image_path_from_index(index) + w, h = Image.open(img_path).size + roi_rec['height'] = float(h) + roi_rec['width'] = float(w) + + #f = codecs.open(filename, 'r', 'utf-16') + if self.image_set == 'train': + filename = os.path.join(self.data_path, 'labelTxt', index + '.txt') + f = codecs.open(filename, 'r') + objs = f.readlines() + objs = [obj.strip().split(' ') for obj in objs] + # objs = tree.findall('object') + if not self.config['use_diff']: + non_diff_objs = [obj for obj in objs if obj[9] != '1'] + objs = non_diff_objs + num_objs = len(objs) + + boxes = np.zeros((num_objs, 4), dtype=np.int16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + + class_to_index = dict(zip(self.classes, range(self.num_classes))) + # Load object bounding boxes into a data frame. 
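+            # Tiny worked example of the reduction done in the loop below
+            # (corner values are illustrative): the 1-based corners
+            # (3,2) (7,4) (6,8) (2,6) become (2,1) (6,3) (5,7) (1,5) after
+            # the 0-based shift, and collapse to the axis-aligned box
+            # [xmin, ymin, xmax, ymax] = [1, 1, 6, 7] stored in `boxes`.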
+ for ix, obj in enumerate(objs): + bbox = obj + # Make pixel indexes 0-based + x1 = float(bbox[0]) - 1 + y1 = float(bbox[1]) - 1 + x2 = float(bbox[2]) - 1 + y2 = float(bbox[3]) - 1 + x3 = float(bbox[4]) - 1 + y3 = float(bbox[5]) - 1 + x4 = float(bbox[6]) - 1 + y4 = float(bbox[7]) - 1 + xmin = max(min(x1, x2, x3, x4), 0) + xmax = max(x1, x2, x3, x4) + ymin = max(min(y1, y2, y3, y4), 0) + ymax = max(y1, y2, y3, y4) + + + ## restric to (0, w) (0, h)TODO: check it + xmin = min(max(xmin, 0), w - 1) + xmax = min(max(xmax, 0), w - 1) + ymin = min(max(ymin, 0), h - 1) + ymax = min(max(ymax, 0), h - 1) + cls = class_to_index[obj[8].lower().strip()] + boxes[ix, :] = [xmin, ymin, xmax, ymax] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def evaluate_detections(self, detections): + """ + :param detections: [cls][image] = N x [x1, y1, x2, y2, x3, y3, x4, y4, score] + :return: + """ + detection_results_path = os.path.join(self.result_path, 'test_results') + info = '' + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + self.write_DOTA_results(detections, threshold=0.0) + return info + + def write_DOTA_results(self, all_boxes, threshold=0.2): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + path = os.path.join(self.result_path, 'test_results') + if os.path.isdir(path): + print "delete original test results files!" + os.system("rm -r {}".format(path)) + os.mkdir(path) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + for im_ind, index in enumerate(self.image_set_index): + dets = all_boxes[cls_ind][im_ind] + # if dets.shape[0] == 0: + # print "no detection results in {}".format(index) + # f = open(os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + f = open(os.path.join(self.result_path, 'test_results', '{}'.format(index + '.txt')), 'a') + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + if dets[k, 4] <= threshold: + continue + f.write('{} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]),dets[k, 4],self.classes[cls_ind])) + # f.write('{} {} {} {} {} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), + # int(dets[k, 2]), int(dets[k, 1]), + # int(dets[k, 2]), int(dets[k, 3]), + # int(dets[k, 0]), int(dets[k, 3]), + # dets[k, 4], self.classes[cls_ind])) + +# DOTA_oriented contains 8 coordinates, so we have to do data dealing +class DOTA_oriented(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train, test etc. 
+ :param root_path: 'selective_search_data' and 'cache' + :param data_path: data and results + :return: imdb object + """ + self.image_set = image_set + super(DOTA_oriented, self).__init__('DOTA_oriented', self.image_set, root_path, data_path, result_path) # set self.name + + self.root_path = root_path + self.data_path = data_path + + self.classes = ['__background__', # always index 0 + 'plane', 'baseball-diamond', + 'bridge', 'ground-track-field', + 'small-vehicle', 'large-vehicle', + 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', + 'soccer-ball-field', 'roundabout', + 'harbor', 'swimming-pool', + 'helicopter'] + self.num_classes = len(self.classes) + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file, 'r') as f: + lines = f.readlines() + image_lists = [line.strip() for line in lines] + #image_lists = [os.path.join(self.data_path, 'images', line.strip() + '.jpg') for line in lines] + return image_lists + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param image_name: image name in the data dir + :return: full path of this image + """ + # hint: self.image_set means 'train' or 'test' + # TODO: when data ready, the entrance here should be changed + # image_file = os.path.join(self.data_path, self.image_set, index) + image_file = os.path.join(self.data_path, 'images', index + '.png') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def gt_roidb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + # gt_roidb = [self.load_annotation(index) for index in self.image_set_index] + + # TODO: for debug + gt_roidb = [] + count = 0 + for index in self.image_set_index: + count += 1 + print count, '/', len(self.image_set_index) + gt_roidb.append(self.load_annotation(index)) + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def load_annotation(self, index): + """ + for a given index, load image and bounding boxes info from XML file + :param image_name: image name in the data dir + :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + # import xml.etree.ElementTree as ET + roi_rec = dict() + roi_rec['image'] = self.image_path_from_index(index) + # roi_rec['image_name'] = 'img_' + index + '.jpg' + + # filename = os.path.join(self.data_path, 'labelTxt', os.path.splitext(os.path.basename(index))[0] + '.txt') + # tree = ET.parse(filename) + img_path = self.image_path_from_index(index) + w, h = Image.open(img_path).size + # size = tree.find('size') + 
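+        # Note: PIL's Image.size is (width, height), so the `w, h` unpacking
+        # above lines up with the roi_rec['width'] / roi_rec['height']
+        # assignments below; the size is typically read from the image
+        # header without decoding the full raster.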
roi_rec['height'] = float(h) + roi_rec['width'] = float(w) + + # f = codecs.open(filename, 'r', 'utf-16') + if self.image_set == 'train': + filename = os.path.join(self.data_path, 'labelTxt', index + '.txt') + f = codecs.open(filename, 'r') + objs = f.readlines() + objs = [obj.strip().split(' ') for obj in objs] + # objs = tree.findall('object') + if not self.config['use_diff']: + non_diff_objs = [obj for obj in objs if obj[9] != '1'] + objs = non_diff_objs + num_objs = len(objs) + + boxes = np.zeros((num_objs, 8), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + + class_to_index = dict(zip(self.classes, range(self.num_classes))) + # Load object bounding boxes into a data frame. + for ix, obj in enumerate(objs): + bbox = obj + # Make pixel indexes 0-based + # x1 = float(bbox[0]) - 1 + # y1 = float(bbox[1]) - 1 + # x2 = float(bbox[2]) - 1 + # y2 = float(bbox[3]) - 1 + # x3 = float(bbox[4]) - 1 + # y3 = float(bbox[5]) - 1 + # x4 = float(bbox[6]) - 1 + # y4 = float(bbox[7]) - 1 + + x1 = min(max(float(bbox[0]), 0), w - 1) + y1 = min(max(float(bbox[1]), 0), h - 1) + x2 = min(max(float(bbox[2]), 0), w - 1) + y2 = min(max(float(bbox[3]), 0), h - 1) + x3 = min(max(float(bbox[4]), 0), w - 1) + y3 = min(max(float(bbox[5]), 0), h - 1) + x4 = min(max(float(bbox[6]), 0), w - 1) + y4 = min(max(float(bbox[7]), 0), h - 1) + # xmin = min(x1, x2, x3, x4) + # xmax = max(x1, x2, x3, x4) + # ymin = min(y1, y2, y3, y4) + # ymax = max(y1, y2, y3, y4) + + # TODO: filter small instances + xmin = max(min(x1, x2, x3, x4), 0) + xmax = max(x1, x2, x3, x4) + ymin = max(min(y1, y2, y3, y4), 0) + ymax = max(y1, y2, y3, y4) + + cls = class_to_index[obj[8].lower().strip()] + boxes[ix, :] = [x1, y1, x2, y2, x3, y3, x4, y4] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def evaluate_detections(self, detections): + """ + :param detections: [cls][image] = N x [x1, y1, x2, y2, x3, y3, x4, y4, score] + :return: + """ + detection_results_path = os.path.join(self.result_path, 'test_results') + info = '' + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + # hmeam_max = 0.0 + # recall = 0.0 + # precision = 0.0 + # max_thred = 0.0 + # for th in xrange(20, 50): + # threshold = th / 100.0 + # print 'now testing threshold = %f results:' % threshold + # self.write_DOTA_results(detections, threshold) + # resDict = self.do_python_eval() + # hmean = resDict['method']['hmean'] + # if hmean > hmeam_max: + # hmeam_max = hmean + # recall = resDict['method']['recall'] + # precision = resDict['method']['precision'] + # max_thred = threshold + # print '\nmaximum hmean {} is gained at threshold {}.'.format(hmeam_max, max_thred) + # print 'recall is {}, and precision is {}'.format(recall, precision) + # info += 'maximum hmean {} is gained at threshold {}.'.format(hmeam_max, max_thred) + # info += 'recall is {}, and precision is {}'.format(recall, precision) + # print 'saving the highest results!' 
+ self.write_DOTA_results(detections, threshold=0.0) + # self.write_results_by_class(detections, threshold=0.0) + return info + + def draw_gt_and_detections(self, detections, thresh=0.2): + # gt_folder = os.path.join(self.result_path, 'gt_on_image') + det_folder = os.path.join(self.result_path, 'det_on_image') + # if not os.path.isdir(gt_folder): + # os.mkdir(gt_folder) + self.write_DOTA_results(detections, threshold=0.1) + if not os.path.isdir(det_folder): + os.mkdir(det_folder) + for im_ind, index in enumerate(self.image_set_index): + img_path = self.image_path_from_index(index) + gt_db = self.load_annotation(index) + gt_boxes = gt_db['boxes'] + det_path = os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')) + f = open(det_path, 'r') + det_boxes_results = f.readlines() + det_boxes = [] + for result in det_boxes_results: + result = result.strip().split(',') + det_boxes.append([int(result[0]), int(result[1]), int(result[2]),int(result[3]),int(result[4]),int(result[5]),int(result[6]),int(result[7]), + float(result[8]),result[9]]) + # det_boxes = detections[cls_ind][im_ind] + det_boxes = np.array(det_boxes) + img = cv2.imread(img_path) + img_height, img_width = img.shape[0], img.shape[1] + # original_img = img.copy() + for k in range(gt_boxes.shape[0]): + bbox = gt_boxes[k, :8] + bbox = map(int, bbox) + color = (0, 255, 0) + xmax = max(bbox[0], bbox[2], bbox[4], bbox[6]) + ymax = max(bbox[1], bbox[3], bbox[5], bbox[7]) + if xmax > img_width: + print "extreme xmax", xmax + if ymax > img_height: + print "extreme ymax", ymax + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + # cv2.imwrite(os.path.join(gt_folder, 'img_{}.jpg'.format(index)), img) + # img = original_img + for k in range(det_boxes.shape[0]): + bbox = det_boxes[k, :8] + score = det_boxes[k, 8] + cls = det_boxes[k, 9] + if score < thresh: + continue + bbox = map(int, bbox) + color = (0, 255, 255) + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + cv2.putText(img, '{} {}'.format(cls, score), (bbox[0], bbox[1] + 10), + color=(255, 255, 255), fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + print os.path.join(det_folder, os.path.basename(index)) + cv2.imwrite(os.path.join(det_folder, os.path.basename(index)), img) + + # def write_results_by_class(self, all_boxes, threshold=0.1): + # path = os.path.join(self.result_path, 'test_results_by_class') + # if os.path.isdir(path): + # print "delete original test results files!" 
+ # os.system("rm -r {}".format(path)) + # os.mkdir(path) + # for cls_ind, cls in enumerate(self.classes): + # if cls == '__background__': + # continue + # for im_ind, index in enumerate(self.image_set_index): + # dets = all_boxes[cls_ind][im_ind] + # # if dets.shape[0] == 0: + # # print "no detection results in {}".format(index) + # if not os.path.exists(os.path.join(self.result_path, 'test_results_by_class')): + # os.mkdir(os.path.join(self.result_path, 'test_results_by_class')) + # f = open(os.path.join(self.result_path, 'test_results_by_class', + # 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + # # the VOCdevkit expects 1-based indices + # for k in range(dets.shape[0]): + # if dets[k, 8] <= threshold: + # continue + # if self.validate_clockwise_points(dets[k, 0:8]): + # f.write( + # '{},{},{},{},{},{},{},{},{},{}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + # int(dets[k, 3]), + # int(dets[k, 4]), int(dets[k, 5]), int(dets[k, 6]), + # int(dets[k, 7]), dets[k, 8], + # self.classes[cls_ind])) + # else: + # print 'A detected box is anti-clockwise! Index:{}'.format(index) + # print dets[k, 0:8] + + def validate_clockwise_points(self, points): + """ + Validates that the points that the 4 points that dlimite a polygon are in clockwise order. + """ + + if len(points) != 8: + raise Exception("Points list not valid." + str(len(points))) + + point = [ + [int(points[0]), int(points[1])], + [int(points[2]), int(points[3])], + [int(points[4]), int(points[5])], + [int(points[6]), int(points[7])] + ] + edge = [ + (point[1][0] - point[0][0]) * (point[1][1] + point[0][1]), + (point[2][0] - point[1][0]) * (point[2][1] + point[1][1]), + (point[3][0] - point[2][0]) * (point[3][1] + point[2][1]), + (point[0][0] - point[3][0]) * (point[0][1] + point[3][1]) + ] + + summatory = edge[0] + edge[1] + edge[2] + edge[3]; + if summatory > 0: + return False + else: + return True + + def write_DOTA_results(self, all_boxes, threshold=0.2): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + path = os.path.join(self.result_path, 'test_results') + if os.path.isdir(path): + print "delete original test results files!" + os.system("rm -r {}".format(path)) + os.mkdir(path) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + for im_ind, index in enumerate(self.image_set_index): + # dets = all_boxes[cls_ind][im_ind] + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + # if dets.shape[0] == 0: + # print "no detection results in {}".format(index) + if not os.path.exists(os.path.join(self.result_path, 'test_results')): + os.mkdir(os.path.join(self.result_path, 'test_results')) + # f = open(os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + f = open(os.path.join(self.result_path, 'test_results', '{}'.format(index + '.txt')), 'a') + + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f.write('{} {} {} {} {} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), int(dets[k, 6]), + int(dets[k, 7]), dets[k, 8], + self.classes[cls_ind])) + else: + print 'A detected box is anti-clockwise! 
Index:{}'.format(index) + print dets[k, 0:8] \ No newline at end of file diff --git a/lib/dataset/HRSC.py b/lib/dataset/HRSC.py new file mode 100644 index 0000000..f22700f --- /dev/null +++ b/lib/dataset/HRSC.py @@ -0,0 +1,423 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The Apache-2.0 License [see LICENSE for details] +# Modified by Haozhi Qi, from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +""" +Pascal VOC database +This class loads ground truth notations from standard Pascal VOC XML data formats +and transform them into IMDB format. Selective search is used for proposals, see roidb +function. Results are written as the Pascal VOC format. Evaluation is based on mAP +criterion. +""" + +import cPickle +import os +import numpy as np + +from imdb import IMDB +import cv2 +import zipfile +from bbox.bbox_transform import bbox_overlaps, bbox_transform, get_best_begin_point_wrapp +from PIL import Image +import codecs +import sys +sys.path.insert(0, r'/home/dj/code/Deformable_FPN_DOTA') +import pdb + +# pdb.set_trace() +from dota_kit.ResultMerge import * +# from dota_kit.ResultMerge_multi_process import * +# from dota_kit.dota_evaluation_task1 import eval_DOTA_Task1, eval_DOTA_Task1_multi_process +from dota_kit.dota_evaluation_task1 import eval_HRSC_L1 + + +class HRSC_L1(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train, test etc. + :param root_path: 'selective_search_data' and 'cache' + :param data_path: data and results + :return: imdb object + """ + self.image_set = image_set + super(HRSC_L1, self).__init__('HRSC_L1', self.image_set, root_path, data_path, result_path) # set self.name + + self.root_path = root_path + self.data_path = data_path + + self.classes = ['__background__', # always index 0 + 'ship'] + ## check it, if it is better for baseball-diamond + self.angle_agnostic_classes = [ + 'ship'] + self.num_classes = len(self.classes) + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file, 'r') as f: + lines = f.readlines() + image_lists = [line.strip() for line in lines] + #image_lists = [os.path.join(self.data_path, 'images', line.strip() + '.jpg') for line in lines] + return image_lists + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param image_name: image name in the data dir + :return: full path of this image + """ + # hint: self.image_set means 'train' or 'test' + # TODO: when data ready, the entrance here should be changed + # image_file = os.path.join(self.data_path, self.image_set, index) + image_file = os.path.join(self.data_path, 'images', index + '.bmp') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + 
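+    # gt_roidb() below caches the parsed annotations to <cache_path>/HRSC_L1_<image_set>_gt_roidb.pkl; delete that file to force a re-parse after changing labelTxt_L1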
+    def gt_roidb(self):
+        """
+        return ground truth image regions database
+        :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+        """
+        cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl')
+        if os.path.exists(cache_file):
+            with open(cache_file, 'rb') as fid:
+                roidb = cPickle.load(fid)
+            print '{} gt roidb loaded from {}'.format(self.name, cache_file)
+            return roidb
+
+        # gt_roidb = [self.load_annotation(index) for index in self.image_set_index]
+
+        # TODO: for debug
+        gt_roidb = []
+        count = 0
+        for index in self.image_set_index:
+            count += 1
+            print count, '/', len(self.image_set_index)
+            gt_roidb.append(self.load_annotation(index))
+        with open(cache_file, 'wb') as fid:
+            cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL)
+        print 'wrote gt roidb to {}'.format(cache_file)
+
+        return gt_roidb
+
+    def load_annotation(self, index):
+        """
+        for a given index, load the image size and bounding box info from the labelTxt_L1 annotation file
+        :param index: image index (file name without extension) in the data dir
+        :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+        """
+        # import xml.etree.ElementTree as ET
+        roi_rec = dict()
+        roi_rec['image'] = self.image_path_from_index(index)
+        # roi_rec['image_name'] = 'img_' + index + '.jpg'
+
+        img_path = self.image_path_from_index(index)
+        w, h = Image.open(img_path).size
+        # size = tree.find('size')
+        roi_rec['height'] = float(h)
+        roi_rec['width'] = float(w)
+
+        valid_objs = []
+        # f = codecs.open(filename, 'r', 'utf-16')
+        if self.image_set == 'train':
+            filename = os.path.join(self.data_path, 'labelTxt_L1', index + '.txt')
+            f = codecs.open(filename, 'r')
+            objs = f.readlines()
+            objs = [obj.strip().split(' ') for obj in objs]
+            # objs = tree.findall('object')
+            # if not self.config['use_diff']:
+            #     non_diff_objs = [obj for obj in objs if obj[9] != '1']
+            #     objs = non_diff_objs
+            if not self.config['use_diff']:
+                non_diff_objs = [obj for obj in objs if obj[9] == '0']
+                objs = non_diff_objs
+            # Load object bounding boxes into a data frame.
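+            # each labelTxt_L1 line is 'x1 y1 x2 y2 x3 y3 x4 y4 class difficult'; only objects with difficult == '0' are kept, corners are clamped to the image below, and boxes whose axis-aligned width or height is <= 10 px are dropped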
+ for ix, obj in enumerate(objs): + bbox = obj + + + x1 = min(max(float(bbox[0]), 0), w - 1) + y1 = min(max(float(bbox[1]), 0), h - 1) + x2 = min(max(float(bbox[2]), 0), w - 1) + y2 = min(max(float(bbox[3]), 0), h - 1) + x3 = min(max(float(bbox[4]), 0), w - 1) + y3 = min(max(float(bbox[5]), 0), h - 1) + x4 = min(max(float(bbox[6]), 0), w - 1) + y4 = min(max(float(bbox[7]), 0), h - 1) + + + # TODO: filter small instances + xmin = max(min(x1, x2, x3, x4), 0) + xmax = max(x1, x2, x3, x4) + ymin = max(min(y1, y2, y3, y4), 0) + ymax = max(y1, y2, y3, y4) + + # if xmax > xmin and ymax > ymin: + # obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + # valid_objs.append(obj) + + if ((xmax - xmin) > 10) and ((ymax - ymin) > 10): + obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + valid_objs.append(obj) + + objs = valid_objs + num_objs = len(objs) + boxes = np.zeros((num_objs, 8), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + class_to_index = dict(zip(self.classes, range(self.num_classes))) + + for ix, obj in enumerate(objs): + cls = class_to_index[obj[8].lower().strip()] + if obj[8].lower().strip() in self.angle_agnostic_classes: + boxes[ix, :] = get_best_begin_point_wrapp(obj[:8]) + else: + boxes[ix, :] = obj[:8] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def evaluate_detections(self, detections, ignore_cache): + """ + :param detections: [cls][image] = N x [x1, y1, x2, y2, x3, y3, x4, y4, score] + :return: + """ + detection_results_path = os.path.join(self.result_path, 'test_results') + info = '' + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + + if ignore_cache: + self.write_results(detections, threshold=0.001) + # pdb.set_trace() + self.write_results_comp4(detections, threshold=0.001) + + # self.write_results_by_class(detections, threshold=0.0) + # TODO: check the hard code here + detpath = os.path.join(self.result_path, 'Task1_results') + '/Task1_{:s}.txt' + # TODO: test it + annopath = r'data/HRSC/labelTxt_L1/{:s}.txt' + imagesetfile = r'data/HRSC/test.txt' + + # annopath = r'/data/dj/dota/trainval_large-split_rotate/{:s}.txt' + # imagesetfile = r'/data/dj/dota/trainval_large-split_rotate/testset.txt' + + # mAP, classaps = eval_DOTA_Task1_multi_process(detpath, annopath, imagesetfile) + mAP, classaps = eval_HRSC_L1(detpath, annopath, imagesetfile) + with open(os.path.join(self.result_path, 'Task1_results') + '/mAP.txt', 'w') as f_out: + f_out.write('mAP: ' + str(mAP) + '\n') + f_out.write('classaps: ' + str(classaps)) + return info + + def draw_gt_and_detections(self, detections, thresh=0.2): + # gt_folder = os.path.join(self.result_path, 'gt_on_image') + det_folder = os.path.join(self.result_path, 'det_on_image') + # if not os.path.isdir(gt_folder): + # os.mkdir(gt_folder) + self.write_results(detections, threshold=0.1) + if not os.path.isdir(det_folder): + os.mkdir(det_folder) + for im_ind, index in enumerate(self.image_set_index): + img_path = self.image_path_from_index(index) + gt_db = self.load_annotation(index) + gt_boxes = gt_db['boxes'] + det_path = os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')) + f = open(det_path, 'r') + det_boxes_results = f.readlines() + det_boxes = [] + for 
result in det_boxes_results: + result = result.strip().split(',') + det_boxes.append([int(result[0]), int(result[1]), int(result[2]),int(result[3]),int(result[4]),int(result[5]),int(result[6]),int(result[7]), + float(result[8]),result[9]]) + # det_boxes = detections[cls_ind][im_ind] + det_boxes = np.array(det_boxes) + img = cv2.imread(img_path) + img_height, img_width = img.shape[0], img.shape[1] + # original_img = img.copy() + for k in range(gt_boxes.shape[0]): + bbox = gt_boxes[k, :8] + bbox = map(int, bbox) + color = (0, 255, 0) + xmax = max(bbox[0], bbox[2], bbox[4], bbox[6]) + ymax = max(bbox[1], bbox[3], bbox[5], bbox[7]) + if xmax > img_width: + print "extreme xmax", xmax + if ymax > img_height: + print "extreme ymax", ymax + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + # cv2.imwrite(os.path.join(gt_folder, 'img_{}.jpg'.format(index)), img) + # img = original_img + for k in range(det_boxes.shape[0]): + bbox = det_boxes[k, :8] + score = det_boxes[k, 8] + cls = det_boxes[k, 9] + if score < thresh: + continue + bbox = map(int, bbox) + color = (0, 255, 255) + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + cv2.putText(img, '{} {}'.format(cls, score), (bbox[0], bbox[1] + 10), + color=(255, 255, 255), fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + print os.path.join(det_folder, os.path.basename(index)) + cv2.imwrite(os.path.join(det_folder, os.path.basename(index)), img) + + def validate_clockwise_points(self, points): + """ + Validates that the points that the 4 points that dlimite a polygon are in clockwise order. + """ + + if len(points) != 8: + raise Exception("Points list not valid." + str(len(points))) + + point = [ + [int(points[0]), int(points[1])], + [int(points[2]), int(points[3])], + [int(points[4]), int(points[5])], + [int(points[6]), int(points[7])] + ] + edge = [ + (point[1][0] - point[0][0]) * (point[1][1] + point[0][1]), + (point[2][0] - point[1][0]) * (point[2][1] + point[1][1]), + (point[3][0] - point[2][0]) * (point[3][1] + point[2][1]), + (point[0][0] - point[3][0]) * (point[0][1] + point[3][1]) + ] + + summatory = edge[0] + edge[1] + edge[2] + edge[3]; + if summatory > 0: + return False + else: + return True + # TODO: test it + def write_results_comp4(self, all_boxes, threshold=0.002): + """ + write results file in comp4 format + :param all_boxes: boxes to be processed [bbox, confidence] + :param threshold: None + :return: + """ + path = os.path.join(self.result_path, 'Task1_results') + if os.path.isdir(path): + print "delete original test results files!" 
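+ # remove any stale Task1_results directory first so each run writes a fresh set of per-class Task1_<class>.txt files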
+ os.system("rm -rf {}".format(path)) + os.mkdir(path) + # pdb.set_trace() + for cls_ind, cls in enumerate(self.classes): + # pdb.set_trace() + if cls == '__background__': + continue + if not os.path.exists(path): + os.mkdir(path) + with open(os.path.join(path, 'Task1_' + cls + '.txt'), 'w') as f_out: + for im_ind, index in enumerate(self.image_set_index): + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + xmin = min(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + xmax = max(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + ymin = min(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + ymax = max(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + w = xmax - xmin + h = ymax - ymin + if (w * h < 10 * 10): + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f_out.write('{} {} {} {} {} {} {} {} {} {}\n'.format(index, dets[k, 8], + int(dets[k, 0]), int(dets[k, 1]), + int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), + int(dets[k, 6]), + int(dets[k, 7]) + )) + else: + # print 'A detected box is anti-clockwise! Index:{}'.format(index) + # print dets[k, 0:8] + pass + # pdb.set_trace() + # TODO: change the hard code here + nms_path = path + '_0.1_nms' + if not os.path.exists(nms_path): + os.mkdir(nms_path) + # mergebypoly(path, nms_path) + def write_results(self, all_boxes, threshold=0.02): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + path = os.path.join(self.result_path, 'test_results') + if os.path.isdir(path): + print "delete original test results files!" + os.system("rm -r {}".format(path)) + os.mkdir(path) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + for im_ind, index in enumerate(self.image_set_index): + # dets = all_boxes[cls_ind][im_ind] + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + # if dets.shape[0] == 0: + # print "no detection results in {}".format(index) + if not os.path.exists(os.path.join(self.result_path, 'test_results')): + os.mkdir(os.path.join(self.result_path, 'test_results')) + # f = open(os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + f = open(os.path.join(self.result_path, 'test_results', '{}'.format(index + '.txt')), 'a') + + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f.write('{} {} {} {} {} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), int(dets[k, 6]), + int(dets[k, 7]), dets[k, 8], + self.classes[cls_ind])) + else: + # print 'A detected box is anti-clockwise! 
Index:{}'.format(index) + # print dets[k, 0:8] + pass \ No newline at end of file diff --git a/lib/dataset/__init__.py b/lib/dataset/__init__.py new file mode 100644 index 0000000..daf9c8e --- /dev/null +++ b/lib/dataset/__init__.py @@ -0,0 +1,7 @@ +from imdb import IMDB +from pascal_voc import PascalVOC +from cityscape import CityScape +from coco import coco +# from DOTA_old import DOTA, DOTA_oriented +from DOTA import DOTA, DOTA_oriented, DOTA_oriented_v2 +from HRSC import HRSC_L1 \ No newline at end of file diff --git a/lib/dataset/cityscape.py b/lib/dataset/cityscape.py new file mode 100644 index 0000000..7bc94a2 --- /dev/null +++ b/lib/dataset/cityscape.py @@ -0,0 +1,271 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Zheng Zhang +# -------------------------------------------------------- + +import cPickle +import os +import cv2 +import numpy as np +import itertools + +from imdb import IMDB +from PIL import Image + +class CityScape(IMDB): + def __init__(self, image_set, root_path, dataset_path, result_path=None): + """ + fill basic information to initialize imdb + :param image_set: leftImg8bit_train, etc + :param root_path: 'selective_search_data' and 'cache' + :param dataset_path: data and results + :return: imdb object + """ + image_set_main_folder, image_set_sub_folder= image_set.split('_', 1) + super(CityScape, self).__init__('cityscape', image_set, root_path, dataset_path, result_path) # set self.name + + self.image_set_main_folder = image_set_main_folder + self.image_set_sub_folder = image_set_sub_folder + self.root_path = root_path + self.data_path = dataset_path + self.num_classes = 19 + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set + :return: the indexes of given image set + """ + + #Collection all subfolders + image_set_main_folder_path = os.path.join(self.data_path, self.image_set_main_folder, self.image_set_sub_folder) + image_name_set = [filename for parent, dirname, filename in os.walk(image_set_main_folder_path)] + image_name_set = list(itertools.chain.from_iterable(image_name_set)) + index_set = ['' for x in range(len(image_name_set))] + valid_index_count = 0 + for i, image_name in enumerate(image_name_set): + splited_name_set = image_name.split('_') + ext_split = splited_name_set[len(splited_name_set) - 1].split('.') + ext = ext_split[len(ext_split)-1] + if splited_name_set[len(splited_name_set) - 1] != 'flip.png' and ext == 'png': + index_set[valid_index_count] = "_".join(splited_name_set[:len(splited_name_set)-1]) + valid_index_count += 1 + + return index_set[:valid_index_count] + + def image_path_from_index(self, index): + """ + find the image path from given index + :param index: the given index + :return: the image path + """ + index_folder = index.split('_')[0] + image_file = os.path.join(self.data_path, self.image_set_main_folder, self.image_set_sub_folder, index_folder, index + '_' + self.image_set_main_folder + '.png') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def annotation_path_from_index(self, index): + """ + find the gt path from given index + :param index: the given index + 
:return: the image path + """ + index_folder = index.split('_')[0] + image_file = os.path.join(self.data_path, 'gtFine', self.image_set_sub_folder, index_folder, index + '_gtFine_labelTrainIds.png') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def load_segdb_from_index(self, index): + """ + load segdb from given index + :param index: given index + :return: segdb + """ + seg_rec = dict() + seg_rec['image'] = self.image_path_from_index(index) + size = cv2.imread(seg_rec['image']).shape + seg_rec['height'] = size[0] + seg_rec['width'] = size[1] + + seg_rec['seg_cls_path'] = self.annotation_path_from_index(index) + seg_rec['flipped'] = False + + return seg_rec + + def gt_segdb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_segdb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + segdb = cPickle.load(fid) + print '{} gt segdb loaded from {}'.format(self.name, cache_file) + return segdb + + gt_segdb = [self.load_segdb_from_index(index) for index in self.image_set_index] + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_segdb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt segdb to {}'.format(cache_file) + + return gt_segdb + + def getpallete(self, num_cls): + """ + this function is to get the colormap for visualizing the segmentation mask + :param num_cls: the number of visulized class + :return: the pallete + """ + n = num_cls + pallete_raw = np.zeros((n, 3)).astype('uint8') + pallete = np.zeros((n, 3)).astype('uint8') + + pallete_raw[6, :] = [111, 74, 0] + pallete_raw[7, :] = [ 81, 0, 81] + pallete_raw[8, :] = [128, 64, 128] + pallete_raw[9, :] = [244, 35, 232] + pallete_raw[10, :] = [250, 170, 160] + pallete_raw[11, :] = [230, 150, 140] + pallete_raw[12, :] = [ 70, 70, 70] + pallete_raw[13, :] = [102, 102, 156] + pallete_raw[14, :] = [190, 153, 153] + pallete_raw[15, :] = [180, 165, 180] + pallete_raw[16, :] = [150, 100, 100] + pallete_raw[17, :] = [150, 120, 90] + pallete_raw[18, :] = [153, 153, 153] + pallete_raw[19, :] = [153, 153, 153] + pallete_raw[20, :] = [250, 170, 30] + pallete_raw[21, :] = [220, 220, 0] + pallete_raw[22, :] = [107, 142, 35] + pallete_raw[23, :] = [152, 251, 152] + pallete_raw[24, :] = [ 70, 130, 180] + pallete_raw[25, :] = [220, 20, 60] + pallete_raw[26, :] = [255, 0, 0] + pallete_raw[27, :] = [ 0, 0, 142] + pallete_raw[28, :] = [ 0, 0, 70] + pallete_raw[29, :] = [ 0, 60, 100] + pallete_raw[30, :] = [ 0, 0, 90] + pallete_raw[31, :] = [ 0, 0, 110] + pallete_raw[32, :] = [ 0, 80, 100] + pallete_raw[33, :] = [ 0, 0, 230] + pallete_raw[34, :] = [119, 11, 32] + + train2regular = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 31, 32, 33] + + for i in range(len(train2regular)): + pallete[i, :] = pallete_raw[train2regular[i]+1, :] + + pallete = pallete.reshape(-1) + + return pallete + + def evaluate_segmentations(self, pred_segmentations = None): + """ + top level evaluations + :param pred_segmentations: the pred segmentation result + :return: the evaluation results + """ + if not (pred_segmentations is None): + self.write_segmentation_result(pred_segmentations) + + info = self._py_evaluate_segmentation() + return info + + + def get_confusion_matrix(self, gt_label, pred_label, class_num): + """ + Calcute the confusion matrix by given label and pred + :param gt_label: the ground truth label + :param pred_label: the pred label + :param 
class_num: the nunber of class + :return: the confusion matrix + """ + index = (gt_label * class_num + pred_label).astype('int32') + label_count = np.bincount(index) + confusion_matrix = np.zeros((class_num, class_num)) + + for i_label in range(class_num): + for i_pred_label in range(class_num): + cur_index = i_label * class_num + i_pred_label + if cur_index < len(label_count): + confusion_matrix[i_label, i_pred_label] = label_count[cur_index] + + return confusion_matrix + + def _py_evaluate_segmentation(self): + """ + This function is a wrapper to calculte the metrics for given pred_segmentation results + :return: the evaluation metrics + """ + res_file_folder = os.path.join(self.result_path, 'results') + + confusion_matrix = np.zeros((self.num_classes,self.num_classes)) + for i, index in enumerate(self.image_set_index): + seg_gt_info = self.load_segdb_from_index(index) + + seg_gt = np.array(Image.open(seg_gt_info['seg_cls_path'])).astype('float32') + + seg_pathes = os.path.split(seg_gt_info['seg_cls_path']) + res_image_name = seg_pathes[1][:-len('_gtFine_labelTrainIds.png')] + res_subfolder_name = os.path.split(seg_pathes[0])[-1] + res_save_folder = os.path.join(res_file_folder, res_subfolder_name) + res_save_path = os.path.join(res_save_folder, res_image_name + '.png') + + seg_pred = np.array(Image.open(res_save_path)).astype('float32') + #seg_pred = np.squeeze(pred_segmentations[i]) + + seg_pred = cv2.resize(seg_pred, (seg_gt.shape[1], seg_gt.shape[0]), interpolation=cv2.INTER_NEAREST) + ignore_index = seg_gt != 255 + seg_gt = seg_gt[ignore_index] + seg_pred = seg_pred[ignore_index] + + confusion_matrix += self.get_confusion_matrix(seg_gt, seg_pred, self.num_classes) + + pos = confusion_matrix.sum(1) + res = confusion_matrix.sum(0) + tp = np.diag(confusion_matrix) + + IU_array = (tp / np.maximum(1.0, pos + res - tp)) + mean_IU = IU_array.mean() + + return {'meanIU':mean_IU, 'IU_array':IU_array} + + def write_segmentation_result(self, segmentation_results): + """ + Write the segmentation result to result_file_folder + :param segmentation_results: the prediction result + :param result_file_folder: the saving folder + :return: [None] + """ + res_file_folder = os.path.join(self.result_path, 'results') + if not os.path.exists(res_file_folder): + os.mkdir(res_file_folder) + + pallete = self.getpallete(256) + for i, index in enumerate(self.image_set_index): + seg_gt_info = self.load_segdb_from_index(index) + + seg_pathes = os.path.split(seg_gt_info['seg_cls_path']) + res_image_name = seg_pathes[1][:-len('_gtFine_labelTrainIds.png')] + res_subfolder_name = os.path.split(seg_pathes[0])[-1] + res_save_folder = os.path.join(res_file_folder, res_subfolder_name) + res_save_path = os.path.join(res_save_folder, res_image_name + '.png') + + if not os.path.exists(res_save_folder): + os.makedirs(res_save_folder) + + segmentation_result = np.uint8(np.squeeze(np.copy(segmentation_results[i]))) + segmentation_result = Image.fromarray(segmentation_result) + segmentation_result.putpalette(pallete) + segmentation_result.save(res_save_path) + diff --git a/lib/dataset/coco.py b/lib/dataset/coco.py new file mode 100644 index 0000000..0b828e4 --- /dev/null +++ b/lib/dataset/coco.py @@ -0,0 +1,383 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 
2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import cPickle +import cv2 +import os +import json +import numpy as np + +from imdb import IMDB + +# coco api +from .pycocotools.coco import COCO +from .pycocotools.cocoeval import COCOeval +from .pycocotools import mask as COCOmask +from utils.mask_coco2voc import mask_coco2voc +from utils.mask_voc2coco import mask_voc2coco +from utils.tictoc import tic, toc +from bbox.bbox_transform import clip_boxes +import multiprocessing as mp + + +def coco_results_one_category_kernel(data_pack): + cat_id = data_pack['cat_id'] + ann_type = data_pack['ann_type'] + binary_thresh = data_pack['binary_thresh'] + all_im_info = data_pack['all_im_info'] + boxes = data_pack['boxes'] + if ann_type == 'bbox': + masks = [] + elif ann_type == 'segm': + masks = data_pack['masks'] + else: + print 'unimplemented ann_type: ' + ann_type + cat_results = [] + for im_ind, im_info in enumerate(all_im_info): + index = im_info['index'] + dets = boxes[im_ind].astype(np.float) + if len(dets) == 0: + continue + scores = dets[:, -1] + if ann_type == 'bbox': + xs = dets[:, 0] + ys = dets[:, 1] + ws = dets[:, 2] - xs + 1 + hs = dets[:, 3] - ys + 1 + result = [{'image_id': index, + 'category_id': cat_id, + 'bbox': [xs[k], ys[k], ws[k], hs[k]], + 'score': scores[k]} for k in xrange(dets.shape[0])] + elif ann_type == 'segm': + width = im_info['width'] + height = im_info['height'] + dets[:, :4] = clip_boxes(dets[:, :4], [height, width]) + mask_encode = mask_voc2coco(masks[im_ind], dets[:, :4], height, width, binary_thresh) + result = [{'image_id': index, + 'category_id': cat_id, + 'segmentation': mask_encode[k], + 'score': scores[k]} for k in xrange(len(mask_encode))] + cat_results.extend(result) + return cat_results + + +class coco(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train2014, val2014, test2015 + :param root_path: 'data', will write 'rpn_data', 'cache' + :param data_path: 'data/coco' + """ + super(coco, self).__init__('COCO', image_set, root_path, data_path, result_path) + self.root_path = root_path + self.data_path = data_path + self.coco = COCO(self._get_ann_file()) + + # deal with class names + cats = [cat['name'] for cat in self.coco.loadCats(self.coco.getCatIds())] + self.classes = ['__background__'] + cats + self.num_classes = len(self.classes) + self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes))) + self._class_to_coco_ind = dict(zip(cats, self.coco.getCatIds())) + self._coco_ind_to_class_ind = dict([(self._class_to_coco_ind[cls], self._class_to_ind[cls]) + for cls in self.classes[1:]]) + + # load image file names + self.image_set_index = self._load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + # deal with data name + view_map = {'minival2014': 'val2014', + 'valminusminival2014': 'val2014', + 'test-dev2015': 'test2015'} + self.data_name = view_map[image_set] if image_set in view_map else image_set + + def _get_ann_file(self): + """ self.data_path / annotations / instances_train2014.json """ + prefix = 'instances' if 'test' not in self.image_set else 'image_info' + return os.path.join(self.data_path, 'annotations', + prefix + '_' + self.image_set + '.json') + + def 
_load_image_set_index(self): + """ image id: int """ + image_ids = self.coco.getImgIds() + return image_ids + + def image_path_from_index(self, index): + """ example: images / train2014 / COCO_train2014_000000119993.jpg """ + filename = 'COCO_%s_%012d.jpg' % (self.data_name, index) + image_path = os.path.join(self.data_path, 'images', self.data_name, filename) + assert os.path.exists(image_path), 'Path does not exist: {}'.format(image_path) + return image_path + + def gt_roidb(self): + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + gt_roidb = [self._load_coco_annotation(index) for index in self.image_set_index] + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def _load_coco_annotation(self, index): + """ + coco ann: [u'segmentation', u'area', u'iscrowd', u'image_id', u'bbox', u'category_id', u'id'] + iscrowd: + crowd instances are handled by marking their overlaps with all categories to -1 + and later excluded in training + bbox: + [x1, y1, w, h] + :param index: coco image id + :return: roidb entry + """ + im_ann = self.coco.loadImgs(index)[0] + width = im_ann['width'] + height = im_ann['height'] + + annIds = self.coco.getAnnIds(imgIds=index, iscrowd=False) + objs = self.coco.loadAnns(annIds) + + # sanitize bboxes + valid_objs = [] + for obj in objs: + x, y, w, h = obj['bbox'] + x1 = np.max((0, x)) + y1 = np.max((0, y)) + x2 = np.min((width - 1, x1 + np.max((0, w - 1)))) + y2 = np.min((height - 1, y1 + np.max((0, h - 1)))) + if obj['area'] > 0 and x2 >= x1 and y2 >= y1: + obj['clean_bbox'] = [x1, y1, x2, y2] + valid_objs.append(obj) + objs = valid_objs + num_objs = len(objs) + + boxes = np.zeros((num_objs, 4), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + + for ix, obj in enumerate(objs): + cls = self._coco_ind_to_class_ind[obj['category_id']] + boxes[ix, :] = obj['clean_bbox'] + gt_classes[ix] = cls + if obj['iscrowd']: + overlaps[ix, :] = -1.0 + else: + overlaps[ix, cls] = 1.0 + + roi_rec = {'image': self.image_path_from_index(index), + 'height': height, + 'width': width, + 'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False} + return roi_rec + + def mask_path_from_index(self, index): + """ + given image index, cache high resolution mask and return full path of masks + :param index: index of a specific image + :return: full path of this mask + """ + if self.data_name == 'val': + return [] + cache_file = os.path.join(self.cache_path, 'COCOMask', self.data_name) + if not os.path.exists(cache_file): + os.makedirs(cache_file) + # instance level segmentation + filename = 'COCO_%s_%012d' % (self.data_name, index) + gt_mask_file = os.path.join(cache_file, filename + '.hkl') + return gt_mask_file + + def load_coco_sds_annotation(self, index): + """ + coco ann: [u'segmentation', u'area', u'iscrowd', u'image_id', u'bbox', u'category_id', u'id'] + iscrowd: + crowd instances are handled by marking their overlaps with all categories to -1 + and later excluded in training + bbox: + [x1, y1, w, h] + :param index: coco image id + :return: roidb entry + """ + im_ann = 
self.coco.loadImgs(index)[0] + width = im_ann['width'] + height = im_ann['height'] + + # only load objs whose iscrowd==false + annIds = self.coco.getAnnIds(imgIds=index, iscrowd=False) + objs = self.coco.loadAnns(annIds) + + # sanitize bboxes + valid_objs = [] + for obj in objs: + x, y, w, h = obj['bbox'] + x1 = np.max((0, x)) + y1 = np.max((0, y)) + x2 = np.min((width - 1, x1 + np.max((0, w - 1)))) + y2 = np.min((height - 1, y1 + np.max((0, h - 1)))) + if obj['area'] > 0 and x2 >= x1 and y2 >= y1: + obj['clean_bbox'] = [x1, y1, x2, y2] + valid_objs.append(obj) + objs = valid_objs + num_objs = len(objs) + + boxes = np.zeros((num_objs, 4), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + + for ix, obj in enumerate(objs): + cls = self._coco_ind_to_class_ind[obj['category_id']] + boxes[ix, :] = obj['clean_bbox'] + gt_classes[ix] = cls + if obj['iscrowd']: + overlaps[ix, :] = -1.0 + else: + overlaps[ix, cls] = 1.0 + + sds_rec = {'image': self.image_path_from_index(index), + 'height': height, + 'width': width, + 'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'cache_seg_inst': self.mask_path_from_index(index), + 'flipped': False} + return sds_rec, objs + + def evaluate_detections(self, detections, ann_type='bbox', all_masks=None): + """ detections_val2014_results.json """ + res_folder = os.path.join(self.result_path, 'results') + if not os.path.exists(res_folder): + os.makedirs(res_folder) + res_file = os.path.join(res_folder, 'detections_%s_results.json' % self.image_set) + self._write_coco_results(detections, res_file, ann_type, all_masks) + if 'test' not in self.image_set: + info_str = self._do_python_eval(res_file, res_folder, ann_type) + return info_str + + def evaluate_sds(self, all_boxes, all_masks): + info_str = self.evaluate_detections(all_boxes, 'segm', all_masks) + return info_str + + def _write_coco_results(self, all_boxes, res_file, ann_type, all_masks): + """ example results + [{"image_id": 42, + "category_id": 18, + "bbox": [258.15,41.29,348.26,243.78], + "score": 0.236}, ...] 
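+ ann_type: 'bbox' or 'segm'; all_masks is only consulted when ann_type == 'segm'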
+ """ + all_im_info = [{'index': index, + 'height': self.coco.loadImgs(index)[0]['height'], + 'width': self.coco.loadImgs(index)[0]['width']} + for index in self.image_set_index] + + if ann_type == 'bbox': + data_pack = [{'cat_id': self._class_to_coco_ind[cls], + 'cls_ind': cls_ind, + 'cls': cls, + 'ann_type': ann_type, + 'binary_thresh': self.binary_thresh, + 'all_im_info': all_im_info, + 'boxes': all_boxes[cls_ind]} + for cls_ind, cls in enumerate(self.classes) if not cls == '__background__'] + elif ann_type == 'segm': + data_pack = [{'cat_id': self._class_to_coco_ind[cls], + 'cls_ind': cls_ind, + 'cls': cls, + 'ann_type': ann_type, + 'binary_thresh': self.binary_thresh, + 'all_im_info': all_im_info, + 'boxes': all_boxes[cls_ind], + 'masks': all_masks[cls_ind]} + for cls_ind, cls in enumerate(self.classes) if not cls == '__background__'] + else: + print 'unimplemented ann_type: '+ann_type + # results = coco_results_one_category_kernel(data_pack[1]) + # print results[0] + pool = mp.Pool(mp.cpu_count()) + results = pool.map(coco_results_one_category_kernel, data_pack) + pool.close() + pool.join() + results = sum(results, []) + print 'Writing results json to %s' % res_file + with open(res_file, 'w') as f: + json.dump(results, f, sort_keys=True, indent=4) + + def _do_python_eval(self, res_file, res_folder, ann_type): + coco_dt = self.coco.loadRes(res_file) + coco_eval = COCOeval(self.coco, coco_dt) + coco_eval.params.useSegm = (ann_type == 'segm') + coco_eval.evaluate() + coco_eval.accumulate() + info_str = self._print_detection_metrics(coco_eval) + + eval_file = os.path.join(res_folder, 'detections_%s_results.pkl' % self.image_set) + with open(eval_file, 'w') as f: + cPickle.dump(coco_eval, f, cPickle.HIGHEST_PROTOCOL) + print 'coco eval results saved to %s' % eval_file + info_str += 'coco eval results saved to %s\n' % eval_file + return info_str + + def _print_detection_metrics(self, coco_eval): + info_str = '' + IoU_lo_thresh = 0.5 + IoU_hi_thresh = 0.95 + + def _get_thr_ind(coco_eval, thr): + ind = np.where((coco_eval.params.iouThrs > thr - 1e-5) & + (coco_eval.params.iouThrs < thr + 1e-5))[0][0] + iou_thr = coco_eval.params.iouThrs[ind] + assert np.isclose(iou_thr, thr) + return ind + + ind_lo = _get_thr_ind(coco_eval, IoU_lo_thresh) + ind_hi = _get_thr_ind(coco_eval, IoU_hi_thresh) + + # precision has dims (iou, recall, cls, area range, max dets) + # area range index 0: all area ranges + # max dets index 2: 100 per image + precision = \ + coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, :, 0, 2] + ap_default = np.mean(precision[precision > -1]) + print '~~~~ Mean and per-category AP @ IoU=%.2f,%.2f] ~~~~' % (IoU_lo_thresh, IoU_hi_thresh) + info_str += '~~~~ Mean and per-category AP @ IoU=%.2f,%.2f] ~~~~\n' % (IoU_lo_thresh, IoU_hi_thresh) + print '%-15s %5.1f' % ('all', 100 * ap_default) + info_str += '%-15s %5.1f\n' % ('all', 100 * ap_default) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + # minus 1 because of __background__ + precision = coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, cls_ind - 1, 0, 2] + ap = np.mean(precision[precision > -1]) + print '%-15s %5.1f' % (cls, 100 * ap) + info_str += '%-15s %5.1f\n' % (cls, 100 * ap) + + print '~~~~ Summary metrics ~~~~' + coco_eval.summarize() + + return info_str diff --git a/lib/dataset/ds_utils.py b/lib/dataset/ds_utils.py new file mode 100644 index 0000000..131644b --- /dev/null +++ b/lib/dataset/ds_utils.py @@ -0,0 +1,16 @@ +import numpy as np + + +def unique_boxes(boxes, 
scale=1.0): + """ return indices of unique boxes """ + v = np.array([1, 1e3, 1e6, 1e9]) + hashes = np.round(boxes * scale).dot(v) + _, index = np.unique(hashes, return_index=True) + return np.sort(index) + + +def filter_small_boxes(boxes, min_size): + w = boxes[:, 2] - boxes[:, 0] + h = boxes[:, 3] - boxes[:, 1] + keep = np.where((w >= min_size) & (h > min_size))[0] + return keep \ No newline at end of file diff --git a/lib/dataset/imdb.py b/lib/dataset/imdb.py new file mode 100644 index 0000000..e4a74c4 --- /dev/null +++ b/lib/dataset/imdb.py @@ -0,0 +1,432 @@ +""" +General image database +An image database creates a list of relative image path called image_set_index and +transform index to absolute image path. As to training, it is necessary that ground +truth and proposals are mixed together for training. +roidb +basic format [image_index] +['image', 'height', 'width', 'flipped', +'boxes', 'gt_classes', 'gt_overlaps', 'max_classes', 'max_overlaps', 'bbox_targets'] +""" + +import os +import cPickle +import numpy as np +from PIL import Image +from bbox.bbox_transform import bbox_overlaps, get_best_begin_point_wrapp +from multiprocessing import Pool, cpu_count + +def get_flipped_entry_outclass_wrapper(IMDB_instance, seg_rec): + return IMDB_instance.get_flipped_entry(seg_rec) + +class IMDB(object): + def __init__(self, name, image_set, root_path, dataset_path, result_path=None): + """ + basic information about an image database + :param name: name of image database will be used for any output + :param root_path: root path store cache and proposal data + :param dataset_path: dataset path store images and image lists + """ + self.name = name + '_' + image_set + self.image_set = image_set + self.root_path = root_path + self.data_path = dataset_path + self._result_path = result_path + + # abstract attributes + self.classes = [] + self.num_classes = 0 + self.image_set_index = [] + self.num_images = 0 + + self.config = {} + + def image_path_from_index(self, index): + raise NotImplementedError + + def gt_roidb(self): + raise NotImplementedError + + def evaluate_detections(self, detections): + raise NotImplementedError + + def evaluate_segmentations(self, segmentations): + raise NotImplementedError + + @property + def cache_path(self): + """ + make a directory to store all caches + :return: cache path + """ + cache_path = os.path.join(self.root_path, 'cache') + if not os.path.exists(cache_path): + os.mkdir(cache_path) + return cache_path + + @property + def result_path(self): + if self._result_path and os.path.exists(self._result_path): + return self._result_path + else: + return self.cache_path + + def image_path_at(self, index): + """ + access image at index in image database + :param index: image index in image database + :return: image path + """ + return self.image_path_from_index(self.image_set_index[index]) + + def load_rpn_data(self, full=False): + if full: + rpn_file = os.path.join(self.result_path, 'rpn_data', self.name + '_full_rpn.pkl') + else: + rpn_file = os.path.join(self.result_path, 'rpn_data', self.name + '_rpn.pkl') + print 'loading {}'.format(rpn_file) + assert os.path.exists(rpn_file), 'rpn data not found at {}'.format(rpn_file) + with open(rpn_file, 'rb') as f: + box_list = cPickle.load(f) + return box_list + + def load_rpn_roidb(self, gt_roidb): + """ + turn rpn detection boxes into roidb + :param gt_roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + box_list = 
self.load_rpn_data() + return self.create_roidb_from_box_list(box_list, gt_roidb) + + def rpn_roidb(self, gt_roidb, append_gt=False): + """ + get rpn roidb and ground truth roidb + :param gt_roidb: ground truth roidb + :param append_gt: append ground truth + :return: roidb of rpn + """ + if append_gt: + print 'appending ground truth annotations' + rpn_roidb = self.load_rpn_roidb(gt_roidb) + roidb = IMDB.merge_roidbs(gt_roidb, rpn_roidb) + else: + roidb = self.load_rpn_roidb(gt_roidb) + return roidb + + def create_roidb_from_box_list(self, box_list, gt_roidb): + """ + given ground truth, prepare roidb + :param box_list: [image_index] ndarray of [box_index][x1, x2, y1, y2] + :param gt_roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + assert len(box_list) == self.num_images, 'number of boxes matrix must match number of images' + roidb = [] + for i in range(self.num_images): + roi_rec = dict() + roi_rec['image'] = gt_roidb[i]['image'] + roi_rec['height'] = gt_roidb[i]['height'] + roi_rec['width'] = gt_roidb[i]['width'] + + boxes = box_list[i] + if boxes.shape[1] == 5: + boxes = boxes[:, :4] + num_boxes = boxes.shape[0] + overlaps = np.zeros((num_boxes, self.num_classes), dtype=np.float32) + if gt_roidb is not None and gt_roidb[i]['boxes'].size > 0: + gt_boxes = gt_roidb[i]['boxes'] + gt_classes = gt_roidb[i]['gt_classes'] + # n boxes and k gt_boxes => n * k overlap + gt_overlaps = bbox_overlaps(boxes.astype(np.float), gt_boxes.astype(np.float)) + # for each box in n boxes, select only maximum overlap (must be greater than zero) + argmaxes = gt_overlaps.argmax(axis=1) + maxes = gt_overlaps.max(axis=1) + I = np.where(maxes > 0)[0] + overlaps[I, gt_classes[argmaxes[I]]] = maxes[I] + + roi_rec.update({'boxes': boxes, + 'gt_classes': np.zeros((num_boxes,), dtype=np.int32), + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + + # background roi => background class + zero_indexes = np.where(roi_rec['max_overlaps'] == 0)[0] + assert all(roi_rec['max_classes'][zero_indexes] == 0) + # foreground roi => foreground class + nonzero_indexes = np.where(roi_rec['max_overlaps'] > 0)[0] + assert all(roi_rec['max_classes'][nonzero_indexes] != 0) + + roidb.append(roi_rec) + + return roidb + + def get_flipped_entry(self, seg_rec): + return {'image': self.flip_and_save(seg_rec['image']), + 'seg_cls_path': self.flip_and_save(seg_rec['seg_cls_path']), + 'height': seg_rec['height'], + 'width': seg_rec['width'], + 'flipped': True} + + def append_flipped_images_for_segmentation(self, segdb): + """ + append flipped images to an roidb + flip boxes coordinates, images will be actually flipped when loading into network + :param segdb: [image_index]['seg_cls_path', 'flipped'] + :return: segdb: [image_index]['seg_cls_path', 'flipped'] + """ + print 'append flipped images to segdb' + assert self.num_images == len(segdb) + pool = Pool(processes=1) + pool_result = [] + for i in range(self.num_images): + seg_rec = segdb[i] + pool_result.append(pool.apply_async(get_flipped_entry_outclass_wrapper, args=(self, seg_rec, ))) + #self.get_flipped_entry(seg_rec, segdb_flip, i) + pool.close() + pool.join() + segdb_flip = [res_instance.get() for res_instance in pool_result] + segdb += segdb_flip + self.image_set_index *= 2 + return segdb + + def append_rotated_images(self, roidb): + # TODO: add rotated augmentation here + pass + def 
append_flipped_images(self, roidb): + """ + append flipped images to an roidb + flip boxes coordinates, images will be actually flipped when loading into network + :param roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + print 'append flipped images to roidb' + assert self.num_images == len(roidb) + for i in range(self.num_images): + roi_rec = roidb[i] + # print 'roi_rec', roi_rec + boxes = roi_rec['boxes'].copy() + oldx1 = boxes[:, 0].copy() + oldx2 = boxes[:, 2].copy() + boxes[:, 0] = roi_rec['width'] - oldx2 - 1 + boxes[:, 2] = roi_rec['width'] - oldx1 - 1 + assert (boxes[:, 2] >= boxes[:, 0]).all() + entry = {'image': roi_rec['image'], + 'height': roi_rec['height'], + 'width': roi_rec['width'], + 'boxes': boxes, + 'gt_classes': roidb[i]['gt_classes'], + 'gt_overlaps': roidb[i]['gt_overlaps'], + 'max_classes': roidb[i]['max_classes'], + 'max_overlaps': roidb[i]['max_overlaps'], + 'flipped': True} + + # if roidb has mask + if 'cache_seg_inst' in roi_rec: + [filename, extension] = os.path.splitext(roi_rec['cache_seg_inst']) + entry['cache_seg_inst'] = os.path.join(filename + '_flip' + extension) + + roidb.append(entry) + + self.image_set_index *= 2 + return roidb + + def append_flipped_images_poly(self, roidb): + """ + append flipped images to an roidb + flip boxes coordinates, images will be actually flipped when loading into network + :param roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + # flipp will make the center no longer the first point of the baseball-diamond + # assert "the flipp is forbidded, recommand to use rotation" + print 'append flipped images to roidb' + print 'num_images:', self.num_images, 'len roidb:', len(roidb) + assert self.num_images == len(roidb) + for i in range(self.num_images): + roi_rec = roidb[i] + boxes = roi_rec['boxes'].copy() + oldx1 = boxes[:, 0].copy() + oldx2 = boxes[:, 2].copy() + oldx3 = boxes[:, 4].copy() + oldx4 = boxes[:, 6].copy() + oldy1 = boxes[:, 1].copy() + oldy2 = boxes[:, 3].copy() + oldy3 = boxes[:, 5].copy() + oldy4 = boxes[:, 7].copy() + boxes[:, 0] = roi_rec['width'] - oldx2 - 1 + boxes[:, 1] = oldy2 + boxes[:, 2] = roi_rec['width'] - oldx1 - 1 + boxes[:, 3] = oldy1 + boxes[:, 4] = roi_rec['width'] - oldx4 - 1 + boxes[:, 5] = oldy4 + boxes[:, 6] = roi_rec['width'] - oldx3 - 1 + boxes[:, 7] = oldy3 + # assert (boxes[:, 2] >= boxes[:, 0]).all() + import pdb + # pdb.set_trace() + for idx, item in enumerate(boxes): + boxes[idx, :] = get_best_begin_point_wrapp(boxes[idx, :]) + + entry = {'image': roi_rec['image'], + 'height': roi_rec['height'], + 'width': roi_rec['width'], + 'boxes': boxes, + 'gt_classes': roidb[i]['gt_classes'], + 'gt_overlaps': roidb[i]['gt_overlaps'], + 'max_classes': roidb[i]['max_classes'], + 'max_overlaps': roidb[i]['max_overlaps'], + 'flipped': True} + + # if roidb has mask + if 'cache_seg_inst' in roi_rec: + [filename, extension] = os.path.splitext(roi_rec['cache_seg_inst']) + entry['cache_seg_inst'] = os.path.join(filename + '_flip' + extension) + + roidb.append(entry) + + self.image_set_index *= 2 + return roidb + + def flip_and_save(self, image_path): + """ + flip the image by the path and save the flipped image with suffix 'flip' + :param path: the path of specific image + :return: the path of saved image + """ + [image_name, image_ext] = os.path.splitext(os.path.basename(image_path)) + image_dir 
= os.path.dirname(image_path) + saved_image_path = os.path.join(image_dir, image_name + '_flip' + image_ext) + try: + flipped_image = Image.open(saved_image_path) + except: + flipped_image = Image.open(image_path) + flipped_image = flipped_image.transpose(Image.FLIP_LEFT_RIGHT) + flipped_image.save(saved_image_path, 'png') + return saved_image_path + + def evaluate_recall(self, roidb, candidate_boxes=None, thresholds=None): + """ + evaluate detection proposal recall metrics + record max overlap value for each gt box; return vector of overlap values + :param roidb: used to evaluate + :param candidate_boxes: if not given, use roidb's non-gt boxes + :param thresholds: array-like recall threshold + :return: None + ar: average recall, recalls: vector recalls at each IoU overlap threshold + thresholds: vector of IoU overlap threshold, gt_overlaps: vector of all ground-truth overlaps + """ + all_log_info = '' + area_names = ['all', '0-25', '25-50', '50-100', + '100-200', '200-300', '300-inf'] + area_ranges = [[0**2, 1e5**2], [0**2, 25**2], [25**2, 50**2], [50**2, 100**2], + [100**2, 200**2], [200**2, 300**2], [300**2, 1e5**2]] + area_counts = [] + for area_name, area_range in zip(area_names[1:], area_ranges[1:]): + area_count = 0 + for i in range(self.num_images): + if candidate_boxes is None: + # default is use the non-gt boxes from roidb + non_gt_inds = np.where(roidb[i]['gt_classes'] == 0)[0] + boxes = roidb[i]['boxes'][non_gt_inds, :] + else: + boxes = candidate_boxes[i] + boxes_areas = (boxes[:, 2] - boxes[:, 0] + 1) * (boxes[:, 3] - boxes[:, 1] + 1) + valid_range_inds = np.where((boxes_areas >= area_range[0]) & (boxes_areas < area_range[1]))[0] + area_count += len(valid_range_inds) + area_counts.append(area_count) + total_counts = float(sum(area_counts)) + for area_name, area_count in zip(area_names[1:], area_counts): + log_info = 'percentage of {} {}'.format(area_name, area_count / total_counts) + print log_info + all_log_info += log_info + log_info = 'average number of proposal {}'.format(total_counts / self.num_images) + print log_info + all_log_info += log_info + for area_name, area_range in zip(area_names, area_ranges): + gt_overlaps = np.zeros(0) + num_pos = 0 + for i in range(self.num_images): + # check for max_overlaps == 1 avoids including crowd annotations + max_gt_overlaps = roidb[i]['gt_overlaps'].max(axis=1) + gt_inds = np.where((roidb[i]['gt_classes'] > 0) & (max_gt_overlaps == 1))[0] + gt_boxes = roidb[i]['boxes'][gt_inds, :] + gt_areas = (gt_boxes[:, 2] - gt_boxes[:, 0] + 1) * (gt_boxes[:, 3] - gt_boxes[:, 1] + 1) + valid_gt_inds = np.where((gt_areas >= area_range[0]) & (gt_areas < area_range[1]))[0] + gt_boxes = gt_boxes[valid_gt_inds, :] + num_pos += len(valid_gt_inds) + + if candidate_boxes is None: + # default is use the non-gt boxes from roidb + non_gt_inds = np.where(roidb[i]['gt_classes'] == 0)[0] + boxes = roidb[i]['boxes'][non_gt_inds, :] + else: + boxes = candidate_boxes[i] + if boxes.shape[0] == 0: + continue + + overlaps = bbox_overlaps(boxes.astype(np.float), gt_boxes.astype(np.float)) + + _gt_overlaps = np.zeros((gt_boxes.shape[0])) + # choose whatever is smaller to iterate + rounds = min(boxes.shape[0], gt_boxes.shape[0]) + for j in range(rounds): + # find which proposal maximally covers each gt box + argmax_overlaps = overlaps.argmax(axis=0) + # get the IoU amount of coverage for each gt box + max_overlaps = overlaps.max(axis=0) + # find which gt box is covered by most IoU + gt_ind = max_overlaps.argmax() + gt_ovr = max_overlaps.max() + assert (gt_ovr >= 0), 
'%s\n%s\n%s' % (boxes, gt_boxes, overlaps) + # find the proposal box that covers the best covered gt box + box_ind = argmax_overlaps[gt_ind] + # record the IoU coverage of this gt box + _gt_overlaps[j] = overlaps[box_ind, gt_ind] + assert (_gt_overlaps[j] == gt_ovr) + # mark the proposal box and the gt box as used + overlaps[box_ind, :] = -1 + overlaps[:, gt_ind] = -1 + # append recorded IoU coverage level + gt_overlaps = np.hstack((gt_overlaps, _gt_overlaps)) + + gt_overlaps = np.sort(gt_overlaps) + if thresholds is None: + step = 0.05 + thresholds = np.arange(0.5, 0.95 + 1e-5, step) + recalls = np.zeros_like(thresholds) + + # compute recall for each IoU threshold + for i, t in enumerate(thresholds): + recalls[i] = (gt_overlaps >= t).sum() / float(num_pos) + ar = recalls.mean() + + # print results + log_info = 'average recall for {}: {:.3f}'.format(area_name, ar) + print log_info + all_log_info += log_info + for threshold, recall in zip(thresholds, recalls): + log_info = 'recall @{:.2f}: {:.3f}'.format(threshold, recall) + print log_info + all_log_info += log_info + + return all_log_info + + @staticmethod + def merge_roidbs(a, b): + """ + merge roidbs into one + :param a: roidb to be merged into + :param b: roidb to be merged + :return: merged imdb + """ + assert len(a) == len(b) + for i in range(len(a)): + a[i]['boxes'] = np.vstack((a[i]['boxes'], b[i]['boxes'])) + a[i]['gt_classes'] = np.hstack((a[i]['gt_classes'], b[i]['gt_classes'])) + a[i]['gt_overlaps'] = np.vstack((a[i]['gt_overlaps'], b[i]['gt_overlaps'])) + a[i]['max_classes'] = np.hstack((a[i]['max_classes'], b[i]['max_classes'])) + a[i]['max_overlaps'] = np.hstack((a[i]['max_overlaps'], b[i]['max_overlaps'])) + return a diff --git a/lib/dataset/pascal_voc.py b/lib/dataset/pascal_voc.py new file mode 100644 index 0000000..d4ad56e --- /dev/null +++ b/lib/dataset/pascal_voc.py @@ -0,0 +1,456 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The Apache-2.0 License [see LICENSE for details] +# Modified by Haozhi Qi, from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +""" +Pascal VOC database +This class loads ground truth notations from standard Pascal VOC XML data formats +and transform them into IMDB format. Selective search is used for proposals, see roidb +function. Results are written as the Pascal VOC format. Evaluation is based on mAP +criterion. 
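+
+A minimal usage sketch (paths are hypothetical, for illustration only):
+    imdb = PascalVOC('2007_trainval', 'data', 'data/VOCdevkit')
+    roidb = imdb.gt_roidb()
+do_python_eval() reports mean AP at IoU 0.5 and 0.7; the VOC07 11-point
+metric is selected automatically for dataset years before 2010.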
+""" + +import cPickle +import cv2 +import os +import numpy as np +import PIL + +from imdb import IMDB +from pascal_voc_eval import voc_eval, voc_eval_sds +from ds_utils import unique_boxes, filter_small_boxes + +class PascalVOC(IMDB): + def __init__(self, image_set, root_path, devkit_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: 2007_trainval, 2007_test, etc + :param root_path: 'selective_search_data' and 'cache' + :param devkit_path: data and results + :return: imdb object + """ + year = image_set.split('_')[0] + image_set = image_set[len(year) + 1 : len(image_set)] + super(PascalVOC, self).__init__('voc_' + year, image_set, root_path, devkit_path, result_path) # set self.name + + self.year = year + self.root_path = root_path + self.devkit_path = devkit_path + self.data_path = os.path.join(devkit_path, 'VOC' + year) + + self.classes = ['__background__', # always index 0 + 'aeroplane', 'bicycle', 'bird', 'boat', + 'bottle', 'bus', 'car', 'cat', 'chair', + 'cow', 'diningtable', 'dog', 'horse', + 'motorbike', 'person', 'pottedplant', + 'sheep', 'sofa', 'train', 'tvmonitor'] + self.num_classes = len(self.classes) + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, 'ImageSets', 'Main', self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file) as f: + image_set_index = [x.strip() for x in f.readlines()] + return image_set_index + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param index: index of a specific image + :return: full path of this image + """ + image_file = os.path.join(self.data_path, 'JPEGImages', index + '.jpg') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def segmentation_path_from_index(self, index): + """ + given image index, find out the full path of segmentation class + :param index: index of a specific image + :return: full path of segmentation class + """ + seg_class_file = os.path.join(self.data_path, 'SegmentationClass', index + '.png') + assert os.path.exists(seg_class_file), 'Path does not exist: {}'.format(seg_class_file) + return seg_class_file + + def gt_roidb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + gt_roidb = [self.load_pascal_annotation(index) for index in self.image_set_index] + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def gt_segdb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = 
os.path.join(self.cache_path, self.name + '_gt_segdb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + segdb = cPickle.load(fid) + print '{} gt segdb loaded from {}'.format(self.name, cache_file) + return segdb + + gt_segdb = [self.load_pascal_segmentation_annotation(index) for index in self.image_set_index] + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_segdb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt segdb to {}'.format(cache_file) + + return gt_segdb + + def load_pascal_annotation(self, index): + """ + for a given index, load image and bounding boxes info from XML file + :param index: index of a specific image + :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + import xml.etree.ElementTree as ET + roi_rec = dict() + roi_rec['image'] = self.image_path_from_index(index) + + filename = os.path.join(self.data_path, 'Annotations', index + '.xml') + tree = ET.parse(filename) + size = tree.find('size') + roi_rec['height'] = float(size.find('height').text) + roi_rec['width'] = float(size.find('width').text) + #im_size = cv2.imread(roi_rec['image'], cv2.IMREAD_COLOR|cv2.IMREAD_IGNORE_ORIENTATION).shape + #assert im_size[0] == roi_rec['height'] and im_size[1] == roi_rec['width'] + + objs = tree.findall('object') + if not self.config['use_diff']: + non_diff_objs = [obj for obj in objs if int(obj.find('difficult').text) == 0] + objs = non_diff_objs + num_objs = len(objs) + + boxes = np.zeros((num_objs, 4), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + + class_to_index = dict(zip(self.classes, range(self.num_classes))) + # Load object bounding boxes into a data frame. + for ix, obj in enumerate(objs): + bbox = obj.find('bndbox') + # Make pixel indexes 0-based + x1 = float(bbox.find('xmin').text) - 1 + y1 = float(bbox.find('ymin').text) - 1 + x2 = float(bbox.find('xmax').text) - 1 + y2 = float(bbox.find('ymax').text) - 1 + cls = class_to_index[obj.find('name').text.lower().strip()] + boxes[ix, :] = [x1, y1, x2, y2] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def load_selective_search_roidb(self, gt_roidb): + """ + turn selective search proposals into selective search roidb + :param gt_roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + import scipy.io + matfile = os.path.join(self.root_path, 'selective_search_data', self.name + '.mat') + assert os.path.exists(matfile), 'selective search data does not exist: {}'.format(matfile) + raw_data = scipy.io.loadmat(matfile)['boxes'].ravel() # original was dict ['images', 'boxes'] + + box_list = [] + for i in range(raw_data.shape[0]): + boxes = raw_data[i][:, (1, 0, 3, 2)] - 1 # pascal voc dataset starts from 1. 
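+            # The column permutation (1, 0, 3, 2) converts the MATLAB-style
+            # (y1, x1, y2, x2) proposals into (x1, y1, x2, y2), and the "- 1"
+            # shifts the 1-based coordinates to 0-based. Illustrative values only:
+            #   >>> raw = np.array([[2, 1, 10, 20]])   # (y1, x1, y2, x2), 1-based
+            #   >>> raw[:, (1, 0, 3, 2)] - 1
+            #   array([[ 0,  1, 19,  9]])              # (x1, y1, x2, y2), 0-based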
+ keep = unique_boxes(boxes) + boxes = boxes[keep, :] + keep = filter_small_boxes(boxes, self.config['min_size']) + boxes = boxes[keep, :] + box_list.append(boxes) + + return self.create_roidb_from_box_list(box_list, gt_roidb) + + def selective_search_roidb(self, gt_roidb, append_gt=False): + """ + get selective search roidb and ground truth roidb + :param gt_roidb: ground truth roidb + :param append_gt: append ground truth + :return: roidb of selective search + """ + cache_file = os.path.join(self.cache_path, self.name + '_ss_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} ss roidb loaded from {}'.format(self.name, cache_file) + return roidb + + if append_gt: + print 'appending ground truth annotations' + ss_roidb = self.load_selective_search_roidb(gt_roidb) + roidb = IMDB.merge_roidbs(gt_roidb, ss_roidb) + else: + roidb = self.load_selective_search_roidb(gt_roidb) + with open(cache_file, 'wb') as fid: + cPickle.dump(roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote ss roidb to {}'.format(cache_file) + + return roidb + + def load_pascal_segmentation_annotation(self, index): + """ + for a given index, load image and bounding boxes info from XML file + :param index: index of a specific image + :return: record['seg_cls_path', 'flipped'] + """ + import xml.etree.ElementTree as ET + seg_rec = dict() + seg_rec['image'] = self.image_path_from_index(index) + size = cv2.imread(seg_rec['image']).shape + seg_rec['height'] = size[0] + seg_rec['width'] = size[1] + + seg_rec['seg_cls_path'] = self.segmentation_path_from_index(index) + seg_rec['flipped'] = False + + return seg_rec + + def evaluate_detections(self, detections): + """ + top level evaluations + :param detections: result matrix, [bbox, confidence] + :return: None + """ + # make all these folders for results + result_dir = os.path.join(self.result_path, 'results') + if not os.path.exists(result_dir): + os.mkdir(result_dir) + year_folder = os.path.join(self.result_path, 'results', 'VOC' + self.year) + if not os.path.exists(year_folder): + os.mkdir(year_folder) + res_file_folder = os.path.join(self.result_path, 'results', 'VOC' + self.year, 'Main') + if not os.path.exists(res_file_folder): + os.mkdir(res_file_folder) + + self.write_pascal_results(detections) + info = self.do_python_eval() + return info + + def evaluate_segmentations(self, pred_segmentations=None): + """ + top level evaluations + :param pred_segmentations: the pred segmentation result + :return: the evaluation results + """ + # make all these folders for results + if not (pred_segmentations is None): + self.write_pascal_segmentation_result(pred_segmentations) + + info = self._py_evaluate_segmentation() + return info + + def write_pascal_segmentation_result(self, pred_segmentations): + """ + Write pred segmentation to res_file_folder + :param pred_segmentations: the pred segmentation results + :param res_file_folder: the saving folder + :return: [None] + """ + result_dir = os.path.join(self.result_path, 'results') + if not os.path.exists(result_dir): + os.mkdir(result_dir) + year_folder = os.path.join(self.result_path, 'results', 'VOC' + self.year) + if not os.path.exists(year_folder): + os.mkdir(year_folder) + res_file_folder = os.path.join(self.result_path, 'results', 'VOC' + self.year, 'Segmentation') + if not os.path.exists(res_file_folder): + os.mkdir(res_file_folder) + + result_dir = os.path.join(self.result_path, 'results', 'VOC' + self.year, 'Segmentation') + if not os.path.exists(result_dir): 
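+            # results/VOC<year>/Segmentation is created on demand; each prediction
+            # is written below as a paletted PNG named "<image_index>.png".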
+ os.mkdir(result_dir) + + pallete = self.get_pallete(256) + + for i, index in enumerate(self.image_set_index): + segmentation_result = np.uint8(np.squeeze(np.copy(pred_segmentations[i]))) + segmentation_result = PIL.Image.fromarray(segmentation_result) + segmentation_result.putpalette(pallete) + segmentation_result.save(os.path.join(result_dir, '%s.png'%(index))) + + def get_pallete(self, num_cls): + """ + this function is to get the colormap for visualizing the segmentation mask + :param num_cls: the number of visulized class + :return: the pallete + """ + n = num_cls + pallete = [0]*(n*3) + for j in xrange(0,n): + lab = j + pallete[j*3+0] = 0 + pallete[j*3+1] = 0 + pallete[j*3+2] = 0 + i = 0 + while (lab > 0): + pallete[j*3+0] |= (((lab >> 0) & 1) << (7-i)) + pallete[j*3+1] |= (((lab >> 1) & 1) << (7-i)) + pallete[j*3+2] |= (((lab >> 2) & 1) << (7-i)) + i = i + 1 + lab >>= 3 + return pallete + + def get_confusion_matrix(self, gt_label, pred_label, class_num): + """ + Calcute the confusion matrix by given label and pred + :param gt_label: the ground truth label + :param pred_label: the pred label + :param class_num: the nunber of class + :return: the confusion matrix + """ + index = (gt_label * class_num + pred_label).astype('int32') + label_count = np.bincount(index) + confusion_matrix = np.zeros((class_num, class_num)) + + for i_label in range(class_num): + for i_pred_label in range(class_num): + cur_index = i_label * class_num + i_pred_label + if cur_index < len(label_count): + confusion_matrix[i_label, i_pred_label] = label_count[cur_index] + + return confusion_matrix + + def _py_evaluate_segmentation(self): + """ + This function is a wrapper to calculte the metrics for given pred_segmentation results + :param pred_segmentations: the pred segmentation result + :return: the evaluation metrics + """ + confusion_matrix = np.zeros((self.num_classes,self.num_classes)) + result_dir = os.path.join(self.result_path, 'results', 'VOC' + self.year, 'Segmentation') + + for i, index in enumerate(self.image_set_index): + seg_gt_info = self.load_pascal_segmentation_annotation(index) + seg_gt_path = seg_gt_info['seg_cls_path'] + seg_gt = np.array(PIL.Image.open(seg_gt_path)).astype('float32') + seg_pred_path = os.path.join(result_dir, '%s.png'%(index)) + seg_pred = np.array(PIL.Image.open(seg_pred_path)).astype('float32') + + seg_gt = cv2.resize(seg_gt, (seg_pred.shape[1], seg_pred.shape[0]), interpolation=cv2.INTER_NEAREST) + ignore_index = seg_gt != 255 + seg_gt = seg_gt[ignore_index] + seg_pred = seg_pred[ignore_index] + + confusion_matrix += self.get_confusion_matrix(seg_gt, seg_pred, self.num_classes) + + pos = confusion_matrix.sum(1) + res = confusion_matrix.sum(0) + tp = np.diag(confusion_matrix) + + IU_array = (tp / np.maximum(1.0, pos + res - tp)) + mean_IU = IU_array.mean() + + return {'meanIU':mean_IU, 'IU_array':IU_array} + + def get_result_file_template(self): + """ + this is a template + VOCdevkit/results/VOC2007/Main/_det_test_aeroplane.txt + :return: a string template + """ + res_file_folder = os.path.join(self.result_path, 'results', 'VOC' + self.year, 'Main') + comp_id = self.config['comp_id'] + filename = comp_id + '_det_' + self.image_set + '_{:s}.txt' + path = os.path.join(res_file_folder, filename) + return path + + def write_pascal_results(self, all_boxes): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + 
print 'Writing {} VOC results file'.format(cls) + filename = self.get_result_file_template().format(cls) + with open(filename, 'wt') as f: + for im_ind, index in enumerate(self.image_set_index): + dets = all_boxes[cls_ind][im_ind] + if len(dets) == 0: + continue + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + f.write('{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\n'. + format(index, dets[k, -1], + dets[k, 0] + 1, dets[k, 1] + 1, dets[k, 2] + 1, dets[k, 3] + 1)) + + def do_python_eval(self): + """ + python evaluation wrapper + :return: info_str + """ + info_str = '' + annopath = os.path.join(self.data_path, 'Annotations', '{0!s}.xml') + imageset_file = os.path.join(self.data_path, 'ImageSets', 'Main', self.image_set + '.txt') + annocache = os.path.join(self.cache_path, self.name + '_annotations.pkl') + aps = [] + # The PASCAL VOC metric changed in 2010 + use_07_metric = True if self.year == 'SDS' or int(self.year) < 2010 else False + print 'VOC07 metric? ' + ('Y' if use_07_metric else 'No') + info_str += 'VOC07 metric? ' + ('Y' if use_07_metric else 'No') + info_str += '\n' + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + filename = self.get_result_file_template().format(cls) + rec, prec, ap = voc_eval(filename, annopath, imageset_file, cls, annocache, + ovthresh=0.5, use_07_metric=use_07_metric) + aps += [ap] + print('AP for {} = {:.4f}'.format(cls, ap)) + info_str += 'AP for {} = {:.4f}\n'.format(cls, ap) + print('Mean AP@0.5 = {:.4f}'.format(np.mean(aps))) + info_str += 'Mean AP@0.5 = {:.4f}\n\n'.format(np.mean(aps)) + # @0.7 + aps = [] + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + filename = self.get_result_file_template().format(cls) + rec, prec, ap = voc_eval(filename, annopath, imageset_file, cls, annocache, + ovthresh=0.7, use_07_metric=use_07_metric) + aps += [ap] + print('AP for {} = {:.4f}'.format(cls, ap)) + info_str += 'AP for {} = {:.4f}\n'.format(cls, ap) + print('Mean AP@0.7 = {:.4f}'.format(np.mean(aps))) + info_str += 'Mean AP@0.7 = {:.4f}'.format(np.mean(aps)) + return info_str diff --git a/lib/dataset/pascal_voc_eval.py b/lib/dataset/pascal_voc_eval.py new file mode 100644 index 0000000..f5cc106 --- /dev/null +++ b/lib/dataset/pascal_voc_eval.py @@ -0,0 +1,363 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The Apache-2.0 License [see LICENSE for details] +# Modified by Haozhi Qi, from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- +""" +given a pascal voc imdb, compute mAP +""" + +import numpy as np +import os +import cPickle +from mask.mask_transform import mask_overlap + + +def parse_voc_rec(filename): + """ + parse pascal voc record into a dictionary + :param filename: xml file path + :return: list of dict + """ + import xml.etree.ElementTree as ET + tree = ET.parse(filename) + objects = [] + for obj in tree.findall('object'): + obj_dict = dict() + obj_dict['name'] = obj.find('name').text + obj_dict['difficult'] = int(obj.find('difficult').text) + bbox = obj.find('bndbox') + obj_dict['bbox'] = [int(float(bbox.find('xmin').text)), + int(float(bbox.find('ymin').text)), + int(float(bbox.find('xmax').text)), + int(float(bbox.find('ymax').text))] + objects.append(obj_dict) + return objects + + +def voc_ap(rec, prec, use_07_metric=False): + """ + average precision calculations + [precision 
integrated to recall] + :param rec: recall + :param prec: precision + :param use_07_metric: 2007 metric is 11-recall-point based AP + :return: average precision + """ + if use_07_metric: + ap = 0. + for t in np.arange(0., 1.1, 0.1): + if np.sum(rec >= t) == 0: + p = 0 + else: + p = np.max(prec[rec >= t]) + ap += p / 11. + else: + # append sentinel values at both ends + mrec = np.concatenate(([0.], rec, [1.])) + mpre = np.concatenate(([0.], prec, [0.])) + + # compute precision integration ladder + for i in range(mpre.size - 1, 0, -1): + mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) + + # look for recall value changes + i = np.where(mrec[1:] != mrec[:-1])[0] + + # sum (\delta recall) * prec + ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) + return ap + + +def voc_eval(detpath, annopath, imageset_file, classname, annocache, ovthresh=0.5, use_07_metric=False): + """ + pascal voc evaluation + :param detpath: detection results detpath.format(classname) + :param annopath: annotations annopath.format(classname) + :param imageset_file: text file containing list of images + :param classname: category name + :param annocache: caching annotations + :param ovthresh: overlap threshold + :param use_07_metric: whether to use voc07's 11 point ap computation + :return: rec, prec, ap + """ + with open(imageset_file, 'r') as f: + lines = f.readlines() + image_filenames = [x.strip() for x in lines] + + # load annotations from cache + if not os.path.isfile(annocache): + recs = {} + for ind, image_filename in enumerate(image_filenames): + recs[image_filename] = parse_voc_rec(annopath.format(image_filename)) + if ind % 100 == 0: + print 'reading annotations for {:d}/{:d}'.format(ind + 1, len(image_filenames)) + print 'saving annotations cache to {:s}'.format(annocache) + with open(annocache, 'wb') as f: + cPickle.dump(recs, f, protocol=cPickle.HIGHEST_PROTOCOL) + else: + with open(annocache, 'rb') as f: + recs = cPickle.load(f) + + # extract objects in :param classname: + class_recs = {} + npos = 0 + for image_filename in image_filenames: + objects = [obj for obj in recs[image_filename] if obj['name'] == classname] + bbox = np.array([x['bbox'] for x in objects]) + difficult = np.array([x['difficult'] for x in objects]).astype(np.bool) + det = [False] * len(objects) # stand for detected + npos = npos + sum(~difficult) + class_recs[image_filename] = {'bbox': bbox, + 'difficult': difficult, + 'det': det} + + # read detections + detfile = detpath.format(classname) + with open(detfile, 'r') as f: + lines = f.readlines() + + splitlines = [x.strip().split(' ') for x in lines] + image_ids = [x[0] for x in splitlines] + confidence = np.array([float(x[1]) for x in splitlines]) + bbox = np.array([[float(z) for z in x[2:]] for x in splitlines]) + + # sort by confidence + if bbox.shape[0] > 0: + sorted_inds = np.argsort(-confidence) + sorted_scores = np.sort(-confidence) + bbox = bbox[sorted_inds, :] + image_ids = [image_ids[x] for x in sorted_inds] + + # go down detections and mark true positives and false positives + nd = len(image_ids) + tp = np.zeros(nd) + fp = np.zeros(nd) + for d in range(nd): + r = class_recs[image_ids[d]] + bb = bbox[d, :].astype(float) + ovmax = -np.inf + bbgt = r['bbox'].astype(float) + + if bbgt.size > 0: + # compute overlaps + # intersection + ixmin = np.maximum(bbgt[:, 0], bb[0]) + iymin = np.maximum(bbgt[:, 1], bb[1]) + ixmax = np.minimum(bbgt[:, 2], bb[2]) + iymax = np.minimum(bbgt[:, 3], bb[3]) + iw = np.maximum(ixmax - ixmin + 1., 0.) + ih = np.maximum(iymax - iymin + 1., 0.) 
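+            # Intersection over union with the VOC pixel-inclusive convention: a box
+            # spanning [x1, x2] covers (x2 - x1 + 1) pixels, hence the "+ 1." terms.
+            # IoU = inters / (area(bb) + area(bbgt) - inters); e.g. two 10x10 boxes
+            # offset by 5 pixels in x and y give 25 / (100 + 100 - 25) ~= 0.143.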
+ inters = iw * ih + + # union + uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) + + (bbgt[:, 2] - bbgt[:, 0] + 1.) * + (bbgt[:, 3] - bbgt[:, 1] + 1.) - inters) + + overlaps = inters / uni + ovmax = np.max(overlaps) + jmax = np.argmax(overlaps) + + if ovmax > ovthresh: + if not r['difficult'][jmax]: + if not r['det'][jmax]: + tp[d] = 1. + r['det'][jmax] = 1 + else: + fp[d] = 1. + else: + fp[d] = 1. + + # compute precision recall + fp = np.cumsum(fp) + tp = np.cumsum(tp) + rec = tp / float(npos) + # avoid division by zero in case first detection matches a difficult ground ruth + prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps) + ap = voc_ap(rec, prec, use_07_metric) + + return rec, prec, ap + + +def voc_eval_sds(det_file, seg_file, devkit_path, image_list, cls_name, cache_dir, + class_names, mask_size, binary_thresh, ov_thresh=0.5): + # 1. Check whether ground truth cache file exists + with open(image_list, 'r') as f: + lines = f.readlines() + image_names = [x.strip() for x in lines] + check_voc_sds_cache(cache_dir, devkit_path, image_names, class_names) + gt_cache = cache_dir + '/' + cls_name + '_mask_gt.pkl' + with open(gt_cache, 'rb') as f: + gt_pkl = cPickle.load(f) + + # 2. Get predict pickle file for this class + with open(det_file, 'rb') as f: + boxes_pkl = cPickle.load(f) + with open(seg_file, 'rb') as f: + masks_pkl = cPickle.load(f) + + # 3. Pre-compute number of total instances to allocate memory + num_image = len(image_names) + box_num = 0 + for im_i in xrange(num_image): + box_num += len(boxes_pkl[im_i]) + + # 4. Re-organize all the predicted boxes + new_boxes = np.zeros((box_num, 5)) + new_masks = np.zeros((box_num, mask_size, mask_size)) + new_image = [] + cnt = 0 + for image_ind in xrange(len(image_names)): + boxes = boxes_pkl[image_ind] + masks = masks_pkl[image_ind] + num_instance = len(boxes) + for box_ind in xrange(num_instance): + new_boxes[cnt] = boxes[box_ind] + new_masks[cnt] = masks[box_ind] + new_image.append(image_names[image_ind]) + cnt += 1 + + # 5. Rearrange boxes according to their scores + seg_scores = new_boxes[:, -1] + keep_inds = np.argsort(-seg_scores) + new_boxes = new_boxes[keep_inds, :] + new_masks = new_masks[keep_inds, :, :] + num_pred = new_boxes.shape[0] + import cv2 + # 6. Calculate t/f positive + fp = np.zeros((num_pred, 1)) + tp = np.zeros((num_pred, 1)) + for i in xrange(num_pred): + pred_box = np.round(new_boxes[i, :4]).astype(int) + pred_mask = new_masks[i] + pred_mask = cv2.resize(pred_mask.astype(np.float32), (pred_box[2] - pred_box[0] + 1, pred_box[3] - pred_box[1] + 1)) + pred_mask = pred_mask >= binary_thresh + image_index = new_image[keep_inds[i]] + + if image_index not in gt_pkl: + fp[i] = 1 + continue + gt_dict_list = gt_pkl[image_index] + # calculate max region overlap + cur_overlap = -1000 + cur_overlap_ind = -1 + for ind2, gt_dict in enumerate(gt_dict_list): + gt_mask_bound = np.round(gt_dict['mask_bound']).astype(int) + pred_mask_bound = pred_box + ov = mask_overlap(gt_mask_bound, pred_mask_bound, gt_dict['mask'], pred_mask) + if ov > cur_overlap: + cur_overlap = ov + cur_overlap_ind = ind2 + if cur_overlap >= ov_thresh: + if gt_dict_list[cur_overlap_ind]['already_detect']: + fp[i] = 1 + else: + tp[i] = 1 + gt_dict_list[cur_overlap_ind]['already_detect'] = 1 + else: + fp[i] = 1 + + # 7. 
Calculate precision + num_pos = 0 + for key, val in gt_pkl.iteritems(): + num_pos += len(val) + fp = np.cumsum(fp) + tp = np.cumsum(tp) + rec = tp / float(num_pos) + # avoid divide by zero in case the first matches a difficult gt + prec = tp / np.maximum(fp+tp, np.finfo(np.float64).eps) + ap = voc_ap(rec, prec, True) + return ap + + +def parse_inst(image_name, devkit_path): + """ + Get cooresponding masks, boxes, classes according to image name + Args: + image_name: input image name + devkit_path: root dir for devkit SDS + Returns: + roi/mask dictionary of this image + """ + import PIL + seg_obj_name = os.path.join(devkit_path, 'SegmentationObject', image_name + '.png') + seg_obj_data = PIL.Image.open(seg_obj_name) + seg_obj_data = np.array(seg_obj_data.getdata(), np.uint8).reshape(seg_obj_data.size[1], seg_obj_data.size[0]) + + seg_cls_name = os.path.join(devkit_path, 'SegmentationClass', image_name + '.png') + seg_cls_data = PIL.Image.open(seg_cls_name) + seg_cls_data = np.array(seg_cls_data.getdata(), np.uint8).reshape(seg_cls_data.size[1], seg_cls_data.size[0]) + + unique_inst = np.unique(seg_obj_data) + # delete background pixels + background_ind = np.where(unique_inst == 0)[0] + unique_inst = np.delete(unique_inst, background_ind) + record = [] + for inst_ind in xrange(unique_inst.shape[0]): + [r, c] = np.where(seg_obj_data == unique_inst[inst_ind]) + mask_bound = np.zeros(4, dtype=int) + mask_bound[0] = np.min(c) + mask_bound[1] = np.min(r) + mask_bound[2] = np.max(c) + mask_bound[3] = np.max(r) + mask = seg_obj_data[mask_bound[1]:mask_bound[3]+1, mask_bound[0]:mask_bound[2]+1] + mask = (mask == unique_inst[inst_ind]) + mask_cls = seg_cls_data[mask_bound[1]:mask_bound[3]+1, mask_bound[0]:mask_bound[2]+1] + mask_cls = mask_cls[mask] + num_cls = np.unique(mask_cls) + assert num_cls.shape[0] == 1 + cur_inst = num_cls[0] + record.append({ + 'mask': mask, + 'mask_cls': cur_inst, + 'mask_bound': mask_bound + }) + + return record + + +def check_voc_sds_cache(cache_dir, devkit_path, image_names, class_names): + """ + Args: + cache_dir: output directory for cached mask annotation + devkit_path: root directory of VOCdevkitSDS + image_names: used for parse image instances + class_names: VOC 20 class names + """ + + if not os.path.isdir(cache_dir): + os.mkdir(cache_dir) + + exist_cache = True + for cls_name in class_names: + if cls_name == '__background__': + continue + cache_name = os.path.join(cache_dir, cls_name + '_mask_gt.pkl') + if not os.path.isfile(cache_name): + exist_cache = False + break + + if not exist_cache: + # load annotations: + # create a list with size classes + record_list = [{} for _ in xrange(21)] + for i, image_name in enumerate(image_names): + record = parse_inst(image_name, devkit_path) + for j, mask_dic in enumerate(record): + cls = mask_dic['mask_cls'] + mask_dic['already_detect'] = False + if image_name not in record_list[cls]: + record_list[cls][image_name] = [] + record_list[cls][image_name].append(mask_dic) + if i % 100 == 0: + print 'Reading annotation for {:d}/{:d}'.format(i + 1, len(image_names)) + + print 'Saving cached annotations...' 
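+        # One cache file is written per class ("<class>_mask_gt.pkl"); it maps each
+        # image name to the list of instance records built by parse_inst above
+        # ({'mask', 'mask_cls', 'mask_bound', 'already_detect'}).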
+ for cls_ind, name in enumerate(class_names): + if name == '__background__': + continue + cachefile = os.path.join(cache_dir, name + '_mask_gt.pkl') + with open(cachefile, 'wb') as f: + cPickle.dump(record_list[cls_ind], f) \ No newline at end of file diff --git a/lib/dataset/pycocotools/.gitignore b/lib/dataset/pycocotools/.gitignore new file mode 100644 index 0000000..b261f51 --- /dev/null +++ b/lib/dataset/pycocotools/.gitignore @@ -0,0 +1 @@ +_mask.c diff --git a/lib/dataset/pycocotools/UPSTREAM_REV b/lib/dataset/pycocotools/UPSTREAM_REV new file mode 100644 index 0000000..706219b --- /dev/null +++ b/lib/dataset/pycocotools/UPSTREAM_REV @@ -0,0 +1 @@ +https://github.com/pdollar/coco/commit/3ac47c77ebd5a1ed4254a98b7fbf2ef4765a3574 diff --git a/lib/dataset/pycocotools/__init__.py b/lib/dataset/pycocotools/__init__.py new file mode 100644 index 0000000..3f7d85b --- /dev/null +++ b/lib/dataset/pycocotools/__init__.py @@ -0,0 +1 @@ +__author__ = 'tylin' diff --git a/lib/dataset/pycocotools/_mask.pyx b/lib/dataset/pycocotools/_mask.pyx new file mode 100644 index 0000000..902228b --- /dev/null +++ b/lib/dataset/pycocotools/_mask.pyx @@ -0,0 +1,294 @@ +# distutils: language = c +# distutils: sources = maskApi.c + +#************************************************************************** +# Microsoft COCO Toolbox. version 2.0 +# Data, paper, and tutorials available at: http://mscoco.org/ +# Code written by Piotr Dollar and Tsung-Yi Lin, 2015. +# Licensed under the Simplified BSD License [see coco/license.txt] +#************************************************************************** + +__author__ = 'tsungyi' + +# import both Python-level and C-level symbols of Numpy +# the API uses Numpy to interface C and Python +import numpy as np +cimport numpy as np +from libc.stdlib cimport malloc, free + +# intialized Numpy. must do. 
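+# np.import_array() initializes NumPy's C API; without it the PyArray_* calls
+# used below (e.g. PyArray_SimpleNewFromData, PyArray_ENABLEFLAGS) would crash.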
+np.import_array() + +# important, use PyMem_Malloc rather than malloc to avoid crash on windows +from cpython.mem cimport PyMem_Malloc, PyMem_Realloc, PyMem_Free + +# import numpy C function +# we use PyArray_ENABLEFLAGS to make Numpy ndarray responsible to memoery management +cdef extern from "numpy/arrayobject.h": + void PyArray_ENABLEFLAGS(np.ndarray arr, int flags) + +# Declare the prototype of the C functions in MaskApi.h +cdef extern from "maskApi.h": + ctypedef unsigned int uint + ctypedef unsigned long siz + ctypedef unsigned char byte + ctypedef double* BB + ctypedef struct RLE: + siz h, + siz w, + siz m, + uint* cnts, + void rlesInit( RLE **R, siz n ) + void rleEncode( RLE *R, const byte *M, siz h, siz w, siz n ) + void rleDecode( const RLE *R, byte *mask, siz n ) + void rleMerge( const RLE *R, RLE *M, siz n, bint intersect ) + void rleArea( const RLE *R, siz n, uint *a ) + void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o ) + void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o ) + void rleToBbox( const RLE *R, BB bb, siz n ) + void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n ) + void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w ) + char* rleToString( const RLE *R ) + void rleFrString( RLE *R, char *s, siz h, siz w ) + +# python class to wrap RLE array in C +# the class handles the memory allocation and deallocation +cdef class RLEs: + cdef RLE *_R + cdef siz _n + + def __cinit__(self, siz n =0): + rlesInit(&self._R, n) + self._n = n + + # free the RLE array here + #def __dealloc__(self): + #if self._R is not NULL: + #for i in range(self._n): + # free(self._R[i].cnts) + # free(self._R) + def __getattr__(self, key): + if key == 'n': + return self._n + raise AttributeError(key) + +# python class to wrap Mask array in C +# the class handles the memory allocation and deallocation +cdef class Masks: + cdef byte *_mask + cdef siz _h + cdef siz _w + cdef siz _n + + def __cinit__(self, h, w, n): + self._mask = PyMem_Malloc(h*w*n* sizeof(byte)) + self._h = h + self._w = w + self._n = n + # def __dealloc__(self): + # the memory management of _mask has been passed to np.ndarray + # it doesn't need to be freed here + + # called when passing into np.array() and return an np.ndarray in column-major order + def __array__(self): + cdef np.npy_intp shape[1] + shape[0] = self._h*self._w*self._n + # Create a 1D array, and reshape it to fortran/Matlab column-major array + ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F') + # The _mask allocated by Masks is now handled by ndarray + PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA) + return ndarray + +# internal conversion from Python RLEs object to compressed RLE format +def _toString(RLEs Rs): + cdef siz n = Rs.n + cdef bytes py_string + cdef char* c_string + objs = [] + for i in range(n): + c_string = rleToString( &Rs._R[i] ) + py_string = c_string + objs.append({ + 'size': [Rs._R[i].h, Rs._R[i].w], + 'counts': py_string + }) + #free(c_string) + return objs + +# internal conversion from compressed RLE format to Python RLEs object +def _frString(rleObjs): + cdef siz n = len(rleObjs) + Rs = RLEs(n) + cdef bytes py_string + cdef char* c_string + for i, obj in enumerate(rleObjs): + py_string = str(obj['counts']) + c_string = py_string + rleFrString( &Rs._R[i], c_string, obj['size'][0], obj['size'][1] ) + return Rs + +# encode mask to RLEs objects +# list of RLE string can be generated by RLEs member function +def 
encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask): + h, w, n = mask.shape[0], mask.shape[1], mask.shape[2] + cdef RLEs Rs = RLEs(n) + rleEncode(Rs._R,mask.data,h,w,n) + objs = _toString(Rs) + return objs + +# decode mask from compressed list of RLE string or RLEs object +def decode(rleObjs): + cdef RLEs Rs = _frString(rleObjs) + h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n + masks = Masks(h, w, n) + rleDecode( Rs._R, masks._mask, n ); + return np.array(masks) + +def merge(rleObjs, bint intersect=0): + cdef RLEs Rs = _frString(rleObjs) + cdef RLEs R = RLEs(1) + rleMerge(Rs._R, R._R, Rs._n, intersect) + obj = _toString(R)[0] + return obj + +def area(rleObjs): + cdef RLEs Rs = _frString(rleObjs) + _a = PyMem_Malloc(Rs._n* sizeof(uint)) + rleArea(Rs._R, Rs._n, _a) + cdef np.npy_intp shape[1] + shape[0] = Rs._n + a = np.array((Rs._n, ), dtype=np.uint8) + a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a) + PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA) + return a + +# iou computation. support function overload (RLEs-RLEs and bbox-bbox). +def iou( dt, gt, pyiscrowd ): + def _preproc(objs): + if len(objs) == 0: + return objs + if type(objs) == np.ndarray: + if len(objs.shape) == 1: + objs = objs.reshape((objs[0], 1)) + # check if it's Nx4 bbox + if not len(objs.shape) == 2 or not objs.shape[1] == 4: + raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension') + objs = objs.astype(np.double) + elif type(objs) == list: + # check if list is in box format and convert it to np.ndarray + isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs])) + isrle = np.all(np.array([type(obj) == dict for obj in objs])) + if isbox: + objs = np.array(objs, dtype=np.double) + if len(objs.shape) == 1: + objs = objs.reshape((1,objs.shape[0])) + elif isrle: + objs = _frString(objs) + else: + raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])') + else: + raise Exception('unrecognized type. 
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.') + return objs + def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): + rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data ) + def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): + bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data ) + def _len(obj): + cdef siz N = 0 + if type(obj) == RLEs: + N = obj.n + elif len(obj)==0: + pass + elif type(obj) == np.ndarray: + N = obj.shape[0] + return N + # convert iscrowd to numpy array + cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8) + # simple type checking + cdef siz m, n + dt = _preproc(dt) + gt = _preproc(gt) + m = _len(dt) + n = _len(gt) + if m == 0 or n == 0: + return [] + if not type(dt) == type(gt): + raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray') + + # define local variables + cdef double* _iou = 0 + cdef np.npy_intp shape[1] + # check type and assign iou function + if type(dt) == RLEs: + _iouFun = _rleIou + elif type(dt) == np.ndarray: + _iouFun = _bbIou + else: + raise Exception('input data type not allowed.') + _iou = PyMem_Malloc(m*n* sizeof(double)) + iou = np.zeros((m*n, ), dtype=np.double) + shape[0] = m*n + iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou) + PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA) + _iouFun(dt, gt, iscrowd, m, n, iou) + return iou.reshape((m,n), order='F') + +def toBbox( rleObjs ): + cdef RLEs Rs = _frString(rleObjs) + cdef siz n = Rs.n + cdef BB _bb = PyMem_Malloc(4*n* sizeof(double)) + rleToBbox( Rs._R, _bb, n ) + cdef np.npy_intp shape[1] + shape[0] = 4*n + bb = np.array((1,4*n), dtype=np.double) + bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4)) + PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA) + return bb + +def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ): + cdef siz n = bb.shape[0] + Rs = RLEs(n) + rleFrBbox( Rs._R, bb.data, h, w, n ) + objs = _toString(Rs) + return objs + +def frPoly( poly, siz h, siz w ): + cdef np.ndarray[np.double_t, ndim=1] np_poly + n = len(poly) + Rs = RLEs(n) + for i, p in enumerate(poly): + np_poly = np.array(p, dtype=np.double, order='F') + rleFrPoly( &Rs._R[i], np_poly.data, len(np_poly)/2, h, w ) + objs = _toString(Rs) + return objs + +def frUncompressedRLE(ucRles, siz h, siz w): + cdef np.ndarray[np.uint32_t, ndim=1] cnts + cdef RLE R + cdef uint *data + n = len(ucRles) + objs = [] + for i in range(n): + Rs = RLEs(1) + cnts = np.array(ucRles[i]['counts'], dtype=np.uint32) + # time for malloc can be saved here but it's fine + data = PyMem_Malloc(len(cnts)* sizeof(uint)) + for j in range(len(cnts)): + data[j] = cnts[j] + R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data) + Rs._R[0] = R + objs.append(_toString(Rs)[0]) + return objs + +def frPyObjects(pyobj, siz h, w): + if type(pyobj) == np.ndarray: + objs = frBbox(pyobj, h, w ) + elif type(pyobj) == list and len(pyobj[0]) == 4: + objs = frBbox(pyobj, h, w ) + elif type(pyobj) == list and len(pyobj[0]) > 4: + objs = frPoly(pyobj, h, w ) + elif type(pyobj) == list and type(pyobj[0]) == dict: + objs = frUncompressedRLE(pyobj, h, w) + else: + raise Exception('input type is not supported.') + return objs diff --git a/lib/dataset/pycocotools/coco.py b/lib/dataset/pycocotools/coco.py new file mode 100644 index 
0000000..50ee30c --- /dev/null +++ b/lib/dataset/pycocotools/coco.py @@ -0,0 +1,448 @@ +__author__ = 'tylin' +__version__ = '1.0.1' +# Interface for accessing the Microsoft COCO dataset. + +# Microsoft COCO is a large image dataset designed for object detection, +# segmentation, and caption generation. pycocotools is a Python API that +# assists in loading, parsing and visualizing the annotations in COCO. +# Please visit http://mscoco.org/ for more information on COCO, including +# for the data, paper, and tutorials. The exact format of the annotations +# is also described on the COCO website. For example usage of the pycocotools +# please see pycocotools_demo.ipynb. In addition to this API, please download both +# the COCO images and annotations in order to run the demo. + +# An alternative to using the API is to load the annotations directly +# into Python dictionary +# Using the API provides additional utility functions. Note that this API +# supports both *instance* and *caption* annotations. In the case of +# captions not all functions are defined (e.g. categories are undefined). + +# The following API functions are defined: +# COCO - COCO api class that loads COCO annotation file and prepare data structures. +# decodeMask - Decode binary mask M encoded via run-length encoding. +# encodeMask - Encode binary mask M using run-length encoding. +# getAnnIds - Get ann ids that satisfy given filter conditions. +# getCatIds - Get cat ids that satisfy given filter conditions. +# getImgIds - Get img ids that satisfy given filter conditions. +# loadAnns - Load anns with the specified ids. +# loadCats - Load cats with the specified ids. +# loadImgs - Load imgs with the specified ids. +# segToMask - Convert polygon segmentation to binary mask. +# showAnns - Display the specified annotations. +# loadRes - Load algorithm results and create API for accessing them. +# download - Download COCO images from mscoco.org server. +# Throughout the API "ann"=annotation, "cat"=category, and "img"=image. +# Help on each functions can be accessed by: "help COCO>function". + +# See also COCO>decodeMask, +# COCO>encodeMask, COCO>getAnnIds, COCO>getCatIds, +# COCO>getImgIds, COCO>loadAnns, COCO>loadCats, +# COCO>loadImgs, COCO>segToMask, COCO>showAnns + +# Microsoft COCO Toolbox. version 2.0 +# Data, paper, and tutorials available at: http://mscoco.org/ +# Code written by Piotr Dollar and Tsung-Yi Lin, 2014. +# Licensed under the Simplified BSD License [see bsd.txt] + +import json +import datetime +import time +import matplotlib.pyplot as plt +from matplotlib.collections import PatchCollection +from matplotlib.patches import Polygon +import numpy as np +from skimage.draw import polygon +import urllib +import copy +import itertools +import mask +import os + +class COCO: + def __init__(self, annotation_file=None): + """ + Constructor of Microsoft COCO helper class for reading and visualizing annotations. + :param annotation_file (str): location of annotation file + :param image_folder (str): location to the folder that hosts images. + :return: + """ + # load dataset + self.dataset = {} + self.anns = [] + self.imgToAnns = {} + self.catToImgs = {} + self.imgs = {} + self.cats = {} + if not annotation_file == None: + print 'loading annotations into memory...' + tic = time.time() + dataset = json.load(open(annotation_file, 'r')) + print 'Done (t=%0.2fs)'%(time.time()- tic) + self.dataset = dataset + self.createIndex() + + def createIndex(self): + # create index + print 'creating index...' 
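+        # Build the lookup tables used by the get*/load* helpers below:
+        #   anns      : ann id      -> annotation dict
+        #   imgs      : image id    -> image dict
+        #   cats      : category id -> category dict
+        #   imgToAnns : image id    -> [annotations in that image]
+        #   catToImgs : category id -> [image ids containing that category]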
+ anns = {} + imgToAnns = {} + catToImgs = {} + cats = {} + imgs = {} + if 'annotations' in self.dataset: + imgToAnns = {ann['image_id']: [] for ann in self.dataset['annotations']} + anns = {ann['id']: [] for ann in self.dataset['annotations']} + for ann in self.dataset['annotations']: + imgToAnns[ann['image_id']] += [ann] + anns[ann['id']] = ann + + if 'images' in self.dataset: + imgs = {im['id']: {} for im in self.dataset['images']} + for img in self.dataset['images']: + imgs[img['id']] = img + + if 'categories' in self.dataset: + cats = {cat['id']: [] for cat in self.dataset['categories']} + for cat in self.dataset['categories']: + cats[cat['id']] = cat + + if 'annotations' in self.dataset and 'categories' in self.dataset: + catToImgs = {cat['id']: [] for cat in self.dataset['categories']} + for ann in self.dataset['annotations']: + catToImgs[ann['category_id']] += [ann['image_id']] + + print 'index created!' + + # create class members + self.anns = anns + self.imgToAnns = imgToAnns + self.catToImgs = catToImgs + self.imgs = imgs + self.cats = cats + + def info(self): + """ + Print information about the annotation file. + :return: + """ + for key, value in self.dataset['info'].items(): + print '%s: %s'%(key, value) + + def getAnnIds(self, imgIds=[], catIds=[], areaRng=[], iscrowd=None): + """ + Get ann ids that satisfy given filter conditions. default skips that filter + :param imgIds (int array) : get anns for given imgs + catIds (int array) : get anns for given cats + areaRng (float array) : get anns for given area range (e.g. [0 inf]) + iscrowd (boolean) : get anns for given crowd label (False or True) + :return: ids (int array) : integer array of ann ids + """ + imgIds = imgIds if type(imgIds) == list else [imgIds] + catIds = catIds if type(catIds) == list else [catIds] + + if len(imgIds) == len(catIds) == len(areaRng) == 0: + anns = self.dataset['annotations'] + else: + if not len(imgIds) == 0: + # this can be changed by defaultdict + lists = [self.imgToAnns[imgId] for imgId in imgIds if imgId in self.imgToAnns] + anns = list(itertools.chain.from_iterable(lists)) + else: + anns = self.dataset['annotations'] + anns = anns if len(catIds) == 0 else [ann for ann in anns if ann['category_id'] in catIds] + anns = anns if len(areaRng) == 0 else [ann for ann in anns if ann['area'] > areaRng[0] and ann['area'] < areaRng[1]] + if not iscrowd == None: + ids = [ann['id'] for ann in anns if ann['iscrowd'] == iscrowd] + else: + ids = [ann['id'] for ann in anns] + return ids + + def getCatIds(self, catNms=[], supNms=[], catIds=[]): + """ + filtering parameters. default skips that filter. 
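+        Filters are ANDed together, and a scalar argument is treated as a
+        one-element list.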
+ :param catNms (str array) : get cats for given cat names + :param supNms (str array) : get cats for given supercategory names + :param catIds (int array) : get cats for given cat ids + :return: ids (int array) : integer array of cat ids + """ + catNms = catNms if type(catNms) == list else [catNms] + supNms = supNms if type(supNms) == list else [supNms] + catIds = catIds if type(catIds) == list else [catIds] + + if len(catNms) == len(supNms) == len(catIds) == 0: + cats = self.dataset['categories'] + else: + cats = self.dataset['categories'] + cats = cats if len(catNms) == 0 else [cat for cat in cats if cat['name'] in catNms] + cats = cats if len(supNms) == 0 else [cat for cat in cats if cat['supercategory'] in supNms] + cats = cats if len(catIds) == 0 else [cat for cat in cats if cat['id'] in catIds] + ids = [cat['id'] for cat in cats] + return ids + + def getImgIds(self, imgIds=[], catIds=[]): + ''' + Get img ids that satisfy given filter conditions. + :param imgIds (int array) : get imgs for given ids + :param catIds (int array) : get imgs with all given cats + :return: ids (int array) : integer array of img ids + ''' + imgIds = imgIds if type(imgIds) == list else [imgIds] + catIds = catIds if type(catIds) == list else [catIds] + + if len(imgIds) == len(catIds) == 0: + ids = self.imgs.keys() + else: + ids = set(imgIds) + for i, catId in enumerate(catIds): + if i == 0 and len(ids) == 0: + ids = set(self.catToImgs[catId]) + else: + ids &= set(self.catToImgs[catId]) + return list(ids) + + def loadAnns(self, ids=[]): + """ + Load anns with the specified ids. + :param ids (int array) : integer ids specifying anns + :return: anns (object array) : loaded ann objects + """ + if type(ids) == list: + return [self.anns[id] for id in ids] + elif type(ids) == int: + return [self.anns[ids]] + + def loadCats(self, ids=[]): + """ + Load cats with the specified ids. + :param ids (int array) : integer ids specifying cats + :return: cats (object array) : loaded cat objects + """ + if type(ids) == list: + return [self.cats[id] for id in ids] + elif type(ids) == int: + return [self.cats[ids]] + + def loadImgs(self, ids=[]): + """ + Load anns with the specified ids. + :param ids (int array) : integer ids specifying img + :return: imgs (object array) : loaded img objects + """ + if type(ids) == list: + return [self.imgs[id] for id in ids] + elif type(ids) == int: + return [self.imgs[ids]] + + def showAnns(self, anns): + """ + Display the specified annotations. 
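+        Polygon segmentations are drawn as translucent patches, RLE masks are
+        decoded and alpha-blended onto the current axes, and caption
+        annotations are simply printed.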
+ :param anns (array of object): annotations to display + :return: None + """ + if len(anns) == 0: + return 0 + if 'segmentation' in anns[0]: + datasetType = 'instances' + elif 'caption' in anns[0]: + datasetType = 'captions' + if datasetType == 'instances': + ax = plt.gca() + polygons = [] + color = [] + for ann in anns: + c = np.random.random((1, 3)).tolist()[0] + if type(ann['segmentation']) == list: + # polygon + for seg in ann['segmentation']: + poly = np.array(seg).reshape((len(seg)/2, 2)) + polygons.append(Polygon(poly, True,alpha=0.4)) + color.append(c) + else: + # mask + t = self.imgs[ann['image_id']] + if type(ann['segmentation']['counts']) == list: + rle = mask.frPyObjects([ann['segmentation']], t['height'], t['width']) + else: + rle = [ann['segmentation']] + m = mask.decode(rle) + img = np.ones( (m.shape[0], m.shape[1], 3) ) + if ann['iscrowd'] == 1: + color_mask = np.array([2.0,166.0,101.0])/255 + if ann['iscrowd'] == 0: + color_mask = np.random.random((1, 3)).tolist()[0] + for i in range(3): + img[:,:,i] = color_mask[i] + ax.imshow(np.dstack( (img, m*0.5) )) + p = PatchCollection(polygons, facecolors=color, edgecolors=(0,0,0,1), linewidths=3, alpha=0.4) + ax.add_collection(p) + elif datasetType == 'captions': + for ann in anns: + print ann['caption'] + + def loadRes(self, resFile): + """ + Load result file and return a result api object. + :param resFile (str) : file name of result file + :return: res (obj) : result api object + """ + res = COCO() + res.dataset['images'] = [img for img in self.dataset['images']] + # res.dataset['info'] = copy.deepcopy(self.dataset['info']) + # res.dataset['licenses'] = copy.deepcopy(self.dataset['licenses']) + + print 'Loading and preparing results... ' + tic = time.time() + anns = json.load(open(resFile)) + assert type(anns) == list, 'results in not an array of objects' + annsImgIds = [ann['image_id'] for ann in anns] + assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \ + 'Results do not correspond to current coco set' + if 'caption' in anns[0]: + imgIds = set([img['id'] for img in res.dataset['images']]) & set([ann['image_id'] for ann in anns]) + res.dataset['images'] = [img for img in res.dataset['images'] if img['id'] in imgIds] + for id, ann in enumerate(anns): + ann['id'] = id+1 + elif 'bbox' in anns[0] and not anns[0]['bbox'] == []: + res.dataset['categories'] = copy.deepcopy(self.dataset['categories']) + for id, ann in enumerate(anns): + bb = ann['bbox'] + x1, x2, y1, y2 = [bb[0], bb[0]+bb[2], bb[1], bb[1]+bb[3]] + if not 'segmentation' in ann: + ann['segmentation'] = [[x1, y1, x1, y2, x2, y2, x2, y1]] + ann['area'] = bb[2]*bb[3] + ann['id'] = id+1 + ann['iscrowd'] = 0 + elif 'segmentation' in anns[0]: + res.dataset['categories'] = copy.deepcopy(self.dataset['categories']) + for id, ann in enumerate(anns): + # now only support compressed RLE format as segmentation results + ann['area'] = mask.area([ann['segmentation']])[0] + if not 'bbox' in ann: + ann['bbox'] = mask.toBbox([ann['segmentation']])[0] + ann['id'] = id+1 + ann['iscrowd'] = 0 + print 'DONE (t=%0.2fs)'%(time.time()- tic) + + res.dataset['annotations'] = anns + res.createIndex() + return res + + def download( self, tarDir = None, imgIds = [] ): + ''' + Download COCO images from mscoco.org server. 
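+        Images that already exist in tarDir are skipped rather than re-downloaded.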
+ :param tarDir (str): COCO results directory name + imgIds (list): images to be downloaded + :return: + ''' + if tarDir is None: + print 'Please specify target directory' + return -1 + if len(imgIds) == 0: + imgs = self.imgs.values() + else: + imgs = self.loadImgs(imgIds) + N = len(imgs) + if not os.path.exists(tarDir): + os.makedirs(tarDir) + for i, img in enumerate(imgs): + tic = time.time() + fname = os.path.join(tarDir, img['file_name']) + if not os.path.exists(fname): + urllib.urlretrieve(img['coco_url'], fname) + print 'downloaded %d/%d images (t=%.1fs)'%(i, N, time.time()- tic) + + @staticmethod + def decodeMask(R): + """ + Decode binary mask M encoded via run-length encoding. + :param R (object RLE) : run-length encoding of binary mask + :return: M (bool 2D array) : decoded binary mask + """ + N = len(R['counts']) + M = np.zeros( (R['size'][0]*R['size'][1], )) + n = 0 + val = 1 + for pos in range(N): + val = not val + for c in range(R['counts'][pos]): + R['counts'][pos] + M[n] = val + n += 1 + return M.reshape((R['size']), order='F') + + @staticmethod + def encodeMask(M): + """ + Encode binary mask M using run-length encoding. + :param M (bool 2D array) : binary mask to encode + :return: R (object RLE) : run-length encoding of binary mask + """ + [h, w] = M.shape + M = M.flatten(order='F') + N = len(M) + counts_list = [] + pos = 0 + # counts + counts_list.append(1) + diffs = np.logical_xor(M[0:N-1], M[1:N]) + for diff in diffs: + if diff: + pos +=1 + counts_list.append(1) + else: + counts_list[pos] += 1 + # if array starts from 1. start with 0 counts for 0 + if M[0] == 1: + counts_list = [0] + counts_list + return {'size': [h, w], + 'counts': counts_list , + } + + @staticmethod + def segToMask( S, h, w ): + """ + Convert polygon segmentation to binary mask. + :param S (float array) : polygon segmentation mask + :param h (int) : target mask height + :param w (int) : target mask width + :return: M (bool 2D array) : binary mask + """ + M = np.zeros((h,w), dtype=np.bool) + for s in S: + N = len(s) + rr, cc = polygon(np.array(s[1:N:2]).clip(max=h-1), \ + np.array(s[0:N:2]).clip(max=w-1)) # (y, x) + M[rr, cc] = 1 + return M + + + + def annToRLE(self, ann): + """ + Convert annotation which can be polygons, uncompressed RLE to RLE. + :return: binary mask (numpy 2D array) + """ + t = self.imgs[ann['image_id']] + h, w = t['height'], t['width'] + segm = ann['segmentation'] + if type(segm) == list: + # polygon -- a single object might consist of multiple parts + # we merge all parts into one mask rle code + rles = mask.frPyObjects(segm, h, w) + rle = mask.merge(rles) + elif type(segm['counts']) == list: + # uncompressed RLE + rle = mask.frPyObjects(segm, h, w) + else: + # rle + rle = ann['segmentation'] + return rle + + def annToMask(self, ann): + """ + Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask. + :return: binary mask (numpy 2D array) + """ + rle = self.annToRLE(ann) + m = mask.decode(rle) + return m diff --git a/lib/dataset/pycocotools/cocoeval.py b/lib/dataset/pycocotools/cocoeval.py new file mode 100644 index 0000000..f389eb0 --- /dev/null +++ b/lib/dataset/pycocotools/cocoeval.py @@ -0,0 +1,444 @@ +__author__ = 'tsungyi' + +import numpy as np +import datetime +import time +from collections import defaultdict +import mask +import copy + +class COCOeval: + # Interface for evaluating detection on the Microsoft COCO dataset. + # + # The usage for CocoEval is as follows: + # cocoGt=..., cocoDt=... 
# load dataset and results + # E = CocoEval(cocoGt,cocoDt); # initialize CocoEval object + # E.params.recThrs = ...; # set parameters as desired + # E.evaluate(); # run per image evaluation + # E.accumulate(); # accumulate per image results + # E.summarize(); # display summary metrics of results + # For example usage see evalDemo.m and http://mscoco.org/. + # + # The evaluation parameters are as follows (defaults in brackets): + # imgIds - [all] N img ids to use for evaluation + # catIds - [all] K cat ids to use for evaluation + # iouThrs - [.5:.05:.95] T=10 IoU thresholds for evaluation + # recThrs - [0:.01:1] R=101 recall thresholds for evaluation + # areaRng - [...] A=4 object area ranges for evaluation + # maxDets - [1 10 100] M=3 thresholds on max detections per image + # useSegm - [1] if true evaluate against ground-truth segments + # useCats - [1] if true use category labels for evaluation # Note: if useSegm=0 the evaluation is run on bounding boxes. + # Note: if useCats=0 category labels are ignored as in proposal scoring. + # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified. + # + # evaluate(): evaluates detections on every image and every category and + # concats the results into the "evalImgs" with fields: + # dtIds - [1xD] id for each of the D detections (dt) + # gtIds - [1xG] id for each of the G ground truths (gt) + # dtMatches - [TxD] matching gt id at each IoU or 0 + # gtMatches - [TxG] matching dt id at each IoU or 0 + # dtScores - [1xD] confidence of each dt + # gtIgnore - [1xG] ignore flag for each gt + # dtIgnore - [TxD] ignore flag for each dt at each IoU + # + # accumulate(): accumulates the per-image, per-category evaluation + # results in "evalImgs" into the dictionary "eval" with fields: + # params - parameters used for evaluation + # date - date evaluation was performed + # counts - [T,R,K,A,M] parameter dimensions (see above) + # precision - [TxRxKxAxM] precision for every evaluation setting + # recall - [TxKxAxM] max recall for every evaluation setting + # Note: precision and recall==-1 for settings with no gt objects. + # + # See also coco, mask, pycocoDemo, pycocoEvalDemo + # + # Microsoft COCO Toolbox. version 2.0 + # Data, paper, and tutorials available at: http://mscoco.org/ + # Code written by Piotr Dollar and Tsung-Yi Lin, 2015. 
+ # Licensed under the Simplified BSD License [see coco/license.txt] + def __init__(self, cocoGt=None, cocoDt=None): + ''' + Initialize CocoEval using coco APIs for gt and dt + :param cocoGt: coco object with ground truth annotations + :param cocoDt: coco object with detection results + :return: None + ''' + self.cocoGt = cocoGt # ground truth COCO API + self.cocoDt = cocoDt # detections COCO API + self.params = {} # evaluation parameters + self.evalImgs = defaultdict(list) # per-image per-category evaluation results [KxAxI] elements + self.eval = {} # accumulated evaluation results + self._gts = defaultdict(list) # gt for evaluation + self._dts = defaultdict(list) # dt for evaluation + self.params = Params() # parameters + self._paramsEval = {} # parameters for evaluation + self.stats = [] # result summarization + self.ious = {} # ious between all gts and dts + if not cocoGt is None: + self.params.imgIds = sorted(cocoGt.getImgIds()) + self.params.catIds = sorted(cocoGt.getCatIds()) + + + def _prepare(self): + ''' + Prepare ._gts and ._dts for evaluation based on params + :return: None + ''' + # + def _toMask(objs, coco): + # modify segmentation by reference + for obj in objs: + t = coco.imgs[obj['image_id']] + if type(obj['segmentation']) == list: + if type(obj['segmentation'][0]) == dict: + print 'debug' + obj['segmentation'] = mask.frPyObjects(obj['segmentation'],t['height'],t['width']) + if len(obj['segmentation']) == 1: + obj['segmentation'] = obj['segmentation'][0] + else: + # an object can have multiple polygon regions + # merge them into one RLE mask + obj['segmentation'] = mask.merge(obj['segmentation']) + elif type(obj['segmentation']) == dict and type(obj['segmentation']['counts']) == list: + obj['segmentation'] = mask.frPyObjects([obj['segmentation']],t['height'],t['width'])[0] + elif type(obj['segmentation']) == dict and \ + type(obj['segmentation']['counts'] == unicode or type(obj['segmentation']['counts']) == str): + pass + else: + raise Exception('segmentation format not supported.') + p = self.params + if p.useCats: + gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds)) + dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds)) + else: + gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds)) + dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds)) + + if p.useSegm: + _toMask(gts, self.cocoGt) + _toMask(dts, self.cocoDt) + self._gts = defaultdict(list) # gt for evaluation + self._dts = defaultdict(list) # dt for evaluation + for gt in gts: + self._gts[gt['image_id'], gt['category_id']].append(gt) + for dt in dts: + self._dts[dt['image_id'], dt['category_id']].append(dt) + self.evalImgs = defaultdict(list) # per-image per-category evaluation results + self.eval = {} # accumulated evaluation results + + def evaluate(self): + ''' + Run per image evaluation on given images and store results (a list of dict) in self.evalImgs + :return: None + ''' + tic = time.time() + print 'Running per image evaluation... 
' + p = self.params + p.imgIds = list(np.unique(p.imgIds)) + if p.useCats: + p.catIds = list(np.unique(p.catIds)) + p.maxDets = sorted(p.maxDets) + self.params=p + + self._prepare() + # loop through images, area range, max detection number + catIds = p.catIds if p.useCats else [-1] + + computeIoU = self.computeIoU + self.ious = {(imgId, catId): computeIoU(imgId, catId) \ + for imgId in p.imgIds + for catId in catIds} + + evaluateImg = self.evaluateImg + maxDet = p.maxDets[-1] + self.evalImgs = [evaluateImg(imgId, catId, areaRng, maxDet) + for catId in catIds + for areaRng in p.areaRng + for imgId in p.imgIds + ] + self._paramsEval = copy.deepcopy(self.params) + toc = time.time() + print 'DONE (t=%0.2fs).'%(toc-tic) + + def computeIoU(self, imgId, catId): + p = self.params + if p.useCats: + gt = self._gts[imgId,catId] + dt = self._dts[imgId,catId] + else: + gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]] + dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]] + if len(gt) == 0 and len(dt) ==0: + return [] + dt = sorted(dt, key=lambda x: -x['score']) + if len(dt) > p.maxDets[-1]: + dt=dt[0:p.maxDets[-1]] + + if p.useSegm: + g = [g['segmentation'] for g in gt] + d = [d['segmentation'] for d in dt] + else: + g = [g['bbox'] for g in gt] + d = [d['bbox'] for d in dt] + + # compute iou between each dt and gt region + iscrowd = [int(o['iscrowd']) for o in gt] + ious = mask.iou(d,g,iscrowd) + return ious + + def evaluateImg(self, imgId, catId, aRng, maxDet): + ''' + perform evaluation for single category and image + :return: dict (single image results) + ''' + # + p = self.params + if p.useCats: + gt = self._gts[imgId,catId] + dt = self._dts[imgId,catId] + else: + gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]] + dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]] + if len(gt) == 0 and len(dt) ==0: + return None + + for g in gt: + if 'ignore' not in g: + g['ignore'] = 0 + if g['iscrowd'] == 1 or g['ignore'] or (g['area']aRng[1]): + g['_ignore'] = 1 + else: + g['_ignore'] = 0 + + # sort dt highest score first, sort gt ignore last + # gt = sorted(gt, key=lambda x: x['_ignore']) + gtind = [ind for (ind, g) in sorted(enumerate(gt), key=lambda (ind, g): g['_ignore']) ] + + gt = [gt[ind] for ind in gtind] + dt = sorted(dt, key=lambda x: -x['score'])[0:maxDet] + iscrowd = [int(o['iscrowd']) for o in gt] + # load computed ious + N_iou = len(self.ious[imgId, catId]) + ious = self.ious[imgId, catId][0:maxDet, np.array(gtind)] if N_iou >0 else self.ious[imgId, catId] + + T = len(p.iouThrs) + G = len(gt) + D = len(dt) + gtm = np.zeros((T,G)) + dtm = np.zeros((T,D)) + gtIg = np.array([g['_ignore'] for g in gt]) + dtIg = np.zeros((T,D)) + if not len(ious)==0: + for tind, t in enumerate(p.iouThrs): + for dind, d in enumerate(dt): + # information about best match so far (m=-1 -> unmatched) + iou = min([t,1-1e-10]) + m = -1 + for gind, g in enumerate(gt): + # if this gt already matched, and not a crowd, continue + if gtm[tind,gind]>0 and not iscrowd[gind]: + continue + # if dt matched to reg gt, and on ignore gt, stop + if m>-1 and gtIg[m]==0 and gtIg[gind]==1: + break + # continue to next gt unless better match made + if ious[dind,gind] < iou: + continue + # match successful and best so far, store appropriately + iou=ious[dind,gind] + m=gind + # if match made store id of match for both dt and gt + if m ==-1: + continue + dtIg[tind,dind] = gtIg[m] + dtm[tind,dind] = gt[m]['id'] + gtm[tind,m] = d['id'] + # set unmatched detections outside of area range to ignore + a = 
np.array([d['area']aRng[1] for d in dt]).reshape((1, len(dt))) + dtIg = np.logical_or(dtIg, np.logical_and(dtm==0, np.repeat(a,T,0))) + # store results for given image and category + return { + 'image_id': imgId, + 'category_id': catId, + 'aRng': aRng, + 'maxDet': maxDet, + 'dtIds': [d['id'] for d in dt], + 'gtIds': [g['id'] for g in gt], + 'dtMatches': dtm, + 'gtMatches': gtm, + 'dtScores': [d['score'] for d in dt], + 'gtIgnore': gtIg, + 'dtIgnore': dtIg, + } + + def accumulate(self, p = None): + ''' + Accumulate per image evaluation results and store the result in self.eval + :param p: input params for evaluation + :return: None + ''' + print 'Accumulating evaluation results... ' + tic = time.time() + if not self.evalImgs: + print 'Please run evaluate() first' + # allows input customized parameters + if p is None: + p = self.params + p.catIds = p.catIds if p.useCats == 1 else [-1] + T = len(p.iouThrs) + R = len(p.recThrs) + K = len(p.catIds) if p.useCats else 1 + A = len(p.areaRng) + M = len(p.maxDets) + precision = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories + recall = -np.ones((T,K,A,M)) + + # create dictionary for future indexing + _pe = self._paramsEval + catIds = _pe.catIds if _pe.useCats else [-1] + setK = set(catIds) + setA = set(map(tuple, _pe.areaRng)) + setM = set(_pe.maxDets) + setI = set(_pe.imgIds) + # get inds to evaluate + k_list = [n for n, k in enumerate(p.catIds) if k in setK] + m_list = [m for n, m in enumerate(p.maxDets) if m in setM] + a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA] + i_list = [n for n, i in enumerate(p.imgIds) if i in setI] + # K0 = len(_pe.catIds) + I0 = len(_pe.imgIds) + A0 = len(_pe.areaRng) + # retrieve E at each category, area range, and max number of detections + for k, k0 in enumerate(k_list): + Nk = k0*A0*I0 + for a, a0 in enumerate(a_list): + Na = a0*I0 + for m, maxDet in enumerate(m_list): + E = [self.evalImgs[Nk+Na+i] for i in i_list] + E = filter(None, E) + if len(E) == 0: + continue + dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E]) + + # different sorting method generates slightly different results. + # mergesort is used to be consistent as Matlab implementation. 
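# (kind='mergesort' is numpy's stable sort: detections with equal scores keep
#  their original relative order, so ties are broken the same way as in the
#  MATLAB evaluation code and the resulting precision/recall tables match)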
+ inds = np.argsort(-dtScores, kind='mergesort') + + dtm = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds] + dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet] for e in E], axis=1)[:,inds] + gtIg = np.concatenate([e['gtIgnore'] for e in E]) + npig = len([ig for ig in gtIg if ig == 0]) + if npig == 0: + continue + tps = np.logical_and( dtm, np.logical_not(dtIg) ) + fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg) ) + + tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float) + fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float) + for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)): + tp = np.array(tp) + fp = np.array(fp) + nd = len(tp) + rc = tp / npig + pr = tp / (fp+tp+np.spacing(1)) + q = np.zeros((R,)) + + if nd: + recall[t,k,a,m] = rc[-1] + else: + recall[t,k,a,m] = 0 + + # numpy is slow without cython optimization for accessing elements + # use python array gets significant speed improvement + pr = pr.tolist(); q = q.tolist() + + for i in range(nd-1, 0, -1): + if pr[i] > pr[i-1]: + pr[i-1] = pr[i] + + inds = np.searchsorted(rc, p.recThrs) + try: + for ri, pi in enumerate(inds): + q[ri] = pr[pi] + except: + pass + precision[t,:,k,a,m] = np.array(q) + self.eval = { + 'params': p, + 'counts': [T, R, K, A, M], + 'date': datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), + 'precision': precision, + 'recall': recall, + } + toc = time.time() + print 'DONE (t=%0.2fs).'%( toc-tic ) + + def summarize(self): + ''' + Compute and display summary metrics for evaluation results. + Note this functin can *only* be applied on the default parameter setting + ''' + def _summarize( ap=1, iouThr=None, areaRng='all', maxDets=100 ): + p = self.params + iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6} | maxDets={:>3} ] = {}' + titleStr = 'Average Precision' if ap == 1 else 'Average Recall' + typeStr = '(AP)' if ap==1 else '(AR)' + iouStr = '%0.2f:%0.2f'%(p.iouThrs[0], p.iouThrs[-1]) if iouThr is None else '%0.2f'%(iouThr) + areaStr = areaRng + maxDetsStr = '%d'%(maxDets) + + aind = [i for i, aRng in enumerate(['all', 'small', 'medium', 'large']) if aRng == areaRng] + mind = [i for i, mDet in enumerate([1, 10, 100]) if mDet == maxDets] + if ap == 1: + # dimension of precision: [TxRxKxAxM] + s = self.eval['precision'] + # IoU + if iouThr is not None: + t = np.where(iouThr == p.iouThrs)[0] + s = s[t] + # areaRng + s = s[:,:,:,aind,mind] + else: + # dimension of recall: [TxKxAxM] + s = self.eval['recall'] + s = s[:,:,aind,mind] + if len(s[s>-1])==0: + mean_s = -1 + else: + mean_s = np.mean(s[s>-1]) + print iStr.format(titleStr, typeStr, iouStr, areaStr, maxDetsStr, '%.3f'%(float(mean_s))) + return mean_s + + if not self.eval: + raise Exception('Please run accumulate() first') + self.stats = np.zeros((12,)) + self.stats[0] = _summarize(1) + self.stats[1] = _summarize(1,iouThr=.5) + self.stats[2] = _summarize(1,iouThr=.75) + self.stats[3] = _summarize(1,areaRng='small') + self.stats[4] = _summarize(1,areaRng='medium') + self.stats[5] = _summarize(1,areaRng='large') + self.stats[6] = _summarize(0,maxDets=1) + self.stats[7] = _summarize(0,maxDets=10) + self.stats[8] = _summarize(0,maxDets=100) + self.stats[9] = _summarize(0,areaRng='small') + self.stats[10] = _summarize(0,areaRng='medium') + self.stats[11] = _summarize(0,areaRng='large') + + def __str__(self): + self.summarize() + +class Params: + ''' + Params for coco evaluation api + ''' + def __init__(self): + self.imgIds = [] + self.catIds = [] + # np.arange causes trouble. 
the data point on arange is slightly larger than the true value + self.iouThrs = np.linspace(.5, 0.95, np.round((0.95-.5)/.05)+1, endpoint=True) + self.recThrs = np.linspace(.0, 1.00, np.round((1.00-.0)/.01)+1, endpoint=True) + self.maxDets = [1,10,100] + self.areaRng = [ [0**2,1e5**2], [0**2, 32**2], [32**2, 96**2], [96**2, 1e5**2] ] + self.useSegm = 0 + self.useCats = 1 \ No newline at end of file diff --git a/lib/dataset/pycocotools/mask.py b/lib/dataset/pycocotools/mask.py new file mode 100644 index 0000000..de137a6 --- /dev/null +++ b/lib/dataset/pycocotools/mask.py @@ -0,0 +1,87 @@ +__author__ = 'tsungyi' + +import _mask as _mask + +# Interface for manipulating masks stored in RLE format. +# +# RLE is a simple yet efficient format for storing binary masks. RLE +# first divides a vector (or vectorized image) into a series of piecewise +# constant regions and then for each piece simply stores the length of +# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would +# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1] +# (note that the odd counts are always the numbers of zeros). Instead of +# storing the counts directly, additional compression is achieved with a +# variable bitrate representation based on a common scheme called LEB128. +# +# Compression is greatest given large piecewise constant regions. +# Specifically, the size of the RLE is proportional to the number of +# *boundaries* in M (or for an image the number of boundaries in the y +# direction). Assuming fairly simple shapes, the RLE representation is +# O(sqrt(n)) where n is number of pixels in the object. Hence space usage +# is substantially lower, especially for large simple objects (large n). +# +# Many common operations on masks can be computed directly using the RLE +# (without need for decoding). This includes computations such as area, +# union, intersection, etc. All of these operations are linear in the +# size of the RLE, in other words they are O(sqrt(n)) where n is the area +# of the object. Computing these operations on the original mask is O(n). +# Thus, using the RLE can result in substantial computational savings. +# +# The following API functions are defined: +# encode - Encode binary masks using RLE. +# decode - Decode binary masks encoded via RLE. +# merge - Compute union or intersection of encoded masks. +# iou - Compute intersection over union between masks. +# area - Compute area of encoded masks. +# toBbox - Get bounding boxes surrounding encoded masks. +# frPyObjects - Convert polygon, bbox, and uncompressed RLE to encoded RLE mask. +# +# Usage: +# Rs = encode( masks ) +# masks = decode( Rs ) +# R = merge( Rs, intersect=false ) +# o = iou( dt, gt, iscrowd ) +# a = area( Rs ) +# bbs = toBbox( Rs ) +# Rs = frPyObjects( [pyObjects], h, w ) +# +# In the API the following formats are used: +# Rs - [dict] Run-length encoding of binary masks +# R - dict Run-length encoding of binary mask +# masks - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) in column-major order) +# iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has crowd region to ignore +# bbs - [nx4] Bounding box(es) stored as [x y w h] +# poly - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list) +# dt,gt - May be either bounding boxes or encoded masks +# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel). +# +# Finally, a note about the intersection over union (iou) computation. 
+# The standard iou of a ground truth (gt) and detected (dt) object is +# iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt)) +# For "crowd" regions, we use a modified criteria. If a gt object is +# marked as "iscrowd", we allow a dt to match any subregion of the gt. +# Choosing gt' in the crowd gt that best matches the dt can be done using +# gt'=intersect(dt,gt). Since by definition union(gt',dt)=dt, computing +# iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt) +# For crowd gt regions we use this modified criteria above for the iou. +# +# To compile run "python setup.py build_ext --inplace" +# Please do not contact us for help with compiling. +# +# Microsoft COCO Toolbox. version 2.0 +# Data, paper, and tutorials available at: http://mscoco.org/ +# Code written by Piotr Dollar and Tsung-Yi Lin, 2015. +# Licensed under the Simplified BSD License [see coco/license.txt] + +encode = _mask.encode +#decode = _mask.decode +def decode(rleObjs): + if type(rleObjs) == list: + return _mask.decode(rleObjs) + else: + return _mask.decode([rleObjs])[:,:,0] +iou = _mask.iou +merge = _mask.merge +area = _mask.area +toBbox = _mask.toBbox +frPyObjects = _mask.frPyObjects diff --git a/lib/dataset/pycocotools/maskApi.h b/lib/dataset/pycocotools/maskApi.h new file mode 100644 index 0000000..ff16116 --- /dev/null +++ b/lib/dataset/pycocotools/maskApi.h @@ -0,0 +1,55 @@ +/************************************************************************** +* Microsoft COCO Toolbox. version 2.0 +* Data, paper, and tutorials available at: http://mscoco.org/ +* Code written by Piotr Dollar and Tsung-Yi Lin, 2015. +* Licensed under the Simplified BSD License [see coco/license.txt] +**************************************************************************/ +#pragma once +#include + +typedef unsigned int uint; +typedef unsigned long siz; +typedef unsigned char byte; +typedef double* BB; +typedef struct { siz h, w, m; uint *cnts; } RLE; + +// Initialize/destroy RLE. +void rleInit( RLE *R, siz h, siz w, siz m, uint *cnts ); +void rleFree( RLE *R ); + +// Initialize/destroy RLE array. +void rlesInit( RLE **R, siz n ); +void rlesFree( RLE **R, siz n ); + +// Encode binary masks using RLE. +void rleEncode( RLE *R, const byte *mask, siz h, siz w, siz n ); + +// Decode binary masks encoded via RLE. +void rleDecode( const RLE *R, byte *mask, siz n ); + +// Compute union or intersection of encoded masks. +void rleMerge( const RLE *R, RLE *M, siz n, bool intersect ); + +// Compute area of encoded masks. +void rleArea( const RLE *R, siz n, uint *a ); + +// Compute intersection over union between masks. +void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o ); + +// Compute intersection over union between bounding boxes. +void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o ); + +// Get bounding boxes surrounding encoded masks. +void rleToBbox( const RLE *R, BB bb, siz n ); + +// Convert bounding boxes to encoded masks. +void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n ); + +// Convert polygon to encoded mask. +void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w ); + +// Get compressed string representation of encoded mask. +char* rleToString( const RLE *R ); + +// Convert from compressed string representation of encoded mask. 
+void rleFrString( RLE *R, char *s, siz h, siz w ); diff --git a/lib/dataset/pycocotools/setup_linux.py b/lib/dataset/pycocotools/setup_linux.py new file mode 100644 index 0000000..5e836f1 --- /dev/null +++ b/lib/dataset/pycocotools/setup_linux.py @@ -0,0 +1,20 @@ +from distutils.core import setup +from Cython.Build import cythonize +from distutils.extension import Extension +import numpy as np + +# To compile and install locally run "python setup.py build_ext --inplace" +# To install library to Python site-packages run "python setup.py build_ext install" + +ext_modules = [ + Extension( + '_mask', + sources=['maskApi.c', '_mask.pyx'], + include_dirs=[np.get_include()], + extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99'], + ) +] + +setup(name='pycocotools', + ext_modules=cythonize(ext_modules) +) diff --git a/lib/dataset/pycocotools/setup_windows.py b/lib/dataset/pycocotools/setup_windows.py new file mode 100644 index 0000000..c9d748b --- /dev/null +++ b/lib/dataset/pycocotools/setup_windows.py @@ -0,0 +1,24 @@ +from distutils.core import setup +from Cython.Build import cythonize +from distutils.extension import Extension +import numpy as np + +import distutils.msvc9compiler +distutils.msvc9compiler.VERSION = 14.0 + + +# To compile and install locally run "python setup.py build_ext --inplace" +# To install library to Python site-packages run "python setup.py build_ext install" + +ext_modules = [ + Extension( + '_mask', + sources=['maskApi.c', '_mask.pyx'], + include_dirs=[np.get_include()], + extra_compile_args=[], + ) +] + +setup(name='pycocotools', + ext_modules=cythonize(ext_modules) +) diff --git a/lib/dataset/vehicle.py b/lib/dataset/vehicle.py new file mode 100644 index 0000000..49a2c95 --- /dev/null +++ b/lib/dataset/vehicle.py @@ -0,0 +1,423 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The Apache-2.0 License [see LICENSE for details] +# Modified by Haozhi Qi, from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +""" +Pascal VOC database +This class loads ground truth notations from standard Pascal VOC XML data formats +and transform them into IMDB format. Selective search is used for proposals, see roidb +function. Results are written as the Pascal VOC format. Evaluation is based on mAP +criterion. +""" + +import cPickle +import os +import numpy as np + +from imdb import IMDB +import cv2 +import zipfile +from bbox.bbox_transform import bbox_overlaps, bbox_transform, get_best_begin_point_wrapp +from PIL import Image +import codecs +import sys +sys.path.insert(0, r'/home/dj/code/Deformable_FPN_DOTA') +import pdb + +# pdb.set_trace() +from dota_kit.ResultMerge import * +# from dota_kit.ResultMerge_multi_process import * +# from dota_kit.dota_evaluation_task1 import eval_DOTA_Task1, eval_DOTA_Task1_multi_process +from dota_kit.dota_evaluation_task1 import eval_HRSC_L1 + + +class vehicle(IMDB): + def __init__(self, image_set, root_path, data_path, result_path=None, mask_size=-1, binary_thresh=None): + """ + fill basic information to initialize imdb + :param image_set: train, test etc. 
+ :param root_path: 'selective_search_data' and 'cache' + :param data_path: data and results + :return: imdb object + """ + self.image_set = image_set + super(vehicle, self).__init__('vehicle', self.image_set, root_path, data_path, result_path) # set self.name + + self.root_path = root_path + self.data_path = data_path + + self.classes = ['__background__', # always index 0 + 'truck, car'] + ## check it, if it is better for baseball-diamond + self.angle_agnostic_classes = [ + 'truck, car'] + self.num_classes = len(self.classes) + self.image_set_index = self.load_image_set_index() + self.num_images = len(self.image_set_index) + print 'num_images', self.num_images + self.mask_size = mask_size + self.binary_thresh = binary_thresh + + self.config = {'comp_id': 'comp4', + 'use_diff': False, + 'min_size': 2} + + def load_image_set_index(self): + """ + find out which indexes correspond to given image set (train or val) + :return: + """ + image_set_index_file = os.path.join(self.data_path, self.image_set + '.txt') + assert os.path.exists(image_set_index_file), 'Path does not exist: {}'.format(image_set_index_file) + with open(image_set_index_file, 'r') as f: + lines = f.readlines() + image_lists = [line.strip() for line in lines] + #image_lists = [os.path.join(self.data_path, 'images', line.strip() + '.jpg') for line in lines] + return image_lists + + def image_path_from_index(self, index): + """ + given image index, find out full path + :param image_name: image name in the data dir + :return: full path of this image + """ + # hint: self.image_set means 'train' or 'test' + # TODO: when data ready, the entrance here should be changed + # image_file = os.path.join(self.data_path, self.image_set, index) + image_file = os.path.join(self.data_path, 'images', index + '.bmp') + assert os.path.exists(image_file), 'Path does not exist: {}'.format(image_file) + return image_file + + def gt_roidb(self): + """ + return ground truth image regions database + :return: imdb[image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl') + if os.path.exists(cache_file): + with open(cache_file, 'rb') as fid: + roidb = cPickle.load(fid) + print '{} gt roidb loaded from {}'.format(self.name, cache_file) + return roidb + + # gt_roidb = [self.load_annotation(index) for index in self.image_set_index] + + # TODO: for debug + gt_roidb = [] + count = 0 + for index in self.image_set_index: + count += 1 + print count, '/', len(self.image_set_index) + gt_roidb.append(self.load_annotation(index)) + with open(cache_file, 'wb') as fid: + cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL) + print 'wrote gt roidb to {}'.format(cache_file) + + return gt_roidb + + def load_annotation(self, index): + """ + for a given index, load image and bounding boxes info from XML file + :param image_name: image name in the data dir + :return: record['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] + """ + # import xml.etree.ElementTree as ET + roi_rec = dict() + roi_rec['image'] = self.image_path_from_index(index) + # roi_rec['image_name'] = 'img_' + index + '.jpg' + + img_path = self.image_path_from_index(index) + w, h = Image.open(img_path).size + # size = tree.find('size') + roi_rec['height'] = float(h) + roi_rec['width'] = float(w) + + valid_objs = [] + # f = codecs.open(filename, 'r', 'utf-16') + if self.image_set == 'train': + filename = os.path.join(self.data_path, 'labelTxt_L1', index + '.txt') + f = codecs.open(filename, 'r') + objs = f.readlines() + objs = 
[obj.strip().split(' ') for obj in objs] + # objs = tree.findall('object') + # if not self.config['use_diff']: + # non_diff_objs = [obj for obj in objs if obj[9] != '1'] + # objs = non_diff_objs + if not self.config['use_diff']: + non_diff_objs = [obj for obj in objs if obj[9] == '0'] + objs = non_diff_objs + # Load object bounding boxes into a data frame. + for ix, obj in enumerate(objs): + bbox = obj + + + x1 = min(max(float(bbox[0]), 0), w - 1) + y1 = min(max(float(bbox[1]), 0), h - 1) + x2 = min(max(float(bbox[2]), 0), w - 1) + y2 = min(max(float(bbox[3]), 0), h - 1) + x3 = min(max(float(bbox[4]), 0), w - 1) + y3 = min(max(float(bbox[5]), 0), h - 1) + x4 = min(max(float(bbox[6]), 0), w - 1) + y4 = min(max(float(bbox[7]), 0), h - 1) + + + # TODO: filter small instances + xmin = max(min(x1, x2, x3, x4), 0) + xmax = max(x1, x2, x3, x4) + ymin = max(min(y1, y2, y3, y4), 0) + ymax = max(y1, y2, y3, y4) + + # if xmax > xmin and ymax > ymin: + # obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + # valid_objs.append(obj) + + if ((xmax - xmin) > 10) and ((ymax - ymin) > 10): + obj[:8] = [x1, y1, x2, y2, x3, y3, x4, y4] + valid_objs.append(obj) + + objs = valid_objs + num_objs = len(objs) + boxes = np.zeros((num_objs, 8), dtype=np.uint16) + gt_classes = np.zeros((num_objs), dtype=np.int32) + overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32) + class_to_index = dict(zip(self.classes, range(self.num_classes))) + + for ix, obj in enumerate(objs): + cls = class_to_index[obj[8].lower().strip()] + if obj[8].lower().strip() in self.angle_agnostic_classes: + boxes[ix, :] = get_best_begin_point_wrapp(obj[:8]) + else: + boxes[ix, :] = obj[:8] + gt_classes[ix] = cls + overlaps[ix, cls] = 1.0 + + roi_rec.update({'boxes': boxes, + 'gt_classes': gt_classes, + 'gt_overlaps': overlaps, + 'max_classes': overlaps.argmax(axis=1), + 'max_overlaps': overlaps.max(axis=1), + 'flipped': False}) + return roi_rec + + def evaluate_detections(self, detections, ignore_cache): + """ + :param detections: [cls][image] = N x [x1, y1, x2, y2, x3, y3, x4, y4, score] + :return: + """ + detection_results_path = os.path.join(self.result_path, 'test_results') + info = '' + if not os.path.isdir(detection_results_path): + os.mkdir(detection_results_path) + + if ignore_cache: + self.write_results(detections, threshold=0.001) + # pdb.set_trace() + self.write_results_comp4(detections, threshold=0.001) + + # self.write_results_by_class(detections, threshold=0.0) + # TODO: check the hard code here + detpath = os.path.join(self.result_path, 'Task1_results') + '/Task1_{:s}.txt' + # TODO: test it + annopath = r'data/HRSC/labelTxt_L1/{:s}.txt' + imagesetfile = r'data/HRSC/test.txt' + + # annopath = r'/data/dj/dota/trainval_large-split_rotate/{:s}.txt' + # imagesetfile = r'/data/dj/dota/trainval_large-split_rotate/testset.txt' + + # mAP, classaps = eval_DOTA_Task1_multi_process(detpath, annopath, imagesetfile) + mAP, classaps = eval_HRSC_L1(detpath, annopath, imagesetfile) + with open(os.path.join(self.result_path, 'Task1_results') + '/mAP.txt', 'w') as f_out: + f_out.write('mAP: ' + str(mAP) + '\n') + f_out.write('classaps: ' + str(classaps)) + return info + + def draw_gt_and_detections(self, detections, thresh=0.2): + # gt_folder = os.path.join(self.result_path, 'gt_on_image') + det_folder = os.path.join(self.result_path, 'det_on_image') + # if not os.path.isdir(gt_folder): + # os.mkdir(gt_folder) + self.write_results(detections, threshold=0.1) + if not os.path.isdir(det_folder): + os.mkdir(det_folder) + for im_ind, index in 
enumerate(self.image_set_index): + img_path = self.image_path_from_index(index) + gt_db = self.load_annotation(index) + gt_boxes = gt_db['boxes'] + det_path = os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')) + f = open(det_path, 'r') + det_boxes_results = f.readlines() + det_boxes = [] + for result in det_boxes_results: + result = result.strip().split(',') + det_boxes.append([int(result[0]), int(result[1]), int(result[2]),int(result[3]),int(result[4]),int(result[5]),int(result[6]),int(result[7]), + float(result[8]),result[9]]) + # det_boxes = detections[cls_ind][im_ind] + det_boxes = np.array(det_boxes) + img = cv2.imread(img_path) + img_height, img_width = img.shape[0], img.shape[1] + # original_img = img.copy() + for k in range(gt_boxes.shape[0]): + bbox = gt_boxes[k, :8] + bbox = map(int, bbox) + color = (0, 255, 0) + xmax = max(bbox[0], bbox[2], bbox[4], bbox[6]) + ymax = max(bbox[1], bbox[3], bbox[5], bbox[7]) + if xmax > img_width: + print "extreme xmax", xmax + if ymax > img_height: + print "extreme ymax", ymax + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + # cv2.imwrite(os.path.join(gt_folder, 'img_{}.jpg'.format(index)), img) + # img = original_img + for k in range(det_boxes.shape[0]): + bbox = det_boxes[k, :8] + score = det_boxes[k, 8] + cls = det_boxes[k, 9] + if score < thresh: + continue + bbox = map(int, bbox) + color = (0, 255, 255) + for i in range(3): + cv2.line(img, (bbox[i * 2], bbox[i * 2 + 1]), (bbox[(i + 1) * 2], bbox[(i + 1) * 2 + 1]), + color=color, thickness=1) + cv2.line(img, (bbox[6], bbox[7]), (bbox[0], bbox[1]), color=color, thickness=1) + cv2.putText(img, '{} {}'.format(cls, score), (bbox[0], bbox[1] + 10), + color=(255, 255, 255), fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=0.5) + print os.path.join(det_folder, os.path.basename(index)) + cv2.imwrite(os.path.join(det_folder, os.path.basename(index)), img) + + def validate_clockwise_points(self, points): + """ + Validates that the points that the 4 points that dlimite a polygon are in clockwise order. + """ + + if len(points) != 8: + raise Exception("Points list not valid." + str(len(points))) + + point = [ + [int(points[0]), int(points[1])], + [int(points[2]), int(points[3])], + [int(points[4]), int(points[5])], + [int(points[6]), int(points[7])] + ] + edge = [ + (point[1][0] - point[0][0]) * (point[1][1] + point[0][1]), + (point[2][0] - point[1][0]) * (point[2][1] + point[1][1]), + (point[3][0] - point[2][0]) * (point[3][1] + point[2][1]), + (point[0][0] - point[3][0]) * (point[0][1] + point[3][1]) + ] + + summatory = edge[0] + edge[1] + edge[2] + edge[3]; + if summatory > 0: + return False + else: + return True + # TODO: test it + def write_results_comp4(self, all_boxes, threshold=0.002): + """ + write results file in comp4 format + :param all_boxes: boxes to be processed [bbox, confidence] + :param threshold: None + :return: + """ + path = os.path.join(self.result_path, 'Task1_results') + if os.path.isdir(path): + print "delete original test results files!" 
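# wipe the Task1 results left over from a previous run before recreating the
# directory and writing a fresh Task1_<class>.txt file per category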
+ os.system("rm -rf {}".format(path)) + os.mkdir(path) + # pdb.set_trace() + for cls_ind, cls in enumerate(self.classes): + # pdb.set_trace() + if cls == '__background__': + continue + if not os.path.exists(path): + os.mkdir(path) + with open(os.path.join(path, 'Task1_' + cls + '.txt'), 'w') as f_out: + for im_ind, index in enumerate(self.image_set_index): + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + xmin = min(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + xmax = max(dets[k, 0], dets[k, 2], dets[k, 4], dets[k, 6]) + ymin = min(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + ymax = max(dets[k, 1], dets[k, 3], dets[k, 5], dets[k, 7]) + w = xmax - xmin + h = ymax - ymin + if (w * h < 10 * 10): + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f_out.write('{} {} {} {} {} {} {} {} {} {}\n'.format(index, dets[k, 8], + int(dets[k, 0]), int(dets[k, 1]), + int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), + int(dets[k, 6]), + int(dets[k, 7]) + )) + else: + # print 'A detected box is anti-clockwise! Index:{}'.format(index) + # print dets[k, 0:8] + pass + # pdb.set_trace() + # TODO: change the hard code here + nms_path = path + '_0.1_nms' + if not os.path.exists(nms_path): + os.mkdir(nms_path) + # mergebypoly(path, nms_path) + def write_results(self, all_boxes, threshold=0.02): + """ + write results files in pascal devkit path + :param all_boxes: boxes to be processed [bbox, confidence] + :return: None + """ + path = os.path.join(self.result_path, 'test_results') + if os.path.isdir(path): + print "delete original test results files!" + os.system("rm -r {}".format(path)) + os.mkdir(path) + for cls_ind, cls in enumerate(self.classes): + if cls == '__background__': + continue + for im_ind, index in enumerate(self.image_set_index): + # dets = all_boxes[cls_ind][im_ind] + try: + dets = all_boxes[cls_ind][im_ind] + except: + print 'cls_ind:', cls_ind + print 'im_ind:', im_ind + return + else: + # if dets.shape[0] == 0: + # print "no detection results in {}".format(index) + if not os.path.exists(os.path.join(self.result_path, 'test_results')): + os.mkdir(os.path.join(self.result_path, 'test_results')) + # f = open(os.path.join(self.result_path, 'test_results', 'res_{}'.format(os.path.splitext(os.path.basename(index))[0] + '.txt')), 'a') + f = open(os.path.join(self.result_path, 'test_results', '{}'.format(index + '.txt')), 'a') + + # the VOCdevkit expects 1-based indices + for k in range(dets.shape[0]): + if dets[k, 8] <= threshold: + continue + if self.validate_clockwise_points(dets[k, 0:8]): + f.write('{} {} {} {} {} {} {} {} {} {}\n'.format(int(dets[k, 0]), int(dets[k, 1]), int(dets[k, 2]), + int(dets[k, 3]), + int(dets[k, 4]), int(dets[k, 5]), int(dets[k, 6]), + int(dets[k, 7]), dets[k, 8], + self.classes[cls_ind])) + else: + # print 'A detected box is anti-clockwise! 
Index:{}'.format(index) + # print dets[k, 0:8] + pass \ No newline at end of file diff --git a/lib/mask/__init__.py b/lib/mask/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/mask/mask_transform.py b/lib/mask/mask_transform.py new file mode 100644 index 0000000..6ce8a1f --- /dev/null +++ b/lib/mask/mask_transform.py @@ -0,0 +1,70 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Haozhi Qi, Yi Li, Guodong Zhang +# -------------------------------------------------------- + +import numpy as np + + +def intersect_box_mask(ex_box, gt_box, gt_mask): + """ + This function calculate the intersection part of a external box + and gt_box, mask it according to gt_mask + Args: + ex_box: external ROIS + gt_box: ground truth boxes + gt_mask: ground truth masks, not been resized yet + Returns: + regression_target: logical numpy array + """ + x1 = max(ex_box[0], gt_box[0]) + y1 = max(ex_box[1], gt_box[1]) + x2 = min(ex_box[2], gt_box[2]) + y2 = min(ex_box[3], gt_box[3]) + if x1 > x2 or y1 > y2: + return np.zeros((21, 21), dtype=bool) + w = x2 - x1 + 1 + h = y2 - y1 + 1 + ex_starty = y1 - ex_box[1] + ex_startx = x1 - ex_box[0] + + inter_maskb = gt_mask[y1:y2+1 , x1:x2+1] + regression_target = np.zeros((ex_box[3] - ex_box[1] + 1, ex_box[2] - ex_box[0] + 1)) + regression_target[ex_starty: ex_starty + h, ex_startx: ex_startx + w] = inter_maskb + + return regression_target + + +def mask_overlap(box1, box2, mask1, mask2): + """ + This function calculate region IOU when masks are + inside different boxes + Returns: + intersection over unions of this two masks + """ + x1 = max(box1[0], box2[0]) + y1 = max(box1[1], box2[1]) + x2 = min(box1[2], box2[2]) + y2 = min(box1[3], box2[3]) + if x1 > x2 or y1 > y2: + return 0 + w = x2 - x1 + 1 + h = y2 - y1 + 1 + # get masks in the intersection part + start_ya = y1 - box1[1] + start_xa = x1 - box1[0] + inter_maska = mask1[start_ya: start_ya + h, start_xa:start_xa + w] + + start_yb = y1 - box2[1] + start_xb = x1 - box2[0] + inter_maskb = mask2[start_yb: start_yb + h, start_xb:start_xb + w] + + assert inter_maska.shape == inter_maskb.shape + + inter = np.logical_and(inter_maskb, inter_maska).sum() + union = mask1.sum() + mask2.sum() - inter + if union < 1.0: + return 0 + return float(inter) / float(union) diff --git a/lib/nms/__init__.py b/lib/nms/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/nms/cpu_nms.pyx b/lib/nms/cpu_nms.pyx new file mode 100644 index 0000000..c1266bc --- /dev/null +++ b/lib/nms/cpu_nms.pyx @@ -0,0 +1,68 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2015 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +import numpy as np +cimport numpy as np + +cdef inline np.float32_t max(np.float32_t a, np.float32_t b): + return a if a >= b else b + +cdef inline np.float32_t min(np.float32_t a, np.float32_t b): + return a if a <= b else b + +def cpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh): + cdef np.ndarray[np.float32_t, ndim=1] x1 = dets[:, 0] + cdef np.ndarray[np.float32_t, ndim=1] y1 = dets[:, 1] + cdef np.ndarray[np.float32_t, ndim=1] x2 = dets[:, 2] + cdef np.ndarray[np.float32_t, ndim=1] y2 = dets[:, 3] + cdef 
np.ndarray[np.float32_t, ndim=1] scores = dets[:, 4] + + cdef np.ndarray[np.float32_t, ndim=1] areas = (x2 - x1 + 1) * (y2 - y1 + 1) + cdef np.ndarray[np.int_t, ndim=1] order = scores.argsort()[::-1].astype('i') + + cdef int ndets = dets.shape[0] + cdef np.ndarray[np.int_t, ndim=1] suppressed = \ + np.zeros((ndets), dtype=np.int) + + # nominal indices + cdef int _i, _j + # sorted indices + cdef int i, j + # temp variables for box i's (the box currently under consideration) + cdef np.float32_t ix1, iy1, ix2, iy2, iarea + # variables for computing overlap with box j (lower scoring box) + cdef np.float32_t xx1, yy1, xx2, yy2 + cdef np.float32_t w, h + cdef np.float32_t inter, ovr + + keep = [] + for _i in range(ndets): + i = order[_i] + if suppressed[i] == 1: + continue + keep.append(i) + ix1 = x1[i] + iy1 = y1[i] + ix2 = x2[i] + iy2 = y2[i] + iarea = areas[i] + for _j in range(_i + 1, ndets): + j = order[_j] + if suppressed[j] == 1: + continue + xx1 = max(ix1, x1[j]) + yy1 = max(iy1, y1[j]) + xx2 = min(ix2, x2[j]) + yy2 = min(iy2, y2[j]) + w = max(0.0, xx2 - xx1 + 1) + h = max(0.0, yy2 - yy1 + 1) + inter = w * h + ovr = inter / (iarea + areas[j] - inter) + if ovr >= thresh: + suppressed[j] = 1 + + return keep diff --git a/lib/nms/gpu_nms.cu b/lib/nms/gpu_nms.cu new file mode 100644 index 0000000..c7cb085 --- /dev/null +++ b/lib/nms/gpu_nms.cu @@ -0,0 +1,7081 @@ +// ------------------------------------------------------------------ +// Deformable Convolutional Networks +// Copyright (c) 2015 Microsoft +// Licensed under The MIT License +// Modified from MATLAB Faster R-CNN (https://github.com/shaoqingren/faster_rcnn) +// ------------------------------------------------------------------ + +//#include "gpu_nms.hpp" +#include +#include + + +#define CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + if (error != cudaSuccess) { \ + std::cout << cudaGetErrorString(error) << std::endl; \ + } \ + } while (0) + +#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0)) +int const threadsPerBlock = sizeof(unsigned long long) * 8; + +__device__ inline float devIoU(float const * const a, float const * const b) { + float left = max(a[0], b[0]), right = min(a[2], b[2]); + float top = max(a[1], b[1]), bottom = min(a[3], b[3]); + float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f); + float interS = width * height; + float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1); + float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1); + return interS / (Sa + Sb - interS); +} + +__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh, + const float *dev_boxes, unsigned long long *dev_mask) { + const int row_start = blockIdx.y; + const int col_start = blockIdx.x; + + // if (row_start > col_start) return; + + const int row_size = + min(n_boxes - row_start * threadsPerBlock, threadsPerBlock); + const int col_size = + min(n_boxes - col_start * threadsPerBlock, threadsPerBlock); + + __shared__ float block_boxes[threadsPerBlock * 5]; + if (threadIdx.x < col_size) { + block_boxes[threadIdx.x * 5 + 0] = + dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0]; + block_boxes[threadIdx.x * 5 + 1] = + dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1]; + block_boxes[threadIdx.x * 5 + 2] = + dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2]; + block_boxes[threadIdx.x * 5 + 3] = + dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3]; + block_boxes[threadIdx.x * 5 + 4] = + 
dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4]; + } + __syncthreads(); + + if (threadIdx.x < row_size) { + const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x; + const float *cur_box = dev_boxes + cur_box_idx * 5; + int i = 0; + unsigned long long t = 0; + int start = 0; + if (row_start == col_start) { + start = threadIdx.x + 1; + } + for (i = start; i < col_size; i++) { + if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) { + t |= 1ULL << i; + } + } + const int col_blocks = DIVUP(n_boxes, threadsPerBlock); + dev_mask[cur_box_idx * col_blocks + col_start] = t; + } +} + +void _set_device(int device_id) { + int current_device; + CUDA_CHECK(cudaGetDevice(¤t_device)); + if (current_device == device_id) { + return; + } + // The call to cudaSetDevice must come before any calls to Get, which + // may perform initialization using the GPU. + CUDA_CHECK(cudaSetDevice(device_id)); +} + +void _nms(long* keep_out, int* num_out, const float* boxes_host, int boxes_num, + int boxes_dim, float nms_overlap_thresh, int device_id) { + _set_device(device_id); + + float* boxes_dev = NULL; + unsigned long long* mask_dev = NULL; + + const int col_blocks = DIVUP(boxes_num, threadsPerBlock); + + CUDA_CHECK(cudaMalloc(&boxes_dev, + boxes_num * boxes_dim * sizeof(float))); + CUDA_CHECK(cudaMemcpy(boxes_dev, + boxes_host, + boxes_num * boxes_dim * sizeof(float), + cudaMemcpyHostToDevice)); + + CUDA_CHECK(cudaMalloc(&mask_dev, + boxes_num * col_blocks * sizeof(unsigned long long))); + + dim3 blocks(DIVUP(boxes_num, threadsPerBlock), + DIVUP(boxes_num, threadsPerBlock)); + dim3 threads(threadsPerBlock); + nms_kernel<<>>(boxes_num, + nms_overlap_thresh, + boxes_dev, + mask_dev); + + std::vector mask_host(boxes_num * col_blocks); + CUDA_CHECK(cudaMemcpy(&mask_host[0], + mask_dev, + sizeof(unsigned long long) * boxes_num * col_blocks, + cudaMemcpyDeviceToHost)); + + std::vector remv(col_blocks); + memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks); + + int num_to_keep = 0; + for (int i = 0; i < boxes_num; i++) { + int nblock = i / threadsPerBlock; + int inblock = i % threadsPerBlock; + + if (!(remv[nblock] & (1ULL << inblock))) { + keep_out[num_to_keep++] = i; + unsigned long long *p = &mask_host[0] + i * col_blocks; + for (int j = nblock; j < col_blocks; j++) { + remv[j] |= p[j]; + } + } + } + *num_out = num_to_keep; + + CUDA_CHECK(cudaFree(boxes_dev)); + CUDA_CHECK(cudaFree(mask_dev)); +} + + + + + + + + + + +/* Generated by Cython 0.24 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. +#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03020000) + #error Cython requires Python 2.6+ or Python 3.2+. 
+#else +#define CYTHON_ABI "0_24" +#include +#ifndef offsetof + #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) +#endif +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#endif +#ifndef DL_IMPORT + #define DL_IMPORT(t) t +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef Py_HUGE_VAL + #define Py_HUGE_VAL HUGE_VAL +#endif +#ifdef PYPY_VERSION + #define CYTHON_COMPILING_IN_PYPY 1 + #define CYTHON_COMPILING_IN_CPYTHON 0 +#else + #define CYTHON_COMPILING_IN_PYPY 0 + #define CYTHON_COMPILING_IN_CPYTHON 1 +#endif +#if !defined(CYTHON_USE_PYLONG_INTERNALS) && CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x02070000 + #define CYTHON_USE_PYLONG_INTERNALS 1 +#endif +#if CYTHON_USE_PYLONG_INTERNALS + #include "longintrepr.h" + #undef SHIFT + #undef BASE + #undef MASK +#endif +#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) + #define Py_OptimizeFlag 0 +#endif +#define __PYX_BUILD_PY_SSIZE_T "n" +#define CYTHON_FORMAT_SSIZE_T "z" +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) + #define __Pyx_DefaultClassType PyClass_Type +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) + #define __Pyx_DefaultClassType PyType_Type +#endif +#ifndef Py_TPFLAGS_CHECKTYPES + #define Py_TPFLAGS_CHECKTYPES 0 +#endif +#ifndef Py_TPFLAGS_HAVE_INDEX + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif +#ifndef Py_TPFLAGS_HAVE_NEWBUFFER + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif +#ifndef Py_TPFLAGS_HAVE_FINALIZE + #define Py_TPFLAGS_HAVE_FINALIZE 0 +#endif +#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) + #define CYTHON_PEP393_ENABLED 1 + #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ + 0 : _PyUnicode_Ready((PyObject *)(op))) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) + #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) + #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) + #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? 
PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) +#else + #define CYTHON_PEP393_ENABLED 0 + #define __Pyx_PyUnicode_READY(op) (0) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) + #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) + #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) + #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) +#endif +#if CYTHON_COMPILING_IN_PYPY + #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) +#else + #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ + PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) + #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) + #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) + #define PyObject_Malloc(s) PyMem_Malloc(s) + #define PyObject_Free(p) PyMem_Free(p) + #define PyObject_Realloc(p) PyMem_Realloc(p) +#endif +#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) +#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) +#else + #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) +#endif +#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) + #define PyObject_ASCII(o) PyObject_Repr(o) +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyStringObject PyUnicodeObject + #define PyString_Type PyUnicode_Type + #define PyString_Check PyUnicode_Check + #define PyString_CheckExact PyUnicode_CheckExact +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) + #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) +#else + #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) + #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) +#endif +#ifndef PySet_CheckExact + #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) +#endif +#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) +#if PY_MAJOR_VERSION >= 3 + #define PyIntObject PyLongObject + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define PyNumber_Int PyNumber_Long +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBoolObject PyLongObject +#endif +#if PY_MAJOR_VERSION >= 3 && 
CYTHON_COMPILING_IN_PYPY + #ifndef PyUnicode_InternFromString + #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) + #endif +#endif +#if PY_VERSION_HEX < 0x030200A4 + typedef long Py_hash_t; + #define __Pyx_PyInt_FromHash_t PyInt_FromLong + #define __Pyx_PyInt_AsHash_t PyInt_AsLong +#else + #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t + #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : PyInstanceMethod_New(func)) +#else + #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) +#endif +#if PY_VERSION_HEX >= 0x030500B1 +#define __Pyx_PyAsyncMethodsStruct PyAsyncMethods +#define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) +#elif CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 +typedef struct { + unaryfunc am_await; + unaryfunc am_aiter; + unaryfunc am_anext; +} __Pyx_PyAsyncMethodsStruct; +#define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) +#else +#define __Pyx_PyType_AsAsync(obj) NULL +#endif +#ifndef CYTHON_RESTRICT + #if defined(__GNUC__) + #define CYTHON_RESTRICT __restrict__ + #elif defined(_MSC_VER) && _MSC_VER >= 1400 + #define CYTHON_RESTRICT __restrict + #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define CYTHON_RESTRICT restrict + #else + #define CYTHON_RESTRICT + #endif +#endif +#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) + +#ifndef __cplusplus + #error "Cython files generated with the C++ option must be compiled with a C++ compiler." +#endif +#ifndef CYTHON_INLINE + #define CYTHON_INLINE inline +#endif +template +void __Pyx_call_destructor(T& x) { + x.~T(); +} +template +class __Pyx_FakeReference { + public: + __Pyx_FakeReference() : ptr(NULL) { } + __Pyx_FakeReference(const T& ref) : ptr(const_cast(&ref)) { } + T *operator->() { return ptr; } + operator T&() { return *ptr; } + private: + T *ptr; +}; + +#if defined(WIN32) || defined(MS_WINDOWS) + #define _USE_MATH_DEFINES +#endif +#include +#ifdef NAN +#define __PYX_NAN() ((float) NAN) +#else +static CYTHON_INLINE float __PYX_NAN() { + float value; + memset(&value, 0xFF, sizeof(value)); + return value; +} +#endif + + +#define __PYX_ERR(f_index, lineno, Ln_error) \ +{ \ + __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \ +} + +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) +#endif + +#ifndef __PYX_EXTERN_C + #ifdef __cplusplus + #define __PYX_EXTERN_C extern "C" + #else + #define __PYX_EXTERN_C extern + #endif +#endif + +#define __PYX_HAVE__nms__gpu_nms +#define __PYX_HAVE_API__nms__gpu_nms +#include "string.h" +#include "stdio.h" +#include "stdlib.h" +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" +#include "gpu_nms.hpp" +#ifdef _OPENMP +#include +#endif /* _OPENMP */ + +#ifdef PYREX_WITHOUT_ASSERTIONS +#define CYTHON_WITHOUT_ASSERTIONS +#endif + +#ifndef CYTHON_UNUSED +# if defined(__GNUC__) +# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +# elif defined(__ICC) || (defined(__INTEL_COMPILER) && 
!defined(_MSC_VER)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +#endif +#ifndef CYTHON_NCP_UNUSED +# if CYTHON_COMPILING_IN_CPYTHON +# define CYTHON_NCP_UNUSED +# else +# define CYTHON_NCP_UNUSED CYTHON_UNUSED +# endif +#endif +typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; + const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; + +#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 +#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0 +#define __PYX_DEFAULT_STRING_ENCODING "" +#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString +#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#define __Pyx_uchar_cast(c) ((unsigned char)c) +#define __Pyx_long_cast(x) ((long)x) +#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ + (sizeof(type) < sizeof(Py_ssize_t)) ||\ + (sizeof(type) > sizeof(Py_ssize_t) &&\ + likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX) &&\ + (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ + v == (type)PY_SSIZE_T_MIN))) ||\ + (sizeof(type) == sizeof(Py_ssize_t) &&\ + (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX))) ) +#if defined (__cplusplus) && __cplusplus >= 201103L + #include + #define __Pyx_sst_abs(value) std::abs(value) +#elif SIZEOF_INT >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) abs(value) +#elif SIZEOF_LONG >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) labs(value) +#elif defined (_MSC_VER) && defined (_M_X64) + #define __Pyx_sst_abs(value) _abs64(value) +#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define __Pyx_sst_abs(value) llabs(value) +#elif defined (__GNUC__) + #define __Pyx_sst_abs(value) __builtin_llabs(value) +#else + #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) +#endif +static CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject*); +static CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); +#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) +#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); +#if PY_MAJOR_VERSION < 3 + #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#else + #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize +#endif +#define __Pyx_PyObject_AsSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_AsUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) +#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) +#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) +#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) +#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) +#if PY_MAJOR_VERSION < 3 +static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) +{ + const Py_UNICODE *u_end = u; + while (*u_end++) ; + return (size_t)(u_end - u - 1); +} +#else +#define __Pyx_Py_UNICODE_strlen Py_UNICODE_strlen +#endif +#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) +#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode +#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode +#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) +#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) +#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +#if CYTHON_COMPILING_IN_CPYTHON +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) +#else +#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) +#endif +#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) +#if PY_MAJOR_VERSION >= 3 +#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) +#else +#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) +#endif +#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x)) +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII +static int __Pyx_sys_getdefaultencoding_not_ascii; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + PyObject* ascii_chars_u = NULL; + PyObject* ascii_chars_b = NULL; + const char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + if (strcmp(default_encoding_c, "ascii") == 0) { + __Pyx_sys_getdefaultencoding_not_ascii = 0; + } else { + char ascii_chars[128]; + int c; + for (c = 0; c < 128; c++) { + ascii_chars[c] = c; + } + __Pyx_sys_getdefaultencoding_not_ascii = 1; + ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); + if (!ascii_chars_u) goto bad; + ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); + if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { + PyErr_Format( + PyExc_ValueError, + "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", + default_encoding_c); + goto bad; + } + Py_DECREF(ascii_chars_u); + Py_DECREF(ascii_chars_b); + } + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + Py_XDECREF(ascii_chars_u); + Py_XDECREF(ascii_chars_b); + return -1; +} +#endif +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) +#else +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT +static char* __PYX_DEFAULT_STRING_ENCODING; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c)); + if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; + strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + return -1; +} +#endif +#endif + + +/* Test for GCC > 2.95 */ +#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) + #define likely(x) __builtin_expect(!!(x), 1) + #define unlikely(x) __builtin_expect(!!(x), 0) +#else /* !__GNUC__ or GCC < 2.95 */ + #define likely(x) (x) + #define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_d; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static PyObject *__pyx_empty_unicode; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; + +/* None.proto */ +#if !defined(CYTHON_CCOMPLEX) + #if defined(__cplusplus) + #define CYTHON_CCOMPLEX 1 + #elif 
defined(_Complex_I) + #define CYTHON_CCOMPLEX 1 + #else + #define CYTHON_CCOMPLEX 0 + #endif +#endif +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #include <complex> + #else + #include <complex.h> + #endif +#endif +#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) + #undef _Complex_I + #define _Complex_I 1.0fj +#endif + + +static const char *__pyx_f[] = { + "nms\\gpu_nms.pyx", + "__init__.pxd", + "type.pxd", +}; +/* BufferFormatStructs.proto */ +#define IS_UNSIGNED(type) (((type) -1) > 0) +struct __Pyx_StructField_; +#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) +typedef struct { + const char* name; + struct __Pyx_StructField_* fields; + size_t size; + size_t arraysize[8]; + int ndim; + char typegroup; + char is_unsigned; + int flags; +} __Pyx_TypeInfo; +typedef struct __Pyx_StructField_ { + __Pyx_TypeInfo* type; + const char* name; + size_t offset; +} __Pyx_StructField; +typedef struct { + __Pyx_StructField* field; + size_t parent_offset; +} __Pyx_BufFmt_StackElem; +typedef struct { + __Pyx_StructField root; + __Pyx_BufFmt_StackElem* head; + size_t fmt_offset; + size_t new_count, enc_count; + size_t struct_alignment; + int is_complex; + char enc_type; + char new_packmode; + char enc_packmode; + char is_valid_array; +} __Pyx_BufFmt_Context; + + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":725 + * # in Cython to enable them only on the right systems. + * + * ctypedef npy_int8 int8_t # <<<<<<<<<<<<<< + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + */ +typedef npy_int8 __pyx_t_5numpy_int8_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":726 + * + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t # <<<<<<<<<<<<<< + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t + */ +typedef npy_int16 __pyx_t_5numpy_int16_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":727 + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t # <<<<<<<<<<<<<< + * ctypedef npy_int64 int64_t + * #ctypedef npy_int96 int96_t + */ +typedef npy_int32 __pyx_t_5numpy_int32_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":728 + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t # <<<<<<<<<<<<<< + * #ctypedef npy_int96 int96_t + * #ctypedef npy_int128 int128_t + */ +typedef npy_int64 __pyx_t_5numpy_int64_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":732 + * #ctypedef npy_int128 int128_t + * + * ctypedef npy_uint8 uint8_t # <<<<<<<<<<<<<< + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + */ +typedef npy_uint8 __pyx_t_5numpy_uint8_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":733 + * + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t # <<<<<<<<<<<<<< + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t + */ +typedef npy_uint16 __pyx_t_5numpy_uint16_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":734 + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t # <<<<<<<<<<<<<< + * ctypedef npy_uint64 uint64_t + * #ctypedef npy_uint96 uint96_t + */ +typedef npy_uint32 __pyx_t_5numpy_uint32_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":735 + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t # <<<<<<<<<<<<<< + * #ctypedef npy_uint96 uint96_t + * #ctypedef 
npy_uint128 uint128_t + */ +typedef npy_uint64 __pyx_t_5numpy_uint64_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":739 + * #ctypedef npy_uint128 uint128_t + * + * ctypedef npy_float32 float32_t # <<<<<<<<<<<<<< + * ctypedef npy_float64 float64_t + * #ctypedef npy_float80 float80_t + */ +typedef npy_float32 __pyx_t_5numpy_float32_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":740 + * + * ctypedef npy_float32 float32_t + * ctypedef npy_float64 float64_t # <<<<<<<<<<<<<< + * #ctypedef npy_float80 float80_t + * #ctypedef npy_float128 float128_t + */ +typedef npy_float64 __pyx_t_5numpy_float64_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":749 + * # The int types are mapped a bit surprising -- + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t + */ +typedef npy_long __pyx_t_5numpy_int_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":750 + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong longlong_t + * + */ +typedef npy_longlong __pyx_t_5numpy_long_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":751 + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_ulong uint_t + */ +typedef npy_longlong __pyx_t_5numpy_longlong_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":753 + * ctypedef npy_longlong longlong_t + * + * ctypedef npy_ulong uint_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t + */ +typedef npy_ulong __pyx_t_5numpy_uint_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":754 + * + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulonglong_t + * + */ +typedef npy_ulonglong __pyx_t_5numpy_ulong_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":755 + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_intp intp_t + */ +typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":757 + * ctypedef npy_ulonglong ulonglong_t + * + * ctypedef npy_intp intp_t # <<<<<<<<<<<<<< + * ctypedef npy_uintp uintp_t + * + */ +typedef npy_intp __pyx_t_5numpy_intp_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":758 + * + * ctypedef npy_intp intp_t + * ctypedef npy_uintp uintp_t # <<<<<<<<<<<<<< + * + * ctypedef npy_double float_t + */ +typedef npy_uintp __pyx_t_5numpy_uintp_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":760 + * ctypedef npy_uintp uintp_t + * + * ctypedef npy_double float_t # <<<<<<<<<<<<<< + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t + */ +typedef npy_double __pyx_t_5numpy_float_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":761 + * + * ctypedef npy_double float_t + * ctypedef npy_double double_t # <<<<<<<<<<<<<< + * ctypedef npy_longdouble longdouble_t + * + */ +typedef npy_double __pyx_t_5numpy_double_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":762 + * ctypedef 
npy_double float_t + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cfloat cfloat_t + */ +typedef npy_longdouble __pyx_t_5numpy_longdouble_t; +/* None.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< float > __pyx_t_float_complex; + #else + typedef float _Complex __pyx_t_float_complex; + #endif +#else + typedef struct { float real, imag; } __pyx_t_float_complex; +#endif + +/* None.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< double > __pyx_t_double_complex; + #else + typedef double _Complex __pyx_t_double_complex; + #endif +#else + typedef struct { double real, imag; } __pyx_t_double_complex; +#endif + + +/*--- Type declarations ---*/ + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":764 + * ctypedef npy_longdouble longdouble_t + * + * ctypedef npy_cfloat cfloat_t # <<<<<<<<<<<<<< + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble clongdouble_t + */ +typedef npy_cfloat __pyx_t_5numpy_cfloat_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":765 + * + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t # <<<<<<<<<<<<<< + * ctypedef npy_clongdouble clongdouble_t + * + */ +typedef npy_cdouble __pyx_t_5numpy_cdouble_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":766 + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble clongdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cdouble complex_t + */ +typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t; + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":768 + * ctypedef npy_clongdouble clongdouble_t + * + * ctypedef npy_cdouble complex_t # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew1(a): + */ +typedef npy_cdouble __pyx_t_5numpy_complex_t; + +/* --- Runtime support code (head) --- */ +/* Refnanny.proto */ +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); + #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; +#ifdef WITH_THREAD + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ + if (acquire_gil) {\ + PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + PyGILState_Release(__pyx_gilstate_save);\ + } else {\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + } +#else + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) +#endif + #define __Pyx_RefNannyFinishContext()\ + __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), 
__LINE__) + #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) + #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) + #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) +#else + #define __Pyx_RefNannyDeclarations + #define __Pyx_RefNannySetupContext(name, acquire_gil) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XINCREF(r) Py_XINCREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) + #define __Pyx_XGOTREF(r) + #define __Pyx_XGIVEREF(r) +#endif +#define __Pyx_XDECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_XDECREF(tmp);\ + } while (0) +#define __Pyx_DECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_DECREF(tmp);\ + } while (0) +#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) +#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) + +/* RaiseArgTupleInvalid.proto */ +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); + +/* RaiseDoubleKeywords.proto */ +static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); + +/* ParseKeywords.proto */ +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ + PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ + const char* function_name); + +/* ArgTypeTest.proto */ +static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact); + +/* BufferFormatCheck.proto */ +static CYTHON_INLINE int __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj, + __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack); +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info); +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type); // PROTO + +/* PyObjectGetAttrStr.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { + PyTypeObject* tp = Py_TYPE(obj); + if (likely(tp->tp_getattro)) + return tp->tp_getattro(obj, attr_name); +#if PY_MAJOR_VERSION < 3 + if (likely(tp->tp_getattr)) + return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); +#endif + return PyObject_GetAttr(obj, attr_name); +} +#else +#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) +#endif + +/* GetBuiltinName.proto */ +static PyObject *__Pyx_GetBuiltinName(PyObject *name); + +/* GetModuleGlobalName.proto */ +static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name); + +/* PyObjectCall.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); +#else +#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) +#endif + +/* ExtTypeTest.proto */ +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); + +/* PyObjectCallMethO.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject 
*arg); +#endif + +/* PyObjectCallOneArg.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); + +/* PyObjectCallNoArg.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); +#else +#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL) +#endif + +/* BufferIndexError.proto */ +static void __Pyx_RaiseBufferIndexError(int axis); + +#define __Pyx_BufPtrStrided1d(type, buf, i0, s0) (type)((char*)buf + i0 * s0) +#define __Pyx_BufPtrStrided2d(type, buf, i0, s0, i1, s1) (type)((char*)buf + i0 * s0 + i1 * s1) +/* SliceObject.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice( + PyObject* obj, Py_ssize_t cstart, Py_ssize_t cstop, + PyObject** py_start, PyObject** py_stop, PyObject** py_slice, + int has_cstart, int has_cstop, int wraparound); + +/* BufferFallbackError.proto */ +static void __Pyx_RaiseBufferFallbackError(void); + +/* PyThreadStateGet.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; +#define __Pyx_PyThreadState_assign __pyx_tstate = PyThreadState_GET(); +#else +#define __Pyx_PyThreadState_declare +#define __Pyx_PyThreadState_assign +#endif + +/* PyErrFetchRestore.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); +static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#else +#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) +#endif + +/* RaiseException.proto */ +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); + +/* DictGetItem.proto */ +#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY +static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) { + PyObject *value; + value = PyDict_GetItemWithError(d, key); + if (unlikely(!value)) { + if (!PyErr_Occurred()) { + PyObject* args = PyTuple_Pack(1, key); + if (likely(args)) + PyErr_SetObject(PyExc_KeyError, args); + Py_XDECREF(args); + } + return NULL; + } + Py_INCREF(value); + return value; +} +#else + #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key) +#endif + +/* RaiseTooManyValuesToUnpack.proto */ +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); + +/* RaiseNeedMoreValuesToUnpack.proto */ +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); + +/* RaiseNoneIterError.proto */ +static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); + +/* Import.proto */ +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); + +/* CodeObjectCache.proto */ +typedef struct { + PyCodeObject* code_object; + int code_line; +} __Pyx_CodeObjectCacheEntry; +struct 
__Pyx_CodeObjectCache { + int count; + int max_count; + __Pyx_CodeObjectCacheEntry* entries; +}; +static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; +static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); +static PyCodeObject *__pyx_find_code_object(int code_line); +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); + +/* AddTraceback.proto */ +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, const char *filename); + +/* BufferStructDeclare.proto */ +typedef struct { + Py_ssize_t shape, strides, suboffsets; +} __Pyx_Buf_DimInfo; +typedef struct { + size_t refcount; + Py_buffer pybuffer; +} __Pyx_Buffer; +typedef struct { + __Pyx_Buffer *rcbuffer; + char *data; + __Pyx_Buf_DimInfo diminfo[8]; +} __Pyx_LocalBuf_ND; + +#if PY_MAJOR_VERSION < 3 + static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); + static void __Pyx_ReleaseBuffer(Py_buffer *view); +#else + #define __Pyx_GetBuffer PyObject_GetBuffer + #define __Pyx_ReleaseBuffer PyBuffer_Release +#endif + + +/* None.proto */ +static Py_ssize_t __Pyx_zeros[] = {0, 0, 0, 0, 0, 0, 0, 0}; +static Py_ssize_t __Pyx_minusones[] = {-1, -1, -1, -1, -1, -1, -1, -1}; + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); + +/* None.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #define __Pyx_CREAL(z) ((z).real()) + #define __Pyx_CIMAG(z) ((z).imag()) + #else + #define __Pyx_CREAL(z) (__real__(z)) + #define __Pyx_CIMAG(z) (__imag__(z)) + #endif +#else + #define __Pyx_CREAL(z) ((z).real) + #define __Pyx_CIMAG(z) ((z).imag) +#endif +#if defined(__cplusplus) && CYTHON_CCOMPLEX && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103) + #define __Pyx_SET_CREAL(z,x) ((z).real(x)) + #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) +#else + #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) + #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) +#endif + +/* None.proto */ +static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float); + +/* None.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eqf(a, b) ((a)==(b)) + #define __Pyx_c_sumf(a, b) ((a)+(b)) + #define __Pyx_c_difff(a, b) ((a)-(b)) + #define __Pyx_c_prodf(a, b) ((a)*(b)) + #define __Pyx_c_quotf(a, b) ((a)/(b)) + #define __Pyx_c_negf(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zerof(z) ((z)==(float)0) + #define __Pyx_c_conjf(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_absf(z) (::std::abs(z)) + #define __Pyx_c_powf(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zerof(z) ((z)==0) + #define __Pyx_c_conjf(z) (conjf(z)) + #if 1 + #define __Pyx_c_absf(z) (cabsf(z)) + #define __Pyx_c_powf(a, b) (cpowf(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eqf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sumf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_difff(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prodf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quotf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_negf(__pyx_t_float_complex); + static CYTHON_INLINE int __Pyx_c_is_zerof(__pyx_t_float_complex); + static 
CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conjf(__pyx_t_float_complex); + #if 1 + static CYTHON_INLINE float __Pyx_c_absf(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_powf(__pyx_t_float_complex, __pyx_t_float_complex); + #endif +#endif + +/* None.proto */ +static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); + +/* None.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq(a, b) ((a)==(b)) + #define __Pyx_c_sum(a, b) ((a)+(b)) + #define __Pyx_c_diff(a, b) ((a)-(b)) + #define __Pyx_c_prod(a, b) ((a)*(b)) + #define __Pyx_c_quot(a, b) ((a)/(b)) + #define __Pyx_c_neg(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero(z) ((z)==(double)0) + #define __Pyx_c_conj(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_abs(z) (::std::abs(z)) + #define __Pyx_c_pow(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zero(z) ((z)==0) + #define __Pyx_c_conj(z) (conj(z)) + #if 1 + #define __Pyx_c_abs(z) (cabs(z)) + #define __Pyx_c_pow(a, b) (cpow(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex); + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex); + #if 1 + static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow(__pyx_t_double_complex, __pyx_t_double_complex); + #endif +#endif + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value); + +/* CIntFromPy.proto */ +static CYTHON_INLINE npy_int32 __Pyx_PyInt_As_npy_int32(PyObject *); + +/* CIntFromPy.proto */ +static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); + +/* CIntFromPy.proto */ +static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); + +/* CheckBinaryVersion.proto */ +static int __Pyx_check_binary_version(void); + +/* PyIdentifierFromString.proto */ +#if !defined(__Pyx_PyIdentifier_FromString) +#if PY_MAJOR_VERSION < 3 + #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s) +#else + #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s) +#endif +#endif + +/* ModuleImport.proto */ +static PyObject *__Pyx_ImportModule(const char *name); + +/* TypeImport.proto */ +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict); + +/* InitStrings.proto */ +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); + + +/* Module declarations from 'cpython.buffer' */ + +/* Module declarations from 'libc.string' */ + +/* Module declarations from 'libc.stdio' */ + +/* Module declarations from '__builtin__' */ + +/* Module declarations from 'cpython.type' */ +static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0; + +/* Module declarations from 'cpython' */ + +/* Module declarations from 'cpython.object' */ + +/* Module 
declarations from 'cpython.ref' */ + +/* Module declarations from 'libc.stdlib' */ + +/* Module declarations from 'numpy' */ + +/* Module declarations from 'numpy' */ +static PyTypeObject *__pyx_ptype_5numpy_dtype = 0; +static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0; +static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0; +static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0; +static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0; +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/ + +/* Module declarations from 'nms.gpu_nms' */ +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t = { "float32_t", NULL, sizeof(__pyx_t_5numpy_float32_t), { 0 }, 0, 'R', 0, 0 }; +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t = { "int32_t", NULL, sizeof(__pyx_t_5numpy_int32_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_int32_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_int32_t), 0 }; +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_intp_t = { "intp_t", NULL, sizeof(__pyx_t_5numpy_intp_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_intp_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_intp_t), 0 }; +#define __Pyx_MODULE_NAME "nms.gpu_nms" +int __pyx_module_is_main_nms__gpu_nms = 0; + +/* Implementation of 'nms.gpu_nms' */ +static PyObject *__pyx_builtin_ValueError; +static PyObject *__pyx_builtin_range; +static PyObject *__pyx_builtin_RuntimeError; +static const char __pyx_k_np[] = "np"; +static const char __pyx_k_dets[] = "dets"; +static const char __pyx_k_keep[] = "keep"; +static const char __pyx_k_main[] = "__main__"; +static const char __pyx_k_test[] = "__test__"; +static const char __pyx_k_dtype[] = "dtype"; +static const char __pyx_k_int32[] = "int32"; +static const char __pyx_k_numpy[] = "numpy"; +static const char __pyx_k_order[] = "order"; +static const char __pyx_k_range[] = "range"; +static const char __pyx_k_zeros[] = "zeros"; +static const char __pyx_k_import[] = "__import__"; +static const char __pyx_k_scores[] = "scores"; +static const char __pyx_k_thresh[] = "thresh"; +static const char __pyx_k_argsort[] = "argsort"; +static const char __pyx_k_gpu_nms[] = "gpu_nms"; +static const char __pyx_k_num_out[] = "num_out"; +static const char __pyx_k_boxes_dim[] = "boxes_dim"; +static const char __pyx_k_boxes_num[] = "boxes_num"; +static const char __pyx_k_device_id[] = "device_id"; +static const char __pyx_k_ValueError[] = "ValueError"; +static const char __pyx_k_nms_gpu_nms[] = "nms.gpu_nms"; +static const char __pyx_k_sorted_dets[] = "sorted_dets"; +static const char __pyx_k_RuntimeError[] = "RuntimeError"; +static const char __pyx_k_ndarray_is_not_C_contiguous[] = "ndarray is not C contiguous"; +static const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = "unknown dtype code in numpy.pxd (%d)"; +static const char __pyx_k_D_v_zix_caffe_caffe_win_20160523[] = "D:\\v-zix\\caffe\\caffe-win-20160523\\models\\py-faster-rcnn-windows\\lib\\nms\\gpu_nms.pyx"; +static const char __pyx_k_Format_string_allocated_too_shor[] = "Format string allocated too short, see comment in numpy.pxd"; +static const char __pyx_k_Non_native_byte_order_not_suppor[] = "Non-native byte order not supported"; +static const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = "ndarray is not Fortran contiguous"; +static const char __pyx_k_Format_string_allocated_too_shor_2[] = "Format string allocated too short."; +static PyObject *__pyx_kp_s_D_v_zix_caffe_caffe_win_20160523; +static PyObject *__pyx_kp_u_Format_string_allocated_too_shor; +static PyObject 
*__pyx_kp_u_Format_string_allocated_too_shor_2; +static PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor; +static PyObject *__pyx_n_s_RuntimeError; +static PyObject *__pyx_n_s_ValueError; +static PyObject *__pyx_n_s_argsort; +static PyObject *__pyx_n_s_boxes_dim; +static PyObject *__pyx_n_s_boxes_num; +static PyObject *__pyx_n_s_dets; +static PyObject *__pyx_n_s_device_id; +static PyObject *__pyx_n_s_dtype; +static PyObject *__pyx_n_s_gpu_nms; +static PyObject *__pyx_n_s_import; +static PyObject *__pyx_n_s_int32; +static PyObject *__pyx_n_s_keep; +static PyObject *__pyx_n_s_main; +static PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous; +static PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou; +static PyObject *__pyx_n_s_nms_gpu_nms; +static PyObject *__pyx_n_s_np; +static PyObject *__pyx_n_s_num_out; +static PyObject *__pyx_n_s_numpy; +static PyObject *__pyx_n_s_order; +static PyObject *__pyx_n_s_range; +static PyObject *__pyx_n_s_scores; +static PyObject *__pyx_n_s_sorted_dets; +static PyObject *__pyx_n_s_test; +static PyObject *__pyx_n_s_thresh; +static PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd; +static PyObject *__pyx_n_s_zeros; +static PyObject *__pyx_pf_3nms_7gpu_nms_gpu_nms(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dets, PyObject *__pyx_v_thresh, __pyx_t_5numpy_int32_t __pyx_v_device_id); /* proto */ +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ +static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */ +static PyObject *__pyx_int_4; +static PyObject *__pyx_int_neg_1; +static PyObject *__pyx_slice_; +static PyObject *__pyx_slice__3; +static PyObject *__pyx_slice__4; +static PyObject *__pyx_tuple__2; +static PyObject *__pyx_tuple__5; +static PyObject *__pyx_tuple__6; +static PyObject *__pyx_tuple__7; +static PyObject *__pyx_tuple__8; +static PyObject *__pyx_tuple__9; +static PyObject *__pyx_tuple__10; +static PyObject *__pyx_tuple__11; +static PyObject *__pyx_codeobj__12; + +/* "nms/gpu_nms.pyx":16 + * void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + +/* Python wrapper */ +static PyObject *__pyx_pw_3nms_7gpu_nms_1gpu_nms(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyMethodDef __pyx_mdef_3nms_7gpu_nms_1gpu_nms = {"gpu_nms", (PyCFunction)__pyx_pw_3nms_7gpu_nms_1gpu_nms, METH_VARARGS|METH_KEYWORDS, 0}; +static PyObject *__pyx_pw_3nms_7gpu_nms_1gpu_nms(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyArrayObject *__pyx_v_dets = 0; + PyObject *__pyx_v_thresh = 0; + __pyx_t_5numpy_int32_t __pyx_v_device_id; + PyObject *__pyx_r = 0; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("gpu_nms (wrapper)", 0); + { + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dets,&__pyx_n_s_thresh,&__pyx_n_s_device_id,0}; + PyObject* values[3] = {0,0,0}; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args; + const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); + switch (pos_args) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + kw_args = PyDict_Size(__pyx_kwds); + switch 
(pos_args) { + case 0: + if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_dets)) != 0)) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_thresh)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("gpu_nms", 0, 2, 3, 1); __PYX_ERR(0, 16, __pyx_L3_error) + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_device_id); + if (value) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "gpu_nms") < 0)) __PYX_ERR(0, 16, __pyx_L3_error) + } + } else { + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + __pyx_v_dets = ((PyArrayObject *)values[0]); + __pyx_v_thresh = ((PyObject*)values[1]); + if (values[2]) { + __pyx_v_device_id = __Pyx_PyInt_As_npy_int32(values[2]); if (unlikely((__pyx_v_device_id == (npy_int32)-1) && PyErr_Occurred())) __PYX_ERR(0, 17, __pyx_L3_error) + } else { + __pyx_v_device_id = ((__pyx_t_5numpy_int32_t)0); + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("gpu_nms", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 16, __pyx_L3_error) + __pyx_L3_error:; + __Pyx_AddTraceback("nms.gpu_nms.gpu_nms", __pyx_clineno, __pyx_lineno, __pyx_filename); + __Pyx_RefNannyFinishContext(); + return NULL; + __pyx_L4_argument_unpacking_done:; + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dets), __pyx_ptype_5numpy_ndarray, 1, "dets", 0))) __PYX_ERR(0, 16, __pyx_L1_error) + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_thresh), (&PyFloat_Type), 1, "thresh", 1))) __PYX_ERR(0, 16, __pyx_L1_error) + __pyx_r = __pyx_pf_3nms_7gpu_nms_gpu_nms(__pyx_self, __pyx_v_dets, __pyx_v_thresh, __pyx_v_device_id); + + /* function exit code */ + goto __pyx_L0; + __pyx_L1_error:; + __pyx_r = NULL; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyObject *__pyx_pf_3nms_7gpu_nms_gpu_nms(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dets, PyObject *__pyx_v_thresh, __pyx_t_5numpy_int32_t __pyx_v_device_id) { + int __pyx_v_boxes_num; + int __pyx_v_boxes_dim; + int __pyx_v_num_out; + PyArrayObject *__pyx_v_keep = 0; + PyArrayObject *__pyx_v_scores = 0; + PyArrayObject *__pyx_v_order = 0; + PyArrayObject *__pyx_v_sorted_dets = 0; + __Pyx_LocalBuf_ND __pyx_pybuffernd_dets; + __Pyx_Buffer __pyx_pybuffer_dets; + __Pyx_LocalBuf_ND __pyx_pybuffernd_keep; + __Pyx_Buffer __pyx_pybuffer_keep; + __Pyx_LocalBuf_ND __pyx_pybuffernd_order; + __Pyx_Buffer __pyx_pybuffer_order; + __Pyx_LocalBuf_ND __pyx_pybuffernd_scores; + __Pyx_Buffer __pyx_pybuffer_scores; + __Pyx_LocalBuf_ND __pyx_pybuffernd_sorted_dets; + __Pyx_Buffer __pyx_pybuffer_sorted_dets; + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + PyArrayObject *__pyx_t_6 = NULL; + PyArrayObject *__pyx_t_7 = NULL; + PyArrayObject *__pyx_t_8 = NULL; + PyArrayObject *__pyx_t_9 = NULL; + Py_ssize_t __pyx_t_10; + int __pyx_t_11; + Py_ssize_t __pyx_t_12; + Py_ssize_t __pyx_t_13; + float __pyx_t_14; + PyObject *__pyx_t_15 = NULL; + PyObject *__pyx_t_16 = NULL; + PyObject 
*__pyx_t_17 = NULL; + __Pyx_RefNannySetupContext("gpu_nms", 0); + __pyx_pybuffer_keep.pybuffer.buf = NULL; + __pyx_pybuffer_keep.refcount = 0; + __pyx_pybuffernd_keep.data = NULL; + __pyx_pybuffernd_keep.rcbuffer = &__pyx_pybuffer_keep; + __pyx_pybuffer_scores.pybuffer.buf = NULL; + __pyx_pybuffer_scores.refcount = 0; + __pyx_pybuffernd_scores.data = NULL; + __pyx_pybuffernd_scores.rcbuffer = &__pyx_pybuffer_scores; + __pyx_pybuffer_order.pybuffer.buf = NULL; + __pyx_pybuffer_order.refcount = 0; + __pyx_pybuffernd_order.data = NULL; + __pyx_pybuffernd_order.rcbuffer = &__pyx_pybuffer_order; + __pyx_pybuffer_sorted_dets.pybuffer.buf = NULL; + __pyx_pybuffer_sorted_dets.refcount = 0; + __pyx_pybuffernd_sorted_dets.data = NULL; + __pyx_pybuffernd_sorted_dets.rcbuffer = &__pyx_pybuffer_sorted_dets; + __pyx_pybuffer_dets.pybuffer.buf = NULL; + __pyx_pybuffer_dets.refcount = 0; + __pyx_pybuffernd_dets.data = NULL; + __pyx_pybuffernd_dets.rcbuffer = &__pyx_pybuffer_dets; + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_dets.rcbuffer->pybuffer, (PyObject*)__pyx_v_dets, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 16, __pyx_L1_error) + } + __pyx_pybuffernd_dets.diminfo[0].strides = __pyx_pybuffernd_dets.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_dets.diminfo[0].shape = __pyx_pybuffernd_dets.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_dets.diminfo[1].strides = __pyx_pybuffernd_dets.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_dets.diminfo[1].shape = __pyx_pybuffernd_dets.rcbuffer->pybuffer.shape[1]; + + /* "nms/gpu_nms.pyx":18 + * def gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] # <<<<<<<<<<<<<< + * cdef int boxes_dim = dets.shape[1] + * cdef int num_out + */ + __pyx_v_boxes_num = (__pyx_v_dets->dimensions[0]); + + /* "nms/gpu_nms.pyx":19 + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + * cdef int boxes_dim = dets.shape[1] # <<<<<<<<<<<<<< + * cdef int num_out + * cdef np.ndarray[np.int32_t, ndim=1] \ + */ + __pyx_v_boxes_dim = (__pyx_v_dets->dimensions[1]); + + /* "nms/gpu_nms.pyx":22 + * cdef int num_out + * cdef np.ndarray[np.int32_t, ndim=1] \ + * keep = np.zeros(boxes_num, dtype=np.int32) # <<<<<<<<<<<<<< + * cdef np.ndarray[np.float32_t, ndim=1] \ + * scores = dets[:, 4] + */ + __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_boxes_num); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_int32); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 22, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 22, __pyx_L1_error) + __pyx_t_6 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_keep.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + __pyx_v_keep = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_keep.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 21, __pyx_L1_error) + } else {__pyx_pybuffernd_keep.diminfo[0].strides = __pyx_pybuffernd_keep.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_keep.diminfo[0].shape = __pyx_pybuffernd_keep.rcbuffer->pybuffer.shape[0]; + } + } + __pyx_t_6 = 0; + __pyx_v_keep = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "nms/gpu_nms.pyx":24 + * keep = np.zeros(boxes_num, dtype=np.int32) + * cdef np.ndarray[np.float32_t, ndim=1] \ + * scores = dets[:, 4] # <<<<<<<<<<<<<< + * #cdef np.ndarray[np.int_t, ndim=1] \ // 20160601, by xzn + * # order = scores.argsort()[::-1] + */ + __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_dets), __pyx_tuple__2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 24, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 24, __pyx_L1_error) + __pyx_t_7 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_scores.rcbuffer->pybuffer, (PyObject*)__pyx_t_7, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + __pyx_v_scores = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_scores.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 23, __pyx_L1_error) + } else {__pyx_pybuffernd_scores.diminfo[0].strides = __pyx_pybuffernd_scores.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_scores.diminfo[0].shape = __pyx_pybuffernd_scores.rcbuffer->pybuffer.shape[0]; + } + } + __pyx_t_7 = 0; + __pyx_v_scores = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "nms/gpu_nms.pyx":28 + * # order = scores.argsort()[::-1] + * cdef np.ndarray[np.intp_t, ndim=1] \ + * order = scores.argsort()[::-1] # <<<<<<<<<<<<<< + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_scores), __pyx_n_s_argsort); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 28, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = NULL; + if (CYTHON_COMPILING_IN_CPYTHON && likely(PyMethod_Check(__pyx_t_1))) { + __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); + if (likely(__pyx_t_3)) { + PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); + __Pyx_INCREF(__pyx_t_3); + __Pyx_INCREF(function); + __Pyx_DECREF_SET(__pyx_t_1, function); + } + } + if (__pyx_t_3) { + __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 28, __pyx_L1_error) + 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + } else { + __pyx_t_5 = __Pyx_PyObject_CallNoArg(__pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 28, __pyx_L1_error) + } + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_GetItem(__pyx_t_5, __pyx_slice__3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 28, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 28, __pyx_L1_error) + __pyx_t_8 = ((PyArrayObject *)__pyx_t_1); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_order.rcbuffer->pybuffer, (PyObject*)__pyx_t_8, &__Pyx_TypeInfo_nn___pyx_t_5numpy_intp_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + __pyx_v_order = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_order.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 27, __pyx_L1_error) + } else {__pyx_pybuffernd_order.diminfo[0].strides = __pyx_pybuffernd_order.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_order.diminfo[0].shape = __pyx_pybuffernd_order.rcbuffer->pybuffer.shape[0]; + } + } + __pyx_t_8 = 0; + __pyx_v_order = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "nms/gpu_nms.pyx":30 + * order = scores.argsort()[::-1] + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] # <<<<<<<<<<<<<< + * _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + * keep = keep[:num_out] + */ + __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 30, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_v_order)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_order)); + PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_v_order)); + __Pyx_INCREF(__pyx_slice__4); + __Pyx_GIVEREF(__pyx_slice__4); + PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_slice__4); + __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_dets), __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 30, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 30, __pyx_L1_error) + __pyx_t_9 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer, (PyObject*)__pyx_t_9, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { + __pyx_v_sorted_dets = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.buf = NULL; + __PYX_ERR(0, 29, __pyx_L1_error) + } else {__pyx_pybuffernd_sorted_dets.diminfo[0].strides = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_sorted_dets.diminfo[0].shape = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_sorted_dets.diminfo[1].strides = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_sorted_dets.diminfo[1].shape = __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.shape[1]; + } + } + __pyx_t_9 = 0; + __pyx_v_sorted_dets = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "nms/gpu_nms.pyx":31 + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] + * _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) # <<<<<<<<<<<<<< + * keep = 
keep[:num_out] + * return list(order[keep]) + */ + __pyx_t_10 = 0; + __pyx_t_11 = -1; + if (__pyx_t_10 < 0) { + __pyx_t_10 += __pyx_pybuffernd_keep.diminfo[0].shape; + if (unlikely(__pyx_t_10 < 0)) __pyx_t_11 = 0; + } else if (unlikely(__pyx_t_10 >= __pyx_pybuffernd_keep.diminfo[0].shape)) __pyx_t_11 = 0; + if (unlikely(__pyx_t_11 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_11); + __PYX_ERR(0, 31, __pyx_L1_error) + } + __pyx_t_12 = 0; + __pyx_t_13 = 0; + __pyx_t_11 = -1; + if (__pyx_t_12 < 0) { + __pyx_t_12 += __pyx_pybuffernd_sorted_dets.diminfo[0].shape; + if (unlikely(__pyx_t_12 < 0)) __pyx_t_11 = 0; + } else if (unlikely(__pyx_t_12 >= __pyx_pybuffernd_sorted_dets.diminfo[0].shape)) __pyx_t_11 = 0; + if (__pyx_t_13 < 0) { + __pyx_t_13 += __pyx_pybuffernd_sorted_dets.diminfo[1].shape; + if (unlikely(__pyx_t_13 < 0)) __pyx_t_11 = 1; + } else if (unlikely(__pyx_t_13 >= __pyx_pybuffernd_sorted_dets.diminfo[1].shape)) __pyx_t_11 = 1; + if (unlikely(__pyx_t_11 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_11); + __PYX_ERR(0, 31, __pyx_L1_error) + } + __pyx_t_14 = __pyx_PyFloat_AsFloat(__pyx_v_thresh); if (unlikely((__pyx_t_14 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 31, __pyx_L1_error) + _nms((&(*__Pyx_BufPtrStrided1d(__pyx_t_5numpy_int32_t *, __pyx_pybuffernd_keep.rcbuffer->pybuffer.buf, __pyx_t_10, __pyx_pybuffernd_keep.diminfo[0].strides))), (&__pyx_v_num_out), (&(*__Pyx_BufPtrStrided2d(__pyx_t_5numpy_float32_t *, __pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer.buf, __pyx_t_12, __pyx_pybuffernd_sorted_dets.diminfo[0].strides, __pyx_t_13, __pyx_pybuffernd_sorted_dets.diminfo[1].strides))), __pyx_v_boxes_num, __pyx_v_boxes_dim, __pyx_t_14, __pyx_v_device_id); + + /* "nms/gpu_nms.pyx":32 + * sorted_dets = dets[order, :] + * _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + * keep = keep[:num_out] # <<<<<<<<<<<<<< + * return list(order[keep]) + */ + __pyx_t_5 = __Pyx_PyObject_GetSlice(((PyObject *)__pyx_v_keep), 0, __pyx_v_num_out, NULL, NULL, NULL, 0, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 32, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 32, __pyx_L1_error) + __pyx_t_6 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_keep.rcbuffer->pybuffer); + __pyx_t_11 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_keep.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack); + if (unlikely(__pyx_t_11 < 0)) { + PyErr_Fetch(&__pyx_t_15, &__pyx_t_16, &__pyx_t_17); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_keep.rcbuffer->pybuffer, (PyObject*)__pyx_v_keep, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_15); Py_XDECREF(__pyx_t_16); Py_XDECREF(__pyx_t_17); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_15, __pyx_t_16, __pyx_t_17); + } + } + __pyx_pybuffernd_keep.diminfo[0].strides = __pyx_pybuffernd_keep.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_keep.diminfo[0].shape = __pyx_pybuffernd_keep.rcbuffer->pybuffer.shape[0]; + if (unlikely(__pyx_t_11 < 0)) __PYX_ERR(0, 32, __pyx_L1_error) + } + __pyx_t_6 = 0; + __Pyx_DECREF_SET(__pyx_v_keep, ((PyArrayObject *)__pyx_t_5)); + __pyx_t_5 = 0; + + /* "nms/gpu_nms.pyx":33 + * _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, 
boxes_dim, thresh, device_id) + * keep = keep[:num_out] + * return list(order[keep]) # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_order), ((PyObject *)__pyx_v_keep)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 33, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_1 = PySequence_List(__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 33, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "nms/gpu_nms.pyx":16 + * void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dets.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_keep.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_order.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_scores.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("nms.gpu_nms.gpu_nms", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dets.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_keep.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_order.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_scores.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_sorted_dets.rcbuffer->pybuffer); + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_keep); + __Pyx_XDECREF((PyObject *)__pyx_v_scores); + __Pyx_XDECREF((PyObject *)__pyx_v_order); + __Pyx_XDECREF((PyObject *)__pyx_v_sorted_dets); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":197 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. 
+ */ + +/* Python wrapper */ +static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ +static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_r; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); + __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_v_copy_shape; + int __pyx_v_i; + int __pyx_v_ndim; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + int __pyx_v_t; + char *__pyx_v_f; + PyArray_Descr *__pyx_v_descr = 0; + int __pyx_v_offset; + int __pyx_v_hasfields; + int __pyx_r; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + PyObject *__pyx_t_6 = NULL; + char *__pyx_t_7; + __Pyx_RefNannySetupContext("__getbuffer__", 0); + if (__pyx_v_info != NULL) { + __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(__pyx_v_info->obj); + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":203 + * # of flags + * + * if info == NULL: return # <<<<<<<<<<<<<< + * + * cdef int copy_shape, i, ndim + */ + __pyx_t_1 = ((__pyx_v_info == NULL) != 0); + if (__pyx_t_1) { + __pyx_r = 0; + goto __pyx_L0; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":206 + * + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + */ + __pyx_v_endian_detector = 1; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":207 + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * + * ndim = PyArray_NDIM(self) + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":209 + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<< + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_v_ndim = PyArray_NDIM(__pyx_v_self); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":211 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0); + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":212 + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * copy_shape = 1 # <<<<<<<<<<<<<< + * else: + * copy_shape = 0 + */ + __pyx_v_copy_shape = 1; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":211 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + goto __pyx_L4; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":214 + * copy_shape = 1 + * else: + * copy_shape = 0 # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + */ + /*else*/ { + 
__pyx_v_copy_shape = 0; + } + __pyx_L4:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L6_bool_binop_done; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":217 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not C contiguous") + * + */ + __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L6_bool_binop_done:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":218 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 218, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 218, __pyx_L1_error) + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":216 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L9_bool_binop_done; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":221 + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not Fortran contiguous") + * + */ + __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L9_bool_binop_done:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + if (__pyx_t_1) { + + /* 
"C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":222 + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 222, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 222, __pyx_L1_error) + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":220 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":224 + * raise ValueError(u"ndarray is not Fortran contiguous") + * + * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<< + * info.ndim = ndim + * if copy_shape: + */ + __pyx_v_info->buf = PyArray_DATA(__pyx_v_self); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":225 + * + * info.buf = PyArray_DATA(self) + * info.ndim = ndim # <<<<<<<<<<<<<< + * if copy_shape: + * # Allocate new buffer for strides and shape info. + */ + __pyx_v_info->ndim = __pyx_v_ndim; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":226 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + */ + __pyx_t_1 = (__pyx_v_copy_shape != 0); + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":229 + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) # <<<<<<<<<<<<<< + * info.shape = info.strides + ndim + * for i in range(ndim): + */ + __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * ((size_t)__pyx_v_ndim)) * 2))); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":230 + * # This is allocated as one block, strides first. 
+ * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim # <<<<<<<<<<<<<< + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + */ + __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":231 + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim + * for i in range(ndim): # <<<<<<<<<<<<<< + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] + */ + __pyx_t_4 = __pyx_v_ndim; + for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { + __pyx_v_i = __pyx_t_5; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":232 + * info.shape = info.strides + ndim + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<< + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + */ + (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":233 + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<< + * else: + * info.strides = PyArray_STRIDES(self) + */ + (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]); + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":226 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. + * # This is allocated as one block, strides first. + */ + goto __pyx_L11; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":235 + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<< + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + */ + /*else*/ { + __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self)); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":236 + * else: + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<< + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + */ + __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self)); + } + __pyx_L11:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":237 + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL # <<<<<<<<<<<<<< + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) + */ + __pyx_v_info->suboffsets = NULL; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":238 + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<< + * info.readonly = not PyArray_ISWRITEABLE(self) + * + */ + __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":239 + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<< + * + * cdef int t + */ + __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0)); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":242 + * + * cdef int t + * cdef char* f = NULL # <<<<<<<<<<<<<< + * cdef dtype descr = self.descr + * cdef int offset + */ + __pyx_v_f = 
NULL; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":243 + * cdef int t + * cdef char* f = NULL + * cdef dtype descr = self.descr # <<<<<<<<<<<<<< + * cdef int offset + * + */ + __pyx_t_3 = ((PyObject *)__pyx_v_self->descr); + __Pyx_INCREF(__pyx_t_3); + __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3); + __pyx_t_3 = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":246 + * cdef int offset + * + * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<< + * + * if not hasfields and not copy_shape: + */ + __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":248 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + __pyx_t_2 = ((!(__pyx_v_hasfields != 0)) != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L15_bool_binop_done; + } + __pyx_t_2 = ((!(__pyx_v_copy_shape != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L15_bool_binop_done:; + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":250 + * if not hasfields and not copy_shape: + * # do not call releasebuffer + * info.obj = None # <<<<<<<<<<<<<< + * else: + * # need to call releasebuffer + */ + __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(Py_None); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = Py_None; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":248 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + goto __pyx_L14; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":253 + * else: + * # need to call releasebuffer + * info.obj = self # <<<<<<<<<<<<<< + * + * if not hasfields: + */ + /*else*/ { + __Pyx_INCREF(((PyObject *)__pyx_v_self)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = ((PyObject *)__pyx_v_self); + } + __pyx_L14:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":255 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + */ + __pyx_t_1 = ((!(__pyx_v_hasfields != 0)) != 0); + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":256 + * + * if not hasfields: + * t = descr.type_num # <<<<<<<<<<<<<< + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + */ + __pyx_t_4 = __pyx_v_descr->type_num; + __pyx_v_t = __pyx_t_4; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0); + if (!__pyx_t_2) { + goto __pyx_L20_next_or; + } else { + } + __pyx_t_2 = (__pyx_v_little_endian != 0); + if (!__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L19_bool_binop_done; + } + __pyx_L20_next_or:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":258 + * t = descr.type_num + 
* if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + */ + __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0); + if (__pyx_t_2) { + } else { + __pyx_t_1 = __pyx_t_2; + goto __pyx_L19_bool_binop_done; + } + __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0); + __pyx_t_1 = __pyx_t_2; + __pyx_L19_bool_binop_done:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":259 + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 259, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 259, __pyx_L1_error) + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":257 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":260 + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + */ + switch (__pyx_v_t) { + case NPY_BYTE: + __pyx_v_f = ((char *)"b"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":261 + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + */ + case NPY_UBYTE: + __pyx_v_f = ((char *)"B"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":262 + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + */ + case NPY_SHORT: + __pyx_v_f = ((char *)"h"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":263 + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + */ + case NPY_USHORT: + __pyx_v_f = ((char *)"H"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":264 + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + */ + case NPY_INT: + __pyx_v_f = ((char *)"i"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":265 + * elif t == NPY_USHORT: f = 
"H" + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + */ + case NPY_UINT: + __pyx_v_f = ((char *)"I"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":266 + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + */ + case NPY_LONG: + __pyx_v_f = ((char *)"l"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":267 + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + */ + case NPY_ULONG: + __pyx_v_f = ((char *)"L"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":268 + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + */ + case NPY_LONGLONG: + __pyx_v_f = ((char *)"q"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":269 + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + */ + case NPY_ULONGLONG: + __pyx_v_f = ((char *)"Q"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":270 + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + */ + case NPY_FLOAT: + __pyx_v_f = ((char *)"f"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":271 + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + */ + case NPY_DOUBLE: + __pyx_v_f = ((char *)"d"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":272 + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + */ + case NPY_LONGDOUBLE: + __pyx_v_f = ((char *)"g"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":273 + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + */ + case NPY_CFLOAT: + __pyx_v_f = ((char *)"Zf"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":274 + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" + */ + case NPY_CDOUBLE: + __pyx_v_f = ((char *)"Zd"); + break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":275 + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f = "O" + * else: + */ + case NPY_CLONGDOUBLE: + __pyx_v_f = ((char *)"Zg"); + break; + + /* 
"C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":276 + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + case NPY_OBJECT: + __pyx_v_f = ((char *)"O"); + break; + default: + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":278 + * elif t == NPY_OBJECT: f = "O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * info.format = f + * return + */ + __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_6 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_6); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); + __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_6, 0, 0, 0); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __PYX_ERR(1, 278, __pyx_L1_error) + break; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":279 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f # <<<<<<<<<<<<<< + * return + * else: + */ + __pyx_v_info->format = __pyx_v_f; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":280 + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f + * return # <<<<<<<<<<<<<< + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + */ + __pyx_r = 0; + goto __pyx_L0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":255 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == c'>' and little_endian) or + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":282 + * return + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) # <<<<<<<<<<<<<< + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 + */ + /*else*/ { + __pyx_v_info->format = ((char *)malloc(0xFF)); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":283 + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = c'^' # Native data types, manual alignment # <<<<<<<<<<<<<< + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, + */ + (__pyx_v_info->format[0]) = '^'; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":284 + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 # <<<<<<<<<<<<<< + * f = _util_dtypestring(descr, info.format + 1, + * info.format + _buffer_format_string_len, + */ + __pyx_v_offset = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":285 + * info.format[0] = c'^' # Native data types, manual alignment + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, # <<<<<<<<<<<<<< + * info.format + _buffer_format_string_len, + * 
&offset) + */ + __pyx_t_7 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_7 == NULL)) __PYX_ERR(1, 285, __pyx_L1_error) + __pyx_v_f = __pyx_t_7; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":288 + * info.format + _buffer_format_string_len, + * &offset) + * f[0] = c'\0' # Terminate format string # <<<<<<<<<<<<<< + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + */ + (__pyx_v_f[0]) = '\x00'; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":197 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_AddTraceback("numpy.ndarray.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + if (__pyx_v_info != NULL && __pyx_v_info->obj != NULL) { + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL; + } + goto __pyx_L2; + __pyx_L0:; + if (__pyx_v_info != NULL && __pyx_v_info->obj == Py_None) { + __Pyx_GOTREF(Py_None); + __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL; + } + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_descr); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":290 + * f[0] = c'\0' # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + +/* Python wrapper */ +static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/ +static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__releasebuffer__ (wrapper)", 0); + __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info)); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) { + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("__releasebuffer__", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":291 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0); + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":292 + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) # <<<<<<<<<<<<<< + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) + */ + free(__pyx_v_info->format); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":291 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * 
if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":293 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0); + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":294 + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) # <<<<<<<<<<<<<< + * # info.shape was stored after info.strides in the same block + * + */ + free(__pyx_v_info->strides); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":293 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":290 + * f[0] = c'\0' # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":770 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew1", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":771 + * + * cdef inline object PyArray_MultiIterNew1(a): + * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew2(a, b): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 771, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":770 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew1", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":773 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew2", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":774 + * + * cdef inline object PyArray_MultiIterNew2(a, b): + * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< + * + * cdef 
inline object PyArray_MultiIterNew3(a, b, c): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 774, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":773 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":776 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew3", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":777 + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 777, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":776 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew3", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":779 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew4", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":780 + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 780, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + 
/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":779 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew4", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":782 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew5", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":783 + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 783, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":782 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew5", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":785 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. The new location in the format string is returned. 
+ */ + +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) { + PyArray_Descr *__pyx_v_child = 0; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + PyObject *__pyx_v_fields = 0; + PyObject *__pyx_v_childname = NULL; + PyObject *__pyx_v_new_offset = NULL; + PyObject *__pyx_v_t = NULL; + char *__pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + Py_ssize_t __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + int __pyx_t_5; + int __pyx_t_6; + int __pyx_t_7; + long __pyx_t_8; + char *__pyx_t_9; + __Pyx_RefNannySetupContext("_util_dtypestring", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":790 + * + * cdef dtype child + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * cdef tuple fields + */ + __pyx_v_endian_detector = 1; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":791 + * cdef dtype child + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * cdef tuple fields + * + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":794 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + if (unlikely(__pyx_v_descr->names == Py_None)) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); + __PYX_ERR(1, 794, __pyx_L1_error) + } + __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; + for (;;) { + if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; + #if CYTHON_COMPILING_IN_CPYTHON + __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 794, __pyx_L1_error) + #else + __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 794, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + #endif + __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3); + __pyx_t_3 = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":795 + * + * for childname in descr.names: + * fields = descr.fields[childname] # <<<<<<<<<<<<<< + * child, new_offset = fields + * + */ + if (unlikely(__pyx_v_descr->fields == Py_None)) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); + __PYX_ERR(1, 795, __pyx_L1_error) + } + __pyx_t_3 = __Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 795, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(1, 795, __pyx_L1_error) + __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3)); + __pyx_t_3 = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":796 + * for childname in descr.names: + * fields = descr.fields[childname] + * child, new_offset = fields # <<<<<<<<<<<<<< + * + * if (end - f) - (new_offset - offset[0]) < 15: + */ + if (likely(__pyx_v_fields != Py_None)) { + PyObject* sequence = __pyx_v_fields; + #if CYTHON_COMPILING_IN_CPYTHON + Py_ssize_t size = Py_SIZE(sequence); + #else + Py_ssize_t size = PySequence_Size(sequence); + #endif + if 
(unlikely(size != 2)) { + if (size > 2) __Pyx_RaiseTooManyValuesError(2); + else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); + __PYX_ERR(1, 796, __pyx_L1_error) + } + #if CYTHON_COMPILING_IN_CPYTHON + __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); + __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); + __Pyx_INCREF(__pyx_t_3); + __Pyx_INCREF(__pyx_t_4); + #else + __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + #endif + } else { + __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 796, __pyx_L1_error) + } + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(1, 796, __pyx_L1_error) + __Pyx_XDECREF_SET(__pyx_v_child, ((PyArray_Descr *)__pyx_t_3)); + __pyx_t_3 = 0; + __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4); + __pyx_t_4 = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":798 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 798, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0); + if (__pyx_t_6) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":799 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + * + * if ((child.byteorder == c'>' and little_endian) or + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 799, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 799, __pyx_L1_error) + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":798 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0); + if (!__pyx_t_7) { + goto __pyx_L8_next_or; + } else { + } + __pyx_t_7 = (__pyx_v_little_endian != 0); + if (!__pyx_t_7) { + } else { + __pyx_t_6 = __pyx_t_7; + goto __pyx_L7_bool_binop_done; + } + __pyx_L8_next_or:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":802 + * + * if ((child.byteorder == c'>' and little_endian) or + * 
(child.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * # One could encode it in the format string and have Cython + */ + __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0); + if (__pyx_t_7) { + } else { + __pyx_t_6 = __pyx_t_7; + goto __pyx_L7_bool_binop_done; + } + __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0); + __pyx_t_6 = __pyx_t_7; + __pyx_L7_bool_binop_done:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + if (__pyx_t_6) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":803 + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 803, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 803, __pyx_L1_error) + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":801 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":813 + * + * # Output padding bytes + * while offset[0] < new_offset: # <<<<<<<<<<<<<< + * f[0] = 120 # "x"; pad byte + * f += 1 + */ + while (1) { + __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 813, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!__pyx_t_6) break; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":814 + * # Output padding bytes + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<< + * f += 1 + * offset[0] += 1 + */ + (__pyx_v_f[0]) = 0x78; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":815 + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte + * f += 1 # <<<<<<<<<<<<<< + * offset[0] += 1 + * + */ + __pyx_v_f = (__pyx_v_f + 1); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":816 + * f[0] = 120 # "x"; pad byte + * f += 1 + * offset[0] += 1 # <<<<<<<<<<<<<< + * + * offset[0] += child.itemsize + */ + __pyx_t_8 = 0; + (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1); + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":818 + * offset[0] += 1 + * + * offset[0] += child.itemsize # <<<<<<<<<<<<<< + * + * if 
not PyDataType_HASFIELDS(child): + */ + __pyx_t_8 = 0; + (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":820 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + __pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0); + if (__pyx_t_6) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":821 + * + * if not PyDataType_HASFIELDS(child): + * t = child.type_num # <<<<<<<<<<<<<< + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") + */ + __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 821, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4); + __pyx_t_4 = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":822 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0); + if (__pyx_t_6) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":823 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 823, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_Raise(__pyx_t_4, 0, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __PYX_ERR(1, 823, __pyx_L1_error) + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":822 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":826 + * + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 826, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 98; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":827 + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + 
__pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 827, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 66; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":828 + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 828, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x68; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":829 + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 829, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 72; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":830 + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 830, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x69; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":831 + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 831, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 831, __pyx_L1_error) + 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 73; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":832 + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 832, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x6C; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":833 + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 833, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 76; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":834 + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 834, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x71; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":835 + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 835, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 81; + 
goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":836 + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 836, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x66; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":837 + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 837, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x64; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":838 + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 838, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 838, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 838, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 0x67; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":839 + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 839, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 839, __pyx_L1_error) + 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x66; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":840 + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 840, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x64; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":841 + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + */ + __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 841, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 0x67; + __pyx_v_f = (__pyx_v_f + 1); + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":842 + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 842, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 79; + goto __pyx_L15; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":844 + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * f += 1 + * else: + */ + /*else*/ { + __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 844, __pyx_L1_error) + 
__Pyx_GOTREF(__pyx_t_4); + __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __PYX_ERR(1, 844, __pyx_L1_error) + } + __pyx_L15:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":845 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * f += 1 # <<<<<<<<<<<<<< + * else: + * # Cython ignores struct boundary information ("T{...}"), + */ + __pyx_v_f = (__pyx_v_f + 1); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":820 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + goto __pyx_L13; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":849 + * # Cython ignores struct boundary information ("T{...}"), + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<< + * return f + * + */ + /*else*/ { + __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == NULL)) __PYX_ERR(1, 849, __pyx_L1_error) + __pyx_v_f = __pyx_t_9; + } + __pyx_L13:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":794 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":850 + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) + * return f # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = __pyx_v_f; + goto __pyx_L0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":785 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. The new location in the format string is returned. 
+ */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("numpy._util_dtypestring", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XDECREF((PyObject *)__pyx_v_child); + __Pyx_XDECREF(__pyx_v_fields); + __Pyx_XDECREF(__pyx_v_childname); + __Pyx_XDECREF(__pyx_v_new_offset); + __Pyx_XDECREF(__pyx_v_t); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":966 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) { + PyObject *__pyx_v_baseptr; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + int __pyx_t_2; + __Pyx_RefNannySetupContext("set_array_base", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":968 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + __pyx_t_1 = (__pyx_v_base == Py_None); + __pyx_t_2 = (__pyx_t_1 != 0); + if (__pyx_t_2) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":969 + * cdef PyObject* baseptr + * if base is None: + * baseptr = NULL # <<<<<<<<<<<<<< + * else: + * Py_INCREF(base) # important to do this before decref below! + */ + __pyx_v_baseptr = NULL; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":968 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + goto __pyx_L3; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":971 + * baseptr = NULL + * else: + * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<< + * baseptr = base + * Py_XDECREF(arr.base) + */ + /*else*/ { + Py_INCREF(__pyx_v_base); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":972 + * else: + * Py_INCREF(base) # important to do this before decref below! + * baseptr = base # <<<<<<<<<<<<<< + * Py_XDECREF(arr.base) + * arr.base = baseptr + */ + __pyx_v_baseptr = ((PyObject *)__pyx_v_base); + } + __pyx_L3:; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":973 + * Py_INCREF(base) # important to do this before decref below! 
+ * baseptr = base + * Py_XDECREF(arr.base) # <<<<<<<<<<<<<< + * arr.base = baseptr + * + */ + Py_XDECREF(__pyx_v_arr->base); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":974 + * baseptr = base + * Py_XDECREF(arr.base) + * arr.base = baseptr # <<<<<<<<<<<<<< + * + * cdef inline object get_array_base(ndarray arr): + */ + __pyx_v_arr->base = __pyx_v_baseptr; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":966 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +/* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":976 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("get_array_base", 0); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":977 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + __pyx_t_1 = ((__pyx_v_arr->base == NULL) != 0); + if (__pyx_t_1) { + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":978 + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: + * return None # <<<<<<<<<<<<<< + * else: + * return arr.base + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(Py_None); + __pyx_r = Py_None; + goto __pyx_L0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":977 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":980 + * return None + * else: + * return arr.base # <<<<<<<<<<<<<< + */ + /*else*/ { + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_arr->base)); + __pyx_r = ((PyObject *)__pyx_v_arr->base); + goto __pyx_L0; + } + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":976 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + + /* function exit code */ + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyMethodDef __pyx_methods[] = { + {0, 0, 0, 0} +}; + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + #if PY_VERSION_HEX < 0x03020000 + { PyObject_HEAD_INIT(NULL) NULL, 0, NULL }, + #else + PyModuleDef_HEAD_INIT, + #endif + "gpu_nms", + 0, /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_s_D_v_zix_caffe_caffe_win_20160523, __pyx_k_D_v_zix_caffe_caffe_win_20160523, sizeof(__pyx_k_D_v_zix_caffe_caffe_win_20160523), 0, 0, 1, 0}, + {&__pyx_kp_u_Format_string_allocated_too_shor, __pyx_k_Format_string_allocated_too_shor, sizeof(__pyx_k_Format_string_allocated_too_shor), 0, 1, 0, 0}, + {&__pyx_kp_u_Format_string_allocated_too_shor_2, __pyx_k_Format_string_allocated_too_shor_2, sizeof(__pyx_k_Format_string_allocated_too_shor_2), 0, 1, 0, 0}, + 
{&__pyx_kp_u_Non_native_byte_order_not_suppor, __pyx_k_Non_native_byte_order_not_suppor, sizeof(__pyx_k_Non_native_byte_order_not_suppor), 0, 1, 0, 0}, + {&__pyx_n_s_RuntimeError, __pyx_k_RuntimeError, sizeof(__pyx_k_RuntimeError), 0, 0, 1, 1}, + {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, + {&__pyx_n_s_argsort, __pyx_k_argsort, sizeof(__pyx_k_argsort), 0, 0, 1, 1}, + {&__pyx_n_s_boxes_dim, __pyx_k_boxes_dim, sizeof(__pyx_k_boxes_dim), 0, 0, 1, 1}, + {&__pyx_n_s_boxes_num, __pyx_k_boxes_num, sizeof(__pyx_k_boxes_num), 0, 0, 1, 1}, + {&__pyx_n_s_dets, __pyx_k_dets, sizeof(__pyx_k_dets), 0, 0, 1, 1}, + {&__pyx_n_s_device_id, __pyx_k_device_id, sizeof(__pyx_k_device_id), 0, 0, 1, 1}, + {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1}, + {&__pyx_n_s_gpu_nms, __pyx_k_gpu_nms, sizeof(__pyx_k_gpu_nms), 0, 0, 1, 1}, + {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, + {&__pyx_n_s_int32, __pyx_k_int32, sizeof(__pyx_k_int32), 0, 0, 1, 1}, + {&__pyx_n_s_keep, __pyx_k_keep, sizeof(__pyx_k_keep), 0, 0, 1, 1}, + {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, + {&__pyx_kp_u_ndarray_is_not_C_contiguous, __pyx_k_ndarray_is_not_C_contiguous, sizeof(__pyx_k_ndarray_is_not_C_contiguous), 0, 1, 0, 0}, + {&__pyx_kp_u_ndarray_is_not_Fortran_contiguou, __pyx_k_ndarray_is_not_Fortran_contiguou, sizeof(__pyx_k_ndarray_is_not_Fortran_contiguou), 0, 1, 0, 0}, + {&__pyx_n_s_nms_gpu_nms, __pyx_k_nms_gpu_nms, sizeof(__pyx_k_nms_gpu_nms), 0, 0, 1, 1}, + {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, + {&__pyx_n_s_num_out, __pyx_k_num_out, sizeof(__pyx_k_num_out), 0, 0, 1, 1}, + {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1}, + {&__pyx_n_s_order, __pyx_k_order, sizeof(__pyx_k_order), 0, 0, 1, 1}, + {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, + {&__pyx_n_s_scores, __pyx_k_scores, sizeof(__pyx_k_scores), 0, 0, 1, 1}, + {&__pyx_n_s_sorted_dets, __pyx_k_sorted_dets, sizeof(__pyx_k_sorted_dets), 0, 0, 1, 1}, + {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, + {&__pyx_n_s_thresh, __pyx_k_thresh, sizeof(__pyx_k_thresh), 0, 0, 1, 1}, + {&__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_k_unknown_dtype_code_in_numpy_pxd, sizeof(__pyx_k_unknown_dtype_code_in_numpy_pxd), 0, 1, 0, 0}, + {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 218, __pyx_L1_error) + __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(1, 231, __pyx_L1_error) + __pyx_builtin_RuntimeError = __Pyx_GetBuiltinName(__pyx_n_s_RuntimeError); if (!__pyx_builtin_RuntimeError) __PYX_ERR(1, 799, __pyx_L1_error) + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitCachedConstants(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); + + /* "nms/gpu_nms.pyx":24 + * keep = np.zeros(boxes_num, dtype=np.int32) + * cdef np.ndarray[np.float32_t, ndim=1] \ + * scores = dets[:, 4] # <<<<<<<<<<<<<< + * #cdef np.ndarray[np.int_t, ndim=1] \ // 20160601, by xzn + * # order = scores.argsort()[::-1] + */ + __pyx_slice_ = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice_)) __PYX_ERR(0, 24, __pyx_L1_error) + __Pyx_GOTREF(__pyx_slice_); + __Pyx_GIVEREF(__pyx_slice_); + 
__pyx_tuple__2 = PyTuple_Pack(2, __pyx_slice_, __pyx_int_4); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(0, 24, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__2); + __Pyx_GIVEREF(__pyx_tuple__2); + + /* "nms/gpu_nms.pyx":28 + * # order = scores.argsort()[::-1] + * cdef np.ndarray[np.intp_t, ndim=1] \ + * order = scores.argsort()[::-1] # <<<<<<<<<<<<<< + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] + */ + __pyx_slice__3 = PySlice_New(Py_None, Py_None, __pyx_int_neg_1); if (unlikely(!__pyx_slice__3)) __PYX_ERR(0, 28, __pyx_L1_error) + __Pyx_GOTREF(__pyx_slice__3); + __Pyx_GIVEREF(__pyx_slice__3); + + /* "nms/gpu_nms.pyx":30 + * order = scores.argsort()[::-1] + * cdef np.ndarray[np.float32_t, ndim=2] \ + * sorted_dets = dets[order, :] # <<<<<<<<<<<<<< + * _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + * keep = keep[:num_out] + */ + __pyx_slice__4 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__4)) __PYX_ERR(0, 30, __pyx_L1_error) + __Pyx_GOTREF(__pyx_slice__4); + __Pyx_GIVEREF(__pyx_slice__4); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":218 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_C_contiguous); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 218, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__5); + __Pyx_GIVEREF(__pyx_tuple__5); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":222 + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_Fortran_contiguou); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 222, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__6); + __Pyx_GIVEREF(__pyx_tuple__6); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":259 + * if ((descr.byteorder == c'>' and little_endian) or + * (descr.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 259, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__7); + __Pyx_GIVEREF(__pyx_tuple__7); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":799 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + * + * if ((child.byteorder == c'>' and little_endian) or + */ + __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 799, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__8); + __Pyx_GIVEREF(__pyx_tuple__8); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":803 + * if ((child.byteorder == c'>' and little_endian) or + * (child.byteorder == c'<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format 
string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 803, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__9); + __Pyx_GIVEREF(__pyx_tuple__9); + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":823 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor_2); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 823, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__10); + __Pyx_GIVEREF(__pyx_tuple__10); + + /* "nms/gpu_nms.pyx":16 + * void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + __pyx_tuple__11 = PyTuple_Pack(10, __pyx_n_s_dets, __pyx_n_s_thresh, __pyx_n_s_device_id, __pyx_n_s_boxes_num, __pyx_n_s_boxes_dim, __pyx_n_s_num_out, __pyx_n_s_keep, __pyx_n_s_scores, __pyx_n_s_order, __pyx_n_s_sorted_dets); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(0, 16, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__11); + __Pyx_GIVEREF(__pyx_tuple__11); + __pyx_codeobj__12 = (PyObject*)__Pyx_PyCode_New(3, 0, 10, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__11, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_D_v_zix_caffe_caffe_win_20160523, __pyx_n_s_gpu_nms, 16, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__12)) __PYX_ERR(0, 16, __pyx_L1_error) + __Pyx_RefNannyFinishContext(); + return 0; + __pyx_L1_error:; + __Pyx_RefNannyFinishContext(); + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + __pyx_int_4 = PyInt_FromLong(4); if (unlikely(!__pyx_int_4)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initgpu_nms(void); /*proto*/ +PyMODINIT_FUNC initgpu_nms(void) +#else +PyMODINIT_FUNC PyInit_gpu_nms(void); /*proto*/ +PyMODINIT_FUNC PyInit_gpu_nms(void) +#endif +{ + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannyDeclarations + #if CYTHON_REFNANNY + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + #endif + __Pyx_RefNannySetupContext("PyMODINIT_FUNC PyInit_gpu_nms(void)", 0); + if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) + #ifdef __Pyx_CyFunction_USED + if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_FusedFunction_USED + if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Coroutine_USED + if 
(__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Generator_USED + if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_StopAsyncIteration_USED + if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? */ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4("gpu_nms", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) + Py_INCREF(__pyx_d); + __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) + #if CYTHON_COMPILING_IN_PYPY + Py_INCREF(__pyx_b); + #endif + if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + /*--- Initialize various global constants etc. ---*/ + if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) + if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + if (__pyx_module_is_main_nms__gpu_nms) { + if (PyObject_SetAttrString(__pyx_m, "__name__", __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + } + #if PY_MAJOR_VERSION >= 3 + { + PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) + if (!PyDict_GetItemString(modules, "nms.gpu_nms")) { + if (unlikely(PyDict_SetItemString(modules, "nms.gpu_nms", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) + } + } + #endif + /*--- Builtin init code ---*/ + if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + /*--- Constants init code ---*/ + if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + /*--- Global init code ---*/ + /*--- Variable export code ---*/ + /*--- Function export code ---*/ + /*--- Type init code ---*/ + /*--- Type import code ---*/ + __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, "type", + #if CYTHON_COMPILING_IN_PYPY + sizeof(PyTypeObject), + #else + sizeof(PyHeapTypeObject), + #endif + 0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error) + __pyx_ptype_5numpy_dtype = __Pyx_ImportType("numpy", "dtype", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) __PYX_ERR(1, 155, __pyx_L1_error) + __pyx_ptype_5numpy_flatiter = __Pyx_ImportType("numpy", "flatiter", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) __PYX_ERR(1, 168, __pyx_L1_error) + __pyx_ptype_5numpy_broadcast = __Pyx_ImportType("numpy", "broadcast", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) __PYX_ERR(1, 172, __pyx_L1_error) + __pyx_ptype_5numpy_ndarray = __Pyx_ImportType("numpy", "ndarray", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) __PYX_ERR(1, 181, __pyx_L1_error) + __pyx_ptype_5numpy_ufunc = __Pyx_ImportType("numpy", "ufunc", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) __PYX_ERR(1, 861, __pyx_L1_error) + /*--- 
Variable import code ---*/ + /*--- Function import code ---*/ + /*--- Execution code ---*/ + #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) + if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + + /* "nms/gpu_nms.pyx":8 + * # -------------------------------------------------------- + * + * import numpy as np # <<<<<<<<<<<<<< + * cimport numpy as np + * + */ + __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 8, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 8, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "nms/gpu_nms.pyx":11 + * cimport numpy as np + * + * assert sizeof(int) == sizeof(np.int32_t) # <<<<<<<<<<<<<< + * + * cdef extern from "gpu_nms.hpp": + */ + #ifndef CYTHON_WITHOUT_ASSERTIONS + if (unlikely(!Py_OptimizeFlag)) { + if (unlikely(!(((sizeof(int)) == (sizeof(__pyx_t_5numpy_int32_t))) != 0))) { + PyErr_SetNone(PyExc_AssertionError); + __PYX_ERR(0, 11, __pyx_L1_error) + } + } + #endif + + /* "nms/gpu_nms.pyx":16 + * void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + * + * def gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, # <<<<<<<<<<<<<< + * np.int32_t device_id=0): + * cdef int boxes_num = dets.shape[0] + */ + __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_3nms_7gpu_nms_1gpu_nms, NULL, __pyx_n_s_nms_gpu_nms); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 16, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_gpu_nms, __pyx_t_1) < 0) __PYX_ERR(0, 16, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "nms/gpu_nms.pyx":1 + * # -------------------------------------------------------- # <<<<<<<<<<<<<< + * # Faster R-CNN + * # Copyright (c) 2015 Microsoft + */ + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "C:/Anaconda2/lib/site-packages/Cython/Includes/numpy/__init__.pxd":976 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + + /*--- Wrapped vars code ---*/ + + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + if (__pyx_m) { + if (__pyx_d) { + __Pyx_AddTraceback("init nms.gpu_nms", __pyx_clineno, __pyx_lineno, __pyx_filename); + } + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init nms.gpu_nms"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +/* --- Runtime support code --- */ +/* Refnanny */ +#if CYTHON_REFNANNY +static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); +end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; +} +#endif + +/* RaiseArgTupleInvalid */ +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *more_or_less; + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at 
least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + PyErr_Format(PyExc_TypeError, + "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", + func_name, more_or_less, num_expected, + (num_expected == 1) ? "" : "s", num_found); +} + +/* RaiseDoubleKeywords */ +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword argument '%U'", func_name, kw_name); + #else + "%s() got multiple values for keyword argument '%s'", func_name, + PyString_AsString(kw_name)); + #endif +} + +/* ParseKeywords */ +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + continue; + } + name = first_kw_arg; + #if PY_MAJOR_VERSION < 3 + if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) { + while (*name) { + if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) + && _PyString_Eq(**name, key)) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + if ((**argname == key) || ( + (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) + && _PyString_Eq(**argname, key))) { + goto arg_passed_twice; + } + argname++; + } + } + } else + #endif + if (likely(PyUnicode_Check(key))) { + while (*name) { + int cmp = (**name == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : + #endif + PyUnicode_Compare(**name, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + int cmp = (**argname == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 : + #endif + PyUnicode_Compare(**argname, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) goto arg_passed_twice; + argname++; + } + } + } else + goto invalid_keyword_type; + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, key); + goto bad; +invalid_keyword_type: + PyErr_Format(PyExc_TypeError, + "%.200s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%.200s() got an unexpected keyword argument '%.200s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +/* ArgTypeTest */ +static void __Pyx_RaiseArgumentTypeInvalid(const char* name, PyObject *obj, PyTypeObject *type) { + PyErr_Format(PyExc_TypeError, + "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", + name, type->tp_name, Py_TYPE(obj)->tp_name); +} +static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact) +{ + if (unlikely(!type)) { + PyErr_SetString(PyExc_SystemError, "Missing type object"); + return 0; + } + if (none_allowed && obj == Py_None) return 1; + else if (exact) { + if (likely(Py_TYPE(obj) == type)) return 1; + #if PY_MAJOR_VERSION == 2 + else if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; + #endif + } + else { + if (likely(PyObject_TypeCheck(obj, type))) return 1; + } + __Pyx_RaiseArgumentTypeInvalid(name, obj, type); + return 0; +} + +/* BufferFormatCheck */ +static CYTHON_INLINE int __Pyx_IsLittleEndian(void) { + unsigned int n = 1; + return *(unsigned char*)(&n) != 0; +} +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type) { + stack[0].field = &ctx->root; + stack[0].parent_offset = 0; + ctx->root.type = type; + ctx->root.name = "buffer dtype"; + ctx->root.offset = 0; + ctx->head = stack; + ctx->head->field = &ctx->root; + ctx->fmt_offset = 0; + ctx->head->parent_offset = 0; + ctx->new_packmode = '@'; + ctx->enc_packmode = '@'; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->is_complex = 0; + ctx->is_valid_array = 0; + ctx->struct_alignment = 0; + while (type->typegroup == 'S') { + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = 0; + type = type->fields->type; + } +} +static int __Pyx_BufFmt_ParseNumber(const char** ts) { + int count; + const char* t = *ts; + if (*t < '0' || *t > '9') { + return -1; + } else { + count = *t++ - '0'; + while (*t >= '0' && *t < '9') { + count *= 10; + count += *t++ - '0'; + } + } + *ts = t; + return count; +} +static int __Pyx_BufFmt_ExpectNumber(const char **ts) { + int number = __Pyx_BufFmt_ParseNumber(ts); + if (number == -1) + PyErr_Format(PyExc_ValueError,\ + "Does not understand character buffer dtype format string ('%c')", **ts); + return number; +} +static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { + PyErr_Format(PyExc_ValueError, + "Unexpected format string character: '%c'", ch); +} +static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { + switch (ch) { + case 'c': return "'char'"; + case 'b': return "'signed char'"; + case 'B': return "'unsigned char'"; + case 'h': return "'short'"; + case 'H': return "'unsigned short'"; + case 'i': return 
"'int'"; + case 'I': return "'unsigned int'"; + case 'l': return "'long'"; + case 'L': return "'unsigned long'"; + case 'q': return "'long long'"; + case 'Q': return "'unsigned long long'"; + case 'f': return (is_complex ? "'complex float'" : "'float'"); + case 'd': return (is_complex ? "'complex double'" : "'double'"); + case 'g': return (is_complex ? "'complex long double'" : "'long double'"); + case 'T': return "a struct"; + case 'O': return "Python object"; + case 'P': return "a pointer"; + case 's': case 'p': return "a string"; + case 0: return "end"; + default: return "unparseable format string"; + } +} +static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return 2; + case 'i': case 'I': case 'l': case 'L': return 4; + case 'q': case 'Q': return 8; + case 'f': return (is_complex ? 8 : 4); + case 'd': return (is_complex ? 16 : 8); + case 'g': { + PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); + return 0; + } + case 'O': case 'P': return sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { + switch (ch) { + case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(short); + case 'i': case 'I': return sizeof(int); + case 'l': case 'L': return sizeof(long); + #ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(PY_LONG_LONG); + #endif + case 'f': return sizeof(float) * (is_complex ? 2 : 1); + case 'd': return sizeof(double) * (is_complex ? 2 : 1); + case 'g': return sizeof(long double) * (is_complex ? 2 : 1); + case 'O': case 'P': return sizeof(void*); + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +typedef struct { char c; short x; } __Pyx_st_short; +typedef struct { char c; int x; } __Pyx_st_int; +typedef struct { char c; long x; } __Pyx_st_long; +typedef struct { char c; float x; } __Pyx_st_float; +typedef struct { char c; double x; } __Pyx_st_double; +typedef struct { char c; long double x; } __Pyx_st_longdouble; +typedef struct { char c; void *x; } __Pyx_st_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_st_float) - sizeof(float); + case 'd': return sizeof(__Pyx_st_double) - sizeof(double); + case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +/* These are for computing the padding at the end of the struct to align + on the first member of the struct. This will probably the same as above, + but we don't have any guarantees. 
+ */ +typedef struct { short x; char c; } __Pyx_pad_short; +typedef struct { int x; char c; } __Pyx_pad_int; +typedef struct { long x; char c; } __Pyx_pad_long; +typedef struct { float x; char c; } __Pyx_pad_float; +typedef struct { double x; char c; } __Pyx_pad_double; +typedef struct { long double x; char c; } __Pyx_pad_longdouble; +typedef struct { void *x; char c; } __Pyx_pad_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); + case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); + case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { + switch (ch) { + case 'c': + return 'H'; + case 'b': case 'h': case 'i': + case 'l': case 'q': case 's': case 'p': + return 'I'; + case 'B': case 'H': case 'I': case 'L': case 'Q': + return 'U'; + case 'f': case 'd': case 'g': + return (is_complex ? 'C' : 'R'); + case 'O': + return 'O'; + case 'P': + return 'P'; + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { + if (ctx->head == NULL || ctx->head->field == &ctx->root) { + const char* expected; + const char* quote; + if (ctx->head == NULL) { + expected = "end"; + quote = ""; + } else { + expected = ctx->head->field->type->name; + quote = "'"; + } + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected %s%s%s but got %s", + quote, expected, quote, + __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); + } else { + __Pyx_StructField* field = ctx->head->field; + __Pyx_StructField* parent = (ctx->head - 1)->field; + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", + field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), + parent->type->name, field->name); + } +} +static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { + char group; + size_t size, offset, arraysize = 1; + if (ctx->enc_type == 0) return 0; + if (ctx->head->field->type->arraysize[0]) { + int i, ndim = 0; + if (ctx->enc_type == 's' || ctx->enc_type == 'p') { + ctx->is_valid_array = ctx->head->field->type->ndim == 1; + ndim = 1; + if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { + PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %zu", + ctx->head->field->type->arraysize[0], ctx->enc_count); + return -1; + } + } + if (!ctx->is_valid_array) { + PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", + ctx->head->field->type->ndim, ndim); + return -1; + } + for (i = 0; i < ctx->head->field->type->ndim; i++) { + arraysize *= ctx->head->field->type->arraysize[i]; + } + ctx->is_valid_array = 0; + ctx->enc_count = 1; + } + group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, 
ctx->is_complex); + do { + __Pyx_StructField* field = ctx->head->field; + __Pyx_TypeInfo* type = field->type; + if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { + size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); + } else { + size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); + } + if (ctx->enc_packmode == '@') { + size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); + size_t align_mod_offset; + if (align_at == 0) return -1; + align_mod_offset = ctx->fmt_offset % align_at; + if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; + if (ctx->struct_alignment == 0) + ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, + ctx->is_complex); + } + if (type->size != size || type->typegroup != group) { + if (type->typegroup == 'C' && type->fields != NULL) { + size_t parent_offset = ctx->head->parent_offset + field->offset; + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = parent_offset; + continue; + } + if ((type->typegroup == 'H' || group == 'H') && type->size == size) { + } else { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + } + offset = ctx->head->parent_offset + field->offset; + if (ctx->fmt_offset != offset) { + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", + (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); + return -1; + } + ctx->fmt_offset += size; + if (arraysize) + ctx->fmt_offset += (arraysize - 1) * size; + --ctx->enc_count; + while (1) { + if (field == &ctx->root) { + ctx->head = NULL; + if (ctx->enc_count != 0) { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + break; + } + ctx->head->field = ++field; + if (field->type == NULL) { + --ctx->head; + field = ctx->head->field; + continue; + } else if (field->type->typegroup == 'S') { + size_t parent_offset = ctx->head->parent_offset + field->offset; + if (field->type->fields->type == NULL) continue; + field = field->type->fields; + ++ctx->head; + ctx->head->field = field; + ctx->head->parent_offset = parent_offset; + break; + } else { + break; + } + } + } while (ctx->enc_count); + ctx->enc_type = 0; + ctx->is_complex = 0; + return 0; +} +static CYTHON_INLINE PyObject * +__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) +{ + const char *ts = *tsp; + int i = 0, number; + int ndim = ctx->head->field->type->ndim; +; + ++ts; + if (ctx->new_count != 1) { + PyErr_SetString(PyExc_ValueError, + "Cannot handle repeated arrays in format string"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + while (*ts && *ts != ')') { + switch (*ts) { + case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; + default: break; + } + number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) + return PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %d", + ctx->head->field->type->arraysize[i], number); + if (*ts != ',' && *ts != ')') + return PyErr_Format(PyExc_ValueError, + "Expected a comma in format string, got '%c'", *ts); + if (*ts == ',') ts++; + i++; + } + if (i != ndim) + return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", + ctx->head->field->type->ndim, i); + if (!*ts) { + PyErr_SetString(PyExc_ValueError, + "Unexpected end of format string, expected ')'"); + return NULL; + } + 
ctx->is_valid_array = 1; + ctx->new_count = 1; + *tsp = ++ts; + return Py_None; +} +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { + int got_Z = 0; + while (1) { + switch(*ts) { + case 0: + if (ctx->enc_type != 0 && ctx->head == NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + if (ctx->head != NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + return ts; + case ' ': + case '\r': + case '\n': + ++ts; + break; + case '<': + if (!__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '>': + case '!': + if (__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '=': + case '@': + case '^': + ctx->new_packmode = *ts++; + break; + case 'T': + { + const char* ts_after_sub; + size_t i, struct_count = ctx->new_count; + size_t struct_alignment = ctx->struct_alignment; + ctx->new_count = 1; + ++ts; + if (*ts != '{') { + PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + ctx->enc_count = 0; + ctx->struct_alignment = 0; + ++ts; + ts_after_sub = ts; + for (i = 0; i != struct_count; ++i) { + ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); + if (!ts_after_sub) return NULL; + } + ts = ts_after_sub; + if (struct_alignment) ctx->struct_alignment = struct_alignment; + } + break; + case '}': + { + size_t alignment = ctx->struct_alignment; + ++ts; + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + if (alignment && ctx->fmt_offset % alignment) { + ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); + } + } + return ts; + case 'x': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->fmt_offset += ctx->new_count; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->enc_packmode = ctx->new_packmode; + ++ts; + break; + case 'Z': + got_Z = 1; + ++ts; + if (*ts != 'f' && *ts != 'd' && *ts != 'g') { + __Pyx_BufFmt_RaiseUnexpectedChar('Z'); + return NULL; + } + case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': + case 'l': case 'L': case 'q': case 'Q': + case 'f': case 'd': case 'g': + case 'O': case 'p': + if (ctx->enc_type == *ts && got_Z == ctx->is_complex && + ctx->enc_packmode == ctx->new_packmode) { + ctx->enc_count += ctx->new_count; + ctx->new_count = 1; + got_Z = 0; + ++ts; + break; + } + case 's': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_count = ctx->new_count; + ctx->enc_packmode = ctx->new_packmode; + ctx->enc_type = *ts; + ctx->is_complex = got_Z; + ++ts; + ctx->new_count = 1; + got_Z = 0; + break; + case ':': + ++ts; + while(*ts != ':') ++ts; + ++ts; + break; + case '(': + if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; + break; + default: + { + int number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + ctx->new_count = (size_t)number; + } + } + } +} +static CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) { + buf->buf = NULL; + buf->obj = NULL; + buf->strides = __Pyx_zeros; + buf->shape = __Pyx_zeros; + buf->suboffsets = __Pyx_minusones; +} +static CYTHON_INLINE int __Pyx_GetBufferAndValidate( + Py_buffer* buf, 
PyObject* obj, __Pyx_TypeInfo* dtype, int flags, + int nd, int cast, __Pyx_BufFmt_StackElem* stack) +{ + if (obj == Py_None || obj == NULL) { + __Pyx_ZeroBuffer(buf); + return 0; + } + buf->buf = NULL; + if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail; + if (buf->ndim != nd) { + PyErr_Format(PyExc_ValueError, + "Buffer has wrong number of dimensions (expected %d, got %d)", + nd, buf->ndim); + goto fail; + } + if (!cast) { + __Pyx_BufFmt_Context ctx; + __Pyx_BufFmt_Init(&ctx, stack, dtype); + if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail; + } + if ((unsigned)buf->itemsize != dtype->size) { + PyErr_Format(PyExc_ValueError, + "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "d byte%s) does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "d byte%s)", + buf->itemsize, (buf->itemsize > 1) ? "s" : "", + dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? "s" : ""); + goto fail; + } + if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones; + return 0; +fail:; + __Pyx_ZeroBuffer(buf); + return -1; +} +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) { + if (info->buf == NULL) return; + if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL; + __Pyx_ReleaseBuffer(info); +} + +/* GetBuiltinName */ + static PyObject *__Pyx_GetBuiltinName(PyObject *name) { + PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); + if (unlikely(!result)) { + PyErr_Format(PyExc_NameError, +#if PY_MAJOR_VERSION >= 3 + "name '%U' is not defined", name); +#else + "name '%.200s' is not defined", PyString_AS_STRING(name)); +#endif + } + return result; +} + +/* GetModuleGlobalName */ + static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) { + PyObject *result; +#if CYTHON_COMPILING_IN_CPYTHON + result = PyDict_GetItem(__pyx_d, name); + if (likely(result)) { + Py_INCREF(result); + } else { +#else + result = PyObject_GetItem(__pyx_d, name); + if (!result) { + PyErr_Clear(); +#endif + result = __Pyx_GetBuiltinName(name); + } + return result; +} + +/* PyObjectCall */ + #if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { + PyObject *result; + ternaryfunc call = func->ob_type->tp_call; + if (unlikely(!call)) + return PyObject_Call(func, arg, kw); + if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) + return NULL; + result = (*call)(func, arg, kw); + Py_LeaveRecursiveCall(); + if (unlikely(!result) && unlikely(!PyErr_Occurred())) { + PyErr_SetString( + PyExc_SystemError, + "NULL result without error in PyObject_Call"); + } + return result; +} +#endif + +/* ExtTypeTest */ + static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { + if (unlikely(!type)) { + PyErr_SetString(PyExc_SystemError, "Missing type object"); + return 0; + } + if (likely(PyObject_TypeCheck(obj, type))) + return 1; + PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", + Py_TYPE(obj)->tp_name, type->tp_name); + return 0; +} + +/* PyObjectCallMethO */ + #if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { + PyObject *self, *result; + PyCFunction cfunc; + cfunc = PyCFunction_GET_FUNCTION(func); + self = PyCFunction_GET_SELF(func); + if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) + return NULL; + result = cfunc(self, arg); + Py_LeaveRecursiveCall(); + if (unlikely(!result) && unlikely(!PyErr_Occurred())) { + PyErr_SetString( + PyExc_SystemError, + 
"NULL result without error in PyObject_Call"); + } + return result; +} +#endif + +/* PyObjectCallOneArg */ + #if CYTHON_COMPILING_IN_CPYTHON +static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { + PyObject *result; + PyObject *args = PyTuple_New(1); + if (unlikely(!args)) return NULL; + Py_INCREF(arg); + PyTuple_SET_ITEM(args, 0, arg); + result = __Pyx_PyObject_Call(func, args, NULL); + Py_DECREF(args); + return result; +} +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { +#ifdef __Pyx_CyFunction_USED + if (likely(PyCFunction_Check(func) || PyObject_TypeCheck(func, __pyx_CyFunctionType))) { +#else + if (likely(PyCFunction_Check(func))) { +#endif + if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { + return __Pyx_PyObject_CallMethO(func, arg); + } + } + return __Pyx__PyObject_CallOneArg(func, arg); +} +#else +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { + PyObject *result; + PyObject *args = PyTuple_Pack(1, arg); + if (unlikely(!args)) return NULL; + result = __Pyx_PyObject_Call(func, args, NULL); + Py_DECREF(args); + return result; +} +#endif + +/* PyObjectCallNoArg */ + #if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { +#ifdef __Pyx_CyFunction_USED + if (likely(PyCFunction_Check(func) || PyObject_TypeCheck(func, __pyx_CyFunctionType))) { +#else + if (likely(PyCFunction_Check(func))) { +#endif + if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { + return __Pyx_PyObject_CallMethO(func, NULL); + } + } + return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL); +} +#endif + +/* BufferIndexError */ + static void __Pyx_RaiseBufferIndexError(int axis) { + PyErr_Format(PyExc_IndexError, + "Out of bounds on buffer access (axis %d)", axis); +} + +/* SliceObject */ + static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice(PyObject* obj, + Py_ssize_t cstart, Py_ssize_t cstop, + PyObject** _py_start, PyObject** _py_stop, PyObject** _py_slice, + int has_cstart, int has_cstop, CYTHON_UNUSED int wraparound) { +#if CYTHON_COMPILING_IN_CPYTHON + PyMappingMethods* mp; +#if PY_MAJOR_VERSION < 3 + PySequenceMethods* ms = Py_TYPE(obj)->tp_as_sequence; + if (likely(ms && ms->sq_slice)) { + if (!has_cstart) { + if (_py_start && (*_py_start != Py_None)) { + cstart = __Pyx_PyIndex_AsSsize_t(*_py_start); + if ((cstart == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; + } else + cstart = 0; + } + if (!has_cstop) { + if (_py_stop && (*_py_stop != Py_None)) { + cstop = __Pyx_PyIndex_AsSsize_t(*_py_stop); + if ((cstop == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; + } else + cstop = PY_SSIZE_T_MAX; + } + if (wraparound && unlikely((cstart < 0) | (cstop < 0)) && likely(ms->sq_length)) { + Py_ssize_t l = ms->sq_length(obj); + if (likely(l >= 0)) { + if (cstop < 0) { + cstop += l; + if (cstop < 0) cstop = 0; + } + if (cstart < 0) { + cstart += l; + if (cstart < 0) cstart = 0; + } + } else { + if (!PyErr_ExceptionMatches(PyExc_OverflowError)) + goto bad; + PyErr_Clear(); + } + } + return ms->sq_slice(obj, cstart, cstop); + } +#endif + mp = Py_TYPE(obj)->tp_as_mapping; + if (likely(mp && mp->mp_subscript)) +#endif + { + PyObject* result; + PyObject *py_slice, *py_start, *py_stop; + if (_py_slice) { + py_slice = *_py_slice; + } else { + PyObject* owned_start = NULL; + PyObject* owned_stop = NULL; + if (_py_start) { + py_start = *_py_start; + } else { + if (has_cstart) { + owned_start = py_start = PyInt_FromSsize_t(cstart); + if (unlikely(!py_start)) 
goto bad; + } else + py_start = Py_None; + } + if (_py_stop) { + py_stop = *_py_stop; + } else { + if (has_cstop) { + owned_stop = py_stop = PyInt_FromSsize_t(cstop); + if (unlikely(!py_stop)) { + Py_XDECREF(owned_start); + goto bad; + } + } else + py_stop = Py_None; + } + py_slice = PySlice_New(py_start, py_stop, Py_None); + Py_XDECREF(owned_start); + Py_XDECREF(owned_stop); + if (unlikely(!py_slice)) goto bad; + } +#if CYTHON_COMPILING_IN_CPYTHON + result = mp->mp_subscript(obj, py_slice); +#else + result = PyObject_GetItem(obj, py_slice); +#endif + if (!_py_slice) { + Py_DECREF(py_slice); + } + return result; + } + PyErr_Format(PyExc_TypeError, + "'%.200s' object is unsliceable", Py_TYPE(obj)->tp_name); +bad: + return NULL; +} + +/* BufferFallbackError */ + static void __Pyx_RaiseBufferFallbackError(void) { + PyErr_SetString(PyExc_ValueError, + "Buffer acquisition failed on assignment; and then reacquiring the old buffer failed too!"); +} + +/* PyErrFetchRestore */ + #if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} +static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} +#endif + +/* RaiseException */ + #if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, + CYTHON_UNUSED PyObject *cause) { + __Pyx_PyThreadState_declare + Py_XINCREF(type); + if (!value || value == Py_None) + value = NULL; + else + Py_INCREF(value); + if (!tb || tb == Py_None) + tb = NULL; + else { + Py_INCREF(tb); + if (!PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + } + if (PyType_Check(type)) { +#if CYTHON_COMPILING_IN_PYPY + if (!value) { + Py_INCREF(Py_None); + value = Py_None; + } +#endif + PyErr_NormalizeException(&type, &value, &tb); + } else { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + value = type; + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + } + __Pyx_PyThreadState_assign + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} +#else +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { + PyObject* owned_instance = NULL; + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = 
type; + type = (PyObject*) Py_TYPE(value); + } else if (PyExceptionClass_Check(type)) { + PyObject *instance_class = NULL; + if (value && PyExceptionInstance_Check(value)) { + instance_class = (PyObject*) Py_TYPE(value); + if (instance_class != type) { + int is_subclass = PyObject_IsSubclass(instance_class, type); + if (!is_subclass) { + instance_class = NULL; + } else if (unlikely(is_subclass == -1)) { + goto bad; + } else { + type = instance_class; + } + } + } + if (!instance_class) { + PyObject *args; + if (!value) + args = PyTuple_New(0); + else if (PyTuple_Check(value)) { + Py_INCREF(value); + args = value; + } else + args = PyTuple_Pack(1, value); + if (!args) + goto bad; + owned_instance = PyObject_Call(type, args, NULL); + Py_DECREF(args); + if (!owned_instance) + goto bad; + value = owned_instance; + if (!PyExceptionInstance_Check(value)) { + PyErr_Format(PyExc_TypeError, + "calling %R should have returned an instance of " + "BaseException, not %R", + type, Py_TYPE(value)); + goto bad; + } + } + } else { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } +#if PY_VERSION_HEX >= 0x03030000 + if (cause) { +#else + if (cause && cause != Py_None) { +#endif + PyObject *fixed_cause; + if (cause == Py_None) { + fixed_cause = NULL; + } else if (PyExceptionClass_Check(cause)) { + fixed_cause = PyObject_CallObject(cause, NULL); + if (fixed_cause == NULL) + goto bad; + } else if (PyExceptionInstance_Check(cause)) { + fixed_cause = cause; + Py_INCREF(fixed_cause); + } else { + PyErr_SetString(PyExc_TypeError, + "exception causes must derive from " + "BaseException"); + goto bad; + } + PyException_SetCause(value, fixed_cause); + } + PyErr_SetObject(type, value); + if (tb) { +#if CYTHON_COMPILING_IN_PYPY + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); + Py_INCREF(tb); + PyErr_Restore(tmp_type, tmp_value, tb); + Py_XDECREF(tmp_tb); +#else + PyThreadState *tstate = PyThreadState_GET(); + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } +#endif + } +bad: + Py_XDECREF(owned_instance); + return; +} +#endif + +/* RaiseTooManyValuesToUnpack */ + static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { + PyErr_Format(PyExc_ValueError, + "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); +} + +/* RaiseNeedMoreValuesToUnpack */ + static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { + PyErr_Format(PyExc_ValueError, + "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", + index, (index == 1) ? 
"" : "s"); +} + +/* RaiseNoneIterError */ + static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); +} + +/* Import */ + static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_import; + py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); + if (!py_import) + goto bad; + #endif + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if (!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + { + #if PY_MAJOR_VERSION >= 3 + if (level == -1) { + if (strchr(__Pyx_MODULE_NAME, '.')) { + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_level = PyInt_FromLong(1); + if (!py_level) + goto bad; + module = PyObject_CallFunctionObjArgs(py_import, + name, global_dict, empty_dict, list, py_level, NULL); + Py_DECREF(py_level); + #else + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, 1); + #endif + if (!module) { + if (!PyErr_ExceptionMatches(PyExc_ImportError)) + goto bad; + PyErr_Clear(); + } + } + level = 0; + } + #endif + if (!module) { + #if PY_VERSION_HEX < 0x03030000 + PyObject *py_level = PyInt_FromLong(level); + if (!py_level) + goto bad; + module = PyObject_CallFunctionObjArgs(py_import, + name, global_dict, empty_dict, list, py_level, NULL); + Py_DECREF(py_level); + #else + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, level); + #endif + } + } +bad: + #if PY_VERSION_HEX < 0x03030000 + Py_XDECREF(py_import); + #endif + Py_XDECREF(empty_list); + Py_XDECREF(empty_dict); + return module; +} + +/* CodeObjectCache */ + static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { + int start = 0, mid = 0, end = count - 1; + if (end >= 0 && code_line > entries[end].code_line) { + return count; + } + while (start < end) { + mid = start + (end - start) / 2; + if (code_line < entries[mid].code_line) { + end = mid; + } else if (code_line > entries[mid].code_line) { + start = mid + 1; + } else { + return mid; + } + } + if (code_line <= entries[mid].code_line) { + return mid; + } else { + return mid + 1; + } +} +static PyCodeObject *__pyx_find_code_object(int code_line) { + PyCodeObject* code_object; + int pos; + if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { + return NULL; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { + return NULL; + } + code_object = __pyx_code_cache.entries[pos].code_object; + Py_INCREF(code_object); + return code_object; +} +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { + int pos, i; + __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; + if (unlikely(!code_line)) { + return; + } + if (unlikely(!entries)) { + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); + if (likely(entries)) { + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = 64; + __pyx_code_cache.count = 1; + entries[0].code_line = code_line; + entries[0].code_object = code_object; + 
Py_INCREF(code_object); + } + return; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { + PyCodeObject* tmp = entries[pos].code_object; + entries[pos].code_object = code_object; + Py_DECREF(tmp); + return; + } + if (__pyx_code_cache.count == __pyx_code_cache.max_count) { + int new_max = __pyx_code_cache.max_count + 64; + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( + __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry)); + if (unlikely(!entries)) { + return; + } + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = new_max; + } + for (i=__pyx_code_cache.count; i>pos; i--) { + entries[i] = entries[i-1]; + } + entries[pos].code_line = code_line; + entries[pos].code_object = code_object; + __pyx_code_cache.count++; + Py_INCREF(code_object); +} + +/* AddTraceback */ + #include "compile.h" +#include "frameobject.h" +#include "traceback.h" +static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( + const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(filename); + #else + py_srcfile = PyUnicode_FromString(filename); + #endif + if (!py_srcfile) goto bad; + if (c_line) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_code = __Pyx_PyCode_New( + 0, + 0, + 0, + 0, + 0, + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + py_line, + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + Py_DECREF(py_srcfile); + Py_DECREF(py_funcname); + return py_code; +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + return NULL; +} +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + py_code = __pyx_find_code_object(c_line ? c_line : py_line); + if (!py_code) { + py_code = __Pyx_CreateCodeObjectForTraceback( + funcname, c_line, py_line, filename); + if (!py_code) goto bad; + __pyx_insert_code_object(c_line ? 
c_line : py_line, py_code); + } + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + __pyx_d, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + py_frame->f_lineno = py_line; + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +#if PY_MAJOR_VERSION < 3 +static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { + if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); + if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pw_5numpy_7ndarray_1__getbuffer__(obj, view, flags); + PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); + return -1; +} +static void __Pyx_ReleaseBuffer(Py_buffer *view) { + PyObject *obj = view->obj; + if (!obj) return; + if (PyObject_CheckBuffer(obj)) { + PyBuffer_Release(view); + return; + } + if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) { __pyx_pw_5numpy_7ndarray_3__releasebuffer__(obj, view); return; } + Py_DECREF(obj); + view->obj = NULL; +} +#endif + + + /* CIntFromPyVerify */ + #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) +#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) +#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ + {\ + func_type value = func_value;\ + if (sizeof(target_type) < sizeof(func_type)) {\ + if (unlikely(value != (func_type) (target_type) value)) {\ + func_type zero = 0;\ + if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ + return (target_type) -1;\ + if (is_unsigned && unlikely(value < zero))\ + goto raise_neg_overflow;\ + else\ + goto raise_overflow;\ + }\ + }\ + return (target_type) value;\ + } + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { + const int neg_one = (int) -1, const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(int) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(int) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); + } + } else { + if (sizeof(int) <= sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(int), + little, !is_unsigned); + } +} + +/* None */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return ::std::complex< float >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return x + y*(__pyx_t_float_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + __pyx_t_float_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* None */ + #if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eqf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + return (a.real == 
b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sumf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_difff(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prodf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quotf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + float denom = b.real * b.real + b.imag * b.imag; + z.real = (a.real * b.real + a.imag * b.imag) / denom; + z.imag = (a.imag * b.real - a.real * b.imag) / denom; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_negf(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zerof(__pyx_t_float_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conjf(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } + #if 1 + static CYTHON_INLINE float __Pyx_c_absf(__pyx_t_float_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrtf(z.real*z.real + z.imag*z.imag); + #else + return hypotf(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_powf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + float r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + float denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + z = __Pyx_c_prodf(a, a); + return __Pyx_c_prodf(a, a); + case 3: + z = __Pyx_c_prodf(a, a); + return __Pyx_c_prodf(z, a); + case 4: + z = __Pyx_c_prodf(a, a); + return __Pyx_c_prodf(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } + r = a.real; + theta = 0; + } else { + r = __Pyx_c_absf(a); + theta = atan2f(a.imag, a.real); + } + lnr = logf(r); + z_r = expf(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cosf(z_theta); + z.imag = z_r * sinf(z_theta); + return z; + } + #endif +#endif + +/* None */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return ::std::complex< double >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return x + y*(__pyx_t_double_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + __pyx_t_double_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* None */ + #if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex a, __pyx_t_double_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_double_complex 
__Pyx_c_sum(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double denom = b.real * b.real + b.imag * b.imag; + z.real = (a.real * b.real + a.imag * b.imag) / denom; + z.imag = (a.imag * b.real - a.real * b.imag) / denom; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } + #if 1 + static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrt(z.real*z.real + z.imag*z.imag); + #else + return hypot(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + double denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + z = __Pyx_c_prod(a, a); + return __Pyx_c_prod(a, a); + case 3: + z = __Pyx_c_prod(a, a); + return __Pyx_c_prod(z, a); + case 4: + z = __Pyx_c_prod(a, a); + return __Pyx_c_prod(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } + r = a.real; + theta = 0; + } else { + r = __Pyx_c_abs(a); + theta = atan2(a.imag, a.real); + } + lnr = log(r); + z_r = exp(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cos(z_theta); + z.imag = z_r * sin(z_theta); + return z; + } + #endif +#endif + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value) { + const enum NPY_TYPES neg_one = (enum NPY_TYPES) -1, const_zero = (enum NPY_TYPES) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(enum NPY_TYPES) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); + } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); + } + } else { + if (sizeof(enum NPY_TYPES) <= sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(enum NPY_TYPES) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); + } + } + { + int one = 1; int little = (int)*(unsigned char 
*)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(enum NPY_TYPES), + little, !is_unsigned); + } +} + +/* CIntFromPy */ + static CYTHON_INLINE npy_int32 __Pyx_PyInt_As_npy_int32(PyObject *x) { + const npy_int32 neg_one = (npy_int32) -1, const_zero = (npy_int32) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(npy_int32) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (npy_int32) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (npy_int32) 0; + case 1: __PYX_VERIFY_RETURN_INT(npy_int32, digit, digits[0]) + case 2: + if (8 * sizeof(npy_int32) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 2 * PyLong_SHIFT) { + return (npy_int32) (((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(npy_int32) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 3 * PyLong_SHIFT) { + return (npy_int32) (((((((npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(npy_int32) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) >= 4 * PyLong_SHIFT) { + return (npy_int32) (((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (npy_int32) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(npy_int32) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, unsigned long, PyLong_AsUnsignedLong(x)) + } else if (sizeof(npy_int32) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (npy_int32) 0; + case -1: __PYX_VERIFY_RETURN_INT(npy_int32, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(npy_int32, digit, +digits[0]) + case -2: + if (8 * sizeof(npy_int32) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((unsigned long)digits[1]) << 
PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(npy_int32) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + return (npy_int32) ((((((npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(npy_int32) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((((npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(npy_int32) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + return (npy_int32) ((((((((npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(npy_int32) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 4 * PyLong_SHIFT) { + return (npy_int32) (((npy_int32)-1)*(((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(npy_int32) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(npy_int32, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(npy_int32) - 1 > 4 * PyLong_SHIFT) { + return (npy_int32) ((((((((((npy_int32)digits[3]) << PyLong_SHIFT) | (npy_int32)digits[2]) << PyLong_SHIFT) | (npy_int32)digits[1]) << PyLong_SHIFT) | (npy_int32)digits[0]))); + } + } + break; + } +#endif + if (sizeof(npy_int32) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, long, PyLong_AsLong(x)) + } else if (sizeof(npy_int32) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(npy_int32, PY_LONG_LONG, PyLong_AsLongLong(x)) + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + npy_int32 val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { 
+ int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (npy_int32) -1; + } + } else { + npy_int32 val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (npy_int32) -1; + val = __Pyx_PyInt_As_npy_int32(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to npy_int32"); + return (npy_int32) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to npy_int32"); + return (npy_int32) -1; +} + +/* CIntFromPy */ + static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { + const int neg_one = (int) -1, const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(int) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (int) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (int) 0; + case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { + return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { + return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { + return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (int) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(int) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 
0: return (int) 0; + case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) + case -2: + if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + } +#endif + if (sizeof(int) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + int val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = 
PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (int) -1; + } + } else { + int val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (int) -1; + val = __Pyx_PyInt_As_int(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to int"); + return (int) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to int"); + return (int) -1; +} + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { + const long neg_one = (long) -1, const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(long) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(long) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); + } + } else { + if (sizeof(long) <= sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(long), + little, !is_unsigned); + } +} + +/* CIntFromPy */ + static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { + const long neg_one = (long) -1, const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(long) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (long) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { + return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { + return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned 
long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { + return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (long) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(long) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) + case -2: + if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * 
PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + } +#endif + if (sizeof(long) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + long val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (long) -1; + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (long) -1; + val = __Pyx_PyInt_As_long(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to long"); + return (long) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long) -1; +} + +/* CheckBinaryVersion */ + static int __Pyx_check_binary_version(void) { + char ctversion[4], rtversion[4]; + PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); + PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); + if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { + char message[200]; + PyOS_snprintf(message, sizeof(message), + "compiletime version %s of module '%.100s' " + "does not match runtime version %s", + ctversion, __Pyx_MODULE_NAME, rtversion); + return PyErr_WarnEx(NULL, message, 1); + } + return 0; +} + +/* ModuleImport */ + #ifndef __PYX_HAVE_RT_ImportModule +#define __PYX_HAVE_RT_ImportModule +static PyObject *__Pyx_ImportModule(const char *name) { + PyObject *py_name = 0; + PyObject *py_module = 0; + py_name = __Pyx_PyIdentifier_FromString(name); + if (!py_name) + goto bad; + py_module = PyImport_Import(py_name); + Py_DECREF(py_name); + return py_module; +bad: + Py_XDECREF(py_name); + return 0; +} +#endif + +/* TypeImport */ + #ifndef __PYX_HAVE_RT_ImportType +#define __PYX_HAVE_RT_ImportType +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, + size_t size, int strict) +{ + PyObject *py_module = 0; + PyObject *result = 0; + PyObject *py_name = 0; + char warning[200]; + Py_ssize_t basicsize; +#ifdef Py_LIMITED_API + PyObject *py_basicsize; +#endif + py_module = __Pyx_ImportModule(module_name); + if (!py_module) + goto bad; + py_name = __Pyx_PyIdentifier_FromString(class_name); + if (!py_name) + goto bad; + result = PyObject_GetAttr(py_module, py_name); + Py_DECREF(py_name); + py_name = 0; + Py_DECREF(py_module); + py_module = 0; + if (!result) + goto bad; + if (!PyType_Check(result)) { + 
PyErr_Format(PyExc_TypeError, + "%.200s.%.200s is not a type object", + module_name, class_name); + goto bad; + } +#ifndef Py_LIMITED_API + basicsize = ((PyTypeObject *)result)->tp_basicsize; +#else + py_basicsize = PyObject_GetAttrString(result, "__basicsize__"); + if (!py_basicsize) + goto bad; + basicsize = PyLong_AsSsize_t(py_basicsize); + Py_DECREF(py_basicsize); + py_basicsize = 0; + if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred()) + goto bad; +#endif + if (!strict && (size_t)basicsize > size) { + PyOS_snprintf(warning, sizeof(warning), + "%s.%s size changed, may indicate binary incompatibility. Expected %zd, got %zd", + module_name, class_name, basicsize, size); + if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad; + } + else if ((size_t)basicsize != size) { + PyErr_Format(PyExc_ValueError, + "%.200s.%.200s has the wrong size, try recompiling. Expected %zd, got %zd", + module_name, class_name, basicsize, size); + goto bad; + } + return (PyTypeObject *)result; +bad: + Py_XDECREF(py_module); + Py_XDECREF(result); + return NULL; +} +#endif + +/* InitStrings */ + static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { + return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); +} +static CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject* o) { + Py_ssize_t ignore; + return __Pyx_PyObject_AsStringAndSize(o, &ignore); +} +static CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { +#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) + if ( +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + __Pyx_sys_getdefaultencoding_not_ascii && +#endif + PyUnicode_Check(o)) { +#if PY_VERSION_HEX < 0x03030000 + char* defenc_c; + PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); + if (!defenc) return NULL; + defenc_c = PyBytes_AS_STRING(defenc); +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + { + char* end = defenc_c + PyBytes_GET_SIZE(defenc); + char* c; + for (c = defenc_c; c < end; c++) { + if ((unsigned char) (*c) >= 128) { + PyUnicode_AsASCIIString(o); + return NULL; + } + } + } +#endif + *length = PyBytes_GET_SIZE(defenc); + return defenc_c; +#else + if (__Pyx_PyUnicode_READY(o) == -1) return NULL; +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + if (PyUnicode_IS_ASCII(o)) { + *length = PyUnicode_GET_LENGTH(o); + return PyUnicode_AsUTF8(o); + } else { + PyUnicode_AsASCIIString(o); + return NULL; + } +#else + return PyUnicode_AsUTF8AndSize(o, length); +#endif +#endif + } else +#endif +#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) + if (PyByteArray_Check(o)) { + *length = PyByteArray_GET_SIZE(o); + return PyByteArray_AS_STRING(o); + } else +#endif + { + char* result; + int r 
= PyBytes_AsStringAndSize(o, &result, length); + if (unlikely(r < 0)) { + return NULL; + } else { + return result; + } + } +} +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + int is_true = x == Py_True; + if (is_true | (x == Py_False) | (x == Py_None)) return is_true; + else return PyObject_IsTrue(x); +} +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { + PyNumberMethods *m; + const char *name = NULL; + PyObject *res = NULL; +#if PY_MAJOR_VERSION < 3 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if (PyLong_Check(x)) +#endif + return __Pyx_NewRef(x); + m = Py_TYPE(x)->tp_as_number; +#if PY_MAJOR_VERSION < 3 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } +#else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } +#endif + if (res) { +#if PY_MAJOR_VERSION < 3 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%.4s__ returned non-%.4s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject *x; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_CheckExact(b))) { + if (sizeof(Py_ssize_t) >= sizeof(long)) + return PyInt_AS_LONG(b); + else + return PyInt_AsSsize_t(x); + } +#endif + if (likely(PyLong_CheckExact(b))) { + #if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)b)->ob_digit; + const Py_ssize_t size = Py_SIZE(b); + if (likely(__Pyx_sst_abs(size) <= 1)) { + ival = likely(size) ? 
digits[0] : 0; + if (size == -1) ival = -ival; + return ival; + } else { + switch (size) { + case 2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + } + } + #endif + return PyLong_AsSsize_t(b); + } + x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { + return PyInt_FromSize_t(ival); +} + + +#endif /* Py_PYTHON_H */ + diff --git a/lib/nms/gpu_nms.hpp b/lib/nms/gpu_nms.hpp new file mode 100644 index 0000000..68b6d42 --- /dev/null +++ b/lib/nms/gpu_nms.hpp @@ -0,0 +1,2 @@ +void _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num, + int boxes_dim, float nms_overlap_thresh, int device_id); diff --git a/lib/nms/gpu_nms.pyx b/lib/nms/gpu_nms.pyx new file mode 100644 index 0000000..2b51bec --- /dev/null +++ b/lib/nms/gpu_nms.pyx @@ -0,0 +1,31 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2015 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) +# -------------------------------------------------------- + +import numpy as np +cimport numpy as np + +assert sizeof(int) == sizeof(np.int32_t) + +cdef extern from "gpu_nms.hpp": + void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int) + +def gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh, + np.int32_t device_id=0): + cdef int boxes_num = dets.shape[0] + cdef int boxes_dim = dets.shape[1] + cdef int num_out + cdef np.ndarray[np.int32_t, ndim=1] \ + keep = np.zeros(boxes_num, dtype=np.int32) + cdef np.ndarray[np.float32_t, ndim=1] \ + scores = dets[:, 4] + cdef np.ndarray[np.int32_t, ndim=1] \ + order = scores.argsort()[::-1].astype(np.int32) + cdef np.ndarray[np.float32_t, ndim=2] \ + sorted_dets = dets[order, :] + _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id) + keep = keep[:num_out] + return list(order[keep]) diff --git a/lib/nms/nms.py b/lib/nms/nms.py new file mode 100644 index 0000000..b91de1e --- /dev/null +++ b/lib/nms/nms.py @@ -0,0 +1,126 @@ +import numpy as np + +from cpu_nms import cpu_nms +from gpu_nms import gpu_nms + + +def py_nms_wrapper(thresh): + def _nms(dets): + return nms(dets, thresh) + return _nms + + +def py_softnms_wrapper(thresh, 
max_dets=-1): + def _nms(dets): + return soft_nms(dets, thresh, max_dets) + return _nms + + +def cpu_nms_wrapper(thresh): + def _nms(dets): + return cpu_nms(dets, thresh) + return _nms + + +def gpu_nms_wrapper(thresh, device_id): + def _nms(dets): + return gpu_nms(dets, thresh, device_id) + return _nms + +def nms(dets, thresh): + """ + greedily select boxes with high confidence and overlap with current maximum <= thresh + rule out overlap >= thresh + :param dets: [[x1, y1, x2, y2 score]] + :param thresh: retain overlap < thresh + :return: indexes to keep + """ + if dets.shape[0] == 0: + return [] + + x1 = dets[:, 0] + y1 = dets[:, 1] + x2 = dets[:, 2] + y2 = dets[:, 3] + scores = dets[:, 4] + + areas = (x2 - x1 + 1) * (y2 - y1 + 1) + order = scores.argsort()[::-1] + + keep = [] + while order.size > 0: + i = order[0] + keep.append(i) + xx1 = np.maximum(x1[i], x1[order[1:]]) + yy1 = np.maximum(y1[i], y1[order[1:]]) + xx2 = np.minimum(x2[i], x2[order[1:]]) + yy2 = np.minimum(y2[i], y2[order[1:]]) + + w = np.maximum(0.0, xx2 - xx1 + 1) + h = np.maximum(0.0, yy2 - yy1 + 1) + inter = w * h + ovr = inter / (areas[i] + areas[order[1:]] - inter) + + inds = np.where(ovr <= thresh)[0] + order = order[inds + 1] + + return keep + + +def rescore(overlap, scores, thresh, type='gaussian'): + assert overlap.shape[0] == scores.shape[0] + if type == 'linear': + inds = np.where(overlap >= thresh)[0] + scores[inds] = scores[inds] * (1 - overlap[inds]) + else: + scores = scores * np.exp(- overlap**2 / thresh) + + return scores + + +def soft_nms(dets, thresh, max_dets): + if dets.shape[0] == 0: + return np.zeros((0, 5)) + + x1 = dets[:, 0] + y1 = dets[:, 1] + x2 = dets[:, 2] + y2 = dets[:, 3] + scores = dets[:, 4] + + areas = (x2 - x1 + 1) * (y2 - y1 + 1) + order = scores.argsort()[::-1] + scores = scores[order] + + if max_dets == -1: + max_dets = order.size + + keep = np.zeros(max_dets, dtype=np.intp) + keep_cnt = 0 + + while order.size > 0 and keep_cnt < max_dets: + i = order[0] + dets[i, 4] = scores[0] + xx1 = np.maximum(x1[i], x1[order[1:]]) + yy1 = np.maximum(y1[i], y1[order[1:]]) + xx2 = np.minimum(x2[i], x2[order[1:]]) + yy2 = np.minimum(y2[i], y2[order[1:]]) + + w = np.maximum(0.0, xx2 - xx1 + 1) + h = np.maximum(0.0, yy2 - yy1 + 1) + inter = w * h + ovr = inter / (areas[i] + areas[order[1:]] - inter) + + order = order[1:] + scores = rescore(ovr, scores[1:], thresh) + + tmp = scores.argsort()[::-1] + order = order[tmp] + scores = scores[tmp] + + keep[keep_cnt] = i + keep_cnt += 1 + + keep = keep[:keep_cnt] + dets = dets[keep, :] + return dets diff --git a/lib/nms/nms_kernel.cu b/lib/nms/nms_kernel.cu new file mode 100644 index 0000000..e042efb --- /dev/null +++ b/lib/nms/nms_kernel.cu @@ -0,0 +1,144 @@ +// ------------------------------------------------------------------ +// Deformable Convolutional Networks +// Copyright (c) 2015 Microsoft +// Licensed under The MIT License +// Modified from MATLAB Faster R-CNN (https://github.com/shaoqingren/faster_rcnn) +// ------------------------------------------------------------------ + +#include "gpu_nms.hpp" +#include +#include + +#define CUDA_CHECK(condition) \ + /* Code block avoids redefinition of cudaError_t error */ \ + do { \ + cudaError_t error = condition; \ + if (error != cudaSuccess) { \ + std::cout << cudaGetErrorString(error) << std::endl; \ + } \ + } while (0) + +#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0)) +int const threadsPerBlock = sizeof(unsigned long long) * 8; + +__device__ inline float devIoU(float const * const a, float const * const 
b) {
+  float left = max(a[0], b[0]), right = min(a[2], b[2]);
+  float top = max(a[1], b[1]), bottom = min(a[3], b[3]);
+  float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);
+  float interS = width * height;
+  float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);
+  float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);
+  return interS / (Sa + Sb - interS);
+}
+
+__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,
+                           const float *dev_boxes, unsigned long long *dev_mask) {
+  const int row_start = blockIdx.y;
+  const int col_start = blockIdx.x;
+
+  // if (row_start > col_start) return;
+
+  const int row_size =
+        min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);
+  const int col_size =
+        min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);
+
+  __shared__ float block_boxes[threadsPerBlock * 5];
+  if (threadIdx.x < col_size) {
+    block_boxes[threadIdx.x * 5 + 0] =
+        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];
+    block_boxes[threadIdx.x * 5 + 1] =
+        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1];
+    block_boxes[threadIdx.x * 5 + 2] =
+        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];
+    block_boxes[threadIdx.x * 5 + 3] =
+        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];
+    block_boxes[threadIdx.x * 5 + 4] =
+        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];
+  }
+  __syncthreads();
+
+  if (threadIdx.x < row_size) {
+    const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;
+    const float *cur_box = dev_boxes + cur_box_idx * 5;
+    int i = 0;
+    unsigned long long t = 0;
+    int start = 0;
+    if (row_start == col_start) {
+      start = threadIdx.x + 1;
+    }
+    for (i = start; i < col_size; i++) {
+      if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {
+        t |= 1ULL << i;
+      }
+    }
+    const int col_blocks = DIVUP(n_boxes, threadsPerBlock);
+    dev_mask[cur_box_idx * col_blocks + col_start] = t;
+  }
+}
+
+void _set_device(int device_id) {
+  int current_device;
+  CUDA_CHECK(cudaGetDevice(&current_device));
+  if (current_device == device_id) {
+    return;
+  }
+  // The call to cudaSetDevice must come before any calls to Get, which
+  // may perform initialization using the GPU.
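+  // Switch to the requested GPU here; the buffers cudaMalloc'd in _nms() below are then allocated on that device.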
+  CUDA_CHECK(cudaSetDevice(device_id));
+}
+
+void _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num,
+          int boxes_dim, float nms_overlap_thresh, int device_id) {
+  _set_device(device_id);
+
+  float* boxes_dev = NULL;
+  unsigned long long* mask_dev = NULL;
+
+  const int col_blocks = DIVUP(boxes_num, threadsPerBlock);
+
+  CUDA_CHECK(cudaMalloc(&boxes_dev,
+                        boxes_num * boxes_dim * sizeof(float)));
+  CUDA_CHECK(cudaMemcpy(boxes_dev,
+                        boxes_host,
+                        boxes_num * boxes_dim * sizeof(float),
+                        cudaMemcpyHostToDevice));
+
+  CUDA_CHECK(cudaMalloc(&mask_dev,
+                        boxes_num * col_blocks * sizeof(unsigned long long)));
+
+  dim3 blocks(DIVUP(boxes_num, threadsPerBlock),
+              DIVUP(boxes_num, threadsPerBlock));
+  dim3 threads(threadsPerBlock);
+  nms_kernel<<<blocks, threads>>>(boxes_num,
+                                  nms_overlap_thresh,
+                                  boxes_dev,
+                                  mask_dev);
+
+  std::vector<unsigned long long> mask_host(boxes_num * col_blocks);
+  CUDA_CHECK(cudaMemcpy(&mask_host[0],
+                        mask_dev,
+                        sizeof(unsigned long long) * boxes_num * col_blocks,
+                        cudaMemcpyDeviceToHost));
+
+  std::vector<unsigned long long> remv(col_blocks);
+  memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);
+
+  int num_to_keep = 0;
+  for (int i = 0; i < boxes_num; i++) {
+    int nblock = i / threadsPerBlock;
+    int inblock = i % threadsPerBlock;
+
+    if (!(remv[nblock] & (1ULL << inblock))) {
+      keep_out[num_to_keep++] = i;
+      unsigned long long *p = &mask_host[0] + i * col_blocks;
+      for (int j = nblock; j < col_blocks; j++) {
+        remv[j] |= p[j];
+      }
+    }
+  }
+  *num_out = num_to_keep;
+
+  CUDA_CHECK(cudaFree(boxes_dev));
+  CUDA_CHECK(cudaFree(mask_dev));
+}
diff --git a/lib/nms/setup_linux.py b/lib/nms/setup_linux.py
new file mode 100644
index 0000000..880ea79
--- /dev/null
+++ b/lib/nms/setup_linux.py
@@ -0,0 +1,141 @@
+# --------------------------------------------------------
+# Deformable Convolutional Networks
+# Copyright (c) 2015 Microsoft
+# Licensed under The MIT License [see LICENSE for details]
+# Modified from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn)
+# --------------------------------------------------------
+
+import os
+from os.path import join as pjoin
+from setuptools import setup
+from distutils.extension import Extension
+from Cython.Distutils import build_ext
+import numpy as np
+
+
+def find_in_path(name, path):
+    "Find a file in a search path"
+    # Adapted from
+    # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/
+    for dir in path.split(os.pathsep):
+        binpath = pjoin(dir, name)
+        if os.path.exists(binpath):
+            return os.path.abspath(binpath)
+    return None
+
+
+def locate_cuda():
+    """Locate the CUDA environment on the system
+    Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'
+    and values giving the absolute path to each directory.
+    Starts by looking for the CUDAHOME env variable. If not found, everything
+    is based on finding 'nvcc' in the PATH.
+    """
+
+    # first check if the CUDAHOME env variable is in use
+    if 'CUDAHOME' in os.environ:
+        home = os.environ['CUDAHOME']
+        nvcc = pjoin(home, 'bin', 'nvcc')
+    else:
+        # otherwise, search the PATH for NVCC
+        default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin')
+        nvcc = find_in_path('nvcc', os.environ['PATH'] + os.pathsep + default_path)
+        if nvcc is None:
+            raise EnvironmentError('The nvcc binary could not be '
+                'located in your $PATH.
Either add it to your path, or set $CUDAHOME') + home = os.path.dirname(os.path.dirname(nvcc)) + + cudaconfig = {'home':home, 'nvcc':nvcc, + 'include': pjoin(home, 'include'), + 'lib64': pjoin(home, 'lib64')} + for k, v in cudaconfig.iteritems(): + if not os.path.exists(v): + raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v)) + + return cudaconfig +CUDA = locate_cuda() + + +# Obtain the numpy include directory. This logic works across numpy versions. +try: + numpy_include = np.get_include() +except AttributeError: + numpy_include = np.get_numpy_include() + + +def customize_compiler_for_nvcc(self): + """inject deep into distutils to customize how the dispatch + to gcc/nvcc works. + If you subclass UnixCCompiler, it's not trivial to get your subclass + injected in, and still have the right customizations (i.e. + distutils.sysconfig.customize_compiler) run on it. So instead of going + the OO route, I have this. Note, it's kindof like a wierd functional + subclassing going on.""" + + # tell the compiler it can processes .cu + self.src_extensions.append('.cu') + + # save references to the default compiler_so and _comple methods + default_compiler_so = self.compiler_so + super = self._compile + + # now redefine the _compile method. This gets executed for each + # object but distutils doesn't have the ability to change compilers + # based on source extension: we add it. + def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts): + if os.path.splitext(src)[1] == '.cu': + # use the cuda for .cu files + self.set_executable('compiler_so', CUDA['nvcc']) + # use only a subset of the extra_postargs, which are 1-1 translated + # from the extra_compile_args in the Extension class + postargs = extra_postargs['nvcc'] + else: + postargs = extra_postargs['gcc'] + + super(obj, src, ext, cc_args, postargs, pp_opts) + # reset the default compiler_so, which we might have changed for cuda + self.compiler_so = default_compiler_so + + # inject our redefined _compile method into the class + self._compile = _compile + + +# run the customize_compiler +class custom_build_ext(build_ext): + def build_extensions(self): + customize_compiler_for_nvcc(self.compiler) + build_ext.build_extensions(self) + + +ext_modules = [ + Extension( + "cpu_nms", + ["cpu_nms.pyx"], + extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]}, + include_dirs = [numpy_include] + ), + Extension('gpu_nms', + ['nms_kernel.cu', 'gpu_nms.pyx'], + library_dirs=[CUDA['lib64']], + libraries=['cudart'], + language='c++', + runtime_library_dirs=[CUDA['lib64']], + # this syntax is specific to this build system + # we're only going to use certain compiler args with nvcc and not with + # gcc the implementation of this trick is in customize_compiler() below + extra_compile_args={'gcc': ["-Wno-unused-function"], + 'nvcc': ['-arch=sm_52', + '--ptxas-options=-v', + '-c', + '--compiler-options', + "'-fPIC'"]}, + include_dirs = [numpy_include, CUDA['include']] + ), +] + +setup( + name='nms', + ext_modules=ext_modules, + # inject our custom trigger + cmdclass={'build_ext': custom_build_ext}, +) diff --git a/lib/nms/setup_windows.py b/lib/nms/setup_windows.py new file mode 100644 index 0000000..10c3b26 --- /dev/null +++ b/lib/nms/setup_windows.py @@ -0,0 +1,145 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2015 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified from py-faster-rcnn (https://github.com/rbgirshick/py-faster-rcnn) 
+# -------------------------------------------------------- + +import numpy as np +import os +from os.path import join as pjoin +#from distutils.core import setup +from setuptools import setup +from distutils.extension import Extension +from Cython.Distutils import build_ext +import subprocess + +#change for windows, by MrX +nvcc_bin = 'nvcc.exe' +lib_dir = 'lib/x64' + +import distutils.msvc9compiler +distutils.msvc9compiler.VERSION = 14.0 + + +def find_in_path(name, path): + "Find a file in a search path" + # Adapted fom + # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/ + for dir in path.split(os.pathsep): + binpath = pjoin(dir, name) + if os.path.exists(binpath): + return os.path.abspath(binpath) + return None + + +def locate_cuda(): + """Locate the CUDA environment on the system + + Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64' + and values giving the absolute path to each directory. + + Starts by looking for the CUDAHOME env variable. If not found, everything + is based on finding 'nvcc' in the PATH. + """ + + # first check if the CUDAHOME env variable is in use + if 'CUDA_PATH' in os.environ: + home = os.environ['CUDA_PATH'] + print("home = %s\n" % home) + nvcc = pjoin(home, 'bin', nvcc_bin) + else: + # otherwise, search the PATH for NVCC + default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin') + nvcc = find_in_path(nvcc_bin, os.environ['PATH'] + os.pathsep + default_path) + if nvcc is None: + raise EnvironmentError('The nvcc binary could not be ' + 'located in your $PATH. Either add it to your path, or set $CUDA_PATH') + home = os.path.dirname(os.path.dirname(nvcc)) + print("home = %s, nvcc = %s\n" % (home, nvcc)) + + + cudaconfig = {'home':home, 'nvcc':nvcc, + 'include': pjoin(home, 'include'), + 'lib64': pjoin(home, lib_dir)} + for k, v in cudaconfig.iteritems(): + if not os.path.exists(v): + raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v)) + + return cudaconfig +CUDA = locate_cuda() + + +# Obtain the numpy include directory. This logic works across numpy versions. +try: + numpy_include = np.get_include() +except AttributeError: + numpy_include = np.get_numpy_include() + + +def customize_compiler_for_nvcc(self): + """inject deep into distutils to customize how the dispatch + to gcc/nvcc works. + + If you subclass UnixCCompiler, it's not trivial to get your subclass + injected in, and still have the right customizations (i.e. + distutils.sysconfig.customize_compiler) run on it. So instead of going + the OO route, I have this. Note, it's kindof like a wierd functional + subclassing going on.""" + + # tell the compiler it can processes .cu + #self.src_extensions.append('.cu') + + + # save references to the default compiler_so and _comple methods + #default_compiler_so = self.spawn + #default_compiler_so = self.rc + super = self.compile + + # now redefine the _compile method. This gets executed for each + # object but distutils doesn't have the ability to change compilers + # based on source extension: we add it. 
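+    # The compile() wrapper defined below checks the extension of the first source file,
+    # picks the matching 'nvcc' or 'gcc' entry from the extra_compile_args dict of the
+    # Extension being built, and then delegates to the original compile method.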
+ def compile(sources, output_dir=None, macros=None, include_dirs=None, debug=0, extra_preargs=None, extra_postargs=None, depends=None): + postfix=os.path.splitext(sources[0])[1] + + if postfix == '.cu': + # use the cuda for .cu files + #self.set_executable('compiler_so', CUDA['nvcc']) + # use only a subset of the extra_postargs, which are 1-1 translated + # from the extra_compile_args in the Extension class + postargs = extra_postargs['nvcc'] + else: + postargs = extra_postargs['gcc'] + + + return super(sources, output_dir, macros, include_dirs, debug, extra_preargs, postargs, depends) + # reset the default compiler_so, which we might have changed for cuda + #self.rc = default_compiler_so + + # inject our redefined _compile method into the class + self.compile = compile + + +# run the customize_compiler +class custom_build_ext(build_ext): + def build_extensions(self): + customize_compiler_for_nvcc(self.compiler) + build_ext.build_extensions(self) + + +ext_modules = [ + # unix _compile: obj, src, ext, cc_args, extra_postargs, pp_opts + Extension( + "cpu_nms", + sources=["cpu_nms.pyx"], + extra_compile_args={'gcc': []}, + include_dirs = [numpy_include], + ), +] + +setup( + name='fast_rcnn', + ext_modules=ext_modules, + # inject our custom trigger + cmdclass={'build_ext': custom_build_ext}, +) diff --git a/lib/nms/setup_windows_cuda.py b/lib/nms/setup_windows_cuda.py new file mode 100644 index 0000000..7a45cf9 --- /dev/null +++ b/lib/nms/setup_windows_cuda.py @@ -0,0 +1,128 @@ +#!/usr/bin/env python + +import numpy as np +import os +# on Windows, we need the original PATH without Anaconda's compiler in it: +PATH = os.environ.get('PATH') + ';C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin' +from distutils.spawn import spawn, find_executable +from setuptools import setup, find_packages, Extension +from setuptools.command.build_ext import build_ext +import sys + +# CUDA specific config +# nvcc is assumed to be in user's PATH +nvcc_compile_args = ['-O', '--ptxas-options=-v', '-arch=compute_35', '-code=sm_35,sm_52,sm_61', '-c', '--compiler-options=-fPIC'] +nvcc_compile_args = os.environ.get('NVCCFLAGS', '').split() + nvcc_compile_args +cuda_libs = ['cublas'] +nvcc_bin = 'nvcc.exe' +lib_dir = 'lib/x64' + + +import distutils.msvc9compiler +distutils.msvc9compiler.VERSION = 14.0 + +# Obtain the numpy include directory. This logic works across numpy versions. +try: + numpy_include = np.get_include() +except AttributeError: + numpy_include = np.get_numpy_include() + + +cudamat_ext = Extension('gpu_nms', + sources=[ + 'gpu_nms.cu' + ], + language='c++', + libraries=cuda_libs, + extra_compile_args=nvcc_compile_args, + include_dirs = [numpy_include, 'C:\\Programming\\CUDA\\v8.0\\include']) + + +class CUDA_build_ext(build_ext): + """ + Custom build_ext command that compiles CUDA files. + Note that all extension source files will be processed with this compiler. + """ + def build_extensions(self): + self.compiler.src_extensions.append('.cu') + self.compiler.set_executable('compiler_so', 'nvcc') + self.compiler.set_executable('linker_so', 'nvcc --shared') + if hasattr(self.compiler, '_c_extensions'): + self.compiler._c_extensions.append('.cu') # needed for Windows + self.compiler.spawn = self.spawn + build_ext.build_extensions(self) + + def spawn(self, cmd, search_path=1, verbose=0, dry_run=0): + """ + Perform any CUDA specific customizations before actually launching + compile/link etc. commands. 
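+        On MSVC, the cl.exe / link.exe command assembled by distutils is rewritten below into an equivalent nvcc invocation before it is launched.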
+ """ + if (sys.platform == 'darwin' and len(cmd) >= 2 and cmd[0] == 'nvcc' and + cmd[1] == '--shared' and cmd.count('-arch') > 0): + # Versions of distutils on OSX earlier than 2.7.9 inject + # '-arch x86_64' which we need to strip while using nvcc for + # linking + while True: + try: + index = cmd.index('-arch') + del cmd[index:index+2] + except ValueError: + break + elif self.compiler.compiler_type == 'msvc': + # There are several things we need to do to change the commands + # issued by MSVCCompiler into one that works with nvcc. In the end, + # it might have been easier to write our own CCompiler class for + # nvcc, as we're only interested in creating a shared library to + # load with ctypes, not in creating an importable Python extension. + # - First, we replace the cl.exe or link.exe call with an nvcc + # call. In case we're running Anaconda, we search cl.exe in the + # original search path we captured further above -- Anaconda + # inserts a MSVC version into PATH that is too old for nvcc. + cmd[:1] = ['nvcc', '--compiler-bindir', + os.path.dirname(find_executable("cl.exe", PATH)) + or cmd[0]] + # - Secondly, we fix a bunch of command line arguments. + for idx, c in enumerate(cmd): + # create .dll instead of .pyd files + #if '.pyd' in c: cmd[idx] = c = c.replace('.pyd', '.dll') #20160601, by MrX + # replace /c by -c + if c == '/c': cmd[idx] = '-c' + # replace /DLL by --shared + elif c == '/DLL': cmd[idx] = '--shared' + # remove --compiler-options=-fPIC + elif '-fPIC' in c: del cmd[idx] + # replace /Tc... by ... + elif c.startswith('/Tc'): cmd[idx] = c[3:] + # replace /Fo... by -o ... + elif c.startswith('/Fo'): cmd[idx:idx+1] = ['-o', c[3:]] + # replace /LIBPATH:... by -L... + elif c.startswith('/LIBPATH:'): cmd[idx] = '-L' + c[9:] + # replace /OUT:... by -o ... + elif c.startswith('/OUT:'): cmd[idx:idx+1] = ['-o', c[5:]] + # remove /EXPORT:initlibcudamat or /EXPORT:initlibcudalearn + elif c.startswith('/EXPORT:'): del cmd[idx] + # replace cublas.lib by -lcublas + elif c == 'cublas.lib': cmd[idx] = '-lcublas' + # - Finally, we pass on all arguments starting with a '/' to the + # compiler or linker, and have nvcc handle all other arguments + if '--shared' in cmd: + pass_on = '--linker-options=' + # we only need MSVCRT for a .dll, remove CMT if it sneaks in: + cmd.append('/NODEFAULTLIB:libcmt.lib') + else: + pass_on = '--compiler-options=' + cmd = ([c for c in cmd if c[0] != '/'] + + [pass_on + ','.join(c for c in cmd if c[0] == '/')]) + # For the future: Apart from the wrongly set PATH by Anaconda, it + # would suffice to run the following for compilation on Windows: + # nvcc -c -O -o .obj .cu + # And the following for linking: + # nvcc --shared -o .dll .obj .obj -lcublas + # This could be done by a NVCCCompiler class for all platforms. 
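+        # Finally hand the (possibly rewritten) command line to the stock distutils spawn().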
+ spawn(cmd, search_path, verbose, dry_run) + +setup(name="py_fast_rcnn_gpu", + description="Performs linear algebra computation on the GPU via CUDA", + ext_modules=[cudamat_ext], + cmdclass={'build_ext': CUDA_build_ext}, +) diff --git a/lib/rpn/__init__.py b/lib/rpn/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/rpn/generate_anchor.py b/lib/rpn/generate_anchor.py new file mode 100644 index 0000000..09dde2f --- /dev/null +++ b/lib/rpn/generate_anchor.py @@ -0,0 +1,85 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +""" +Generate base anchors on index 0 +""" + +import numpy as np + + +def generate_anchors(base_size=16, ratios=[0.5, 1, 2], + scales=2 ** np.arange(3, 6)): + """ + Generate anchor (reference) windows by enumerating aspect ratios X + scales wrt a reference (0, 0, 15, 15) window. + """ + + base_anchor = np.array([1, 1, base_size, base_size]) - 1 + ratio_anchors = _ratio_enum(base_anchor, ratios) + anchors = np.vstack([_scale_enum(ratio_anchors[i, :], scales) + for i in xrange(ratio_anchors.shape[0])]) + return anchors + + +def _whctrs(anchor): + """ + Return width, height, x center, and y center for an anchor (window). + """ + + w = anchor[2] - anchor[0] + 1 + h = anchor[3] - anchor[1] + 1 + x_ctr = anchor[0] + 0.5 * (w - 1) + y_ctr = anchor[1] + 0.5 * (h - 1) + return w, h, x_ctr, y_ctr + + +def _mkanchors(ws, hs, x_ctr, y_ctr): + """ + Given a vector of widths (ws) and heights (hs) around a center + (x_ctr, y_ctr), output a set of anchors (windows). + """ + + ws = ws[:, np.newaxis] + hs = hs[:, np.newaxis] + anchors = np.hstack((x_ctr - 0.5 * (ws - 1), + y_ctr - 0.5 * (hs - 1), + x_ctr + 0.5 * (ws - 1), + y_ctr + 0.5 * (hs - 1))) + return anchors + + +def _ratio_enum(anchor, ratios): + """ + Enumerate a set of anchors for each aspect ratio wrt an anchor. + """ + + w, h, x_ctr, y_ctr = _whctrs(anchor) + size = w * h + size_ratios = size / ratios + ws = np.round(np.sqrt(size_ratios)) + hs = np.round(ws * ratios) + anchors = _mkanchors(ws, hs, x_ctr, y_ctr) + return anchors + + +def _scale_enum(anchor, scales): + """ + Enumerate a set of anchors for each scale wrt an anchor. 
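+    For example, starting from a 16x16 anchor, scales (8, 16, 32) produce anchors with side lengths 128, 256 and 512 around the same centre.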
+ """ + + w, h, x_ctr, y_ctr = _whctrs(anchor) + ws = w * scales + hs = h * scales + anchors = _mkanchors(ws, hs, x_ctr, y_ctr) + return anchors diff --git a/lib/rpn/rpn.py b/lib/rpn/rpn.py new file mode 100644 index 0000000..3d318ec --- /dev/null +++ b/lib/rpn/rpn.py @@ -0,0 +1,812 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Haozhi Qi +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- +""" +RPN: +data = + {'data': [num_images, c, h, w], + 'im_info': [num_images, 4] (optional)} +label = + {'gt_boxes': [num_boxes, 5] (optional), + 'label': [batch_size, 1] <- [batch_size, num_anchors, feat_height, feat_width], + 'bbox_target': [batch_size, num_anchors, feat_height, feat_width], + 'bbox_weight': [batch_size, num_anchors, feat_height, feat_width]} +""" + +import numpy as np +import numpy.random as npr + +from utils.image import get_image, tensor_vstack,get_test_image +from generate_anchor import generate_anchors +from bbox.bbox_transform import bbox_overlaps, bbox_transform, bbox_poly2hbb +import pdb + +def get_rpn_testbatch(roidb, cfg): + """ + return a dict of testbatch + :param roidb: ['image', 'flipped'] + :return: data, label, im_info + """ + # assert len(roidb) == 1, 'Single batch only' + imgs, roidb = get_test_image(roidb, cfg) + im_array = imgs + im_info = [np.array([roidb[i]['im_info']], dtype=np.float32) for i in range(len(roidb))] + + data = [{'data': im_array[i], + 'im_info': im_info[i]} for i in range(len(roidb))] + label = {} + + return data, label, im_info + + +def get_rpn_batch(roidb, cfg): + """ + prototype for rpn batch: data, im_info, gt_boxes + :param roidb: ['image', 'flipped'] + ['gt_boxes', 'boxes', 'gt_classes'] + :return: data, label + """ + assert len(roidb) == 1, 'Single batch only' + imgs, roidb = get_image(roidb, cfg) + im_array = imgs[0] + im_info = np.array([roidb[0]['im_info']], dtype=np.float32) + + # gt boxes: (x1, y1, x2, y2, cls) + if roidb[0]['gt_classes'].size > 0: + gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0] + gt_boxes = np.empty((roidb[0]['boxes'].shape[0], 5), dtype=np.float32) + gt_boxes[:, 0:4] = roidb[0]['boxes'][gt_inds, :] + gt_boxes[:, 4] = roidb[0]['gt_classes'][gt_inds] + else: + gt_boxes = np.empty((0, 5), dtype=np.float32) + + data = {'data': im_array, + 'im_info': im_info} + label = {'gt_boxes': gt_boxes} + + return data, label + +def get_rpn_batch_poly(roidb, cfg): + """ + prototype for rpn batch poly: data, im_info, gt_boxes + :param roidb: ['image'] + :param cfg: + :return: + """ + assert len(roidb) == 1, 'Single batch only' + imgs, roidb = get_image(roidb, cfg) + im_array = imgs[0] + im_info = np.array([roidb[0]['im_info']], dtype=np.float32) + + # gt boxes: (x1, y1, x2, y2, cls) + if roidb[0]['gt_classes'].size > 0: + gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0] + gt_boxes = np.empty((roidb[0]['boxes'].shape[0], 9), dtype=np.float32) + gt_boxes[:, 0:8] = roidb[0]['boxes'][gt_inds, :] + gt_boxes[:, 8] = roidb[0]['gt_classes'][gt_inds] + else: + gt_boxes = np.empty((0, 9), dtype=np.float32) + + data = {'data': im_array, + 'im_info': im_info} + label = {'gt_boxes': gt_boxes} + + return data, label + +def assign_anchor(feat_shape, gt_boxes, im_info, cfg, 
feat_stride=16, + scales=(8, 16, 32), ratios=(0.5, 1, 2), allowed_border=0): + """ + assign ground truth boxes to anchor positions + :param feat_shape: infer output shape + :param gt_boxes: assign ground truth + :param im_info: filter out anchors overlapped with edges + :param feat_stride: anchor position step + :param scales: used to generate anchors, affects num_anchors (per location) + :param ratios: aspect ratios of generated anchors + :param allowed_border: filter out anchors with edge overlap > allowed_border + :return: dict of label + 'label': of shape (batch_size, 1) <- (batch_size, num_anchors, feat_height, feat_width) + 'bbox_target': of shape (batch_size, num_anchors * 4, feat_height, feat_width) + 'bbox_inside_weight': *todo* mark the assigned anchors + 'bbox_outside_weight': used to normalize the bbox_loss, all weights sums to RPN_POSITIVE_WEIGHT + """ + def _unmap(data, count, inds, fill=0): + """" unmap a subset inds of data into original data of size count """ + if len(data.shape) == 1: + ret = np.empty((count,), dtype=np.float32) + ret.fill(fill) + ret[inds] = data + else: + ret = np.empty((count,) + data.shape[1:], dtype=np.float32) + ret.fill(fill) + ret[inds, :] = data + return ret + + DEBUG = False + im_info = im_info[0] + scales = np.array(scales, dtype=np.float32) + base_anchors = generate_anchors(base_size=feat_stride, ratios=list(ratios), scales=scales) + num_anchors = base_anchors.shape[0] + feat_height, feat_width = feat_shape[-2:] + + if DEBUG: + print 'anchors:' + print base_anchors + print 'anchor shapes:' + print np.hstack((base_anchors[:, 2::4] - base_anchors[:, 0::4], + base_anchors[:, 3::4] - base_anchors[:, 1::4])) + print 'im_info', im_info + print 'height', feat_height, 'width', feat_width + print 'gt_boxes shape', gt_boxes.shape + print 'gt_boxes', gt_boxes + + # 1. 
generate proposals from bbox deltas and shifted anchors + shift_x = np.arange(0, feat_width) * feat_stride + shift_y = np.arange(0, feat_height) * feat_stride + shift_x, shift_y = np.meshgrid(shift_x, shift_y) + shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose() + # add A anchors (1, A, 4) to + # cell K shifts (K, 1, 4) to get + # shift anchors (K, A, 4) + # reshape to (K*A, 4) shifted anchors + A = num_anchors + K = shifts.shape[0] + all_anchors = base_anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2)) + all_anchors = all_anchors.reshape((K * A, 4)) + total_anchors = int(K * A) + + # only keep anchors inside the image + inds_inside = np.where((all_anchors[:, 0] >= -allowed_border) & + (all_anchors[:, 1] >= -allowed_border) & + (all_anchors[:, 2] < im_info[1] + allowed_border) & + (all_anchors[:, 3] < im_info[0] + allowed_border))[0] + if DEBUG: + print 'total_anchors', total_anchors + print 'inds_inside', len(inds_inside) + + # keep only inside anchors + anchors = all_anchors[inds_inside, :] + if DEBUG: + print 'anchors shape', anchors.shape + + # label: 1 is positive, 0 is negative, -1 is dont care + labels = np.empty((len(inds_inside),), dtype=np.float32) + labels.fill(-1) + + if gt_boxes.size > 0: + # overlap between the anchors and the gt boxes + # overlaps (ex, gt) + overlaps = bbox_overlaps(anchors.astype(np.float), gt_boxes.astype(np.float)) + argmax_overlaps = overlaps.argmax(axis=1) + max_overlaps = overlaps[np.arange(len(inds_inside)), argmax_overlaps] + gt_argmax_overlaps = overlaps.argmax(axis=0) + gt_max_overlaps = overlaps[gt_argmax_overlaps, np.arange(overlaps.shape[1])] + gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0] + + if not cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels first so that positive labels can clobber them + labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + + # fg label: for each gt, anchor with highest overlap + labels[gt_argmax_overlaps] = 1 + + # fg label: above threshold IoU + labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1 + + if cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels last so that negative labels can clobber positives + labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + else: + labels[:] = 0 + + # subsample positive labels if we have too many + num_fg = int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCH_SIZE) + fg_inds = np.where(labels == 1)[0] + if len(fg_inds) > num_fg: + disable_inds = npr.choice(fg_inds, size=(len(fg_inds) - num_fg), replace=False) + if DEBUG: + disable_inds = fg_inds[:(len(fg_inds) - num_fg)] + labels[disable_inds] = -1 + + # subsample negative labels if we have too many + num_bg = cfg.TRAIN.RPN_BATCH_SIZE - np.sum(labels == 1) + bg_inds = np.where(labels == 0)[0] + if len(bg_inds) > num_bg: + disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False) + if DEBUG: + disable_inds = bg_inds[:(len(bg_inds) - num_bg)] + labels[disable_inds] = -1 + + bbox_targets = np.zeros((len(inds_inside), 4), dtype=np.float32) + if gt_boxes.size > 0: + bbox_targets[:] = bbox_transform(anchors, gt_boxes[argmax_overlaps, :4]) + + bbox_weights = np.zeros((len(inds_inside), 4), dtype=np.float32) + bbox_weights[labels == 1, :] = np.array(cfg.TRAIN.RPN_BBOX_WEIGHTS) + + if DEBUG: + _sums = bbox_targets[labels == 1, :].sum(axis=0) + _squared_sums = (bbox_targets[labels == 1, :] ** 2).sum(axis=0) + _counts = np.sum(labels == 1) + means = _sums / (_counts + 1e-14) + stds = np.sqrt(_squared_sums / 
_counts - means ** 2) + print 'means', means + print 'stdevs', stds + + # map up to original set of anchors + labels = _unmap(labels, total_anchors, inds_inside, fill=-1) + bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, fill=0) + bbox_weights = _unmap(bbox_weights, total_anchors, inds_inside, fill=0) + + if DEBUG: + print 'rpn: max max_overlaps', np.max(max_overlaps) + print 'rpn: num_positives', np.sum(labels == 1) + print 'rpn: num_negatives', np.sum(labels == 0) + _fg_sum = np.sum(labels == 1) + _bg_sum = np.sum(labels == 0) + _count = 1 + print 'rpn: num_positive avg', _fg_sum / _count + print 'rpn: num_negative avg', _bg_sum / _count + + labels = labels.reshape((1, feat_height, feat_width, A)).transpose(0, 3, 1, 2) + labels = labels.reshape((1, A * feat_height * feat_width)) + bbox_targets = bbox_targets.reshape((1, feat_height, feat_width, A * 4)).transpose(0, 3, 1, 2) + bbox_weights = bbox_weights.reshape((1, feat_height, feat_width, A * 4)).transpose((0, 3, 1, 2)) + + label = {'label': labels, + 'bbox_target': bbox_targets, + 'bbox_weight': bbox_weights} + return label + + +def assign_pyramid_anchor(feat_shapes, gt_boxes, im_info, cfg, feat_strides=(4, 8, 16, 32, 64), + scales=(8,), ratios=(0.5, 1, 2), allowed_border=0, balance_scale_bg=False,): + """ + assign ground truth boxes to anchor positions + :param feat_shapes: infer output shape + :param gt_boxes: assign ground truth + :param im_info: filter out anchors overlapped with edges + :param feat_strides: anchor position step + :param scales: used to generate anchors, affects num_anchors (per location) + :param ratios: aspect ratios of generated anchors + :param allowed_border: filter out anchors with edge overlap > allowed_border + :param balance_scale_bg: restrict the background samples for each pyramid level + :return: dict of label + 'label': of shape (batch_size, 1) <- (batch_size, num_anchors, feat_height, feat_width) + 'bbox_target': of shape (batch_size, num_anchors * 4, feat_height, feat_width) + 'bbox_inside_weight': *todo* mark the assigned anchors + 'bbox_outside_weight': used to normalize the bbox_loss, all weights sums to RPN_POSITIVE_WEIGHT + """ + def _unmap(data, count, inds, fill=0): + """" unmap a subset inds of data into original data of size count """ + if len(data.shape) == 1: + ret = np.empty((count,), dtype=np.float32) + ret.fill(fill) + ret[inds] = data + else: + ret = np.empty((count,) + data.shape[1:], dtype=np.float32) + ret.fill(fill) + ret[inds, :] = data + return ret + + DEBUG = False + im_info = im_info[0] + scales = np.array(scales, dtype=np.float32) + ratios = np.array(ratios, dtype=np.float32) + assert(len(feat_shapes) == len(feat_strides)) + + fpn_args = [] + fpn_anchors_fid = np.zeros(0).astype(int) + fpn_anchors = np.zeros([0, 4]) + fpn_labels = np.zeros(0) + fpn_inds_inside = [] + for feat_id in range(len(feat_strides)): + # len(scales.shape) == 1 just for backward compatibility, will remove in the future + if len(scales.shape) == 1: + base_anchors = generate_anchors(base_size=feat_strides[feat_id], ratios=ratios, scales=scales) + else: + assert len(scales.shape) == len(ratios.shape) == 2 + base_anchors = generate_anchors(base_size=feat_strides[feat_id], ratios=ratios[feat_id], scales=scales[feat_id]) + num_anchors = base_anchors.shape[0] + feat_height, feat_width = feat_shapes[feat_id][0][-2:] + + # 1. 
generate proposals from bbox deltas and shifted anchors + shift_x = np.arange(0, feat_width) * feat_strides[feat_id] + shift_y = np.arange(0, feat_height) * feat_strides[feat_id] + shift_x, shift_y = np.meshgrid(shift_x, shift_y) + shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose() + # add A anchors (1, A, 4) to + # cell K shifts (K, 1, 4) to get + # shift anchors (K, A, 4) + # reshape to (K*A, 4) shifted anchors + A = num_anchors + K = shifts.shape[0] + all_anchors = base_anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2)) + all_anchors = all_anchors.reshape((K * A, 4)) + total_anchors = int(K * A) + + # only keep anchors inside the image + inds_inside = np.where((all_anchors[:, 0] >= -allowed_border) & + (all_anchors[:, 1] >= -allowed_border) & + (all_anchors[:, 2] < im_info[1] + allowed_border) & + (all_anchors[:, 3] < im_info[0] + allowed_border))[0] + + # keep only inside anchors + anchors = all_anchors[inds_inside, :] + + # label: 1 is positive, 0 is negative, -1 is dont care + # for sigmoid classifier, ignore the 'background' class + labels = np.empty((len(inds_inside),), dtype=np.float32) + labels.fill(-1) + + fpn_anchors_fid = np.hstack((fpn_anchors_fid, len(inds_inside))) + fpn_anchors = np.vstack((fpn_anchors, anchors)) + fpn_labels = np.hstack((fpn_labels, labels)) + fpn_inds_inside.append(inds_inside) + fpn_args.append([feat_height, feat_width, A, total_anchors]) + + if gt_boxes.size > 0: + # overlap between the anchors and the gt boxes + # overlaps (ex, gt) + overlaps = bbox_overlaps(fpn_anchors.astype(np.float), gt_boxes.astype(np.float)) + argmax_overlaps = overlaps.argmax(axis=1) + max_overlaps = overlaps[np.arange(len(fpn_anchors)), argmax_overlaps] + gt_argmax_overlaps = overlaps.argmax(axis=0) + gt_max_overlaps = overlaps[gt_argmax_overlaps, np.arange(overlaps.shape[1])] + gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0] + + if not cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels first so that positive labels can clobber them + fpn_labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + # fg label: for each gt, anchor with highest overlap + fpn_labels[gt_argmax_overlaps] = 1 + # fg label: above threshold IoU + fpn_labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1 + if cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels last so that negative labels can clobber positives + fpn_labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + else: + fpn_labels[:] = 0 + + # subsample positive labels if we have too many + num_fg = fpn_labels.shape[0] if cfg.TRAIN.RPN_BATCH_SIZE == -1 else int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCH_SIZE) + fg_inds = np.where(fpn_labels >= 1)[0] + if len(fg_inds) > num_fg: + disable_inds = npr.choice(fg_inds, size=(len(fg_inds) - num_fg), replace=False) + if DEBUG: + disable_inds = fg_inds[:(len(fg_inds) - num_fg)] + fpn_labels[disable_inds] = -1 + + # subsample negative labels if we have too many + num_bg = fpn_labels.shape[0] if cfg.TRAIN.RPN_BATCH_SIZE == -1 else cfg.TRAIN.RPN_BATCH_SIZE - np.sum(fpn_labels >= 1) + bg_inds = np.where(fpn_labels == 0)[0] + fpn_anchors_fid = np.hstack((0, fpn_anchors_fid.cumsum())) + + if balance_scale_bg: + num_bg_scale = num_bg / len(feat_strides) + for feat_id in range(0, len(feat_strides)): + bg_ind_scale = bg_inds[(bg_inds >= fpn_anchors_fid[feat_id]) & (bg_inds < fpn_anchors_fid[feat_id+1])] + if len(bg_ind_scale) > num_bg_scale: + disable_inds = npr.choice(bg_ind_scale, size=(len(bg_ind_scale) 
- num_bg_scale), replace=False) + fpn_labels[disable_inds] = -1 + else: + if len(bg_inds) > num_bg: + disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False) + if DEBUG: + disable_inds = bg_inds[:(len(bg_inds) - num_bg)] + fpn_labels[disable_inds] = -1 + + fpn_bbox_targets = np.zeros((len(fpn_anchors), 4), dtype=np.float32) + if gt_boxes.size > 0: + fpn_bbox_targets[fpn_labels >= 1, :] = bbox_transform(fpn_anchors[fpn_labels >= 1, :], gt_boxes[argmax_overlaps[fpn_labels >= 1], :4]) + # fpn_bbox_targets[:] = bbox_transform(fpn_anchors, gt_boxes[argmax_overlaps, :4]) + # fpn_bbox_targets = (fpn_bbox_targets - np.array(cfg.TRAIN.BBOX_MEANS)) / np.array(cfg.TRAIN.BBOX_STDS) + fpn_bbox_weights = np.zeros((len(fpn_anchors), 4), dtype=np.float32) + + fpn_bbox_weights[fpn_labels >= 1, :] = np.array(cfg.TRAIN.RPN_BBOX_WEIGHTS) + + label_list = [] + bbox_target_list = [] + bbox_weight_list = [] + for feat_id in range(0, len(feat_strides)): + feat_height, feat_width, A, total_anchors = fpn_args[feat_id] + # map up to original set of anchors + labels = _unmap(fpn_labels[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=-1) + bbox_targets = _unmap(fpn_bbox_targets[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=0) + bbox_weights = _unmap(fpn_bbox_weights[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=0) + + labels = labels.reshape((1, feat_height, feat_width, A)).transpose(0, 3, 1, 2) + labels = labels.reshape((1, A * feat_height * feat_width)) + + bbox_targets = bbox_targets.reshape((1, feat_height, feat_width, A * 4)).transpose(0, 3, 1, 2) + bbox_targets = bbox_targets.reshape((1, A * 4, -1)) + bbox_weights = bbox_weights.reshape((1, feat_height, feat_width, A * 4)).transpose((0, 3, 1, 2)) + bbox_weights = bbox_weights.reshape((1, A * 4, -1)) + + label_list.append(labels) + bbox_target_list.append(bbox_targets) + bbox_weight_list.append(bbox_weights) + # label.update({'label_p' + str(feat_id + feat_id_start): labels, + # 'bbox_target_p' + str(feat_id + feat_id_start): bbox_targets, + # 'bbox_weight_p' + str(feat_id + feat_id_start): bbox_weights}) + + label = { + 'label': np.concatenate(label_list, axis=1), + 'bbox_target': np.concatenate(bbox_target_list, axis=2), + 'bbox_weight': np.concatenate(bbox_weight_list, axis=2) + } + + return label + +def assign_pyramid_anchor_poly(feat_shapes, gt_boxes, im_info, cfg, feat_strides=(4, 8, 16, 32, 64), + scales=(8,), ratios=(0.5, 1, 2), allowed_border=0, balance_scale_bg=False,): + """ + assign ground truth boxes to anchor positions + :param feat_shapes: infer output shape + :param gt_boxes: assign ground truth + :param im_info: filter out anchors overlapped with edges + :param feat_strides: anchor position step + :param scales: used to generate anchors, affects num_anchors (per location) + :param ratios: aspect ratios of generated anchors + :param allowed_border: filter out anchors with edge overlap > allowed_border + :param balance_scale_bg: restrict the background samples for each pyramid level + :return: dict of label + 'label': of shape (batch_size, 1) <- (batch_size, num_anchors, feat_height, feat_width) + 'bbox_target': of shape (batch_size, num_anchors * 4, feat_height, feat_width) + 'bbox_inside_weight': *todo* mark the assigned anchors + 'bbox_outside_weight': used to normalize the bbox_loss, all weights sums to RPN_POSITIVE_WEIGHT + """ + def _unmap(data, count, inds, 
fill=0): + """" unmap a subset inds of data into original data of size count """ + if len(data.shape) == 1: + ret = np.empty((count,), dtype=np.float32) + ret.fill(fill) + ret[inds] = data + else: + ret = np.empty((count,) + data.shape[1:], dtype=np.float32) + ret.fill(fill) + ret[inds, :] = data + return ret + + DEBUG = False + im_info = im_info[0] + scales = np.array(scales, dtype=np.float32) + ratios = np.array(ratios, dtype=np.float32) + assert(len(feat_shapes) == len(feat_strides)) + + fpn_args = [] + fpn_anchors_fid = np.zeros(0).astype(int) + fpn_anchors = np.zeros([0, 4]) + fpn_labels = np.zeros(0) + fpn_inds_inside = [] + for feat_id in range(len(feat_strides)): + # len(scales.shape) == 1 just for backward compatibility, will remove in the future + if len(scales.shape) == 1: + base_anchors = generate_anchors(base_size=feat_strides[feat_id], ratios=ratios, scales=scales) + else: + assert len(scales.shape) == len(ratios.shape) == 2 + base_anchors = generate_anchors(base_size=feat_strides[feat_id], ratios=ratios[feat_id], scales=scales[feat_id]) + num_anchors = base_anchors.shape[0] + feat_height, feat_width = feat_shapes[feat_id][0][-2:] + + # 1. generate proposals from bbox deltas and shifted anchors + shift_x = np.arange(0, feat_width) * feat_strides[feat_id] + shift_y = np.arange(0, feat_height) * feat_strides[feat_id] + shift_x, shift_y = np.meshgrid(shift_x, shift_y) + shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose() + # add A anchors (1, A, 4) to + # cell K shifts (K, 1, 4) to get + # shift anchors (K, A, 4) + # reshape to (K*A, 4) shifted anchors + A = num_anchors + K = shifts.shape[0] + all_anchors = base_anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2)) + all_anchors = all_anchors.reshape((K * A, 4)) + total_anchors = int(K * A) + + # only keep anchors inside the image + inds_inside = np.where((all_anchors[:, 0] >= -allowed_border) & + (all_anchors[:, 1] >= -allowed_border) & + (all_anchors[:, 2] < im_info[1] + allowed_border) & + (all_anchors[:, 3] < im_info[0] + allowed_border))[0] + + # keep only inside anchors + anchors = all_anchors[inds_inside, :] + + # label: 1 is positive, 0 is negative, -1 is dont care + # for sigmoid classifier, ignore the 'background' class + labels = np.empty((len(inds_inside),), dtype=np.float32) + labels.fill(-1) + + fpn_anchors_fid = np.hstack((fpn_anchors_fid, len(inds_inside))) + fpn_anchors = np.vstack((fpn_anchors, anchors)) + fpn_labels = np.hstack((fpn_labels, labels)) + fpn_inds_inside.append(inds_inside) + fpn_args.append([feat_height, feat_width, A, total_anchors]) + + gt_boxes = bbox_poly2hbb(gt_boxes) + + if gt_boxes.size > 0: + # overlap between the anchors and the gt boxes + # overlaps (ex, gt) + overlaps = bbox_overlaps(fpn_anchors.astype(np.float), gt_boxes.astype(np.float)) + argmax_overlaps = overlaps.argmax(axis=1) + max_overlaps = overlaps[np.arange(len(fpn_anchors)), argmax_overlaps] + gt_argmax_overlaps = overlaps.argmax(axis=0) + gt_max_overlaps = overlaps[gt_argmax_overlaps, np.arange(overlaps.shape[1])] + gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0] + + if not cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels first so that positive labels can clobber them + fpn_labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + # fg label: for each gt, anchor with highest overlap + fpn_labels[gt_argmax_overlaps] = 1 + # fg label: above threshold IoU + fpn_labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1 + if 
cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels last so that negative labels can clobber positives + fpn_labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + else: + fpn_labels[:] = 0 + + # subsample positive labels if we have too many + num_fg = fpn_labels.shape[0] if cfg.TRAIN.RPN_BATCH_SIZE == -1 else int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCH_SIZE) + fg_inds = np.where(fpn_labels >= 1)[0] + if len(fg_inds) > num_fg: + disable_inds = npr.choice(fg_inds, size=(len(fg_inds) - num_fg), replace=False) + if DEBUG: + disable_inds = fg_inds[:(len(fg_inds) - num_fg)] + fpn_labels[disable_inds] = -1 + + # subsample negative labels if we have too many + num_bg = fpn_labels.shape[0] if cfg.TRAIN.RPN_BATCH_SIZE == -1 else cfg.TRAIN.RPN_BATCH_SIZE - np.sum(fpn_labels >= 1) + bg_inds = np.where(fpn_labels == 0)[0] + fpn_anchors_fid = np.hstack((0, fpn_anchors_fid.cumsum())) + + if balance_scale_bg: + num_bg_scale = num_bg / len(feat_strides) + for feat_id in range(0, len(feat_strides)): + bg_ind_scale = bg_inds[(bg_inds >= fpn_anchors_fid[feat_id]) & (bg_inds < fpn_anchors_fid[feat_id+1])] + if len(bg_ind_scale) > num_bg_scale: + disable_inds = npr.choice(bg_ind_scale, size=(len(bg_ind_scale) - num_bg_scale), replace=False) + fpn_labels[disable_inds] = -1 + else: + if len(bg_inds) > num_bg: + disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False) + if DEBUG: + disable_inds = bg_inds[:(len(bg_inds) - num_bg)] + fpn_labels[disable_inds] = -1 + + fpn_bbox_targets = np.zeros((len(fpn_anchors), 4), dtype=np.float32) + if gt_boxes.size > 0: + fpn_bbox_targets[fpn_labels >= 1, :] = bbox_transform(fpn_anchors[fpn_labels >= 1, :], gt_boxes[argmax_overlaps[fpn_labels >= 1], :4]) + # fpn_bbox_targets[:] = bbox_transform(fpn_anchors, gt_boxes[argmax_overlaps, :4]) + # fpn_bbox_targets = (fpn_bbox_targets - np.array(cfg.TRAIN.BBOX_MEANS)) / np.array(cfg.TRAIN.BBOX_STDS) + fpn_bbox_weights = np.zeros((len(fpn_anchors), 4), dtype=np.float32) + fpn_bbox_weights[fpn_labels >= 1, :] = np.array(cfg.TRAIN.RPN_BBOX_WEIGHTS) + + label_list = [] + bbox_target_list = [] + bbox_weight_list = [] + for feat_id in range(0, len(feat_strides)): + feat_height, feat_width, A, total_anchors = fpn_args[feat_id] + # map up to original set of anchors + labels = _unmap(fpn_labels[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=-1) + bbox_targets = _unmap(fpn_bbox_targets[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=0) + bbox_weights = _unmap(fpn_bbox_weights[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=0) + + labels = labels.reshape((1, feat_height, feat_width, A)).transpose(0, 3, 1, 2) + labels = labels.reshape((1, A * feat_height * feat_width)) + + bbox_targets = bbox_targets.reshape((1, feat_height, feat_width, A * 4)).transpose(0, 3, 1, 2) + bbox_targets = bbox_targets.reshape((1, A * 4, -1)) + bbox_weights = bbox_weights.reshape((1, feat_height, feat_width, A * 4)).transpose((0, 3, 1, 2)) + bbox_weights = bbox_weights.reshape((1, A * 4, -1)) + + label_list.append(labels) + bbox_target_list.append(bbox_targets) + bbox_weight_list.append(bbox_weights) + # label.update({'label_p' + str(feat_id + feat_id_start): labels, + # 'bbox_target_p' + str(feat_id + feat_id_start): bbox_targets, + # 'bbox_weight_p' + str(feat_id + feat_id_start): bbox_weights}) + # pdb.set_trace() + label = { + 'label': np.concatenate(label_list, axis=1), + 
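+        # entries from all pyramid levels are concatenated along the anchor dimension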
'bbox_target': np.concatenate(bbox_target_list, axis=2), + 'bbox_weight': np.concatenate(bbox_weight_list, axis=2) + } + + return label + +def assign_anchor_poly(feat_shape, gt_boxes, im_info, cfg, feat_stride=16, + scales=(8, 16, 32), ratios=(0.5, 1, 2), allowed_border=0): + """ + assign ground truth boxes to anchor positions + :param feat_shape: infer output shape + :param gt_boxes: assign ground truth + :param im_info: filter out anchors overlapped with edges + :param feat_stride: anchor position step + :param scales: used to generate anchors, affects num_anchors (per location) + :param ratios: aspect ratios of generated anchors + :param allowed_border: filter out anchors with edge overlap > allowed_border + :return: dict of label + 'label': of shape (batch_size, 1) <- (batch_size, num_anchors, feat_height, feat_width) + 'bbox_target': of shape (batch_size, num_anchors * 4, feat_height, feat_width) + 'bbox_inside_weight': *todo* mark the assigned anchors + 'bbox_outside_weight': used to normalize the bbox_loss, all weights sums to RPN_POSITIVE_WEIGHT + """ + def _unmap(data, count, inds, fill=0): + """" unmap a subset inds of data into original data of size count """ + if len(data.shape) == 1: + ret = np.empty((count,), dtype=np.float32) + ret.fill(fill) + ret[inds] = data + else: + ret = np.empty((count,) + data.shape[1:], dtype=np.float32) + ret.fill(fill) + ret[inds, :] = data + return ret + + DEBUG = False + im_info = im_info[0] + scales = np.array(scales, dtype=np.float32) + base_anchors = generate_anchors(base_size=feat_stride, ratios=list(ratios), scales=scales) + num_anchors = base_anchors.shape[0] + feat_height, feat_width = feat_shape[-2:] + + if DEBUG: + print 'anchors:' + print base_anchors + print 'anchor shapes:' + print np.hstack((base_anchors[:, 2::4] - base_anchors[:, 0::4], + base_anchors[:, 3::4] - base_anchors[:, 1::4])) + print 'im_info', im_info + print 'height', feat_height, 'width', feat_width + print 'gt_boxes shape', gt_boxes.shape + print 'gt_boxes', gt_boxes + + # 1. 
generate proposals from bbox deltas and shifted anchors + shift_x = np.arange(0, feat_width) * feat_stride + shift_y = np.arange(0, feat_height) * feat_stride + shift_x, shift_y = np.meshgrid(shift_x, shift_y) + shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose() + # add A anchors (1, A, 4) to + # cell K shifts (K, 1, 4) to get + # shift anchors (K, A, 4) + # reshape to (K*A, 4) shifted anchors + A = num_anchors + K = shifts.shape[0] + all_anchors = base_anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2)) + all_anchors = all_anchors.reshape((K * A, 4)) + total_anchors = int(K * A) + + # only keep anchors inside the image + inds_inside = np.where((all_anchors[:, 0] >= -allowed_border) & + (all_anchors[:, 1] >= -allowed_border) & + (all_anchors[:, 2] < im_info[1] + allowed_border) & + (all_anchors[:, 3] < im_info[0] + allowed_border))[0] + if DEBUG: + print 'total_anchors', total_anchors + print 'inds_inside', len(inds_inside) + + # keep only inside anchors + anchors = all_anchors[inds_inside, :] + if DEBUG: + print 'anchors shape', anchors.shape + + # label: 1 is positive, 0 is negative, -1 is dont care + labels = np.empty((len(inds_inside),), dtype=np.float32) + labels.fill(-1) + + gt_boxes_bbox = np.zeros((gt_boxes.shape[0], 4), dtype=gt_boxes.dtype) + + ex_x = np.vstack((gt_boxes[:, 0], gt_boxes[:, 2], gt_boxes[:, 4], gt_boxes[:, 6])) + ex_y = np.vstack((gt_boxes[:, 1], gt_boxes[:, 3], gt_boxes[:, 5], gt_boxes[:, 7])) + gt_boxes_bbox[:, 0] = np.amin(ex_x, axis=0) + gt_boxes_bbox[:, 1] = np.amin(ex_y, axis=0) + gt_boxes_bbox[:, 2] = np.amax(ex_x, axis=0) + gt_boxes_bbox[:, 3] = np.amax(ex_y, axis=0) + + if gt_boxes.size > 0: + # overlap between the anchors and the gt boxes + # overlaps (ex, gt) + overlaps = bbox_overlaps(anchors.astype(np.float), gt_boxes_bbox.astype(np.float)) + argmax_overlaps = overlaps.argmax(axis=1) + max_overlaps = overlaps[np.arange(len(inds_inside)), argmax_overlaps] + gt_argmax_overlaps = overlaps.argmax(axis=0) + gt_max_overlaps = overlaps[gt_argmax_overlaps, np.arange(overlaps.shape[1])] + gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0] + + if not cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels first so that positive labels can clobber them + labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + + # fg label: for each gt, anchor with highest overlap + labels[gt_argmax_overlaps] = 1 + + # fg label: above threshold IoU + labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1 + + if cfg.TRAIN.RPN_CLOBBER_POSITIVES: + # assign bg labels last so that negative labels can clobber positives + labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0 + else: + labels[:] = 0 + + # subsample positive labels if we have too many + num_fg = int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCH_SIZE) + fg_inds = np.where(labels == 1)[0] + if len(fg_inds) > num_fg: + disable_inds = npr.choice(fg_inds, size=(len(fg_inds) - num_fg), replace=False) + if DEBUG: + disable_inds = fg_inds[:(len(fg_inds) - num_fg)] + labels[disable_inds] = -1 + + # subsample negative labels if we have too many + num_bg = cfg.TRAIN.RPN_BATCH_SIZE - np.sum(labels == 1) + bg_inds = np.where(labels == 0)[0] + if len(bg_inds) > num_bg: + disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False) + if DEBUG: + disable_inds = bg_inds[:(len(bg_inds) - num_bg)] + labels[disable_inds] = -1 + + bbox_targets = np.zeros((len(inds_inside), 4), dtype=np.float32) + # temp = 
np.zeros((anchors.shape[0], 8), dtype=anchors.dtype) + # temp[:, 0] = anchors[:, 0] + # temp[:, 1] = anchors[:, 1] + # temp[:, 2] = anchors[:, 2] + # temp[:, 3] = anchors[:, 1] + # temp[:, 4] = anchors[:, 2] + # temp[:, 5] = anchors[:, 3] + # temp[:, 6] = anchors[:, 0] + # temp[:, 7] = anchors[:, 3] + # eight_coordinate_anchors = temp + + if gt_boxes.size > 0: + bbox_targets[:] = bbox_transform(anchors, gt_boxes_bbox[argmax_overlaps, :4]) + + bbox_weights = np.zeros((len(inds_inside), 4), dtype=np.float32) + bbox_weights[labels == 1, :] = np.array(cfg.TRAIN.RPN_BBOX_WEIGHTS) + + if DEBUG: + _sums = bbox_targets[labels == 1, :].sum(axis=0) + _squared_sums = (bbox_targets[labels == 1, :] ** 2).sum(axis=0) + _counts = np.sum(labels == 1) + means = _sums / (_counts + 1e-14) + stds = np.sqrt(_squared_sums / _counts - means ** 2) + print 'means', means + print 'stdevs', stds + + # map up to original set of anchors + labels = _unmap(labels, total_anchors, inds_inside, fill=-1) + bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, fill=0) + bbox_weights = _unmap(bbox_weights, total_anchors, inds_inside, fill=0) + + if DEBUG: + print 'rpn: max max_overlaps', np.max(max_overlaps) + print 'rpn: num_positives', np.sum(labels == 1) + print 'rpn: num_negatives', np.sum(labels == 0) + _fg_sum = np.sum(labels == 1) + _bg_sum = np.sum(labels == 0) + _count = 1 + print 'rpn: num_positive avg', _fg_sum / _count + print 'rpn: num_negative avg', _bg_sum / _count + + labels = labels.reshape((1, feat_height, feat_width, A)).transpose(0, 3, 1, 2) + labels = labels.reshape((1, A * feat_height * feat_width)) + bbox_targets = bbox_targets.reshape((1, feat_height, feat_width, A * 4)).transpose(0, 3, 1, 2) + bbox_weights = bbox_weights.reshape((1, feat_height, feat_width, A * 4)).transpose((0, 3, 1, 2)) + + label = {'label': labels, + 'bbox_target': bbox_targets, + 'bbox_weight': bbox_weights} + return label \ No newline at end of file diff --git a/lib/segmentation/__init__.py b/lib/segmentation/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/segmentation/segmentation.py b/lib/segmentation/segmentation.py new file mode 100644 index 0000000..2bfbf67 --- /dev/null +++ b/lib/segmentation/segmentation.py @@ -0,0 +1,58 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +""" +Segmentation: +data = + {'data': [num_images, c, h, w], + 'im_info': [num_images, 4] (optional)} +label = + {'label': [batch_size, 1] <- [batch_size, c, h, w]} +""" + +import numpy as np +from utils.image import get_segmentation_image, tensor_vstack + +def get_segmentation_test_batch(segdb, config): + """ + return a dict of train batch + :param segdb: ['image', 'flipped'] + :param config: the config setting + :return: data, label, im_info + """ + imgs, seg_cls_gts, segdb = get_segmentation_image(segdb, config) + im_array = imgs + im_info = [np.array([segdb[i]['im_info']], dtype=np.float32) for i in xrange(len(segdb))] + + data = [{'data': im_array[i], + 'im_info': im_info[i]} for i in xrange(len(segdb))] + label = [{'label':seg_cls_gts[i]} for i in xrange(len(segdb))] + + return data, label, im_info + +def get_segmentation_train_batch(segdb, config): + """ + return a dict of train batch + :param segdb: ['image', 'flipped'] + :param config: the config setting + :return: data, label, im_info + 
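+    data['data'] is a (1, 3, H, W) image tensor and label['label'] the matching
+    (1, 1, H, W) class-index map, as produced by utils.image.get_segmentation_image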
""" + # assert len(segdb) == 1, 'Single batch only' + assert len(segdb) == 1, 'Single batch only' + + imgs, seg_cls_gts, segdb = get_segmentation_image(segdb, config) + im_array = imgs[0] + seg_cls_gt = seg_cls_gts[0] + + im_info = np.array([segdb[0]['im_info']], dtype=np.float32) + + data = {'data': im_array, + 'im_info': im_info} + label = {'label': seg_cls_gt} + + return data, label + diff --git a/lib/sharedcore/__init__.py b/lib/sharedcore/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/utils/PrefetchingIter.py b/lib/utils/PrefetchingIter.py new file mode 100644 index 0000000..95ac625 --- /dev/null +++ b/lib/utils/PrefetchingIter.py @@ -0,0 +1,145 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Modified by Yuwen Xiong +# -------------------------------------------------------- +# Based on: +# MX-RCNN +# Copyright (c) 2016 by Contributors +# Licence under The Apache 2.0 License +# https://github.com/ijkguo/mx-rcnn/ +# -------------------------------------------------------- + +import mxnet as mx +from mxnet.io import DataDesc, DataBatch +import threading + + +class PrefetchingIter(mx.io.DataIter): + """Base class for prefetching iterators. Takes one or more DataIters ( + or any class with "reset" and "next" methods) and combine them with + prefetching. For example: + + Parameters + ---------- + iters : DataIter or list of DataIter + one or more DataIters (or any class with "reset" and "next" methods) + rename_data : None or list of dict + i-th element is a renaming map for i-th iter, in the form of + {'original_name' : 'new_name'}. Should have one entry for each entry + in iter[i].provide_data + rename_label : None or list of dict + Similar to rename_data + + Examples + -------- + iter = PrefetchingIter([NDArrayIter({'data': X1}), NDArrayIter({'data': X2})], + rename_data=[{'data': 'data1'}, {'data': 'data2'}]) + """ + def __init__(self, iters, rename_data=None, rename_label=None): + super(PrefetchingIter, self).__init__() + if not isinstance(iters, list): + iters = [iters] + self.n_iter = len(iters) + assert self.n_iter ==1, "Our prefetching iter only support 1 DataIter" + self.iters = iters + self.rename_data = rename_data + self.rename_label = rename_label + self.batch_size = len(self.provide_data) * self.provide_data[0][0][1][0] + self.data_ready = [threading.Event() for i in range(self.n_iter)] + self.data_taken = [threading.Event() for i in range(self.n_iter)] + for e in self.data_taken: + e.set() + self.started = True + self.current_batch = [None for _ in range(self.n_iter)] + self.next_batch = [None for _ in range(self.n_iter)] + def prefetch_func(self, i): + """Thread entry""" + while True: + self.data_taken[i].wait() + if not self.started: + break + try: + self.next_batch[i] = self.iters[i].next() + except StopIteration: + self.next_batch[i] = None + self.data_taken[i].clear() + self.data_ready[i].set() + self.prefetch_threads = [threading.Thread(target=prefetch_func, args=[self, i]) \ + for i in range(self.n_iter)] + for thread in self.prefetch_threads: + thread.setDaemon(True) + thread.start() + + def __del__(self): + self.started = False + for e in self.data_taken: + e.set() + for thread in self.prefetch_threads: + thread.join() + + @property + def provide_data(self): + """The name and shape of data provided by this iterator""" + if self.rename_data is None: + return sum([i.provide_data for i in 
self.iters], []) + else: + return sum([[ + DataDesc(r[x.name], x.shape, x.dtype) + if isinstance(x, DataDesc) else DataDesc(*x) + for x in i.provide_data + ] for r, i in zip(self.rename_data, self.iters)], []) + + @property + def provide_label(self): + """The name and shape of label provided by this iterator""" + if self.rename_label is None: + return sum([i.provide_label for i in self.iters], []) + else: + return sum([[ + DataDesc(r[x.name], x.shape, x.dtype) + if isinstance(x, DataDesc) else DataDesc(*x) + for x in i.provide_label + ] for r, i in zip(self.rename_label, self.iters)], []) + + def reset(self): + for e in self.data_ready: + e.wait() + for i in self.iters: + i.reset() + for e in self.data_ready: + e.clear() + for e in self.data_taken: + e.set() + + def iter_next(self): + for e in self.data_ready: + e.wait() + if self.next_batch[0] is None: + return False + else: + self.current_batch = self.next_batch[0] + for e in self.data_ready: + e.clear() + for e in self.data_taken: + e.set() + return True + + def next(self): + if self.iter_next(): + return self.current_batch + else: + raise StopIteration + + def getdata(self): + return self.current_batch.data + + def getlabel(self): + return self.current_batch.label + + def getindex(self): + return self.current_batch.index + + def getpad(self): + return self.current_batch.pad diff --git a/lib/utils/__init__.py b/lib/utils/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lib/utils/combine_model.py b/lib/utils/combine_model.py new file mode 100644 index 0000000..0deff5c --- /dev/null +++ b/lib/utils/combine_model.py @@ -0,0 +1,22 @@ +from load_model import load_checkpoint +from save_model import save_checkpoint + + +def combine_model(prefix1, epoch1, prefix2, epoch2, prefix_out, epoch_out): + args1, auxs1 = load_checkpoint(prefix1, epoch1) + args2, auxs2 = load_checkpoint(prefix2, epoch2) + arg_names = args1.keys() + args2.keys() + aux_names = auxs1.keys() + auxs2.keys() + args = dict() + for arg in arg_names: + if arg in args1: + args[arg] = args1[arg] + if arg in args2: + args[arg] = args2[arg] + auxs = dict() + for aux in aux_names: + if aux in auxs1: + auxs[aux] = auxs1[aux] + if aux in auxs2: + auxs[aux] = auxs2[aux] + save_checkpoint(prefix_out, epoch_out, args, auxs) diff --git a/lib/utils/create_logger.py b/lib/utils/create_logger.py new file mode 100644 index 0000000..1898905 --- /dev/null +++ b/lib/utils/create_logger.py @@ -0,0 +1,35 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Bin Xiao +# -------------------------------------------------------- + +import os +import logging +import time + +def create_logger(root_output_path, cfg, image_set): + # set up logger + if not os.path.exists(root_output_path): + os.makedirs(root_output_path) + assert os.path.exists(root_output_path), '{} does not exist'.format(root_output_path) + + cfg_name = os.path.basename(cfg).split('.')[0] + config_output_path = os.path.join(root_output_path, '{}'.format(cfg_name)) + if not os.path.exists(config_output_path): + os.makedirs(config_output_path) + + image_sets = [iset for iset in image_set.split('+')] + final_output_path = os.path.join(config_output_path, '{}'.format('_'.join(image_sets))) + if not os.path.exists(final_output_path): + os.makedirs(final_output_path) + + log_file = '{}_{}.log'.format(cfg_name, time.strftime('%Y-%m-%d-%H-%M')) + head = '%(asctime)-15s 
%(message)s' + logging.basicConfig(filename=os.path.join(final_output_path, log_file), format=head) + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + return logger, final_output_path + diff --git a/lib/utils/image.py b/lib/utils/image.py new file mode 100644 index 0000000..33e018b --- /dev/null +++ b/lib/utils/image.py @@ -0,0 +1,232 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +import numpy as np +import os +import cv2 +import random +from PIL import Image +from bbox.bbox_transform import clip_boxes +import pdb + +# TODO: This two functions should be merged with individual data loader +def get_test_image(roidb, config): + """ + preprocess image and return processed roidb + :param roidb: a list of roidb + :return: list of img as in mxnet format + roidb add new item['im_info'] + 0 --- x (width, second dim of im) + | + y (height, first dim of im) + """ + num_images = len(roidb) + processed_ims = [] + processed_roidb = [] + for i in range(num_images): + roi_rec = roidb[i] + assert os.path.exists(roi_rec['image']), '%s does not exist'.format(roi_rec['image']) + im = cv2.imread(roi_rec['image'], cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION) + # print (roidb[i]) + # if roidb[i]['flipped']: + # im = im[:, ::-1, :] + new_rec = roi_rec.copy() + scale_ind = random.randrange(len(config.SCALES)) + # print "config.SCALES[scale_ind]:",config.SCALES[scale_ind] + target_size = config.SCALES[scale_ind][0] + max_size = config.SCALES[scale_ind][1] + im, im_scale = resize(im, target_size, max_size, stride=config.network.IMAGE_STRIDE) + im_tensor = transform(im, config.network.PIXEL_MEANS) + processed_ims.append(im_tensor) + im_info = [im_tensor.shape[2], im_tensor.shape[3], im_scale] + # new_rec['boxes'] = clip_boxes(np.round(roi_rec['boxes'].copy() * im_scale), im_info[:2]) + new_rec['im_info'] = im_info + processed_roidb.append(new_rec) + return processed_ims, processed_roidb + +def get_image(roidb, config): + """ + preprocess image and return processed roidb + :param roidb: a list of roidb + :return: list of img as in mxnet format + roidb add new item['im_info'] + 0 --- x (width, second dim of im) + | + y (height, first dim of im) + """ + num_images = len(roidb) + processed_ims = [] + processed_roidb = [] + for i in range(num_images): + roi_rec = roidb[i] + assert os.path.exists(roi_rec['image']), '%s does not exist'.format(roi_rec['image']) + im = cv2.imread(roi_rec['image'], cv2.IMREAD_COLOR|cv2.IMREAD_IGNORE_ORIENTATION) + # print (roidb[i]) + if roidb[i]['flipped']: + im = im[:, ::-1, :] + new_rec = roi_rec.copy() + scale_ind = random.randrange(len(config.SCALES)) + target_size = config.SCALES[scale_ind][0] + # pdb.set_trace() + max_size = config.SCALES[scale_ind][1] + im, im_scale = resize(im, target_size, max_size, stride=config.network.IMAGE_STRIDE) + im_tensor = transform(im, config.network.PIXEL_MEANS) + processed_ims.append(im_tensor) + im_info = [im_tensor.shape[2], im_tensor.shape[3], im_scale] + new_rec['boxes'] = clip_boxes(np.round(roi_rec['boxes'].copy() * im_scale), im_info[:2]) + new_rec['im_info'] = im_info + processed_roidb.append(new_rec) + return processed_ims, processed_roidb + + +def get_segmentation_image(segdb, config): + """ + propocess image and return segdb + :param segdb: a list of segdb + :return: list of img as mxnet format 
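+             plus the matching (1, 1, H, W) segmentation ground-truth tensors and the updated segdb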
+ """ + num_images = len(segdb) + assert num_images > 0, 'No images' + processed_ims = [] + processed_segdb = [] + processed_seg_cls_gt = [] + for i in range(num_images): + seg_rec = segdb[i] + assert os.path.exists(seg_rec['image']), '%s does not exist'.format(seg_rec['image']) + im = np.array(cv2.imread(seg_rec['image'])) + + new_rec = seg_rec.copy() + + scale_ind = random.randrange(len(config.SCALES)) + target_size = config.SCALES[scale_ind][0] + max_size = config.SCALES[scale_ind][1] + im, im_scale = resize(im, target_size, max_size, stride=config.network.IMAGE_STRIDE) + im_tensor = transform(im, config.network.PIXEL_MEANS) + im_info = [im_tensor.shape[2], im_tensor.shape[3], im_scale] + new_rec['im_info'] = im_info + + seg_cls_gt = np.array(Image.open(seg_rec['seg_cls_path'])) + seg_cls_gt, seg_cls_gt_scale = resize( + seg_cls_gt, target_size, max_size, stride=config.network.IMAGE_STRIDE, interpolation=cv2.INTER_NEAREST) + seg_cls_gt_tensor = transform_seg_gt(seg_cls_gt) + + processed_ims.append(im_tensor) + processed_segdb.append(new_rec) + processed_seg_cls_gt.append(seg_cls_gt_tensor) + + return processed_ims, processed_seg_cls_gt, processed_segdb + +def resize(im, target_size, max_size, stride=0, interpolation = cv2.INTER_LINEAR): + """ + only resize input image to target size and return scale + :param im: BGR image input by opencv + :param target_size: one dimensional size (the short side) + :param max_size: one dimensional max size (the long side) + :param stride: if given, pad the image to designated stride + :param interpolation: if given, using given interpolation method to resize image + :return: + """ + im_shape = im.shape + im_size_min = np.min(im_shape[0:2]) + im_size_max = np.max(im_shape[0:2]) + #print "im_size_min,target_size",im_size_min,target_size + im_scale = float(target_size) / float(im_size_min) + # prevent bigger axis from being more than max_size: + if np.round(im_scale * im_size_max) > max_size: + im_scale = float(max_size) / float(im_size_max) + im = cv2.resize(im, None, None, fx=im_scale, fy=im_scale, interpolation=interpolation) + + if stride == 0: + return im, im_scale + else: + # pad to product of stride + im_height = int(np.ceil(im.shape[0] / float(stride)) * stride) + im_width = int(np.ceil(im.shape[1] / float(stride)) * stride) + im_channel = im.shape[2] + padded_im = np.zeros((im_height, im_width, im_channel)) + padded_im[:im.shape[0], :im.shape[1], :] = im + return padded_im, im_scale + +def transform(im, pixel_means): + """ + transform into mxnet tensor + substract pixel size and transform to correct format + :param im: [height, width, channel] in BGR + :param pixel_means: [B, G, R pixel means] + :return: [batch, channel, height, width] + """ + im_tensor = np.zeros((1, 3, im.shape[0], im.shape[1])) + for i in range(3): + im_tensor[0, i, :, :] = im[:, :, 2 - i] - pixel_means[2 - i] + return im_tensor + +def transform_seg_gt(gt): + """ + transform segmentation gt image into mxnet tensor + :param gt: [height, width, channel = 1] + :return: [batch, channel = 1, height, width] + """ + gt_tensor = np.zeros((1, 1, gt.shape[0], gt.shape[1])) + gt_tensor[0, 0, :, :] = gt[:, :] + + return gt_tensor + +def transform_inverse(im_tensor, pixel_means): + """ + transform from mxnet im_tensor to ordinary RGB image + im_tensor is limited to one image + :param im_tensor: [batch, channel, height, width] + :param pixel_means: [B, G, R pixel means] + :return: im [height, width, channel(RGB)] + """ + assert im_tensor.shape[0] == 1 + im_tensor = im_tensor.copy() + # put 
channel back + channel_swap = (0, 2, 3, 1) + im_tensor = im_tensor.transpose(channel_swap) + im = im_tensor[0] + assert im.shape[2] == 3 + im += pixel_means[[2, 1, 0]] + im = im.astype(np.uint8) + return im + +def tensor_vstack(tensor_list, pad=0): + """ + vertically stack tensors + :param tensor_list: list of tensor to be stacked vertically + :param pad: label to pad with + :return: tensor with max shape + """ + ndim = len(tensor_list[0].shape) + dtype = tensor_list[0].dtype + islice = tensor_list[0].shape[0] + dimensions = [] + first_dim = sum([tensor.shape[0] for tensor in tensor_list]) + dimensions.append(first_dim) + for dim in range(1, ndim): + dimensions.append(max([tensor.shape[dim] for tensor in tensor_list])) + if pad == 0: + all_tensor = np.zeros(tuple(dimensions), dtype=dtype) + elif pad == 1: + all_tensor = np.ones(tuple(dimensions), dtype=dtype) + else: + all_tensor = np.full(tuple(dimensions), pad, dtype=dtype) + if ndim == 1: + for ind, tensor in enumerate(tensor_list): + all_tensor[ind*islice:(ind+1)*islice] = tensor + elif ndim == 2: + for ind, tensor in enumerate(tensor_list): + all_tensor[ind*islice:(ind+1)*islice, :tensor.shape[1]] = tensor + elif ndim == 3: + for ind, tensor in enumerate(tensor_list): + all_tensor[ind*islice:(ind+1)*islice, :tensor.shape[1], :tensor.shape[2]] = tensor + elif ndim == 4: + for ind, tensor in enumerate(tensor_list): + all_tensor[ind*islice:(ind+1)*islice, :tensor.shape[1], :tensor.shape[2], :tensor.shape[3]] = tensor + else: + raise Exception('Sorry, unimplemented.') + return all_tensor diff --git a/lib/utils/image_processing.py b/lib/utils/image_processing.py new file mode 100644 index 0000000..49567f3 --- /dev/null +++ b/lib/utils/image_processing.py @@ -0,0 +1,91 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +import numpy as np +import cv2 + + +def resize(im, target_size, max_size): + """ + only resize input image to target size and return scale + :param im: BGR image input by opencv + :param target_size: one dimensional size (the short side) + :param max_size: one dimensional max size (the long side) + :return: + """ + im_shape = im.shape + im_size_min = np.min(im_shape[0:2]) + im_size_max = np.max(im_shape[0:2]) + im_scale = float(target_size) / float(im_size_min) + # prevent bigger axis from being more than max_size: + if np.round(im_scale * im_size_max) > max_size: + im_scale = float(max_size) / float(im_size_max) + im = cv2.resize(im, None, None, fx=im_scale, fy=im_scale, interpolation=cv2.INTER_LINEAR) + return im, im_scale + + +def transform(im, pixel_means, need_mean=False): + """ + transform into mxnet tensor + subtract pixel size and transform to correct format + :param im: [height, width, channel] in BGR + :param pixel_means: [[[R, G, B pixel means]]] + :return: [batch, channel, height, width] + """ + assert False, "shouldn't reach here." 
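+    # (note) the assert above deliberately disables this legacy helper; the image loaders
+    # in lib/utils/image.py use the transform() defined in that file instead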
+ im = im.copy() + im[:, :, (0, 1, 2)] = im[:, :, (2, 1, 0)] + im = im.astype(float) + if need_mean: + im -= pixel_means + im_tensor = im[np.newaxis, :] + # put channel first + channel_swap = (0, 3, 1, 2) + im_tensor = im_tensor.transpose(channel_swap) + return im_tensor + + +def transform_inverse(im_tensor, pixel_means): + """ + transform from mxnet im_tensor to ordinary RGB image + im_tensor is limited to one image + :param im_tensor: [batch, channel, height, width] + :param pixel_means: [[[R, G, B pixel means]]] + :return: im [height, width, channel(RGB)] + """ + assert im_tensor.shape[0] == 1 + im_tensor = im_tensor.copy() + # put channel back + channel_swap = (0, 2, 3, 1) + im_tensor = im_tensor.transpose(channel_swap) + im = im_tensor[0] + assert im.shape[2] == 3 + im += pixel_means + im = im.astype(np.uint8) + return im + + +def tensor_vstack(tensor_list, pad=0): + """ + vertically stack tensors + :param tensor_list: list of tensor to be stacked vertically + :param pad: label to pad with + :return: tensor with max shape + """ + ndim = len(tensor_list[0].shape) + if ndim == 1: + return np.hstack(tensor_list) + dimensions = [0] + for dim in range(1, ndim): + dimensions.append(max([tensor.shape[dim] for tensor in tensor_list])) + for ind, tensor in enumerate(tensor_list): + pad_shape = [(0, 0)] + for dim in range(1, ndim): + pad_shape.append((0, dimensions[dim] - tensor.shape[dim])) + tensor_list[ind] = np.lib.pad(tensor, pad_shape, 'constant', constant_values=pad) + all_tensor = np.vstack(tensor_list) + return all_tensor diff --git a/lib/utils/load_data.py b/lib/utils/load_data.py new file mode 100644 index 0000000..ab51374 --- /dev/null +++ b/lib/utils/load_data.py @@ -0,0 +1,91 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +import numpy as np +from dataset import * + + +def load_gt_roidb(dataset_name, image_set_name, root_path, dataset_path, result_path=None, + flip=False): + """ load ground truth roidb """ + # print(dataset_name) + # print(image_set_name) + # print(root_path) + # print(dataset_path) + imdb = eval(dataset_name)(image_set_name, root_path, dataset_path, result_path) + roidb = imdb.gt_roidb() + # print 'roidb 0', roidb[0] + if flip: + roidb = imdb.append_flipped_images(roidb) + #print (roidb) + #print('cccccccccccccccc') + return roidb + +def load_gt_roidb_poly(dataset_name, image_set_name, root_path, dataset_path, result_path=None, + flip=False): + """""" + imdb = eval(dataset_name)(image_set_name, root_path, dataset_path, result_path) + roidb = imdb.gt_roidb() + if flip: + roidb = imdb.append_flipped_images_poly(roidb) + return roidb + +def load_proposal_roidb(dataset_name, image_set_name, root_path, dataset_path, result_path=None, + proposal='rpn', append_gt=True, flip=False): + """ load proposal roidb (append_gt when training) """ + imdb = eval(dataset_name)(image_set_name, root_path, dataset_path, result_path) + + gt_roidb = imdb.gt_roidb() + roidb = eval('imdb.' 
+ proposal + '_roidb')(gt_roidb, append_gt) + if flip: + roidb = imdb.append_flipped_images(roidb) + return roidb + + +def merge_roidb(roidbs): + """ roidb are list, concat them together """ + roidb = roidbs[0] + for r in roidbs[1:]: + roidb.extend(r) + return roidb + + +def filter_roidb(roidb, config): + """ remove roidb entries without usable rois """ + + def is_valid(entry): + """ valid images have at least 1 fg or bg roi """ + overlaps = entry['max_overlaps'] + fg_inds = np.where(overlaps >= config.TRAIN.FG_THRESH)[0] + bg_inds = np.where((overlaps < config.TRAIN.BG_THRESH_HI) & (overlaps >= config.TRAIN.BG_THRESH_LO))[0] + valid = len(fg_inds) > 0 or len(bg_inds) > 0 + return valid + + num = len(roidb) + filtered_roidb = [entry for entry in roidb if is_valid(entry)] + num_after = len(filtered_roidb) + print 'filtered %d roidb entries: %d -> %d' % (num - num_after, num, num_after) + + return filtered_roidb + + +def load_gt_segdb(dataset_name, image_set_name, root_path, dataset_path, result_path=None, + flip=False): + """ load ground truth segdb """ + imdb = eval(dataset_name)(image_set_name, root_path, dataset_path, result_path) + segdb = imdb.gt_segdb() + if flip: + segdb = imdb.append_flipped_images_for_segmentation(segdb) + return segdb + + +def merge_segdb(segdbs): + """ segdb are list, concat them together """ + segdb = segdbs[0] + for r in segdbs[1:]: + segdb.extend(r) + return segdb diff --git a/lib/utils/load_model.py b/lib/utils/load_model.py new file mode 100644 index 0000000..982c0b4 --- /dev/null +++ b/lib/utils/load_model.py @@ -0,0 +1,66 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +import mxnet as mx + + +def load_checkpoint(prefix, epoch): + """ + Load model checkpoint from file. + :param prefix: Prefix of model name. + :param epoch: Epoch number of model we would like to load. + :return: (arg_params, aux_params) + arg_params : dict of str to NDArray + Model parameter, dict of name to NDArray of net's weights. + aux_params : dict of str to NDArray + Model parameter, dict of name to NDArray of net's auxiliary states. + """ + save_dict = mx.nd.load('%s-%04d.params' % (prefix, epoch)) + arg_params = {} + aux_params = {} + for k, v in save_dict.items(): + tp, name = k.split(':', 1) + if tp == 'arg': + arg_params[name] = v + if tp == 'aux': + aux_params[name] = v + return arg_params, aux_params + + +def convert_context(params, ctx): + """ + :param params: dict of str to NDArray + :param ctx: the context to convert to + :return: dict of str of NDArray with context ctx + """ + new_params = dict() + for k, v in params.items(): + new_params[k] = v.as_in_context(ctx) + return new_params + + +def load_param(prefix, epoch, convert=False, ctx=None, process=False): + """ + wrapper for load checkpoint + :param prefix: Prefix of model name. + :param epoch: Epoch number of model we would like to load. + :param convert: reference model should be converted to GPU NDArray first + :param ctx: if convert then ctx must be designated. 
+ :param process: model should drop any test + :return: (arg_params, aux_params) + """ + arg_params, aux_params = load_checkpoint(prefix, epoch) + if convert: + if ctx is None: + ctx = mx.cpu() + arg_params = convert_context(arg_params, ctx) + aux_params = convert_context(aux_params, ctx) + if process: + tests = [k for k in arg_params.keys() if '_test' in k] + for test in tests: + arg_params[test.replace('_test', '')] = arg_params.pop(test) + return arg_params, aux_params diff --git a/lib/utils/lr_scheduler.py b/lib/utils/lr_scheduler.py new file mode 100644 index 0000000..62409d5 --- /dev/null +++ b/lib/utils/lr_scheduler.py @@ -0,0 +1,67 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + + +import logging +from mxnet.lr_scheduler import LRScheduler + +class WarmupMultiFactorScheduler(LRScheduler): + """Reduce learning rate in factor at steps specified in a list + + Assume the weight has been updated by n times, then the learning rate will + be + + base_lr * factor^(sum((step/n)<=1)) # step is an array + + Parameters + ---------- + step: list of int + schedule learning rate after n updates + factor: float + the factor for reducing the learning rate + """ + def __init__(self, step, factor=1, warmup=False, warmup_lr=0, warmup_step=0): + super(WarmupMultiFactorScheduler, self).__init__() + assert isinstance(step, list) and len(step) >= 1 + for i, _step in enumerate(step): + if i != 0 and step[i] <= step[i-1]: + raise ValueError("Schedule step must be an increasing integer list") + if _step < 1: + raise ValueError("Schedule step must be greater or equal than 1 round") + if factor > 1.0: + raise ValueError("Factor must be no more than 1 to make lr reduce") + self.step = step + self.cur_step_ind = 0 + self.factor = factor + self.count = 0 + self.warmup = warmup + self.warmup_lr = warmup_lr + self.warmup_step = warmup_step + + def __call__(self, num_update): + """ + Call to schedule current learning rate + + Parameters + ---------- + num_update: int + the maximal number of updates applied to a weight. + """ + + # NOTE: use while rather than if (for continuing training via load_epoch) + if self.warmup and num_update < self.warmup_step: + return self.warmup_lr + while self.cur_step_ind <= len(self.step)-1: + if num_update > self.step[self.cur_step_ind]: + self.count = self.step[self.cur_step_ind] + self.cur_step_ind += 1 + self.base_lr *= self.factor + logging.info("Update[%d]: Change learning rate to %0.5e", + num_update, self.base_lr) + else: + return self.base_lr + return self.base_lr diff --git a/lib/utils/mask_coco2voc.py b/lib/utils/mask_coco2voc.py new file mode 100644 index 0000000..5283a49 --- /dev/null +++ b/lib/utils/mask_coco2voc.py @@ -0,0 +1,56 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yi Li +# -------------------------------------------------------- + +from skimage.draw import polygon +import numpy as np + +def segToMask( S, h, w ): + """ + Convert polygon segmentation to binary mask. 
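+    e.g. S = [[x0, y0, x1, y1, ...]] gives one flattened (x, y) vertex list per polygon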
+ :param S (float array) : polygon segmentation mask + :param h (int) : target mask height + :param w (int) : target mask width + :return: M (bool 2D array) : binary mask + """ + M = np.zeros((h,w), dtype=np.bool) + for s in S: + N = len(s) + rr, cc = polygon(np.array(s[1:N:2]).clip(max=h-1), \ + np.array(s[0:N:2]).clip(max=w-1)) # (y, x) + M[rr, cc] = 1 + return M + + +def decodeMask(R): + """ + Decode binary mask M encoded via run-length encoding. + :param R (object RLE) : run-length encoding of binary mask + :return: M (bool 2D array) : decoded binary mask + """ + N = len(R['counts']) + M = np.zeros( (R['size'][0]*R['size'][1], )) + n = 0 + val = 1 + for pos in range(N): + val = not val + for c in range(R['counts'][pos]): + R['counts'][pos] + M[n] = val + n += 1 + return M.reshape((R['size']), order='F') + +def mask_coco2voc(coco_masks, im_height, im_width): + voc_masks = np.zeros((len(coco_masks), im_height, im_width)) + for i, ann in enumerate(coco_masks): + if type(ann) == list: + # polygon + m = segToMask(ann, im_height, im_width) + else: + # rle + m = decodeMask(ann) + voc_masks[i,:,:]=m; + return voc_masks diff --git a/lib/utils/mask_voc2coco.py b/lib/utils/mask_voc2coco.py new file mode 100644 index 0000000..1d104cb --- /dev/null +++ b/lib/utils/mask_voc2coco.py @@ -0,0 +1,51 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yi Li +# -------------------------------------------------------- + +from skimage.draw import polygon +import numpy as np +import cv2 +from utils.tictoc import tic, toc +from dataset.pycocotools.mask import encode as encodeMask_c + +def encodeMask(M): + """ + Encode binary mask M using run-length encoding. + :param M (bool 2D array) : binary mask to encode + :return: R (object RLE) : run-length encoding of binary mask + """ + [h, w] = M.shape + M = M.flatten(order='F') + N = len(M) + counts_list = [] + pos = 0 + # counts + counts_list.append(1) + diffs = np.logical_xor(M[0:N - 1], M[1:N]) + for diff in diffs: + if diff: + pos += 1 + counts_list.append(1) + else: + counts_list[pos] += 1 + # if array starts from 1. 
start with 0 counts for 0 + if M[0] == 1: + counts_list = [0] + counts_list + return {'size': [h, w], + 'counts': counts_list, + } + +def mask_voc2coco(voc_masks, voc_boxes, im_height, im_width, binary_thresh = 0.4): + num_pred = len(voc_masks) + assert(num_pred==voc_boxes.shape[0]) + mask_img = np.zeros((im_height, im_width, num_pred), dtype=np.uint8, order='F') + for i in xrange(num_pred): + pred_box = np.round(voc_boxes[i, :4]).astype(int) + pred_mask = voc_masks[i] + pred_mask = cv2.resize(pred_mask.astype(np.float32), (pred_box[2] - pred_box[0] + 1, pred_box[3] - pred_box[1] + 1)) + mask_img[pred_box[1]:pred_box[3]+1, pred_box[0]:pred_box[2]+1, i] = pred_mask >= binary_thresh + coco_mask = encodeMask_c(mask_img) + return coco_mask diff --git a/lib/utils/roidb.py b/lib/utils/roidb.py new file mode 100644 index 0000000..4347842 --- /dev/null +++ b/lib/utils/roidb.py @@ -0,0 +1,45 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +""" +roidb +basic format [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped'] +extended ['image', 'max_classes', 'max_overlaps', 'bbox_targets'] +""" + +import cv2 +import numpy as np + +from bbox.bbox_regression import compute_bbox_regression_targets + + +def prepare_roidb(imdb, roidb, cfg): + """ + add image path, max_classes, max_overlaps to roidb + :param imdb: image database, provide path + :param roidb: roidb + :return: None + """ + print 'prepare roidb' + for i in range(len(roidb)): # image_index + roidb[i]['image'] = imdb.image_path_from_index(imdb.image_set_index[i]) + if cfg.TRAIN.ASPECT_GROUPING: + size = cv2.imread(roidb[i]['image']).shape + roidb[i]['height'] = size[0] + roidb[i]['width'] = size[1] + gt_overlaps = roidb[i]['gt_overlaps'].toarray() + max_overlaps = gt_overlaps.max(axis=1) + max_classes = gt_overlaps.argmax(axis=1) + roidb[i]['max_overlaps'] = max_overlaps + roidb[i]['max_classes'] = max_classes + + # background roi => background class + zero_indexes = np.where(max_overlaps == 0)[0] + assert all(max_classes[zero_indexes] == 0) + # foreground roi => foreground class + nonzero_indexes = np.where(max_overlaps > 0)[0] + assert all(max_classes[nonzero_indexes] != 0) diff --git a/lib/utils/save_model.py b/lib/utils/save_model.py new file mode 100644 index 0000000..180d4f9 --- /dev/null +++ b/lib/utils/save_model.py @@ -0,0 +1,25 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +import mxnet as mx + + +def save_checkpoint(prefix, epoch, arg_params, aux_params): + """Checkpoint the model data into file. + :param prefix: Prefix of model name. + :param epoch: The epoch number of the model. + :param arg_params: dict of str to NDArray + Model parameter, dict of name to NDArray of net's weights. + :param aux_params: dict of str to NDArray + Model parameter, dict of name to NDArray of net's auxiliary states. + :return: None + prefix-epoch.params will be saved for parameters. 
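+    e.g. save_checkpoint('output/rcnn_dota', 40, arg_params, aux_params)
+    writes 'output/rcnn_dota-0040.params' (illustrative prefix)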
+ """ + save_dict = {('arg:%s' % k) : v for k, v in arg_params.items()} + save_dict.update({('aux:%s' % k) : v for k, v in aux_params.items()}) + param_name = '%s-%04d.params' % (prefix, epoch) + mx.nd.save(param_name, save_dict) diff --git a/lib/utils/show_boxes.py b/lib/utils/show_boxes.py new file mode 100644 index 0000000..42b7ec2 --- /dev/null +++ b/lib/utils/show_boxes.py @@ -0,0 +1,32 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yi Li, Haocheng Zhang +# -------------------------------------------------------- + +import matplotlib.pyplot as plt +from random import random as rand +def show_boxes(im, dets, classes, scale = 1.0): + plt.cla() + plt.axis("off") + plt.imshow(im) + for cls_idx, cls_name in enumerate(classes): + cls_dets = dets[cls_idx] + for det in cls_dets: + bbox = det[:4] * scale + color = (rand(), rand(), rand()) + rect = plt.Rectangle((bbox[0], bbox[1]), + bbox[2] - bbox[0], + bbox[3] - bbox[1], fill=False, + edgecolor=color, linewidth=2.5) + plt.gca().add_patch(rect) + + if cls_dets.shape[1] == 5: + score = det[-1] + plt.gca().text(bbox[0], bbox[1], + '{:s} {:.3f}'.format(cls_name, score), + bbox=dict(facecolor=color, alpha=0.5), fontsize=9, color='white') + plt.show() + return im + diff --git a/lib/utils/show_masks.py b/lib/utils/show_masks.py new file mode 100644 index 0000000..0bee42d --- /dev/null +++ b/lib/utils/show_masks.py @@ -0,0 +1,39 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +import numpy as np +import matplotlib.pyplot as plt +import random +import cv2 + +def show_masks(im, dets, msks, show = True, thresh = 1e-3, scale = 1.0): + plt.cla() + plt.imshow(im) + for det, msk in zip(dets, msks): + color = (random.random(), random.random(), random.random()) # generate a random color + bbox = det[:4] * scale + cod = np.zeros(4).astype(int) + cod[0] = int(bbox[0]) + cod[1] = int(bbox[1]) + cod[2] = int(bbox[2]) + cod[3] = int(bbox[3]) + if im[cod[0]:cod[2], cod[1]:cod[3], 0].size > 0: + msk = cv2.resize(msk, im[cod[1]:cod[3], cod[0]:cod[2], 0].T.shape) + bimsk = msk > thresh + bimsk = bimsk.astype(int) + bimsk = np.repeat(bimsk[:, :, np.newaxis], 3, axis=2) + mskd = im[cod[1]:cod[3], cod[0]:cod[2], :] * bimsk + clmsk = np.ones(bimsk.shape) * bimsk + clmsk[:, :, 0] = clmsk[:, :, 0] * color[0] * 256; + clmsk[:, :, 1] = clmsk[:, :, 1] * color[1] * 256; + clmsk[:, :, 2] = clmsk[:, :, 2] * color[2] * 256; + im[cod[1]:cod[3], cod[0]:cod[2], :] = im[cod[1]:cod[3], cod[0]:cod[2], :] + 0.8 * clmsk - 0.8 * mskd + plt.imshow(im) + if(show): + plt.show() + return im + diff --git a/lib/utils/show_offset.py b/lib/utils/show_offset.py new file mode 100644 index 0000000..9fb07f6 --- /dev/null +++ b/lib/utils/show_offset.py @@ -0,0 +1,136 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Guodong Zhang +# -------------------------------------------------------- + +import matplotlib.pyplot as plt +import numpy as np + +def show_boxes_simple(bbox, color='r', lw=2): + rect = plt.Rectangle((bbox[0], bbox[1]), bbox[2] - bbox[0], + bbox[3] - 
bbox[1], fill=False, edgecolor=color, linewidth=lw) + plt.gca().add_patch(rect) + +def kernel_inv_map(vis_attr, target_point, map_h, map_w): + pos_shift = [vis_attr['dilation'] * 0 - vis_attr['pad'], + vis_attr['dilation'] * 1 - vis_attr['pad'], + vis_attr['dilation'] * 2 - vis_attr['pad']] + source_point = [] + for idx in range(vis_attr['filter_size']**2): + cur_source_point = np.array([target_point[0] + pos_shift[idx / 3], + target_point[1] + pos_shift[idx % 3]]) + if cur_source_point[0] < 0 or cur_source_point[1] < 0 \ + or cur_source_point[0] > map_h - 1 or cur_source_point[1] > map_w - 1: + continue + source_point.append(cur_source_point.astype('f')) + return source_point + +def offset_inv_map(source_points, offset): + for idx, _ in enumerate(source_points): + source_points[idx][0] += offset[2*idx] + source_points[idx][1] += offset[2*idx + 1] + return source_points + +def get_bottom_position(vis_attr, top_points, all_offset): + map_h = all_offset[0].shape[2] + map_w = all_offset[0].shape[3] + + for level in range(vis_attr['plot_level']): + source_points = [] + for idx, cur_top_point in enumerate(top_points): + cur_top_point = np.round(cur_top_point) + if cur_top_point[0] < 0 or cur_top_point[1] < 0 \ + or cur_top_point[0] > map_h-1 or cur_top_point[1] > map_w-1: + continue + cur_source_point = kernel_inv_map(vis_attr, cur_top_point, map_h, map_w) + cur_offset = np.squeeze(all_offset[level][:, :, int(cur_top_point[0]), int(cur_top_point[1])]) + cur_source_point = offset_inv_map(cur_source_point, cur_offset) + source_points = source_points + cur_source_point + top_points = source_points + return source_points + +def plot_according_to_point(vis_attr, im, source_points, map_h, map_w, color=[255,0,0]): + plot_area = vis_attr['plot_area'] + for idx, cur_source_point in enumerate(source_points): + y = np.round((cur_source_point[0] + 0.5) * im.shape[0] / map_h).astype('i') + x = np.round((cur_source_point[1] + 0.5) * im.shape[1] / map_w).astype('i') + + if x < 0 or y < 0 or x > im.shape[1]-1 or y > im.shape[0]-1: + continue + y = min(y, im.shape[0] - vis_attr['plot_area'] - 1) + x = min(x, im.shape[1] - vis_attr['plot_area'] - 1) + y = max(y, vis_attr['plot_area']) + x = max(x, vis_attr['plot_area']) + im[y-plot_area:y+plot_area+1, x-plot_area:x+plot_area+1, :] = np.tile( + np.reshape(color, (1, 1, 3)), (2*plot_area+1, 2*plot_area+1, 1) + ) + return im + + + +def show_dpsroi_offset(im, boxes, offset, classes, trans_std=0.1): + plt.cla + for idx, bbox in enumerate(boxes): + plt.figure(idx+1) + plt.axis("off") + plt.imshow(im) + + offset_w = np.squeeze(offset[idx, classes[idx]*2, :, :]) * trans_std + offset_h = np.squeeze(offset[idx, classes[idx]*2+1, :, :]) * trans_std + x1 = int(bbox[0]) + y1 = int(bbox[1]) + x2 = int(bbox[2]) + y2 = int(bbox[3]) + roi_width = x2-x1+1 + roi_height = y2-y1+1 + part_size = offset_w.shape[0] + bin_size_w = roi_width / part_size + bin_size_h = roi_height / part_size + show_boxes_simple(bbox, color='b') + for ih in range(part_size): + for iw in range(part_size): + sub_box = np.array([x1+iw*bin_size_w, y1+ih*bin_size_h, + x1+(iw+1)*bin_size_w, y1+(ih+1)*bin_size_h]) + sub_offset = offset_h[ih, iw] * np.array([0, 1, 0, 1]) * roi_height \ + + offset_w[ih, iw] * np.array([1, 0, 1, 0]) * roi_width + sub_box = sub_box + sub_offset + show_boxes_simple(sub_box) + plt.show() + +def show_dconv_offset(im, all_offset, step=[2, 2], filter_size=3, + dilation=2, pad=2, plot_area=2, plot_level=3): + vis_attr = {'filter_size': filter_size, 'dilation': dilation, 'pad': pad, + 
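+                # plot_level: number of stacked offset maps traced back through in get_bottom_position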
'plot_area': plot_area, 'plot_level': plot_level} + + map_h = all_offset[0].shape[2] + map_w = all_offset[0].shape[3] + + step_h = step[0] + step_w = step[1] + start_h = np.round(step_h / 2) + start_w = np.round(step_w / 2) + + plt.figure() + for im_h in range(start_h, map_h, step_h): + for im_w in range(start_w, map_w, step_w): + target_point = np.array([im_h, im_w]) + source_y = np.round(target_point[0] * im.shape[0] / map_h) + source_x = np.round(target_point[1] * im.shape[1] / map_w) + if source_y < plot_area or source_x < plot_area \ + or source_y >= im.shape[0] - plot_area or source_x >= im.shape[1] - plot_area: + continue + + cur_im = np.copy(im) + source_points = get_bottom_position(vis_attr, [target_point], all_offset) + cur_im = plot_according_to_point(vis_attr, cur_im, source_points, map_h, map_w) + cur_im[source_y-plot_area:source_y+plot_area+1, source_x-plot_area:source_x+plot_area+1, :] = \ + np.tile(np.reshape([0, 255, 0], (1, 1, 3)), (2*plot_area+1, 2*plot_area+1, 1)) + + + plt.axis("off") + plt.imshow(cur_im) + plt.show(block=False) + plt.pause(0.01) + plt.clf() diff --git a/lib/utils/symbol.py b/lib/utils/symbol.py new file mode 100644 index 0000000..f12b622 --- /dev/null +++ b/lib/utils/symbol.py @@ -0,0 +1,55 @@ +# -------------------------------------------------------- +# Deformable Convolutional Networks +# Copyright (c) 2017 Microsoft +# Licensed under The MIT License [see LICENSE for details] +# Written by Yuwen Xiong +# -------------------------------------------------------- + +import numpy as np +class Symbol: + def __init__(self): + self.arg_shape_dict = None + self.out_shape_dict = None + self.aux_shape_dict = None + self.sym = None + + @property + def symbol(self): + return self.sym + + def get_symbol(self, cfg, is_train=True): + """ + return a generated symbol, it also need to be assigned to self.sym + """ + raise NotImplementedError() + + def init_weights(self, cfg, arg_params, aux_params): + raise NotImplementedError() + + def get_msra_std(self, shape): + fan_in = float(shape[1]) + if len(shape) > 2: + fan_in *= np.prod(shape[2:]) + print(np.sqrt(2 / fan_in)) + return np.sqrt(2 / fan_in) + + def infer_shape(self, data_shape_dict): + # infer shape + arg_shape, out_shape, aux_shape = self.sym.infer_shape(**data_shape_dict) + self.arg_shape_dict = dict(zip(self.sym.list_arguments(), arg_shape)) + self.out_shape_dict = dict(zip(self.sym.list_outputs(), out_shape)) + self.aux_shape_dict = dict(zip(self.sym.list_auxiliary_states(), aux_shape)) + + def check_parameter_shapes(self, arg_params, aux_params, data_shape_dict, is_train=True): + for k in self.sym.list_arguments(): + if k in data_shape_dict or (False if is_train else 'label' in k): + continue + assert k in arg_params, k + ' not initialized' + assert arg_params[k].shape == self.arg_shape_dict[k], \ + 'shape inconsistent for ' + k + ' inferred ' + str(self.arg_shape_dict[k]) + ' provided ' + str( + arg_params[k].shape) + for k in self.sym.list_auxiliary_states(): + assert k in aux_params, k + ' not initialized' + assert aux_params[k].shape == self.aux_shape_dict[k], \ + 'shape inconsistent for ' + k + ' inferred ' + str(self.aux_shape_dict[k]) + ' provided ' + str( + aux_params[k].shape) diff --git a/lib/utils/tictoc.py b/lib/utils/tictoc.py new file mode 100644 index 0000000..caa3b96 --- /dev/null +++ b/lib/utils/tictoc.py @@ -0,0 +1,14 @@ +import time + +def tic(): + import time + global startTime_for_tictoc + startTime_for_tictoc = time.time() + return startTime_for_tictoc + +def toc(): + if 
'startTime_for_tictoc' in globals(): + endTime = time.time() + return endTime - startTime_for_tictoc + else: + return None \ No newline at end of file diff --git a/prepare_data/ImgSplit_multi_process.py b/prepare_data/ImgSplit_multi_process.py new file mode 100644 index 0000000..2dd36f0 --- /dev/null +++ b/prepare_data/ImgSplit_multi_process.py @@ -0,0 +1,301 @@ +""" +------------- +This is the multi-process version +""" +import os +import codecs +import numpy as np +import math +from dota_utils import GetFileFromThisRootDir +import cv2 +import shapely.geometry as shgeo +import dota_utils as util +import copy +from multiprocessing import Pool +from functools import partial +import time + +def choose_best_pointorder_fit_another(poly1, poly2): + """ + To make the two polygons best fit with each point + """ + x1 = poly1[0] + y1 = poly1[1] + x2 = poly1[2] + y2 = poly1[3] + x3 = poly1[4] + y3 = poly1[5] + x4 = poly1[6] + y4 = poly1[7] + combinate = [np.array([x1, y1, x2, y2, x3, y3, x4, y4]), np.array([x2, y2, x3, y3, x4, y4, x1, y1]), + np.array([x3, y3, x4, y4, x1, y1, x2, y2]), np.array([x4, y4, x1, y1, x2, y2, x3, y3])] + dst_coordinate = np.array(poly2) + distances = np.array([np.sum((coord - dst_coordinate)**2) for coord in combinate]) + sorted = distances.argsort() + return combinate[sorted[0]] + +def cal_line_length(point1, point2): + return math.sqrt( math.pow(point1[0] - point2[0], 2) + math.pow(point1[1] - point2[1], 2)) + + +def split_single_warp(name, split_base, rate, extent): + split_base.SplitSingle(name, rate, extent) + +class splitbase(): + def __init__(self, + basepath, + outpath, + code = 'utf-8', + gap=512, + subsize=1024, + thresh=0.7, + choosebestpoint=True, + ext = '.png', + padding=True, + num_process=8 + ): + """ + :param basepath: base path for dota data + :param outpath: output base path for dota data, + the basepath and outputpath have the similar subdirectory, 'images' and 'labelTxt' + :param code: encodeing format of txt file + :param gap: overlap between two patches + :param subsize: subsize of patch + :param thresh: the thresh determine whether to keep the instance if the instance is cut down in the process of split + :param choosebestpoint: used to choose the first point for the + :param ext: ext for the image format + :param padding: if to padding the images so that all the images have the same size + """ + self.basepath = basepath + self.outpath = outpath + self.code = code + self.gap = gap + self.subsize = subsize + self.slide = self.subsize - self.gap + self.thresh = thresh + self.imagepath = os.path.join(self.basepath, 'images') + self.labelpath = os.path.join(self.basepath, 'labelTxt') + self.outimagepath = os.path.join(self.outpath, 'images') + self.outlabelpath = os.path.join(self.outpath, 'labelTxt') + self.choosebestpoint = choosebestpoint + self.ext = ext + self.padding = padding + self.num_process = num_process + self.pool = Pool(num_process) + print('padding:', padding) + + # pdb.set_trace() + if not os.path.isdir(self.outpath): + os.mkdir(self.outpath) + if not os.path.isdir(self.outimagepath): + # pdb.set_trace() + os.mkdir(self.outimagepath) + if not os.path.isdir(self.outlabelpath): + os.mkdir(self.outlabelpath) + # pdb.set_trace() + ## point: (x, y), rec: (xmin, ymin, xmax, ymax) + # def __del__(self): + # self.f_sub.close() + ## grid --> (x, y) position of grids + def polyorig2sub(self, left, up, poly): + polyInsub = np.zeros(len(poly)) + for i in range(int(len(poly)/2)): + polyInsub[i * 2] = int(poly[i * 2] - left) + polyInsub[i * 2 + 1] = 
int(poly[i * 2 + 1] - up) + return polyInsub + + def calchalf_iou(self, poly1, poly2): + """ + It is not the iou on usual, the iou is the value of intersection over poly1 + """ + inter_poly = poly1.intersection(poly2) + inter_area = inter_poly.area + poly1_area = poly1.area + half_iou = inter_area / poly1_area + return inter_poly, half_iou + + def saveimagepatches(self, img, subimgname, left, up): + subimg = copy.deepcopy(img[up: (up + self.subsize), left: (left + self.subsize)]) + outdir = os.path.join(self.outimagepath, subimgname + self.ext) + h, w, c = np.shape(subimg) + if (self.padding): + outimg = np.zeros((self.subsize, self.subsize, 3)) + outimg[0:h, 0:w, :] = subimg + cv2.imwrite(outdir, outimg) + else: + cv2.imwrite(outdir, subimg) + + def GetPoly4FromPoly5(self, poly): + distances = [cal_line_length((poly[i * 2], poly[i * 2 + 1] ), (poly[(i + 1) * 2], poly[(i + 1) * 2 + 1])) for i in range(int(len(poly)/2 - 1))] + distances.append(cal_line_length((poly[0], poly[1]), (poly[8], poly[9]))) + pos = np.array(distances).argsort()[0] + count = 0 + outpoly = [] + while count < 5: + #print('count:', count) + if (count == pos): + outpoly.append((poly[count * 2] + poly[(count * 2 + 2)%10])/2) + outpoly.append((poly[(count * 2 + 1)%10] + poly[(count * 2 + 3)%10])/2) + count = count + 1 + elif (count == (pos + 1)%5): + count = count + 1 + continue + + else: + outpoly.append(poly[count * 2]) + outpoly.append(poly[count * 2 + 1]) + count = count + 1 + return outpoly + + def savepatches(self, resizeimg, objects, subimgname, left, up, right, down): + outdir = os.path.join(self.outlabelpath, subimgname + '.txt') + mask_poly = [] + imgpoly = shgeo.Polygon([(left, up), (right, up), (right, down), + (left, down)]) + with codecs.open(outdir, 'w', self.code) as f_out: + for obj in objects: + gtpoly = shgeo.Polygon([(obj['poly'][0], obj['poly'][1]), + (obj['poly'][2], obj['poly'][3]), + (obj['poly'][4], obj['poly'][5]), + (obj['poly'][6], obj['poly'][7])]) + if (gtpoly.area <= 0): + continue + inter_poly, half_iou = self.calchalf_iou(gtpoly, imgpoly) + + # print('writing...') + if (half_iou == 1): + polyInsub = self.polyorig2sub(left, up, obj['poly']) + outline = ' '.join(list(map(str, polyInsub))) + outline = outline + ' ' + obj['name'] + ' ' + str(obj['difficult']) + f_out.write(outline + '\n') + elif (half_iou > 0): + #elif (half_iou > self.thresh): + ## print('<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<') + inter_poly = shgeo.polygon.orient(inter_poly, sign=1) + out_poly = list(inter_poly.exterior.coords)[0: -1] + if len(out_poly) < 4: + continue + + out_poly2 = [] + for i in range(len(out_poly)): + out_poly2.append(out_poly[i][0]) + out_poly2.append(out_poly[i][1]) + + if (len(out_poly) == 5): + #print('==========================') + out_poly2 = self.GetPoly4FromPoly5(out_poly2) + elif (len(out_poly) > 5): + """ + if the cut instance is a polygon with points more than 5, we do not handle it currently + """ + continue + if (self.choosebestpoint): + out_poly2 = choose_best_pointorder_fit_another(out_poly2, obj['poly']) + + polyInsub = self.polyorig2sub(left, up, out_poly2) + + for index, item in enumerate(polyInsub): + if (item <= 1): + polyInsub[index] = 1 + elif (item >= self.subsize): + polyInsub[index] = self.subsize + outline = ' '.join(list(map(str, polyInsub))) + if (half_iou > self.thresh): + outline = outline + ' ' + obj['name'] + ' ' + str(obj['difficult']) + else: + ## if the left part is too small, label as '2' + outline = outline + ' ' + obj['name'] + ' ' + '2' + f_out.write(outline + '\n') + 
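+                    # (note) each line written above follows the DOTA label format:
+                    # x1 y1 x2 y2 x3 y3 x4 y4 category difficult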
#else: + # mask_poly.append(inter_poly) + self.saveimagepatches(resizeimg, subimgname, left, up) + + def SplitSingle(self, name, rate, extent): + """ + split a single image and ground truth + :param name: image name + :param rate: the resize scale for the image + :param extent: the image format + :return: + """ + img = cv2.imread(os.path.join(self.imagepath, name + extent)) + if np.shape(img) == (): + return + fullname = os.path.join(self.labelpath, name + '.txt') + objects = util.parse_dota_poly2(fullname) + for obj in objects: + obj['poly'] = list(map(lambda x:rate*x, obj['poly'])) + #obj['poly'] = list(map(lambda x: ([2 * y for y in x]), obj['poly'])) + + if (rate != 1): + resizeimg = cv2.resize(img, None, fx=rate, fy=rate, interpolation = cv2.INTER_CUBIC) + else: + resizeimg = img + outbasename = name + '__' + str(rate) + '__' + weight = np.shape(resizeimg)[1] + height = np.shape(resizeimg)[0] + + left, up = 0, 0 + while (left < weight): + if (left + self.subsize >= weight): + left = max(weight - self.subsize, 0) + up = 0 + while (up < height): + if (up + self.subsize >= height): + up = max(height - self.subsize, 0) + right = min(left + self.subsize, weight - 1) + down = min(up + self.subsize, height - 1) + subimgname = outbasename + str(left) + '___' + str(up) + # self.f_sub.write(name + ' ' + subimgname + ' ' + str(left) + ' ' + str(up) + '\n') + self.savepatches(resizeimg, objects, subimgname, left, up, right, down) + if (up + self.subsize >= height): + break + else: + up = up + self.slide + if (left + self.subsize >= weight): + break + else: + left = left + self.slide + + def splitdata(self, rate): + """ + :param rate: resize rate before cut + """ + imagelist = GetFileFromThisRootDir(self.imagepath) + imagenames = [util.custombasename(x) for x in imagelist if (util.custombasename(x) != 'Thumbs')] + if self.num_process == 1: + for name in imagenames: + self.SplitSingle(name, rate, self.ext) + else: + + # worker = partial(self.SplitSingle, rate=rate, extent=self.ext) + worker = partial(split_single_warp, split_base=self, rate=rate, extent=self.ext) + self.pool.map(worker, imagenames) + + def __getstate__(self): + self_dict = self.__dict__.copy() + del self_dict['pool'] + return self_dict + + def __setstate__(self, state): + self.__dict__.update(state) +if __name__ == '__main__': + # example usage of ImgSplit + # start = time.clock() + # split = splitbase(r'/data/dj/dota/val', + # r'/data/dj/dota/val_1024_debugmulti-process_refactor') # time cost 19s + # # split.splitdata(1) + # # split.splitdata(2) + # split.splitdata(0.4) + # + # elapsed = (time.clock() - start) + # print("Time used:", elapsed) + + split = splitbase(r'/home/dingjian/data/dota/val', + r'/home/dingjian/data/dota/valsplit', + gap=200, + subsize=1024, + num_process=8 + ) + split.splitdata(1) + diff --git a/prepare_data/SplitOnlyImage_multi_process.py b/prepare_data/SplitOnlyImage_multi_process.py new file mode 100644 index 0000000..3c7be49 --- /dev/null +++ b/prepare_data/SplitOnlyImage_multi_process.py @@ -0,0 +1,104 @@ +import os +import numpy as np +import cv2 +import copy +import dota_utils as util +from multiprocessing import Pool +from functools import partial + + +def split_single_warp(name, split_base, rate, extent): + split_base.SplitSingle(name, rate, extent) +class splitbase(): + def __init__(self, + srcpath, + dstpath, + gap=100, + subsize=1024, + ext='.png', + padding=True, + num_process=32): + self.srcpath = srcpath + self.outpath = dstpath + self.gap = gap + self.subsize = subsize + self.slide = 
self.subsize - self.gap + self.srcpath = srcpath + self.dstpath = dstpath + self.ext = ext + self.padding = padding + self.pool = Pool(num_process) + + if not os.path.isdir(self.outpath): + os.mkdir(self.outpath) + + def saveimagepatches(self, img, subimgname, left, up, ext='.png'): + subimg = copy.deepcopy(img[up: (up + self.subsize), left: (left + self.subsize)]) + outdir = os.path.join(self.dstpath, subimgname + ext) + h, w, c = np.shape(subimg) + if (self.padding): + outimg = np.zeros((self.subsize, self.subsize, 3)) + outimg[0:h, 0:w, :] = subimg + cv2.imwrite(outdir, outimg) + else: + cv2.imwrite(outdir, subimg) + + def SplitSingle(self, name, rate, extent): + img = cv2.imread(os.path.join(self.srcpath, name + extent)) + assert np.shape(img) != () + + if (rate != 1): + resizeimg = cv2.resize(img, None, fx=rate, fy=rate, interpolation=cv2.INTER_CUBIC) + else: + resizeimg = img + outbasename = name + '__' + str(rate) + '__' + + weight = np.shape(resizeimg)[1] + height = np.shape(resizeimg)[0] + + # if (max(weight, height) < self.subsize/2): + # return + + left, up = 0, 0 + while (left < weight): + if (left + self.subsize >= weight): + left = max(weight - self.subsize, 0) + up = 0 + while (up < height): + if (up + self.subsize >= height): + up = max(height - self.subsize, 0) + subimgname = outbasename + str(left) + '___' + str(up) + self.saveimagepatches(resizeimg, subimgname, left, up) + if (up + self.subsize >= height): + break + else: + up = up + self.slide + if (left + self.subsize >= weight): + break + else: + left = left + self.slide + + def splitdata(self, rate): + + imagelist = util.GetFileFromThisRootDir(self.srcpath) + imagenames = [util.custombasename(x) for x in imagelist if (util.custombasename(x) != 'Thumbs')] + + # worker = partial(self.SplitSingle, rate=rate, extent=self.ext) + worker = partial(split_single_warp, split_base=self, rate=rate, extent=self.ext) + self.pool.map(worker, imagenames) + # + # for name in imagenames: + # self.SplitSingle(name, rate, self.ext) + def __getstate__(self): + self_dict = self.__dict__.copy() + del self_dict['pool'] + return self_dict + + def __setstate__(self, state): + self.__dict__.update(state) + +if __name__ == '__main__': + split = splitbase(r'/home/dingjian/data/dota/val/images', + r'/home/dingjian/data/dota/valsplit', + num_process=32) + split.splitdata(1) \ No newline at end of file diff --git a/prepare_data/__init__.py b/prepare_data/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/prepare_data/dota_utils.py b/prepare_data/dota_utils.py new file mode 100644 index 0000000..cab48ab --- /dev/null +++ b/prepare_data/dota_utils.py @@ -0,0 +1,259 @@ +import sys +import codecs +import numpy as np +import shapely.geometry as shgeo +import os +import re +import math +# import polyiou +""" + some basic functions which are useful for processing DOTA data +""" + +wordname_15 = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court', + 'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter'] + +def custombasename(fullname): + return os.path.basename(os.path.splitext(fullname)[0]) + +def GetFileFromThisRootDir(dir,ext = None): + allfiles = [] + needExtFilter = (ext != None) + for root,dirs,files in os.walk(dir): + for filespath in files: + filepath = os.path.join(root, filespath) + extension = os.path.splitext(filepath)[1][1:] + if needExtFilter and extension in ext: + allfiles.append(filepath) + elif not 
needExtFilter: + allfiles.append(filepath) + return allfiles + +def TuplePoly2Poly(poly): + outpoly = [poly[0][0], poly[0][1], + poly[1][0], poly[1][1], + poly[2][0], poly[2][1], + poly[3][0], poly[3][1] + ] + return outpoly + +def parse_dota_poly(filename): + """ + parse the dota ground truth in the format: + [(x1, y1), (x2, y2), (x3, y3), (x4, y4)] + """ + objects = [] + #print('filename:', filename) + f = [] + if (sys.version_info >= (3, 5)): + fd = open(filename, 'r') + f = fd + elif (sys.version_info >= (2, 7)): + fd = codecs.open(filename, 'r') + f = fd + # count = 0 + while True: + line = f.readline() + # count = count + 1 + # if count < 2: + # continue + if line: + splitlines = line.strip().split(' ') + object_struct = {} + ### clear the wrong name after checking all the data + #if (len(splitlines) >= 9) and (splitlines[8] in classname): + if (len(splitlines) < 9): + continue + if (len(splitlines) >= 9): + object_struct['name'] = splitlines[8] + if (len(splitlines) == 9): + object_struct['difficult'] = '0' + elif (len(splitlines) >= 10): + # if splitlines[9] == '1': + # if (splitlines[9] == 'tr'): + # object_struct['difficult'] = '1' + # else: + object_struct['difficult'] = splitlines[9] + # else: + # object_struct['difficult'] = 0 + object_struct['poly'] = [(float(splitlines[0]), float(splitlines[1])), + (float(splitlines[2]), float(splitlines[3])), + (float(splitlines[4]), float(splitlines[5])), + (float(splitlines[6]), float(splitlines[7])) + ] + gtpoly = shgeo.Polygon(object_struct['poly']) + object_struct['area'] = gtpoly.area + # poly = list(map(lambda x:np.array(x), object_struct['poly'])) + # object_struct['long-axis'] = max(distance(poly[0], poly[1]), distance(poly[1], poly[2])) + # object_struct['short-axis'] = min(distance(poly[0], poly[1]), distance(poly[1], poly[2])) + # if (object_struct['long-axis'] < 15): + # object_struct['difficult'] = '1' + # global small_count + # small_count = small_count + 1 + objects.append(object_struct) + else: + break + return objects + +def parse_dota_poly2(filename): + """ + parse the dota ground truth in the format: + [x1, y1, x2, y2, x3, y3, x4, y4] + """ + objects = parse_dota_poly(filename) + for obj in objects: + obj['poly'] = TuplePoly2Poly(obj['poly']) + obj['poly'] = list(map(int, obj['poly'])) + return objects + +def parse_dota_rec(filename): + """ + parse the dota ground truth in the bounding box format: + "xmin, ymin, xmax, ymax" + """ + objects = parse_dota_poly(filename) + for obj in objects: + poly = obj['poly'] + bbox = dots4ToRec4(poly) + obj['bndbox'] = bbox + return objects +## bounding box conversion for various formats + +def dots4ToRec4(poly): + xmin, xmax, ymin, ymax = min(poly[0][0], min(poly[1][0], min(poly[2][0], poly[3][0]))), \ + max(poly[0][0], max(poly[1][0], max(poly[2][0], poly[3][0]))), \ + min(poly[0][1], min(poly[1][1], min(poly[2][1], poly[3][1]))), \ + max(poly[0][1], max(poly[1][1], max(poly[2][1], poly[3][1]))) + return xmin, ymin, xmax, ymax +def dots4ToRec8(poly): + xmin, ymin, xmax, ymax = dots4ToRec4(poly) + return xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax + #return dots2ToRec8(dots4ToRec4(poly)) +def dots2ToRec8(rec): + xmin, ymin, xmax, ymax = rec[0], rec[1], rec[2], rec[3] + return xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax + +def groundtruth2Task1(srcpath, dstpath): + filelist = GetFileFromThisRootDir(srcpath) + # names = [custombasename(x.strip())for x in filelist] + filedict = {} + for cls in wordname_15: + fd = open(os.path.join(dstpath, 'Task1_') + cls + r'.txt', 'w') + filedict[cls] = fd 
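+ # Each split image is named <origin>__<rate>__<left>___<up>, so the regex below recovers the resize rate + # from the patch name and maps it to a fixed pseudo-confidence (0.5 -> 1, 1 -> 0.8, 2 -> 0.6), + # while instances whose difficult flag is '2' (heavily cut during splitting) are skipped.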
+ for filepath in filelist: + objects = parse_dota_poly2(filepath) + + subname = custombasename(filepath) + pattern2 = re.compile(r'__([\d+\.]+)__\d+___') + rate = re.findall(pattern2, subname)[0] + + for obj in objects: + category = obj['name'] + difficult = obj['difficult'] + poly = obj['poly'] + if difficult == '2': + continue + if rate == '0.5': + outline = custombasename(filepath) + ' ' + '1' + ' ' + ' '.join(map(str, poly)) + elif rate == '1': + outline = custombasename(filepath) + ' ' + '0.8' + ' ' + ' '.join(map(str, poly)) + elif rate == '2': + outline = custombasename(filepath) + ' ' + '0.6' + ' ' + ' '.join(map(str, poly)) + + filedict[category].write(outline + '\n') + +def Task2groundtruth_poly(srcpath, dstpath): + thresh = 0.1 + filedict = {} + Tasklist = GetFileFromThisRootDir(srcpath, '.txt') + + for Taskfile in Tasklist: + idname = custombasename(Taskfile).split('_')[-1] + # idname = datamap_inverse[idname] + f = open(Taskfile, 'r') + lines = f.readlines() + for line in lines: + if len(line) == 0: + continue + # print('line:', line) + splitline = line.strip().split(' ') + filename = splitline[0] + confidence = splitline[1] + bbox = splitline[2:] + if float(confidence) > thresh: + if filename not in filedict: + # filedict[filename] = codecs.open(os.path.join(dstpath, filename + '.txt'), 'w', 'utf_16') + filedict[filename] = codecs.open(os.path.join(dstpath, filename + '.txt'), 'w') + # poly = util.dots2ToRec8(bbox) + poly = bbox + # filedict[filename].write(' '.join(poly) + ' ' + idname + '_' + str(round(float(confidence), 2)) + '\n') + # print('idname:', idname) + + # filedict[filename].write(' '.join(poly) + ' ' + idname + '_' + str(round(float(confidence), 2)) + '\n') + + filedict[filename].write(' '.join(poly) + ' ' + idname + '\n') + + +def polygonToRotRectangle(bbox): + """ + :param bbox: The polygon stored in format [x1, y1, x2, y2, x3, y3, x4, y4] + :return: Rotated Rectangle in format [cx, cy, w, h, theta] + """ + bbox = np.array(bbox,dtype=np.float32) + bbox = np.reshape(bbox,newshape=(2,4),order='F') + angle = math.atan2(-(bbox[0,1]-bbox[0,0]),bbox[1,1]-bbox[1,0]) + + center = [[0],[0]] + + for i in range(4): + center[0] += bbox[0,i] + center[1] += bbox[1,i] + + center = np.array(center,dtype=np.float32)/4.0 + + R = np.array([[math.cos(angle), -math.sin(angle)], [math.sin(angle), math.cos(angle)]], dtype=np.float32) + + normalized = np.matmul(R.transpose(),bbox-center) + + xmin = np.min(normalized[0,:]) + xmax = np.max(normalized[0,:]) + ymin = np.min(normalized[1,:]) + ymax = np.max(normalized[1,:]) + + w = xmax - xmin + 1 + h = ymax - ymin + 1 + + return [float(center[0]),float(center[1]),w,h,angle] + +def cal_line_length(point1, point2): + return math.sqrt( math.pow(point1[0] - point2[0], 2) + math.pow(point1[1] - point2[1], 2)) + +def get_best_begin_point(coordinate): + x1 = coordinate[0][0] + y1 = coordinate[0][1] + x2 = coordinate[1][0] + y2 = coordinate[1][1] + x3 = coordinate[2][0] + y3 = coordinate[2][1] + x4 = coordinate[3][0] + y4 = coordinate[3][1] + xmin = min(x1, x2, x3, x4) + ymin = min(y1, y2, y3, y4) + xmax = max(x1, x2, x3, x4) + ymax = max(y1, y2, y3, y4) + combinate = [[[x1, y1], [x2, y2], [x3, y3], [x4, y4]], [[x2, y2], [x3, y3], [x4, y4], [x1, y1]], + [[x3, y3], [x4, y4], [x1, y1], [x2, y2]], [[x4, y4], [x1, y1], [x2, y2], [x3, y3]]] + dst_coordinate = [[xmin, ymin], [xmax, ymin], [xmax, ymax], [xmin, ymax]] + force = 100000000.0 + force_flag = 0 + for i in range(4): + temp_force = cal_line_length(combinate[i][0], dst_coordinate[0]) + 
cal_line_length(combinate[i][1], + dst_coordinate[ + 1]) + cal_line_length( + combinate[i][2], dst_coordinate[2]) + cal_line_length(combinate[i][3], dst_coordinate[3]) + if temp_force < force: + force = temp_force + force_flag = i + if force_flag != 0: + print("choose one direction!") + return combinate[force_flag] diff --git a/prepare_data/prepare_data.py b/prepare_data/prepare_data.py new file mode 100644 index 0000000..91447d6 --- /dev/null +++ b/prepare_data/prepare_data.py @@ -0,0 +1,352 @@ +import os +import dota_utils as util +import SplitOnlyImage_multi_process as SplitOnlyImage_multi_process +import ImgSplit_multi_process as ImgSplit_multi_process +import argparse +import shutil +from multiprocessing import Pool +import numpy as np +from functools import partial +import cv2 + +def parse_args(): + parser = argparse.ArgumentParser(description='Prepare data') + parser.add_argument('--data_path', help='the root path that stores the DOTA data') + parser.add_argument('--num_process', type=int, help='the number of processes used to prepare the data') + args = parser.parse_args() + + return args + +def filecopy_single(path_tuple): + srcdir, dstdir = path_tuple[0], path_tuple[1] + if os.path.exists(srcdir): + shutil.copyfile(srcdir, dstdir) + +def filecopy(srcpath, dstpath, filenames, extent, num_process=32): + path_pair_list = [] + for name in filenames: + srcdir = os.path.join(srcpath, name + extent) + dstdir = os.path.join(dstpath, name + extent) + path_pair_list.append((srcdir, dstdir)) + + copy_pool = Pool(num_process) + copy_pool.map(filecopy_single, path_pair_list) + +def filecopy_v2(srcpath, dstpath, num_process=32): + filenames = util.GetFileFromThisRootDir(srcpath) + filenames = [os.path.basename(x.strip()) for x in filenames] + path_pair_list = [] + for name in filenames: + srcdir = os.path.join(srcpath, name) + dstdir = os.path.join(dstpath, name) + path_pair_list.append((srcdir, dstdir)) + + copy_pool = Pool(num_process) + copy_pool.map(filecopy_single, path_pair_list) + +def filemove_single(path_tuple): + srcdir, dstdir = path_tuple[0], path_tuple[1] + if os.path.exists(srcdir): + shutil.move(srcdir, dstdir) + +def filemove(srcpath, dstpath, filenames, extent, num_process=32): + path_pair_list = [] + for name in filenames: + srcdir = os.path.join(srcpath, name + extent) + dstdir = os.path.join(dstpath, name + extent) + path_pair_list.append((srcdir, dstdir)) + + move_pool = Pool(num_process) + move_pool.map(filemove_single, path_pair_list) + +def filemove_v2(srcpath, dstpath, extent, num_process=32): + filelist = util.GetFileFromThisRootDir(srcpath) + filenames = [util.custombasename(x.strip()) for x in filelist] + print('srcpath: ', srcpath) + print('num: ', len(filenames)) + filemove(srcpath, dstpath, filenames, extent, num_process) + + +def extract_largesize_index(labelpath): + filenames = util.GetFileFromThisRootDir(labelpath) + large_size_index = [] + for name in filenames: + objs = util.parse_dota_poly(name) + flag = 0 + for obj in objs: + poly = np.array(obj['poly']) + xmin, ymin, xmax, ymax = np.min(poly[:, 0]), np.min(poly[:, 1]), np.max(poly[:, 0]), np.max(poly[:, 1]) + w = xmax - xmin + h = ymax - ymin + max_side = max(w, h) + if max_side > 400: + flag = 1 + break + if flag: + large_size_index.append(util.custombasename(name)) + # print('index:', large_size_index) + # print('len:', len(large_size_index)) + + return large_size_index + +def rotate_matrix(theta): + return np.array([[np.cos(theta), np.sin(theta)], + [-np.sin(theta), np.cos(theta)]]) + +def rotate_single_run(name, 
srcpath, dstpath): + """ + only support 0, 90, 180, 270 now + :param img: + :param boxes: + :param angle: + :return: + """ + + src_imgpath = os.path.join(srcpath, 'images') + dst_imgpath = os.path.join(dstpath, 'images') + + src_labelTxt = os.path.join(srcpath, 'labelTxt') + dst_labelTxt = os.path.join(dstpath, 'labelTxt') + + objs = util.parse_dota_poly2(os.path.join(src_labelTxt, name + '.txt')) + img = cv2.imread(os.path.join(src_imgpath, name + '.png')) + angle = [np.pi / 2, np.pi, np.pi/2 * 3] + + img_90 = np.rot90(img, 1) + img_180 = np.rot90(img, 2) + img_270 = np.rot90(img, 3) + + cv2.imwrite(os.path.join(dst_imgpath, name + '_90.png'), img_90) + cv2.imwrite(os.path.join(dst_imgpath, name + '_180.png'), img_180) + cv2.imwrite(os.path.join(dst_imgpath, name + '_270.png'), img_270) + + h, w, c = img.shape + # print('h:', h, 'w:', w, 'c:', c) + + angles = [np.pi/2, np.pi, np.pi/2 * 3] + + rotate_90 = rotate_matrix(np.pi/2) + rotate_180 = rotate_matrix(np.pi) + rotate_270 = rotate_matrix(np.pi/2 * 3) + + + rotate_90_polys = [] + rotate_180_polys = [] + rotate_270_polys = [] + + for obj in objs: + poly = np.array(obj['poly']) + poly = np.reshape(poly, newshape=(2, 4), order='F') + centered_poly = poly - np.array([[w/2.], [h/2.]]) + rotated_poly_90 = np.matmul(rotate_90, centered_poly) + np.array([[h/2.], [w/2.]]) + rotated_poly_180 = np.matmul(rotate_180, centered_poly)+ np.array([[w/2.], [h/2.]]) + rotated_poly_270 = np.matmul(rotate_270, centered_poly) + np.array([[h/2.], [w/2.]]) + + rotate_90_polys.append(np.reshape(rotated_poly_90, newshape=(8), order='F')) + rotate_180_polys.append(np.reshape(rotated_poly_180, newshape=(8), order='F')) + rotate_270_polys.append(np.reshape(rotated_poly_270, newshape=(8), order='F')) + + with open(os.path.join(dst_labelTxt, name + '_90.txt'), 'w') as f_out: + for index, poly in enumerate(rotate_90_polys): + cls = objs[index]['name'] + diff =objs[index]['difficult'] + outline = ' '.join(map(str, list(poly))) + ' ' + cls + ' ' + diff + f_out.write(outline + '\n') + + with open(os.path.join(dst_labelTxt, name + '_180.txt'), 'w') as f_out: + for index, poly in enumerate(rotate_180_polys): + cls = objs[index]['name'] + diff =objs[index]['difficult'] + outline = ' '.join(map(str, list(poly))) + ' ' + cls + ' ' + diff + f_out.write(outline + '\n') + + with open(os.path.join(dst_labelTxt, name + '_270.txt'), 'w') as f_out: + for index, poly in enumerate(rotate_270_polys): + cls = objs[index]['name'] + diff =objs[index]['difficult'] + outline = ' '.join(map(str, list(poly))) + ' ' + cls + ' ' + diff + f_out.write(outline + '\n') + +def rotate_augment(srcpath, dstpath): + + pool = Pool(32) + imgnames = util.GetFileFromThisRootDir(os.path.join(srcpath, 'images')) + names = [util.custombasename(x) for x in imgnames] + + dst_imgpath = os.path.join(dstpath, 'images') + dst_labelTxt = os.path.join(dstpath, 'labelTxt') + + if not os.path.exists(dst_imgpath): + os.makedirs(dst_imgpath) + + if not os.path.exists(dst_labelTxt): + os.makedirs(dst_labelTxt) + + rotate_fun = partial(rotate_single_run, srcpath=srcpath, dstpath=dstpath) + + pool.map(rotate_fun, names) + +def prepare(): + args = parse_args() + data_root_path = args.data_path + + train_path = os.path.join(data_root_path, 'train') + val_path = os.path.join(data_root_path, 'val') + test_path = os.path.join(data_root_path, 'test') + + if not os.path.exists(os.path.join(data_root_path, 'trainval_large')): + os.makedirs(os.path.join(data_root_path, 'trainval_large')) + if not 
os.path.exists(os.path.join(data_root_path, 'trainval_large', 'images')): + os.makedirs(os.path.join(data_root_path, 'trainval_large', 'images')) + if not os.path.exists(os.path.join(data_root_path, 'trainval_large', 'labelTxt')): + os.makedirs(os.path.join(data_root_path, 'trainval_large', 'labelTxt')) + + if not os.path.exists(os.path.join(data_root_path, 'trainval1024_1')): + os.makedirs(os.path.join(data_root_path, 'trainval1024_1')) + + split_train = ImgSplit_multi_process.splitbase(train_path, + os.path.join(data_root_path, 'trainval1024_1'), + gap=200, + subsize=1024, + num_process=args.num_process + ) + split_train.splitdata(1) + + split_val = ImgSplit_multi_process.splitbase(val_path, + os.path.join(data_root_path, 'trainval1024_1'), + gap=200, + subsize=1024, + num_process=args.num_process + ) + split_val.splitdata(1) + + # extract train images containing large instances + train_large_names = extract_largesize_index(os.path.join(data_root_path, 'train', 'labelTxt')) + filecopy(os.path.join(data_root_path, 'train', 'labelTxt'), + os.path.join(data_root_path, 'trainval_large', 'labelTxt'), + train_large_names, + '.txt', + num_process=args.num_process) + filecopy(os.path.join(data_root_path, 'train', 'images'), + os.path.join(data_root_path, 'trainval_large', 'images'), + train_large_names, + '.png', + num_process=args.num_process) + + # extract val images containing large instances + val_large_names = extract_largesize_index(os.path.join(data_root_path, 'val', 'labelTxt')) + filecopy(os.path.join(data_root_path, 'val', 'labelTxt'), + os.path.join(data_root_path, 'trainval_large', 'labelTxt'), + val_large_names, + '.txt', + num_process=args.num_process) + filecopy(os.path.join(data_root_path, 'val', 'images'), + os.path.join(data_root_path, 'trainval_large', 'images'), + val_large_names, + '.png', + num_process=args.num_process) + + # split the images containing large-size instances + if not os.path.exists(os.path.join(data_root_path, 'trainval_large_1024_0.4')): + os.makedirs(os.path.join(data_root_path, 'trainval_large_1024_0.4')) + split_trainval_large = ImgSplit_multi_process.splitbase(os.path.join(data_root_path, 'trainval_large'), + os.path.join(data_root_path, 'trainval_large_1024_0.4'), + gap=512, + subsize=1024, + num_process=args.num_process) + split_trainval_large.splitdata(0.4) + + # rotation augmentation for images containing large-size instances + rotate_augment(os.path.join(data_root_path, 'trainval_large_1024_0.4'), + os.path.join(data_root_path, 'trainval_large_1024_0.4_rotate')) + + # copy files to images and labelTxt + if not os.path.exists(os.path.join(data_root_path, 'images')): + os.makedirs(os.path.join(data_root_path, 'images')) + if not os.path.exists(os.path.join(data_root_path, 'labelTxt')): + os.makedirs(os.path.join(data_root_path, 'labelTxt')) + + filemove_v2(os.path.join(data_root_path, 'trainval1024_1', 'images'), + os.path.join(data_root_path, 'images'), + '.png', + num_process=args.num_process + ) + filemove_v2(os.path.join(data_root_path, 'trainval1024_1', 'labelTxt'), + os.path.join(data_root_path, 'labelTxt'), + '.txt', + num_process=args.num_process + ) + + filemove_v2(os.path.join(data_root_path, 'trainval_large_1024_0.4', 'images'), + os.path.join(data_root_path, 'images'), + '.png', + num_process=args.num_process + ) + filemove_v2(os.path.join(data_root_path, 'trainval_large_1024_0.4', 'labelTxt'), + os.path.join(data_root_path, 'labelTxt'), + '.txt', + num_process=args.num_process + ) + + filemove_v2(os.path.join(data_root_path, 'trainval_large_1024_0.4_rotate', 
'images'), + os.path.join(data_root_path, 'images'), + '.png', + num_process=args.num_process + ) + filemove_v2(os.path.join(data_root_path, 'trainval_large_1024_0.4_rotate', 'labelTxt'), + os.path.join(data_root_path, 'labelTxt'), + '.txt', + num_process=args.num_process + ) + + train_without_balance = util.GetFileFromThisRootDir(os.path.join(data_root_path, 'labelTxt')) + train_without_balance_names = [util.custombasename(x.strip()) for x in train_without_balance] + + # data balance + with open('train_balance_extend.txt', 'r') as f_in: + train_balance_names = f_in.readlines() + train_balance_names = [x.strip() for x in train_balance_names] + train_names = train_without_balance_names + train_balance_names + with open(os.path.join(data_root_path, 'train.txt'), 'w') as f_out: + for index, name in enumerate(train_names): + if index == (len(train_names) - 1): + f_out.write(name) + else: + f_out.write(name + '\n') + + # prepare test data + if not os.path.exists(os.path.join(data_root_path, 'test1024')): + os.makedirs(os.path.join(data_root_path, 'test1024')) + + split_test = SplitOnlyImage_multi_process.splitbase(os.path.join(test_path, 'images'), + os.path.join(data_root_path, 'test1024', 'images'), + gap=512, + subsize=1024, + num_process=args.num_process + ) + split_test.splitdata(1) + split_test.splitdata(0.5) + + test_names = util.GetFileFromThisRootDir(os.path.join(data_root_path, 'test1024', 'images')) + test_names = [util.custombasename(x.strip()) for x in test_names] + + with open(os.path.join(data_root_path, 'test.txt'), 'w') as f_out: + for index, name in enumerate(test_names): + if index == (len(test_names) - 1): + f_out.write(name) + else: + f_out.write(name + '\n') + + filemove_v2(os.path.join(data_root_path, 'test1024', 'images'), + os.path.join(data_root_path, 'images'), + '.png', + num_process=args.num_process) + + shutil.rmtree(os.path.join(data_root_path, r'trainval_large_1024_0.4')) + shutil.rmtree(os.path.join(data_root_path, r'trainval_large_1024_0.4_rotate')) + shutil.rmtree(os.path.join(data_root_path, r'test1024')) + shutil.rmtree(os.path.join(data_root_path, r'trainval1024_1')) + shutil.rmtree(os.path.join(data_root_path, r'trainval_large')) + +if __name__ == '__main__': + prepare() diff --git a/prepare_data/prepare_data.sh b/prepare_data/prepare_data.sh new file mode 100644 index 0000000..befbc89 --- /dev/null +++ b/prepare_data/prepare_data.sh @@ -0,0 +1,2 @@ +#!/usr/bin/env bash +python prepare_data.py --data_path /data/dj/dota_prepare_test --num_process 32 \ No newline at end of file diff --git a/prepare_data/train_balance_extend.txt b/prepare_data/train_balance_extend.txt new file mode 100644 index 0000000..880ce15 --- /dev/null +++ b/prepare_data/train_balance_extend.txt @@ -0,0 +1,10723 @@ +P2114__1__0___0 +P0791__1__107___0 +P2240__0.4__0___0_90 +P2240__0.4__0___0_270 +P0815__1__0___0 +P1570__1__2976___3940 +P2131__1__280___0 +P1577__1__824___824 +P1410__0.4__1024___1357_90 +P1730__1__2472___824 +P0510__1__87___0 +P0029__1__1648___824 +P0021__0.4__0___270 +P2197__1__0___0 +P0806__0.4__0___0_90 +P1586__1__2472___1648 +P0091__1__0___114 +P1743__1__824___0 +P1571__1__824___0 +P1732__1__0___1648 +P1401__1__0___2507 +P1740__1__2472___2976 +P0050__1__0___1648 +P0093__1__0___0 +P0856__1__824___0 +P1410__0.4__1024___512_270 +P1505__0.4__27___0 +P1738__1__2472___2976 +P1570__1__1648___1648 +P1876__1__0___573 +P1962__0.4__0___0_180 +P2768__1__437___1333 +P1658__1__824___0 +P0770__0.4__0___0 +P1577__1__824___0 +P1584__1__1648___3296 +P2011__0.4__0___0 +P0146__1__0___860 
+P0044__0.4__0___0_90 +P0044__0.4__0___0_270 +P1738__0.4__576___576 +P1633__1__0___2472 +P0806__1__0___93 +P0050__1__0___824 +P0021__1__824___2210 +P2168__1__824___150 +P1931__0.4__0___0_90 +P0117__1__0___0 +P1636__1__1648___2976 +P1410__0.4__1024___0_90 +P0131__0.4__0___0_90 +P0856__0.4__0___0_90 +P1410__0.4__1110___512_270 +P1550__1__2976___824 +P0047__0.4__0___0 +P1551__1__2472___2472 +P2593__0.4__0___0_270 +P0141__0.4__0___0_270 +P1573__1__2472___2472 +P0146__0.4__52___0_270 +P2055__0.4__0___0_90 +P1410__0.4__1024___512_180 +P0021__0.4__124___0_180 +P1962__1__0___0 +P0504__1__946___0 +P2331__0.4__0___0_90 +P2115__1__0___340 +P1738__1__824___2472 +P0780__1__45___567 +P1633__1__2472___2976 +P1903__0.4__0___0_270 +P1636__1__2976___2976 +P2091__1__0___0 +P0387__0.4__0___0_90 +P1738__0.4__0___512_180 +P1506__0.4__0___0 +P1514__1__0___2472 +P0665__0.4__0___0_90 +P2197__1__0___665 +P0491__0.4__0___0 +P0050__0.4__0___0_90 +P1565__1__0___824 +P2638__1__0___0 +P1631__1__1648___2472 +P1653__1__0___0 +P1738__1__2976___2976 +P1962__0.4__0___0 +P1742__1__824___0 +P2331__1__824___824 +P1533__1__1648___2976 +P1584__1__0___0 +P0491__0.4__97___0 +P1646__1__3271___2976 +P1903__1__0___264 +P2011__0.4__0___0_270 +P2130__1__0___0 +P1506__1__824___1624 +P2087__1__0___0 +P2381__1__1205___0 +P1738__0.4__576___576_180 +P0386__1__0___76 +P1573__1__1648___3940 +P1410__0.4__1110___512 +P0021__1__0___824 +P0044__0.4__0___0_180 +P1550__1__1648___0 +P1903__1__824___0 +P0029__0.4__0___0_180 +P1570__1__0___2472 +P0021__1__824___1648 +P1560__1__1648___2976 +P0030__0.4__0___0_90 +P1410__1__1648___2472 +P0111__0.4__0___0 +P1887__1__0___0 +P0111__1__938___0 +P0809__0.4__0___0_90 +P1738__1__824___2976 +P2206__1__0___596 +P1631__1__0___2472 +P0791__0.4__0___0_180 +P1569__1__1648___2472 +P2331__0.4__286___0_180 +P2100__1__568___422 +P1410__0.4__1024___1357_180 +P1738__0.4__0___512_90 +P0146__0.4__52___0 +P0052__1__837___0 +P0021__0.4__0___0_180 +P2121__1__0___91 +P0029__0.4__80___0_90 +P1505__1__1603___0 +P2011__0.4__0___0_90 +P0504__1__824___494 +P1565__0.4__0___0_90 +P1730__1__2472___0 +P1573__1__824___824 +P1565__0.4__0___0 +P1566__1__0___0 +P1649__1__0___824 +P1551__1__1648___1648 +P1410__0.4__1024___0_180 +P1565__0.4__0___0_270 +P0791__0.4__0___0_90 +P1742__1__824___1648 +P1537__1__824___2472 +P1647__1__2472___1648 +P1547__1__0___2976 +P1410__1__2472___2472 +P0030__1__0___1382 +P0806__0.4__0___0_270 +P1962__0.4__0___0_90 +P1410__1__4120___824 +P1573__1__824___3296 +P0809__1__0___91 +P1410__0.4__1110___1024 +P0352__1__0___0 +P2251__1__0___56 +P0021__0.4__0___0_270 +P0111__1__824___0 +P2266__1__0___0 +P0050__0.4__0___0_270 +P0141__1__0___727 +P1401__1__824___2472 +P0855__1__0___0 +P1727__1__824___1648 +P1565__1__0___1648 +P0809__1__0___0 +P1732__1__1648___0 +P1931__0.4__0___0 +P1631__1__0___2976 +P2257__1__0___0 +P1506__0.4__328___35_270 +P2286__0.4__1___0_270 +P1743__1__0___0 +P1410__0.4__1110___1024_180 +P1736__1__2472___3942 +P0050__0.4__0___226_180 +P0141__1__824___0 +P1583__1__2472___2472 +P2157__0.4__0___0_270 +P1641__1__2472___0 +P1740__1__824___2472 +P1903__1__824___264 +P1742__1__0___1648 +P1962__0.4__0___0_270 +P1522__1__2976___0 +P1584__1__2472___3296 +P2150__1__0___0 +P2286__1__1539___0 +P1410__0.4__1110___512_180 +P0149__1__0___0 +P1964__0.4__0___0 +P2116__1__0___395 +P1658__1__824___824 +P0030__0.4__0___0_270 +P1565__0.4__0___576 +P0770__1__586___334 +P1410__0.4__1110___0_90 +P1652__1__2472___824 +P2197__0.4__0___0_180 +P1736__1__0___3942 +P1649__1__1648___0 +P1732__1__1648___2472 +P2241__1__0___0 
+P2160__1__0___0 +P2168__1__0___150 +P2203__0.4__0___0 +P1727__1__0___1648 +P1693__1__824___2472 +P1727__1__2472___2976 +P0047__0.4__0___0_90 +P1410__0.4__0___1357_270 +P1647__1__824___824 +P1738__0.4__512___576_180 +P1580__1__824___824 +P0352__1__379___0 +P1964__1__277___605 +P0461__1__0___0 +P2768__1__0___1333 +P0146__1__0___824 +P1903__0.4__0___0_90 +P1742__1__1648___0 +P0856__1__838___0 +P1410__0.4__0___1024 +P1693__1__824___1648 +P2206__1__200___596 +P2107__0.4__0___0_270 +P1506__0.4__0___35_90 +P1910__1__0___0 +P2228__1__959___824 +P1410__0.4__512___1357_270 +P1691__1__2472___2472 +P1537__1__0___1648 +P2160__1__489___571 +P0021__0.4__0___0_90 +P1571__1__824___2472 +P0374__1__318___0 +P1507__1__119___633 +P2130__1__70___0 +P1730__1__1648___824 +P0130__1__0___0 +P0806__1__383___0 +P2638__1__0___857 +P1410__0.4__512___512_270 +P1506__0.4__328___0_180 +P0050__0.4__0___0 +P0491__1__0___0 +P1640__0.4__512___969_180 +P1732__1__2472___0 +P1646__1__0___2976 +P1410__1__824___4120 +P1675__1__1648___1648 +P2286__0.4__0___0_180 +P1506__0.4__328___35 +P1410__0.4__1024___1024_90 +P2244__1__241___0 +P1550__1__824___0 +P1569__1__824___824 +P0146__0.4__52___0_90 +P2115__1__0___0 +P1506__0.4__0___35 +P1569__1__0___1648 +P2244__1__0___49 +P2591__0.4__0___0_270 +P2286__0.4__1___0_90 +P1551__1__2472___2976 +P1736__1__3258___3296 +P2538__1__0___824 +P0091__1__0___0 +P2262__0.4__0___0_90 +P1736__1__3258___0 +P1571__1__824___824 +P0146__1__0___0 +P0491__1__1778___0 +P1410__1__824___3296 +P1724__1__0___1648 +P2194__1__692___1148 +P0395__1__112___0 +P0382__1__0___148 +P1739__1__2976___0 +P2331__1__824___935 +P0352__1__0___242 +P0044__0.4__0___0 +P1647__1__2472___824 +P1570__1__2472___3940 +P1742__1__824___2976 +P2131__1__280___397 +P2012__1__0___0 +P1727__1__1648___0 +P1738__0.4__512___576 +P2089__1__0___0 +P0665__1__0___0 +P1977__1__0___0 +P1410__0.4__1110___0_270 +P1650__1__1648___1648 +P1505__1__824___824 +P1410__0.4__0___512_270 +P2160__1__0___571 +P1606__1__0___2976 +P2262__1__0___0 +P1573__1__0___3940 +P1410__1__1648___3296 +P1410__0.4__512___0_180 +P1570__1__2472___3296 +P1742__1__1648___824 +P0395__1__0___0 +P0021__1__0___2210 +P1609__1__824___2976 +P2228__1__0___824 +P0665__1__270___0 +P1565__0.4__0___576_180 +P2148__1__0___56 +P1565__0.4__0___512 +P2011__1__0___103 +P1410__0.4__512___1357_180 +P1631__1__2976___0 +P1646__1__0___824 +P0140__1__0___0 +P1580__1__0___2976 +P1522__1__2976___824 +P1570__1__0___1648 +P0131__1__512___0 +P0141__1__0___0 +P1641__1__2472___824 +P1410__0.4__0___1024_180 +P1640__1__2976___3958 +P1649__1__824___0 +P1631__1__1648___2976 +P1569__1__1648___2976 +P0021__1__1846___2210 +P2131__1__0___0 +P2055__0.4__0___0_270 +P1645__1__0___1648 +P1649__1__824___824 +P1410__1__0___3296 +P2260__0.4__0___0_180 +P1410__0.4__0___1024_90 +P0402__1__0___0 +P0815__1__144___0 +P0150__1__0___0 +P1652__1__0___824 +P1903__1__0___0 +P0021__1__824___824 +P1903__0.4__0___0 +P0141__0.4__0___0_180 +P1515__1__0___2472 +P1505__0.4__27___0_270 +P1646__1__824___2976 +P0770__0.4__0___0_90 +P2168__1__995___0 +P2162__1__0___0 +P2150__1__0___56 +P1560__1__1648___2472 +P0491__0.4__0___0_180 +P0029__1__1648___1383 +P0491__1__1648___0 +P2206__1__0___0 +P1646__1__1648___0 +P0791__0.4__0___0_270 +P1410__0.4__512___512 +P2011__1__0___0 +P1562__1__0___2976 +P1550__1__2976___1648 +P1636__1__2976___2472 +P0021__1__0___1648 +P0049__1__604___824 +P1571__1__2976___0 +P2262__1__824___0 +P1506__0.4__328___35_180 +P0146__0.4__0___0_180 +P1410__0.4__1024___1024 +P0111__1__824___720 +P1505__1__824___0 +P1656__1__2472___0 
+P2130__1__70___118 +P2163__1__461___0 +P2228__1__0___1158 +P1633__1__824___2976 +P1727__1__2976___2976 +P1739__1__2976___824 +P0374__1__318___178 +P1506__0.4__328___0 +P0382__1__59___0 +P1649__1__0___1648 +P1977__1__647___735 +P1410__0.4__0___1357_90 +P2130__1__0___118 +P0491__0.4__97___0_180 +P2663__1__824___0 +P0030__0.4__0___0_180 +P1650__1__824___824 +P0021__1__0___0 +P0140__1__0___96 +P1641__1__2976___824 +P0856__0.4__0___0_270 +P1640__0.4__576___969_90 +P1583__1__2976___0 +P1518__1__0___2472 +P0093__1__0___115 +P0374__1__0___178 +P1964__0.4__0___0_180 +P1565__0.4__0___0_180 +P1569__1__824___1648 +P0491__0.4__97___0_90 +P0021__1__1648___2210 +P1640__0.4__576___969_180 +P0111__0.4__0___0_90 +P1507__1__119___0 +P2148__1__0___0 +P2055__1__130___418 +P1506__0.4__0___0_90 +P1631__1__824___2976 +P2260__0.4__0___0_90 +P2203__0.4__0___0_180 +P2591__0.4__0___0_90 +P1401__1__824___2507 +P0374__1__0___0 +P1647__1__1648___824 +P1876__1__306___0 +P0140__1__480___96 +P1410__0.4__512___512_180 +P2011__0.4__0___0_180 +P1638__1__0___824 +P2331__0.4__286___0_270 +P1565__0.4__0___512_270 +P2381__1__824___0 +P2107__1__0___0 +P1608__1__2976___2976 +P2162__0.4__0___0_270 +P1410__0.4__1024___512 +P0791__0.4__0___0 +P1517__1__2472___0 +P2591__1__824___1003 +P2255__1__0___0 +P0778__1__0___0 +P1640__0.4__512___969_270 +P0395__0.4__0___0_180 +P2197__0.4__0___0_270 +P1691__1__2976___2472 +P0510__1__0___0 +P0131__0.4__0___0_180 +P1410__0.4__512___1024_180 +P1514__1__824___1648 +P2168__1__824___0 +P1643__1__2472___2976 +P1571__1__1648___0 +P1742__1__824___824 +P0806__1__383___93 +P0029__0.4__0___0 +P1507__1__0___0 +P2262__0.4__0___0_270 +P1977__0.4__0___0_270 +P1675__1__824___1648 +P1560__1__824___2976 +P2091__0.4__0___0_270 +P0029__0.4__0___0_270 +P0111__1__938___720 +P0387__1__636___0 +P1633__1__2976___2976 +P1506__1__0___1624 +P2141__1__148___0 +P2162__0.4__0___0_180 +P2228__1__824___824 +P0352__1__379___242 +P1505__1__1603___824 +P2121__1__0___0 +P2192__1__0___0 +P0047__1__102___1143 +P1643__1__2976___2472 +P0141__1__824___727 +P1931__0.4__0___0_180 +P0021__0.4__124___270_90 +P0131__1__512___256 +P0804__1__0___0 +P1565__0.4__0___512_180 +P0052__1__0___824 +P2228__1__824___0 +P1876__0.4__0___0_270 +P1566__1__824___824 +P0778__1__585___334 +P1931__1__0___610 +P0029__1__824___1383 +P2055__1__0___0 +P1573__1__0___0 +P1573__1__2472___1648 +P1410__1__4312___824 +P0387__0.4__0___0_180 +P2157__0.4__0___0_180 +P1636__1__2472___2472 +P1584__1__2472___3947 +P0030__0.4__80___0_180 +P2121__0.4__0___0_270 +P1650__1__1648___0 +P0047__0.4__0___0_180 +P2331__1__1648___824 +P2251__1__247___0 +P2089__1__0___32 +P1679__1__0___3296 +P1535__1__824___0 +P1646__1__1648___824 +P0117__1__0___138 +P0386__1__0___0 +P1736__1__2472___3296 +P0352__0.4__0___0_90 +P1410__1__3296___824 +P1639__1__2472___1648 +P1571__1__1648___2472 +P0146__0.4__0___0 +P2168__1__995___150 +P1410__1__824___2472 +P0382__1__59___148 +P0050__1__824___824 +P1521__1__2976___2976 +P1505__0.4__27___0_180 +P2168__1__0___0 +P1738__0.4__576___576_90 +P2591__1__829___824 +P0050__0.4__0___226_90 +P1643__1__2472___2472 +P2194__0.4__0___0_90 +P1505__0.4__27___0_90 +P2286__0.4__1___0 +P1540__1__824___2472 +P2141__1__0___166 +P0052__0.4__0___0 +P1505__0.4__0___0_180 +P0021__0.4__124___0_90 +P1410__0.4__1110___1357_180 +P0146__1__824___0 +P0029__1__1735___0 +P1641__1__2976___0 +P2107__0.4__0___0_90 +P1410__0.4__1110___1357_90 +P1641__1__1648___1648 +P2331__0.4__286___0_90 +P1606__1__824___2976 +P1738__0.4__0___576_90 +P1636__1__2472___2976 +P1964__0.4__0___0_90 
+P0146__1__824___860 +P0029__1__1735___1383 +P0050__1__824___1648 +P2116__1__282___395 +P0665__0.4__0___0_270 +P2121__0.4__0___0_180 +P1583__1__2976___1648 +P1580__1__824___2976 +P2244__1__0___0 +P0021__0.4__0___0 +P1573__1__2472___3940 +P1580__1__1648___0 +P1565__0.4__0___512_90 +P0829__1__0___0 +P1560__1__2472___2976 +P0021__0.4__124___270_180 +P2162__1__205___356 +P0021__1__1846___1648 +P2593__1__179___0 +P2331__1__1648___935 +P2141__1__0___0 +P2055__0.4__0___0_180 +P0117__1__345___0 +P1738__1__0___2472 +P2538__1__0___0 +P1680__1__2976___2472 +P1410__0.4__512___0_270 +P0778__1__0___334 +P2331__0.4__0___0 +P0491__1__0___742 +P2091__0.4__0___0_90 +P1519__1__1648___1648 +P2197__0.4__0___0_90 +P1522__1__2472___824 +P2107__0.4__0___0_180 +P0778__1__585___0 +P0131__1__0___0 +P2116__0.4__0___0_180 +P1410__0.4__0___512_180 +P1570__1__2976___3296 +P2203__1__446___0 +P0770__0.4__0___0_270 +P1515__1__824___2472 +P0491__1__824___0 +P1691__1__2976___1648 +P0780__1__0___567 +P1650__1__3271___0 +P1738__0.4__512___576_270 +P1639__1__2976___1648 +P1977__0.4__0___0 +P1539__1__824___1648 +P0791__1__107___58 +P2638__1__0___824 +P2141__0.4__0___0_270 +P0021__0.4__0___270_90 +P2116__0.4__0___0 +P2194__0.4__0___0_270 +P2160__1__489___0 +P1631__1__2976___824 +P2335__1__0___1167 +P1646__1__0___1648 +P2116__0.4__0___0_270 +P1738__0.4__512___576_90 +P1676__1__824___824 +P2241__1__0___48 +P1646__1__824___1648 +P1521__1__824___0 +P1410__0.4__1110___0 +P1505__0.4__0___0_270 +P0829__1__537___0 +P1410__1__1648___4928 +P2197__0.4__0___0 +P1566__1__0___1648 +P0504__1__946___494 +P2203__0.4__0___0_90 +P1410__0.4__1024___1357_270 +P2593__0.4__0___0_90 +P1738__0.4__0___512_270 +P0029__0.4__80___0 +P0049__1__0___0 +P0052__1__837___824 +P1551__1__2976___0 +P1732__1__1648___1648 +P2663__1__1455___0 +P1736__1__3258___1648 +P2286__0.4__0___0_270 +P1640__0.4__512___969 +P2100__1__0___0 +P1691__1__2472___1648 +P1608__1__2976___2472 +P1573__1__0___824 +P1580__1__1648___824 +P2121__0.4__0___0 +P0030__0.4__80___0_90 +P2251__1__0___0 +P0052__0.4__0___0_270 +P1506__0.4__0___0_180 +P2228__1__0___0 +P2591__0.4__0___0_180 +P0809__0.4__0___0_270 +P0047__1__0___1143 +P0387__0.4__0___0_270 +P0141__0.4__0___0 +P0029__0.4__80___0_180 +P0111__0.4__0___0_270 +P1506__0.4__0___35_180 +P2116__1__0___0 +P0050__0.4__0___0_180 +P1649__1__1648___824 +P0457__1__0___0 +P0140__1__480___0 +P2286__0.4__0___0 +P2331__0.4__286___0 +P1732__1__0___824 +P2087__1__401___0 +P2745__1__824___0 +P1410__0.4__0___512 +P1738__0.4__0___576 +P1583__1__2472___1648 +P0021__0.4__124___0 +P0029__0.4__80___0_270 +P2286__0.4__0___0_90 +P1647__1__2976___1648 +P0352__0.4__0___0 +P2194__1__692___824 +P1964__0.4__0___0_270 +P1732__1__824___1648 +P0029__1__1648___0 +P0052__0.4__0___0_90 +P1738__0.4__0___576_180 +P1521__1__2472___2976 +P1643__1__2976___2976 +P0770__1__586___0 +P0049__1__0___964 +P0021__0.4__124___270 +P2266__1__0___58 +P1903__0.4__0___0_180 +P0809__0.4__0___0 +P1573__1__2472___3296 +P1505__0.4__0___0_90 +P1931__1__65___610 +P1730__1__0___0 +P0491__1__1778___742 +P2260__0.4__0___0_270 +P0131__0.4__0___0_270 +P2331__0.4__0___0_180 +P1410__0.4__1110___1024_270 +P0052__1__0___0 +P2055__1__130___0 +P0050__1__824___2100 +P0029__1__1735___824 +P0395__0.4__0___0_270 +P1410__0.4__1024___512_90 +P0047__1__102___824 +P1609__1__0___2976 +P1410__0.4__1110___1357_270 +P1680__1__2976___1648 +P0491__0.4__0___0_90 +P2331__0.4__0___0_270 +P0049__1__0___824 +P0021__0.4__0___270_270 +P2141__1__148___166 +P0352__0.4__0___0_270 +P1565__0.4__0___576_270 +P2228__1__959___1158 
+P1742__1__1648___1648 +P0030__0.4__0___0 +P0030__1__0___824 +P2244__1__241___49 +P1506__0.4__328___0_90 +P2593__0.4__0___0_180 +P0029__1__824___0 +P1736__1__2472___0 +P1876__0.4__0___0_90 +P2262__1__1267___0 +P1551__1__2472___0 +P0809__0.4__0___0_180 +P0131__0.4__0___0 +P0457__1__379___0 +P1519__1__824___1648 +P0856__0.4__0___0 +P1519__1__0___824 +P2240__0.4__0___0 +P0491__0.4__97___0_270 +P1736__1__0___3296 +P1876__1__306___573 +P0050__0.4__0___226 +P0131__1__0___256 +P1410__0.4__512___512_90 +P1732__1__3258___824 +P2157__1__0___0 +P1573__1__824___1648 +P2286__0.4__1___0_180 +P1676__1__824___1648 +P0146__0.4__0___0_270 +P0146__0.4__0___0_90 +P0856__0.4__0___0_180 +P1679__1__0___2472 +P0021__0.4__0___270_180 +P2157__0.4__0___0_90 +P1506__0.4__328___35_90 +P2157__1__89___0 +P0770__0.4__0___0_180 +P1401__1__0___1648 +P0829__1__0___661 +P1514__1__1648___1648 +P1650__1__824___1648 +P0387__0.4__0___0 +P1583__1__2976___2472 +P1514__1__0___2976 +P2091__0.4__0___0_180 +P1410__0.4__0___512_90 +P1640__0.4__576___969_270 +P1410__0.4__0___1024_270 +P1584__1__0___2472 +P2745__1__0___0 +P0141__0.4__0___0_90 +P1724__1__0___824 +P1876__1__0___0 +P0146__0.4__52___0_180 +P1645__1__0___824 +P1535__1__0___0 +P2162__1__205___0 +P1638__1__1648___1648 +P1410__0.4__0___1357_180 +P1732__1__824___0 +P1533__1__1648___2472 +P0806__1__0___0 +P1738__0.4__576___576_270 +P2240__1__0___573 +P2116__0.4__0___0_90 +P1573__1__3270___2472 +P1551__1__2976___2976 +P1730__1__2976___0 +P2118__1__0___0 +P0806__0.4__0___0 +P2100__1__0___422 +P1522__1__2472___0 +P1410__0.4__1024___1024_180 +P0146__1__1667___0 +P1410__0.4__512___1024_270 +P2240__1__0___0 +P1646__1__824___0 +P2591__1__824___824 +P1736__1__2472___824 +P1736__1__1648___3942 +P2162__0.4__0___0_90 +P2197__1__1225___0 +P0050__0.4__0___226_270 +P2591__0.4__0___0 +P1566__1__0___2472 +P0461__1__116___0 +P1876__0.4__0___0_180 +P1410__0.4__1024___0 +P2251__1__247___56 +P1977__0.4__0___0_180 +P1641__1__1648___2472 +P1647__1__2976___824 +P0030__0.4__80___0_270 +P2260__1__503___0 +P0052__1__824___824 +P2197__1__824___0 +P1656__1__2976___0 +P1650__1__1648___824 +P2207__1__0___0 +P0111__0.4__0___0_180 +P1640__0.4__576___969 +P0395__0.4__0___0_90 +P1964__1__0___605 +P1586__1__1648___1648 +P1410__0.4__1110___1357 +P2116__1__282___0 +P1505__0.4__0___0 +P1584__1__0___824 +P0021__1__824___0 +P1903__1__922___0 +P1573__1__1648___1648 +P1738__0.4__0___512 +P2593__0.4__0___0 +P1633__1__0___2976 +P2100__1__568___0 +P0049__1__604___0 +P2121__0.4__0___0_90 +P1738__1__0___2976 +P0146__1__824___824 +P2257__1__0___57 +P2055__0.4__0___0 +P2228__1__824___1158 +P1730__1__0___1648 +P1410__1__2472___3296 +P0029__1__824___824 +P1401__1__0___2472 +P2162__0.4__0___0 +P1410__0.4__1024___1024_270 +P0504__1__824___0 +P1962__1__0___145 +P0146__1__1648___0 +P0491__1__824___742 +P1540__1__824___2976 +P1573__1__824___3940 +P2240__0.4__0___0_180 +P0829__1__537___661 +P0117__1__345___138 +P2203__0.4__0___0_270 +P2107__0.4__0___0 +P2131__1__0___397 +P1410__0.4__512___0 +P2141__0.4__0___0_180 +P0030__0.4__80___0 +P0021__0.4__124___0_270 +P0791__1__0___0 +P1384__1__824___0 +P0791__1__0___58 +P2206__1__200___0 +P1410__0.4__1110___512_90 +P1649__1__0___0 +P1738__0.4__0___576_270 +P1562__1__824___824 +P2262__0.4__0___0 +P0021__0.4__124___270_270 +P0352__0.4__0___0_180 +P1550__1__1648___2976 +P2204__1__0___554 +P1410__0.4__512___1024_90 +P1410__0.4__512___0_90 +P1645__1__824___1648 +P0665__0.4__0___0 +P1507__1__0___633 +P1410__0.4__512___1357 +P0021__1__1648___1648 +P1573__1__3270___3296 +P1909__1__0___0 
+P1638__1__2472___1648 +P0047__1__0___824 +P0491__1__1648___742 +P1732__1__2472___824 +P1645__1__824___824 +P2141__0.4__0___0_90 +P1876__0.4__0___0 +P1410__0.4__1024___0_270 +P2157__0.4__0___0 +P2162__1__0___356 +P2194__0.4__0___0 +P1571__1__824___3296 +P1410__0.4__0___1357 +P2260__0.4__0___0 +P1650__1__2472___0 +P2055__1__0___418 +P1506__0.4__0___35_270 +P0665__0.4__0___0_180 +P0395__0.4__0___0 +P1410__0.4__512___1357_90 +P0029__0.4__0___0_90 +P1903__1__922___264 +P1410__0.4__1110___0_180 +P1506__0.4__0___0_270 +P1506__1__0___824 +P1676__1__1648___1648 +P1551__1__2472___1648 +P1506__1__824___824 +P0770__1__0___334 +P1506__0.4__328___0_270 +P1565__0.4__0___576_90 +P0780__1__45___0 +P0050__1__0___0 +P1931__0.4__0___0_270 +P0806__0.4__0___0_180 +P0382__1__0___0 +P1573__1__0___1648 +P2591__1__829___1003 +P2141__0.4__0___0 +P1631__1__824___2472 +P0052__0.4__0___0_180 +P0491__0.4__0___0_270 +P2593__1__0___0 +P2262__0.4__0___0_180 +P2335__1__0___824 +P2194__0.4__0___0_180 +P0030__1__824___1382 +P1640__0.4__512___969_90 +P2204__1__0___0 +P1410__1__1648___4120 +P2122__1__0___0 +P2091__0.4__0___0 +P0637__1__0___0 +P0770__1__0___0 +P0780__1__0___0 +P0665__1__0___656 +P1571__1__824___1648 +P1410__0.4__1110___1024_90 +P1736__1__1648___3296 +P1410__0.4__512___1024 +P0044__1__0___708 +P1571__1__1648___3296 +P1977__0.4__0___0_90 +P0047__0.4__0___0_270 +P0855__1__144___0 +P0049__1__604___964 +P1410__0.4__1024___1357 +P1560__1__2472___2472 +P2203__1__0___0 +P1786__0.4__0___0_180 +P2742__1__824___824 +P1432__0.4__331___690_90 +P1752__1__2976___824 +P1466__0.4__1024___512 +P2725__1__2472___1648 +P1463__0.4__0___0_270 +P2572__0.4__0___0_180 +P2721__1__0___1429 +P0147__1__0___0 +P1505__0.4__27___0 +P1440__0.4__0___0_270 +P1876__1__0___573 +P2011__0.4__0___0 +P2651__1__641___0 +P2794__0.4__0___512_270 +P1467__0.4__512___510 +P2522__1__824___0 +P2722__1__824___824 +P1636__1__1648___2976 +P2348__1__0___341 +P1786__0.4__0___443_90 +P1964__1__0___0 +P1551__1__2472___2472 +P2570__0.4__0___0_180 +P1230__1__4176___824 +P0143__1__0___0 +P2692__1__1648___0 +P1268__0.4__512___424_90 +P1786__1__824___824 +P2528__1__1469___874 +P1977__1__647___0 +P2719__1__1648___0 +P2379__1__1061___0 +P2570__1__0___0 +P1471__1__1648___824 +P1440__1__824___621 +P2197__1__0___665 +P1445__0.4__1024___0_180 +P1467__1__2920___2472 +P1257__1__0___1648 +P1268__0.4__512___0_180 +P1464__1__2472___3296 +P2655__0.4__963___0_270 +P1794__1__0___3394 +P2502__1__0___824 +P2536__1__1381___1287 +P1247__0.4__0___0 +P1519__1__1648___2472 +P1640__0.4__576___512_270 +P1458__0.4__0___0 +P1466__0.4__0___1241_90 +P2756__0.4__0___688_90 +P2800__0.4__0___0 +P0044__0.4__0___0_180 +P1903__1__824___0 +P0029__0.4__0___0_180 +P1463__1__2472___1648 +P2512__1__824___0 +P1449__1__824___824 +P2594__0.4__0___90 +P1470__0.4__1177___512_180 +P1631__1__0___2472 +P0047__1__0___0 +P0141__0.4__707___0_270 +P1341__1__2976___824 +P1458__1__2880___3829 +P2011__0.4__0___0_90 +P2650__1__1933___974 +P2379__1__1061___824 +P1445__0.4__512___288_180 +P2438__1__0___0 +P1432__0.4__0___512_90 +P0018__1__800___0 +P1268__0.4__512___424 +P2435__1__0___1383 +P1791__1__3893___1648 +P1467__1__2472___1648 +P1445__0.4__0___288 +P1337__1__0___2472 +P1791__0.4__943___512_90 +P1268__0.4__512___424_180 +P1471__1__0___1648 +P0050__1__1535___2100 +P1432__0.4__0___512 +P0050__0.4__0___0_270 +P2368__0.4__7___0 +P0141__1__0___727 +P2595__0.4__0___0 +P2794__0.4__0___512_90 +P2257__1__0___0 +P1369__1__4120___1648 +P1638__1__2472___2472 +P1360__1__2472___1648 +P1742__1__0___1648 +P1466__0.4__1024___0_180 
+P2580__1__0___0 +P2659__1__824___0 +P1740__1__1648___2976 +P1793__0.4__0___0_90 +P1432__0.4__0___0_180 +P0770__1__586___334 +P1863__0.4__0___296 +P1640__0.4__576___512_180 +P2756__0.4__512___688_90 +P1456__1__2470___0 +P2800__0.4__300___276_180 +P1639__1__1648___1648 +P2587__1__0___0 +P1458__0.4__0___0_180 +P1793__0.4__0___0_270 +P0047__0.4__0___0_90 +P1470__0.4__1024___512 +P1647__1__824___824 +P2650__0.4__159___0_270 +P1594__0.4__0___0_270 +P2754__0.4__0___465_180 +P2335__1__824___824 +P2721__1__824___824 +P2720__1__1648___0 +P2754__1__1648___1648 +P0130__1__0___0 +P2761__0.4__864___512 +P1640__1__0___3958 +P2597__1__0___0 +P2721__1__3515___1429 +P2721__1__2472___0 +P0673__0.4__0___0 +P2501__1__768___824 +P2761__0.4__864___0_270 +P1466__0.4__1024___1024 +P1467__0.4__0___510_90 +P2659__0.4__0___0 +P1471__0.4__512___0_90 +P1786__0.4__309___0_180 +P1464__1__824___0 +P1433__0.4__0___0_180 +P2655__1__824___0 +P1456__0.4__374___0_180 +P2714__1__3513___1426 +P1474__1__824___1648 +P2594__0.4__0___90_90 +P1466__1__2472___3296 +P2012__1__0___0 +P1464__0.4__0___512_180 +P0159__1__0___0 +P2585__0.4__0___0 +P1505__1__824___824 +P1475__0.4__0___0_270 +P1433__0.4__0___0_90 +P1791__0.4__943___512_180 +P1445__1__824___2257 +P1466__0.4__0___1241_270 +P1641__1__2472___824 +P1640__1__2976___3958 +P2761__0.4__864___0_90 +P1341__1__2976___0 +P2335__1__1463___1167 +P2166__0.4__0___0_180 +P2714__1__2472___1426 +P1903__1__0___0 +P1471__1__2472___824 +P1515__1__0___2472 +P1470__0.4__1024___512_270 +P2597__0.4__0___0_270 +P0673__1__91___123 +P1793__0.4__372___0_270 +P2592__1__0___443 +P1458__1__824___0 +P1791__0.4__943___0_90 +P2572__1__49___824 +P2365__1__1807___0 +P1342__1__824___824 +P2725__1__2472___0 +P2725__1__3502___0 +P2011__1__0___0 +P1594__1__1648___824 +P1432__0.4__331___690_270 +P2570__1__775___0 +P1977__1__0___735 +P1470__0.4__512___0_180 +P2754__0.4__359___465_180 +P1340__1__2472___824 +P1456__0.4__0___0_180 +P2362__0.4__0___0_90 +P2718__1__1648___1422 +P2800__1__824___824 +P2570__0.4__0___0_270 +P1238__1__2976___0 +P1470__0.4__1177___0_270 +P2716__1__824___0 +P2794__0.4__512___0_90 +P2055__1__130___418 +P2751__1__2472___2451 +P2203__0.4__0___0_180 +P2332__1__1967___824 +P1876__1__306___0 +P1463__1__2472___824 +P1633__1__824___1648 +P1470__1__3296___824 +P1475__0.4__0___0_180 +P2565__0.4__0___0_90 +P1863__0.4__0___0 +P2197__0.4__0___0_270 +P1433__1__0___0 +P0131__0.4__0___0_180 +P1440__1__824___0 +P1466__0.4__1024___512_270 +P1432__0.4__331___512 +P1458__0.4__512___0_180 +P0029__0.4__0___0_270 +P2587__1__824___824 +P2651__1__0___0 +P2595__1__0___0 +P2141__1__148___0 +P2379__1__824___824 +P2403__1__1109___824 +P0141__1__824___727 +P2590__1__0___0 +P1464__1__824___1648 +P2754__1__1648___824 +P2794__0.4__512___512_270 +P2650__0.4__159___0_90 +P1458__0.4__538___0_270 +P2714__1__2472___0 +P1268__0.4__576___424_270 +P1861__1__0___0 +P1639__1__0___3296 +P1584__1__2472___3947 +P2251__1__247___0 +P1791__0.4__943___512_270 +P1742__1__2472___824 +P2725__1__3296___1684 +P2587__1__0___824 +P1521__1__2976___2976 +P1505__0.4__27___0_180 +P2714__1__3296___1426 +P2591__1__829___824 +P0050__0.4__0___226_90 +P2194__0.4__0___0_90 +P1786__1__0___0 +P1467__0.4__0___510 +P1459__1__824___3296 +P1466__0.4__0___1241_180 +P2594__1__589___824 +P2580__0.4__0___0_180 +P2742__0.4__0___0_270 +P1641__1__2976___0 +P1466__0.4__0___1024_270 +P1467__0.4__554___0 +P2514__1__1016___1648 +P1466__0.4__512___0 +P2598__1__824___824 +P1458__0.4__0___0_90 +P1247__0.4__0___0_180 +P1594__0.4__512___0_90 +P2332__1__1648___824 
+P1863__0.4__126___0_270 +P1445__0.4__512___0 +P2121__0.4__0___0_180 +P1466__0.4__1024___512_90 +P1863__1__0___824 +P0021__0.4__0___0 +P2714__1__824___824 +P2593__1__179___0 +P2724__1__824___824 +P1341__1__824___1648 +P1676__1__3267___1648 +P2428__1__0___1587 +P2091__0.4__0___0_90 +P1458__1__2880___2472 +P2653__0.4__0___166_180 +P2756__0.4__512___688_270 +P1337__1__824___1648 +P1793__0.4__372___229_180 +P0030__1__1648___824 +P2655__0.4__512___127_180 +P0047__1__102___0 +P2754__0.4__0___465 +P1333__0.4__576___0_270 +P1863__0.4__126___296_180 +P1466__0.4__512___512 +P1458__0.4__512___0_270 +P2373__1__824___824 +P0162__1__0___439 +P2672__0.4__0___0_180 +P2756__0.4__0___688 +P1791__0.4__512___0_180 +P1432__0.4__0___690 +P1977__0.4__0___0 +P1794__1__0___3296 +P2166__1__0___0 +P2598__0.4__0___0_90 +P1340__1__2976___1648 +P1449__1__824___1648 +P2335__1__0___1167 +P2116__0.4__0___0_270 +P2501__1__768___1436 +P1594__0.4__576___512_90 +P1640__0.4__512___512_180 +P2727__0.4__0___0 +P1471__1__2472___0 +P2203__0.4__0___0_90 +P0141__0.4__512___0_90 +P1464__0.4__0___838_180 +P1456__1__1648___0 +P1463__0.4__466___0_270 +P2656__0.4__0___0 +P1736__1__3258___1648 +P2657__0.4__0___0 +P1786__0.4__0___0_90 +P2121__0.4__0___0 +P2720__1__0___824 +P1464__0.4__0___838_270 +P2428__1__824___1587 +P1471__0.4__0___0_90 +P0141__0.4__0___0 +P0029__0.4__80___0_180 +P1861__1__1020___0 +P2657__0.4__0___0_180 +P0111__0.4__0___0_270 +P2800__0.4__300___276_270 +P1585__1__2976___1648 +P0050__0.4__0___0_180 +P1432__0.4__331___0_90 +P1638__1__2472___0 +P1432__1__1648___1648 +P1594__0.4__576___0_90 +P2373__1__1196___824 +P1466__0.4__512___1024 +P1342__1__2976___0 +P1964__0.4__0___0_270 +P2392__0.4__0___0_90 +P2592__0.4__0___0_90 +P2714__1__1648___0 +P1791__0.4__512___512_180 +P1463__1__1648___2022 +P1471__0.4__0___512_270 +P1584__1__1648___3947 +P1640__0.4__576___0_90 +P2653__1__0___824 +P0018__1__800___665 +P2538__1__863___0 +P2725__1__3296___824 +P0131__0.4__0___0_270 +P1432__1__1648___824 +P2768__1__437___824 +P2761__0.4__864___0 +P0021__0.4__0___270_270 +P1752__1__2976___1648 +P2536__1__1381___824 +P2587__0.4__0___0 +P2244__1__241___49 +P2597__0.4__0___0 +P2332__1__1967___1450 +P2655__0.4__0___0_180 +P1505__1__1603___1301 +P2721__1__0___824 +P0164__1__0___0 +P2563__1__883___0 +P1793__0.4__0___229 +P2594__1__589___1762 +P2592__0.4__0___0_270 +P1458__0.4__512___917_90 +P1876__1__306___573 +P0050__0.4__0___226 +P1732__1__3258___824 +P1640__1__2976___1648 +P0164__1__423___0 +P2721__1__0___0 +P2756__0.4__512___688_180 +P0021__0.4__0___270_180 +P2724__1__824___1432 +P1650__1__824___1648 +P0018__1__0___0 +P1514__1__0___2976 +P2423__1__0___0 +P1861__0.4__0___417 +P1240__1__1648___824 +P1794__0.4__230___743 +P2655__0.4__0___127_270 +P1458__1__2880___1648 +P0050__1__1535___1648 +P2572__1__49___1171 +P1445__0.4__1024___288 +P0673__1__91___0 +P2754__0.4__359___0_270 +P1466__0.4__512___0_180 +P1361__1__2976___2976 +P2794__0.4__0___0_270 +P2162__0.4__0___0_90 +P1471__0.4__0___994_270 +P1471__0.4__0___512_180 +P2591__0.4__0___0 +P1445__0.4__0___288_180 +P1640__0.4__576___512 +P2251__1__247___56 +P0030__0.4__80___0_270 +P2166__0.4__0___0_90 +P1466__0.4__0___512_270 +P1863__0.4__126___0_90 +P1333__1__2976___0 +P1458__1__0___824 +P1257__1__1648___0 +P2534__1__0___1593 +P1467__0.4__0___0_270 +P1432__0.4__0___512_180 +P2593__0.4__0___0 +P1633__1__0___2976 +P2203__1__446___824 +P1786__0.4__309___443_180 +P2592__1__0___0 +P2800__1__824___1648 +P1449__0.4__0___359_180 +P1794__0.4__0___512_90 +P1474__1__1648___2472 +P2734__1__1648___1648 
+P1596__1__0___1648 +P2794__0.4__512___0_180 +P1464__0.4__0___512_90 +P2656__1__0___897 +P1467__1__1648___2810 +P1445__1__0___2257 +P1466__0.4__512___512_90 +P2719__1__824___0 +P1466__0.4__512___1241_270 +P1458__0.4__538___512_90 +P0021__1__1648___1648 +P2362__1__824___0 +P2721__1__3296___1429 +P1638__1__2472___1648 +P1466__0.4__0___0_270 +P2754__0.4__0___0_270 +P2141__0.4__0___0_90 +P2365__0.4__0___0_180 +P1793__0.4__0___229_270 +P2157__0.4__0___0 +P1333__0.4__512___0 +P1440__0.4__0___0_90 +P2055__1__0___418 +P2452__1__609___1136 +P1903__1__922___264 +P1585__1__2472___1648 +P0770__1__0___334 +P2598__1__824___0 +P2368__1__0___824 +P1466__1__1648___2472 +P2651__0.4__0___0_180 +P2091__0.4__0___0 +P1184__1__1570___3296 +P2538__1__824___0 +P1793__1__824___0 +P1470__0.4__1024___0_270 +P2800__0.4__0___0_90 +P1638__1__3270___2472 +P2522__1__0___0 +P2395__1__824___1439 +P2653__1__0___0 +P2800__0.4__300___0_270 +P2655__1__0___1648 +P1432__0.4__0___690_270 +P1443__1__3296___2316 +P1445__1__1648___1648 +P1268__1__2976___824 +P1445__0.4__512___288_270 +P0021__0.4__0___270 +P2595__0.4__0___0_180 +P2368__1__0___0 +P2655__0.4__963___127 +P2563__0.4__0___0 +P2720__1__1648___1422 +P1470__0.4__1177___512_90 +P2756__0.4__0___688_180 +P1732__1__0___1648 +P1794__0.4__230___512_180 +P1639__1__2472___3296 +P2651__1__641___897 +P2365__0.4__0___0 +P1333__0.4__576___0_90 +P2594__0.4__0___0_270 +P2714__1__1648___1426 +P1458__0.4__512___0_90 +P1458__1__2472___2472 +P1268__0.4__576___0_90 +P1333__0.4__576___0 +P0029__1__0___1383 +P2754__0.4__0___465_90 +P2714__1__0___824 +P2472__1__573___1348 +P2655__0.4__963___0_90 +P1466__0.4__1024___1024_180 +P2587__0.4__129___0_180 +P1340__1__2472___0 +P2593__0.4__0___0_270 +P0141__0.4__0___0_270 +P2742__1__824___1291 +P1340__1__2472___1648 +P0030__1__1735___1382 +P1791__1__3893___824 +P2724__1__2472___1432 +P1791__0.4__512___512_270 +P1458__0.4__512___512_270 +P2800__0.4__0___276_90 +P1445__1__4516___0 +P2166__0.4__0___0_270 +P2365__0.4__0___0_270 +P1640__0.4__0___969_270 +P2672__0.4__0___78_270 +P2655__1__0___0 +P2512__1__862___0 +P2375__1__0___0 +P1594__0.4__576___512_180 +P1360__1__2976___1648 +P1247__1__824___0 +P1471__1__824___1648 +P2653__1__824___0 +P2381__1__1205___0 +P0029__1__0___824 +P1449__0.4__0___0_180 +P1432__0.4__331___512_90 +P1550__1__1648___0 +P2506__1__824___824 +P1745__1__1648___1648 +P2655__1__3296___1854 +P1515__1__824___2976 +P2404__1__2472___0 +P2800__0.4__300___0_90 +P2435__1__549___1383 +P2692__0.4__50___0_180 +P1594__0.4__0___512_180 +P0021__0.4__0___0_180 +P2756__0.4__0___688_270 +P2655__0.4__512___0_270 +P2714__1__2472___824 +P1863__0.4__0___296_180 +P1466__0.4__1024___1024_270 +P0147__0.4__0___0 +P2751__1__824___1648 +P2800__1__0___0 +P1373__1__1648___2472 +P0018__1__0___665 +P2395__1__1648___1439 +P1466__1__2472___1648 +P2794__0.4__512___0_270 +P1638__1__2472___2976 +P1639__1__2976___2472 +P1594__0.4__576___0_270 +P0021__1__1648___824 +P1743__1__0___0 +P2435__1__549___824 +P1505__1__0___1301 +P1673__1__1648___1648 +P1640__0.4__0___969_180 +P2452__1__0___824 +P1464__0.4__502___838 +P1464__1__0___0 +P1522__1__2976___0 +P2502__1__0___0 +P2598__1__1332___0 +P2716__1__2996___824 +P1470__0.4__1177___512 +P2659__0.4__55___0_270 +P2368__0.4__0___0 +P2570__0.4__0___0 +P1464__0.4__502___0_270 +P2650__1__1933___824 +P2194__1__0___824 +P2203__0.4__0___0 +P1727__1__2472___2976 +P1636__1__1648___0 +P1449__0.4__0___359_90 +P1794__0.4__0___743 +P2768__1__0___1333 +P1466__0.4__512___1241_90 +P1791__0.4__512___0 +P1594__0.4__576___512_270 +P1445__0.4__1192___288_180 
+P1463__0.4__466___194_90 +P1594__0.4__0___512 +P1730__1__1648___824 +P2655__1__824___1648 +P2719__1__1648___824 +P1467__0.4__512___510_180 +P2453__1__0___824 +P1640__0.4__512___512_270 +P1594__0.4__0___512_270 +P1268__0.4__512___0_270 +P1679__1__824___2472 +P2721__1__2472___1429 +P2348__1__465___341 +P1269__1__2472___1648 +P1639__1__2472___2472 +P2563__1__824___145 +P1584__1__824___3296 +P2761__0.4__864___0_180 +P1445__0.4__0___0_270 +P1470__0.4__1177___0 +P0018__0.4__0___0_90 +P1343__1__2976___1648 +P1432__1__824___2472 +P1432__1__824___0 +P1793__1__2466___824 +P2794__0.4__512___0 +P1742__1__1648___824 +P1466__0.4__512___1024_180 +P1467__1__1648___1648 +P1467__0.4__0___510_180 +P0131__1__512___0 +P1594__0.4__0___512_90 +P0141__1__0___0 +P1449__0.4__0___0_270 +P1463__1__1648___824 +P1631__1__0___1648 +P2563__0.4__0___0_270 +P1449__1__0___1648 +P1449__1__824___0 +P1467__0.4__0___0 +P2657__1__864___824 +P0673__0.4__0___0_270 +P2794__0.4__512___512_90 +P1634__1__824___0 +P2332__1__0___1450 +P0147__1__682___0 +P1640__0.4__512___512 +P1594__0.4__512___0_270 +P2472__1__0___824 +P2657__1__864___0 +P2655__0.4__963___127_90 +P1458__0.4__538___512_180 +P1471__0.4__896___0_90 +P1633__1__824___2976 +P0030__1__824___0 +P1727__1__2976___2976 +P2734__1__1648___1675 +P1445__0.4__0___0_90 +P1977__1__647___735 +P1466__0.4__512___1024_270 +P2727__1__633___0 +P2657__1__824___824 +P2587__0.4__0___0_90 +P2656__1__641___824 +P1650__1__824___824 +P2663__1__824___0 +P0030__0.4__0___0_180 +P2716__1__2472___1420 +P1641__1__2976___824 +P0856__1__824___424 +P1471__0.4__0___512_90 +P1594__1__2472___824 +P2657__1__0___1425 +P1518__1__0___2472 +P2655__1__824___1854 +P2721__1__1648___0 +P2692__0.4__50___0_90 +P1640__0.4__576___969_180 +P0111__0.4__0___0_90 +P1863__0.4__0___0_90 +P1863__0.4__0___0_270 +P2166__0.4__0___0 +P1529__1__0___1648 +P1863__0.4__126___296 +P1747__1__2472___1648 +P2794__0.4__0___0_180 +P1466__0.4__0___512 +P2362__0.4__0___0_270 +P2381__1__824___0 +P2162__0.4__0___0_270 +P2444__1__884___0 +P1640__0.4__512___969_270 +P2395__1__824___824 +P2565__0.4__0___0 +P2725__1__3502___824 +P1640__1__2472___2472 +P2365__1__1648___0 +P2756__1__1648___3257 +P1445__1__824___1648 +P1464__1__0___1648 +P1747__1__2976___1648 +P2655__1__2472___824 +P1325__1__1648___2472 +P1470__1__2472___824 +P1247__0.4__0___0_270 +P2761__1__3696___824 +P2121__1__0___0 +P1580__1__2472___0 +P1793__1__1648___0 +P2598__1__824___1030 +P1470__0.4__1024___0_180 +P2580__0.4__0___0_270 +P0029__1__824___1383 +P2055__1__0___0 +P1341__1__1648___0 +P1268__0.4__576___424_90 +P0047__0.4__0___0_180 +P2794__0.4__512___512 +P1464__1__1648___824 +P1458__0.4__0___0_270 +P0673__0.4__0___0_90 +P2597__1__0___703 +P1679__1__0___3296 +P1861__1__1020___824 +P1598__1__2472___2472 +P1786__0.4__0___443_180 +P0050__1__824___824 +P1445__0.4__512___0_270 +P2800__0.4__300___276_90 +P0139__1__0___0 +P2342__1__0___1350 +P1269__1__2472___824 +P2650__0.4__159___0_180 +P1791__0.4__512___512_90 +P1861__0.4__0___417_180 +P1786__0.4__309___0 +P2368__0.4__0___0_90 +P1373__1__2472___0 +P2373__1__824___0 +P0021__0.4__124___0_90 +P2725__1__0___824 +P2563__1__883___145 +P1471__1__0___2472 +P1456__0.4__374___0_270 +P1250__1__824___824 +P2754__0.4__359___0_180 +P1863__1__0___0 +P1445__0.4__512___288 +P2655__0.4__512___0 +P2244__1__0___0 +P1580__1__1648___0 +P2653__0.4__0___166_90 +P2452__1__609___824 +P2368__0.4__0___0_180 +P1463__0.4__0___194_180 +P2592__1__824___0 +P1786__0.4__309___443_90 +P2197__0.4__0___0_90 +P2655__0.4__512___127_270 +P2655__0.4__963___0 +P1464__1__824___3630 
+P1791__0.4__943___512 +P0131__1__0___0 +P1247__1__0___0 +P2116__0.4__0___0_180 +P0044__1__624___708 +P1433__0.4__0___0_270 +P2720__1__2472___0 +P0770__0.4__0___0_270 +P1515__1__824___2472 +P2656__1__0___824 +P1638__1__3270___2976 +P1358__1__824___1648 +P1471__0.4__896___0_180 +P0018__0.4__0___0_270 +P0030__1__1735___0 +P1464__1__0___2472 +P1539__1__824___1648 +P0021__0.4__0___270_90 +P2116__0.4__0___0 +P2587__0.4__129___0_270 +P1458__0.4__538___917_270 +P2657__1__824___1425 +P1463__0.4__0___0_180 +P2197__0.4__0___0 +P0147__0.4__0___0_180 +P1475__0.4__0___0_90 +P2719__1__3514___0 +P2650__1__1648___974 +P1863__0.4__0___296_270 +P2663__1__1455___0 +P2756__0.4__863___688_180 +P1440__1__950___0 +P1786__0.4__0___443 +P2590__1__362___0 +P2650__0.4__0___0 +P1458__0.4__512___512_180 +P1641__1__0___1648 +P1639__1__2976___3296 +P2663__1__824___824 +P1594__0.4__576___512 +P0141__0.4__512___0 +P1794__0.4__230___512_90 +P1467__1__2472___2472 +P1556__1__2472___824 +P1445__1__4516___824 +P1325__1__1648___1648 +P1432__0.4__331___0_270 +P1793__0.4__372___229 +P1903__0.4__0___0_180 +P2794__0.4__512___512_180 +P2572__1__0___824 +P2714__1__824___1426 +P1794__0.4__0___512 +P2373__1__1196___0 +P2502__1__0___1400 +P2653__1__1091___824 +P2692__0.4__0___0_270 +P1471__0.4__0___0 +P2653__0.4__0___166_270 +P1445__0.4__1192___0_180 +P2593__0.4__0___0_180 +P1361__1__2472___2976 +P1432__0.4__0___0_90 +P1258__1__824___0 +P2719__1__3296___0 +P2692__0.4__0___0 +P0131__1__0___256 +P2362__1__824___1000 +P2157__1__0___0 +P1463__0.4__466___194_270 +P0856__0.4__0___0_180 +P1679__1__0___2472 +P2716__1__1648___1420 +P2157__1__89___0 +P2203__1__446___869 +P0770__0.4__0___0_180 +P2650__0.4__0___0_270 +P2800__0.4__0___0_180 +P2718__1__0___824 +P0856__1__0___424 +P1791__1__2472___1648 +P0141__0.4__0___0_90 +P1466__0.4__512___1241 +P1876__1__0___0 +P1466__0.4__0___1024_180 +P2162__1__205___0 +P1466__0.4__0___1241 +P1732__1__824___0 +P2368__0.4__7___0_90 +P1863__0.4__126___296_90 +P2716__1__1648___0 +P1463__1__824___1648 +P1594__0.4__512___512_180 +P1522__1__2472___0 +P1333__0.4__512___0_270 +P2591__1__824___824 +P2716__1__2996___0 +P1736__1__1648___3942 +P0050__0.4__0___226_270 +P2197__1__1225___0 +P2756__0.4__863___688_90 +P2365__0.4__108___0 +P1876__0.4__0___0_180 +P1466__0.4__0___512_180 +P2591__1__0___824 +P1977__0.4__0___0_180 +P2794__1__1648___0 +P2197__1__824___0 +P1456__1__2470___824 +P2672__0.4__0___0 +P0111__0.4__0___0_180 +P2656__1__0___0 +P1470__0.4__512___512 +P1636__1__2472___0 +P0673__1__0___0 +P2768__1__0___824 +P0144__1__0___0 +P1505__0.4__0___0 +P2672__0.4__0___78 +P2597__1__420___703 +P1793__0.4__372___229_270 +P0141__0.4__707___0 +P1793__0.4__0___0_180 +P1786__0.4__0___443_270 +P2392__0.4__0___0_180 +P1962__1__0___145 +P1540__1__824___2976 +P2597__0.4__0___0_180 +P1471__0.4__512___512_180 +P2416__1__0___824 +P2203__0.4__0___0_270 +P2727__0.4__0___0_180 +P2595__1__0___648 +P2528__1__824___824 +P2721__1__824___1429 +P1539__1__1648___1648 +P1456__0.4__374___0 +P2754__0.4__0___0_180 +P1636__1__824___2976 +P1574__1__2976___2976 +P1464__0.4__502___838_270 +P1449__1__0___0 +P1432__0.4__0___0 +P0164__1__423___222 +P2794__0.4__0___512_180 +P1333__0.4__512___0_180 +P2692__1__1662___0 +P2800__0.4__300___276 +P1445__1__0___824 +P0021__1__1846___0 +P1463__0.4__0___0_90 +P2800__0.4__0___276 +P1732__1__824___824 +P2719__1__824___824 +P2653__0.4__0___0_270 +P1432__0.4__331___512_270 +P2403__1__1109___0 +P2565__0.4__0___0_180 +P0029__0.4__0___0_90 +P1458__0.4__512___512_90 +P2594__0.4__0___90_270 +P1224__1__0___2976 
+P1640__0.4__576___0_180 +P2650__0.4__0___0_180 +P1634__1__1648___0 +P1464__0.4__0___0_180 +P2591__1__829___1003 +P2593__1__0___0 +P1463__1__1648___1648 +P1471__0.4__896___0_270 +P1640__0.4__512___969_90 +P1791__0.4__943___0_270 +P2408__1__973___1539 +P2727__1__0___427 +P1464__1__2792___3296 +P2734__1__824___1648 +P1463__0.4__466___0_180 +P2594__0.4__0___0 +P2157__1__0___680 +P1751__1__824___1648 +P1794__0.4__230___743_90 +P1615__1__2472___2472 +P1456__1__2470___1488 +P2368__0.4__0___0_270 +P1440__1__950___621 +P0018__0.4__0___0_180 +P2651__0.4__0___0_90 +P2590__0.4__0___0_180 +P1445__0.4__0___288_90 +P1466__0.4__1024___0 +P2725__1__824___1684 +P1745__1__1648___824 +P2435__1__0___824 +P1445__0.4__1024___0_270 +P2378__1__0___0 +P1962__0.4__0___0_180 +P1470__0.4__512___512_180 +P2506__1__907___910 +P1693__1__2472___0 +P1459__1__1648___3296 +P1466__0.4__1024___1241 +P2572__0.4__0___0_270 +P2663__1__824___1143 +P0047__0.4__0___0 +P1432__0.4__331___0_180 +P1268__0.4__512___424_270 +P1342__1__2472___0 +P2672__0.4__0___0_90 +P2653__0.4__0___0 +P2055__0.4__0___0_90 +P1640__1__2976___2472 +P1962__1__0___0 +P2514__1__1016___1716 +P1464__1__2792___3630 +P1903__0.4__0___0_270 +P2362__1__1320___824 +P2368__0.4__7___0_270 +P2714__1__3296___824 +P1594__0.4__512___512_90 +P1432__0.4__331___0 +P1464__0.4__0___0_90 +P0050__0.4__0___0_90 +P1240__1__824___2472 +P2656__0.4__0___0_180 +P1467__0.4__512___0_180 +P1903__1__0___264 +P2663__1__1455___1143 +P2754__0.4__0___465_270 +P1459__1__1648___3406 +P2587__0.4__129___0 +P2519__1__0___0 +P1594__1__824___1648 +P1470__1__1648___1648 +P1445__0.4__512___0_90 +P2591__1__0___1003 +P2563__1__0___145 +P2656__0.4__0___0_270 +P2594__0.4__0___0_180 +P0111__0.4__0___0 +P2693__1__1131___1475 +P2722__1__2472___0 +P2724__1__824___0 +P2594__1__0___1762 +P2725__1__1648___1648 +P1463__0.4__0___0 +P2565__1__744___1258 +P2659__0.4__0___0_90 +P1962__0.4__0___0_90 +P2572__0.4__0___0 +P2657__1__864___1425 +P1464__0.4__502___838_90 +P0021__0.4__0___0_270 +P2501__1__0___1436 +P2395__1__1648___824 +P0141__1__824___0 +P2585__0.4__0___0_270 +P0164__0.4__0___0_270 +P1445__0.4__1192___288_90 +P1903__1__824___264 +P2572__1__0___1171 +P2335__1__1463___824 +P1584__1__2472___3296 +P2365__0.4__108___0_270 +P0149__1__0___0 +P2672__0.4__0___0_270 +P1964__0.4__0___0 +P2655__0.4__963___0_180 +P2655__1__3296___1648 +P1652__1__2472___824 +P1640__1__2976___3296 +P2722__1__0___1422 +P2692__0.4__50___0_270 +P1693__1__824___2472 +P1641__1__0___2472 +P1445__0.4__1192___0_90 +P2506__1__907___824 +P2656__0.4__0___0_90 +P1445__0.4__0___0 +P1861__0.4__0___417_90 +P2663__1__1455___824 +P2725__1__1648___824 +P2659__0.4__55___0_90 +P0030__1__1735___824 +P0021__0.4__0___0_90 +P1463__0.4__466___194_180 +P0050__0.4__0___0 +P1786__0.4__309___0_90 +P2761__0.4__864___512_180 +P1268__0.4__576___0_180 +P1794__0.4__0___743_90 +P1640__0.4__512___969_180 +P1466__0.4__512___1024_90 +P1467__0.4__512___510_270 +P1458__0.4__538___917_180 +P2542__1__0___824 +P2362__1__1320___1000 +P1432__0.4__331___690_180 +P1467__0.4__0___0_180 +P2244__1__241___0 +P2655__0.4__963___127_180 +P2428__1__824___824 +P1467__0.4__554___0_180 +P2374__1__0___686 +P2659__1__0___0 +P1739__1__2976___0 +P2721__1__1648___824 +P0044__0.4__0___0 +P1458__0.4__512___512 +P1230__1__4176___1648 +P2720__1__0___1422 +P1432__0.4__331___512_180 +P1977__1__0___0 +P1445__1__0___1648 +P2335__1__824___1167 +P2716__1__1648___824 +P1459__1__824___3406 +P2659__0.4__55___0_180 +P1432__0.4__0___690_90 +P2011__1__0___103 +P1340__1__2976___0 +P1470__0.4__1024___0 
+P2379__1__0___824 +P1594__1__2976___824 +P1268__0.4__576___424 +P0021__1__1846___824 +P2722__1__0___824 +P1373__1__2976___824 +P1903__0.4__0___0 +P2655__1__2472___1854 +P0141__0.4__0___0_180 +P1505__0.4__27___0_270 +P2378__1__0___824 +P2332__1__1648___1450 +P2162__1__0___0 +P1456__0.4__0___0_270 +P1458__0.4__512___0 +P2504__1__596___0 +P1786__1__824___0 +P2650__1__1648___0 +P1467__0.4__0___0_90 +P1445__0.4__512___288_90 +P2512__1__862___824 +P1240__1__1648___1648 +P0111__1__824___720 +P1739__1__2976___824 +P2655__0.4__512___127 +P2365__0.4__108___0_90 +P2714__1__0___1426 +P1615__1__2472___824 +P0141__1__1648___727 +P1589__1__2976___2472 +P1791__0.4__512___0_90 +P0856__0.4__0___0_270 +P1640__0.4__576___969_90 +P1863__0.4__0___0_180 +P1636__1__824___0 +P1964__0.4__0___0_180 +P1636__1__1648___2472 +P1269__1__1648___824 +P1464__1__824___824 +P1445__0.4__1024___288_90 +P1467__1__2920___1648 +P2651__1__0___897 +P2716__1__2472___0 +P1458__0.4__538___512_270 +P2725__1__2472___824 +P2655__0.4__963___127_270 +P1445__0.4__512___0_180 +P1466__0.4__1024___0_270 +P2591__1__824___1003 +P1514__1__824___1648 +P1693__1__2976___0 +P1458__0.4__538___0_180 +P1594__0.4__576___0 +P1861__1__824___824 +P2692__0.4__50___0 +P1607__1__824___1648 +P1977__0.4__0___0_270 +P1636__1__824___2472 +P2332__1__0___824 +P0111__1__938___720 +P1464__0.4__0___512 +P1343__1__2472___1648 +P2650__1__1648___824 +P1333__1__2472___0 +P1342__1__824___0 +P1470__0.4__1024___512_180 +P1470__0.4__512___0 +P1471__0.4__896___512 +P2563__0.4__0___0_90 +P1505__1__1603___824 +P2597__0.4__0___0_90 +P1863__0.4__126___0 +P1471__0.4__512___512 +P1456__0.4__374___0_90 +P2598__0.4__0___0_270 +P0021__0.4__124___270_90 +P1464__0.4__0___838_90 +P0131__1__512___256 +P1231__1__1648___0 +P0673__0.4__0___0_180 +P2756__0.4__512___688 +P2472__1__573___824 +P2157__0.4__0___0_180 +P0030__0.4__80___0_180 +P1463__1__0___1648 +P1361__1__2472___2472 +P2585__0.4__0___0_90 +P1505__1__824___1301 +P1535__1__824___0 +P2721__1__3296___824 +P1470__1__2472___1648 +P1515__1__0___2976 +P1594__1__1648___0 +P1594__1__2472___0 +P2591__1__829___0 +P1470__0.4__512___0_90 +P2587__0.4__0___0_180 +P2725__1__3502___1648 +P2754__0.4__359___465_90 +P1471__0.4__896___512_90 +P2590__0.4__0___0 +P1250__1__824___1648 +P1458__0.4__538___0 +P2453__1__0___1185 +P2690__1__0___0 +P1467__0.4__554___510 +P2594__0.4__0___0_90 +P1863__1__824___0 +P1606__1__824___2976 +P1791__1__3296___1648 +P1594__1__824___2472 +P2655__0.4__512___127_90 +P2655__1__2472___0 +P1673__1__824___1648 +P1471__0.4__896___0 +P0856__1__838___424 +P2162__1__205___356 +P0021__1__1846___1648 +P2756__0.4__863___688 +P2055__0.4__0___0_180 +P0141__1__1648___0 +P1574__1__2976___1648 +P2580__0.4__0___0_90 +P1466__0.4__512___0_270 +P2197__1__1225___665 +P0141__1__2472___727 +P1519__1__1648___1648 +P2655__0.4__0___0_270 +P1456__0.4__0___0 +P1268__0.4__576___0 +P1589__1__2472___2976 +P2655__0.4__0___127_90 +P2194__0.4__0___0_270 +P1445__0.4__1024___288_270 +P2595__0.4__0___0_270 +P2716__1__0___0 +P2585__0.4__0___0_180 +P2656__1__641___897 +P1343__1__1648___2472 +P2724__1__1648___1432 +P1467__0.4__554___510_270 +P2593__0.4__0___0_90 +P2585__1__674___724 +P1521__1__2472___2472 +P0162__1__140___439 +P1466__0.4__0___1024_90 +P1471__1__1648___0 +P2587__1__824___0 +P1445__1__1648___824 +P1471__0.4__0___0_180 +P1861__1__0___824 +P1432__1__1648___0 +P2722__1__1648___0 +P2592__1__1031___443 +P1471__0.4__0___512 +P1466__0.4__512___512_180 +P1333__0.4__576___0_180 +P2725__1__3296___1648 +P2745__1__824___0 +P1470__1__3296___0 +P1458__1__824___824 
+P0021__0.4__124___0 +P0029__0.4__80___0_270 +P1464__1__0___824 +P1732__1__824___1648 +P2572__0.4__0___0_90 +P2794__0.4__0___0_90 +P1470__0.4__1177___512_270 +P1521__1__2472___2976 +P2800__0.4__300___0_180 +P2657__0.4__0___0_270 +P1343__1__1648___2976 +P2742__0.4__0___0_180 +P1463__0.4__466___0 +P0030__1__824___824 +P0021__0.4__124___270 +P1358__1__1648___824 +P2408__1__824___1539 +P2197__1__824___665 +P1471__0.4__0___994_180 +P1471__0.4__512___512_270 +P2055__1__130___0 +P2203__1__0___869 +P1343__1__2472___2472 +P1466__1__1648___1648 +P1742__1__1648___1648 +P0030__0.4__0___0 +P2720__1__824___824 +P1341__1__1648___824 +P1432__1__824___1648 +P1591__1__1648___2976 +P1736__1__2472___0 +P1876__0.4__0___0_90 +P0131__0.4__0___0 +P0856__0.4__0___0 +P2362__1__1320___0 +P1445__1__824___824 +P2716__1__2996___1420 +P1361__1__2976___2472 +P2157__0.4__0___0_90 +P1471__0.4__512___0_180 +P0021__1__1648___0 +P2091__0.4__0___0_180 +P1373__1__2472___824 +P2721__1__2472___824 +P1268__0.4__576___0_270 +P1445__0.4__1024___288_180 +P2720__1__824___1422 +P1640__0.4__0___969 +P1470__1__1648___824 +P2655__0.4__0___0 +P2800__0.4__300___0 +P2595__0.4__0___0_90 +P2512__1__824___824 +P2657__1__0___824 +P2656__1__641___0 +P1464__0.4__0___0_270 +P1463__1__2472___2022 +P2742__1__0___1291 +P2655__0.4__0___127 +P1360__1__2472___824 +P1505__1__0___824 +P1445__0.4__1024___0_90 +P1466__1__2472___2472 +P2742__0.4__0___0 +P1640__0.4__576___969 +P2743__1__0___0 +P2800__0.4__0___276_270 +P2590__0.4__0___0_270 +P1458__0.4__512___917 +P2528__1__1469___824 +P0029__1__0___0 +P1470__0.4__1177___0_180 +P1458__1__0___0 +P1471__0.4__512___512_90 +P1793__0.4__372___0_90 +P2162__0.4__0___0 +P1458__0.4__512___917_270 +P1640__0.4__512___512_90 +P1521__1__2976___2472 +P2472__1__0___1348 +P1445__0.4__1192___0 +P1470__0.4__1177___0_90 +P1445__0.4__1024___0 +P2725__1__824___824 +P2655__1__3296___824 +P0141__0.4__707___0_180 +P2141__0.4__0___0_180 +P0030__0.4__80___0 +P2365__0.4__108___0_180 +P1463__1__2701___824 +P1793__0.4__372___0 +P0021__0.4__124___270_270 +P1464__1__2472___3630 +P1466__0.4__0___1024 +P1268__0.4__512___0 +P1574__1__2472___1648 +P2725__1__1648___0 +P2651__0.4__0___0 +P2754__0.4__0___0_90 +P0164__0.4__0___0 +P1470__0.4__512___512_270 +P1340__1__2976___824 +P1521__1__824___824 +P0141__0.4__512___0_270 +P0147__1__0___225 +P2527__1__1309___0 +P1876__0.4__0___0 +P2594__0.4__0___90_180 +P2650__0.4__159___0 +P2194__0.4__0___0 +P1748__1__1648___0 +P1466__0.4__1024___0_90 +P1471__1__824___2472 +P1432__0.4__0___512_270 +P2444__1__824___0 +P1791__1__2472___824 +P2794__1__824___824 +P2657__1__824___0 +P1471__0.4__896___512_270 +P2506__1__824___910 +P2592__0.4__0___0_180 +P2761__0.4__864___512_270 +P1358__1__824___2976 +P1640__0.4__0___969_90 +P1977__0.4__0___0_90 +P1470__0.4__1024___0_90 +P2655__1__3296___0 +P1748__1__824___0 +P1449__0.4__0___359_270 +P1793__0.4__0___0 +P1794__1__0___2472 +P1594__0.4__576___0_180 +P2754__0.4__359___0 +P1594__0.4__0___576_270 +P2379__1__824___0 +P1577__1__824___824 +P1467__0.4__512___0_90 +P2197__1__0___0 +P1470__1__2472___0 +P1743__1__824___0 +P2655__1__0___824 +P1471__0.4__0___994 +P1740__1__2472___2976 +P0856__1__824___0 +P1584__1__1648___3296 +P2768__1__437___1333 +P0770__0.4__0___0 +P2754__1__2433___1648 +P1577__1__824___0 +P0044__0.4__0___0_90 +P0044__0.4__0___0_270 +P2392__1__0___0 +P1791__0.4__512___512 +P1633__1__0___2472 +P2754__0.4__359___0_90 +P0131__0.4__0___0_90 +P0856__0.4__0___0_90 +P1638__1__1648___2472 +P2714__1__0___0 +P2650__1__824___974 +P1594__0.4__0___576_180 +P0021__0.4__124___0_180 
+P2653__1__824___824 +P2725__1__1648___1684 +P2650__1__824___824 +P2592__0.4__0___0 +P1466__0.4__1024___512_180 +P2091__1__0___0 +P1339__1__0___1648 +P2563__1__0___0 +P1471__0.4__512___0 +P2742__0.4__0___0_90 +P1343__1__2472___2976 +P1514__1__0___2472 +P2365__0.4__0___0_90 +P1373__1__1648___1648 +P2655__0.4__512___0_90 +P1631__1__1648___2472 +P1962__0.4__0___0 +P1432__0.4__331___690 +P1257__1__824___1648 +P1342__1__0___0 +P1268__0.4__512___0_90 +P1551__1__2976___2472 +P1471__0.4__0___994_90 +P1445__0.4__1192___288_270 +P2662__1__1423___1909 +P2655__1__824___824 +P2011__0.4__0___0_270 +P1467__0.4__512___0 +P2458__1__0___0 +P2362__1__824___824 +P1369__1__4176___1648 +P1373__1__2976___0 +P1470__1__1648___0 +P1861__1__824___0 +P1443__1__2472___2316 +P2754__1__824___824 +P1464__1__0___3296 +P0030__0.4__0___0_90 +P0141__1__2472___0 +P2659__1__0___824 +P0673__1__0___123 +P2157__1__89___680 +P1466__0.4__1024___1241_270 +P1240__1__1648___2472 +P1224__1__824___2472 +P2121__1__0___91 +P0029__0.4__80___0_90 +P2754__0.4__359___465_270 +P2501__1__0___824 +P0147__0.4__0___0_270 +P2166__1__230___0 +P2716__1__824___1420 +P1445__1__4120___824 +P1432__0.4__0___0_270 +P1467__0.4__554___0_270 +P2672__0.4__0___78_180 +P2754__1__1648___0 +P2251__1__0___56 +P1594__0.4__512___0_180 +P1466__0.4__0___512_90 +P1791__0.4__943___0_180 +P2342__1__0___824 +P2594__1__0___824 +P2593__1__0___677 +P2375__1__541___0 +P2692__0.4__0___0_180 +P1640__0.4__576___512_90 +P1463__0.4__0___194_270 +P1467__0.4__0___510_270 +P0050__0.4__0___226_180 +P1639__1__0___3958 +P2157__0.4__0___0_270 +P1337__1__0___1648 +P1962__0.4__0___0_270 +P0164__1__0___222 +P2116__1__0___395 +P0030__0.4__0___0_270 +P1556__1__2472___0 +P1464__0.4__0___512_270 +P2197__0.4__0___0_180 +P2565__1__744___824 +P1794__0.4__230___512 +P2597__1__420___0 +P2408__1__973___824 +P1964__1__277___605 +P1903__0.4__0___0_90 +P0856__1__838___0 +P1693__1__824___1648 +P2408__1__824___824 +P2452__1__0___1136 +P0030__1__1648___0 +P1471__0.4__512___0_270 +P1529__1__0___2472 +P2590__1__0___149 +P1646__1__0___2976 +P1594__0.4__512___0 +P0147__1__682___225 +P1445__0.4__0___0_180 +P2244__1__0___49 +P2591__0.4__0___0_270 +P2659__0.4__0___0_180 +P1230__1__4120___824 +P2565__0.4__0___0_270 +P1736__1__3258___0 +P1268__1__2472___824 +P2570__1__775___441 +P1705__1__2472___1648 +P1224__1__824___2976 +P2594__1__0___1648 +P1793__1__1648___824 +P1594__0.4__0___0_180 +P2734__1__824___1675 +P1445__0.4__0___288_270 +P2659__0.4__55___0 +P2718__1__824___824 +P1640__1__0___3296 +P1456__1__1648___1488 +P2653__0.4__0___0_90 +P2727__0.4__0___0_90 +P0030__1__1648___1382 +P2598__0.4__0___0_180 +P2725__1__3502___1684 +P1631__1__1648___2976 +P2055__0.4__0___0_270 +P0141__0.4__512___0_180 +P1589__1__2472___2472 +P0141__0.4__707___0_90 +P1440__0.4__0___0_180 +P1458__0.4__538___917 +P2751__1__1648___2451 +P1586__1__2976___1648 +P1440__0.4__0___0 +P1633__1__824___2472 +P1333__0.4__512___0_90 +P0770__0.4__0___0_90 +P1456__1__1648___824 +P1641__1__2976___3958 +P2725__1__3296___0 +P1594__0.4__0___0 +P1445__0.4__1192___288 +P1793__1__1648___1648 +P2692__0.4__0___0_90 +P1432__1__824___824 +P1794__0.4__0___743_270 +P1466__0.4__1024___1241_90 +P1861__0.4__0___0_270 +P1466__0.4__0___0 +P0044__1__624___0 +P2756__0.4__863___688_270 +P1638__1__1648___0 +P2716__1__2472___824 +P1594__0.4__0___576 +P2659__1__0___963 +P2379__1__0___0 +P2655__1__0___1854 +P2800__1__824___0 +P1466__0.4__512___1241_180 +P1342__1__1648___824 +P1791__0.4__943___0 +P1257__1__824___2472 +P1433__0.4__0___0 +P1471__0.4__896___512_180 +P2725__1__0___0 
+P1463__1__824___0 +P2591__0.4__0___0_90 +P1467__0.4__554___0_90 +P1464__1__1648___0 +P1443__1__3296___1648 +P2534__1__0___824 +P2011__0.4__0___0_180 +P1466__0.4__1024___1241_180 +P2727__1__0___0 +P1343__1__1648___0 +P1449__0.4__0___0 +P1463__0.4__0___194 +P1467__0.4__554___510_180 +P1964__1__277___0 +P2144__1__0___0 +P1432__0.4__0___690_180 +P1445__0.4__1192___0_270 +P1632__1__1648___2472 +P1474__1__824___2472 +P2653__0.4__0___166 +P0164__0.4__0___0_180 +P0029__0.4__0___0 +P1464__0.4__0___838 +P1636__1__1648___824 +P2743__1__0___824 +P2091__0.4__0___0_270 +P2653__1__1091___0 +P2727__1__633___427 +P1250__1__1648___1648 +P2528__1__824___874 +P2657__0.4__0___0_90 +P2162__0.4__0___0_180 +P1636__1__824___824 +P2598__0.4__0___0 +P2374__1__0___0 +P1240__1__0___824 +P1577__1__1648___0 +P1876__0.4__0___0_270 +P1464__0.4__0___0 +P1201__1__1648___0 +P2590__0.4__0___0_90 +P1343__1__2976___2472 +P2491__1__0___824 +P2121__0.4__0___0_270 +P2587__0.4__0___0_270 +P1639__1__2472___1648 +P2587__0.4__129___0_90 +P1793__0.4__0___229_90 +P1475__0.4__0___0 +P1475__1__0___0 +P1638__1__3270___1648 +P1589__1__2976___2976 +P1325__1__824___1648 +P1551__1__2976___1648 +P1863__0.4__0___296_90 +P2721__1__3515___824 +P1505__0.4__27___0_90 +P2720__1__1648___824 +P1540__1__824___2472 +P1445__1__1648___2257 +P1505__0.4__0___0_180 +P1342__1__0___1648 +P1964__0.4__0___0_90 +P0050__1__824___1648 +P2585__1__674___0 +P1256__1__824___824 +P1247__0.4__0___0_90 +P2116__1__282___395 +P1449__1__0___824 +P1341__1__1648___1648 +P2570__1__0___441 +P2714__1__3513___824 +P1463__0.4__466___0_90 +P1793__1__2466___1648 +P0021__0.4__124___270_180 +P2141__1__0___0 +P2650__1__824___0 +P1342__1__1648___0 +P2592__1__824___443 +P1358__1__1648___1648 +P2650__0.4__0___0_90 +P2693__1__824___1475 +P2794__0.4__0___512 +P2743__1__824___824 +P1458__1__2880___3296 +P2591__1__824___0 +P1458__0.4__538___0_90 +P1786__0.4__309___0_270 +P2655__0.4__0___127_180 +P2754__0.4__359___465 +P1632__1__1648___1648 +P2651__1__641___824 +P1467__0.4__554___510_90 +P2397__1__0___0 +P1466__0.4__512___512_270 +P2141__0.4__0___0_270 +P1863__0.4__126___296_270 +P1646__1__0___1648 +P1639__1__1648___3296 +P1521__1__824___0 +P1505__0.4__0___0_270 +P1341__1__1648___2472 +P2362__0.4__0___0_180 +P1615__1__2472___2976 +P1794__0.4__0___512_180 +P0029__0.4__80___0 +P1456__0.4__0___0_90 +P2662__1__1423___1648 +P2718__1__1648___824 +P1466__0.4__512___0_90 +P1640__0.4__512___969 +P2392__0.4__0___0 +P2725__1__824___0 +P0030__0.4__80___0_90 +P1631__1__1648___1648 +P1861__0.4__0___0_90 +P1786__0.4__309___443 +P2251__1__0___0 +P2794__1__824___0 +P1794__0.4__230___743_180 +P2591__0.4__0___0_180 +P1791__1__3296___824 +P1641__1__0___3958 +P1861__0.4__0___0_180 +P2672__1__0___824 +P1786__0.4__0___0_270 +P2116__1__0___0 +P2800__0.4__0___276_180 +P1466__0.4__1024___1024_90 +P2570__0.4__0___0_90 +P1470__0.4__1024___512_90 +P1598__1__2976___2472 +P1341__1__2472___0 +P2725__1__824___1648 +P2672__0.4__0___78_90 +P0770__1__586___0 +P1449__0.4__0___359 +P1463__0.4__0___194_90 +P2598__1__1332___1030 +P2722__1__1648___824 +P1464__0.4__502___0_90 +P1505__0.4__0___0_90 +P0147__0.4__0___0_90 +P1369__1__3296___1648 +P1458__0.4__538___917_90 +P1458__1__2472___3829 +P2141__1__148___166 +P2438__1__353___0 +P1861__0.4__0___417_270 +P2362__0.4__0___0 +P1794__0.4__230___512_270 +P0029__1__824___0 +P1474__1__1648___1648 +P1470__0.4__512___512_90 +P2598__1__1332___824 +P0856__1__0___0 +P1594__0.4__512___512 +P1577__1__1648___824 +P1647__1__824___1648 +P2663__1__0___0 +P1463__0.4__466___194 +P2800__0.4__0___0_270 
+P2655__0.4__512___0_180 +P1640__0.4__576___969_270 +P2653__0.4__0___0_180 +P1594__0.4__0___576_90 +P1786__0.4__309___443_270 +P1224__1__0___2472 +P1638__1__1648___1648 +P1337__1__824___2472 +P2116__0.4__0___0_90 +P2594__1__589___1648 +P2742__1__0___824 +P1794__0.4__0___512_270 +P1458__0.4__512___917_180 +P2563__1__824___0 +P1354__1__2976___0 +P2655__1__2472___1648 +P1445__1__4120___0 +P1432__1__0___2472 +P2754__0.4__0___0 +P2491__1__0___0 +P1793__0.4__372___0_180 +P1466__0.4__0___0_90 +P2727__0.4__0___0_270 +P1464__1__824___3296 +P1594__0.4__0___0_90 +P2651__0.4__0___0_270 +P2721__1__1648___1429 +P1466__0.4__0___0_180 +P2563__0.4__0___0_180 +P1903__1__922___0 +P2659__0.4__0___0_270 +P2121__0.4__0___0_90 +P1464__0.4__502___838_180 +P2743__1__824___0 +P2055__0.4__0___0 +P2257__1__0___57 +P1640__0.4__576___0 +P1341__1__2472___824 +P0029__1__824___824 +P2794__0.4__0___0 +P1470__1__3296___1648 +P1791__0.4__512___0_270 +P1594__0.4__512___512_270 +P2725__1__2472___1684 +P2203__1__0___824 +P0164__0.4__0___0_90 +P0018__0.4__0___0 +P2368__0.4__7___0_180 +P1861__0.4__0___0 +P2593__1__179___677 +P0021__0.4__124___0_270 +P1580__1__2976___0 +P2591__1__0___0 +P2651__1__0___824 +P1794__0.4__0___743_180 +P1863__1__824___824 +P1240__1__2472___824 +P1268__0.4__576___424_180 +P1470__0.4__512___0_270 +P2663__1__0___824 +P2800__1__0___1648 +P1641__1__1648___3958 +P2162__1__0___356 +P2428__1__0___824 +P1360__1__2976___824 +P1230__1__4120___1648 +P1471__0.4__0___0_270 +P2392__0.4__0___0_270 +P1793__0.4__0___229_180 +P1551__1__2472___1648 +P1449__0.4__0___0_90 +P2794__1__1648___824 +P2580__0.4__0___0 +P2655__0.4__0___0_90 +P2657__1__0___0 +P1640__0.4__576___0_270 +P2141__0.4__0___0 +P1458__1__2472___3296 +P1467__0.4__512___0_270 +P1794__0.4__230___743_270 +P2335__1__0___824 +P2194__0.4__0___0_180 +P1793__0.4__372___229_90 +P0030__1__824___1382 +P1743__1__0___824 +P2761__0.4__864___512_90 +P0770__1__0___0 +P1786__0.4__0___0 +P1467__0.4__512___510_90 +P1458__1__2472___1648 +P1464__0.4__502___0 +P2194__1__0___1148 +P1458__0.4__538___512 +P2397__1__824___0 +P0047__0.4__0___0_270 +P1258__1__0___0 +P1464__0.4__502___0_180 +P1863__0.4__126___0_180 +P1786__0.4__0___0_180 +P2742__1__824___824 +P1432__0.4__331___690_90 +P1752__1__2976___824 +P1466__0.4__1024___512 +P2725__1__2472___1648 +P1463__0.4__0___0_270 +P2572__0.4__0___0_180 +P2721__1__0___1429 +P0147__1__0___0 +P1505__0.4__27___0 +P1440__0.4__0___0_270 +P1876__1__0___573 +P2011__0.4__0___0 +P2651__1__641___0 +P2794__0.4__0___512_270 +P1467__0.4__512___510 +P2522__1__824___0 +P2722__1__824___824 +P1636__1__1648___2976 +P2348__1__0___341 +P1786__0.4__0___443_90 +P1964__1__0___0 +P1551__1__2472___2472 +P2570__0.4__0___0_180 +P1230__1__4176___824 +P0143__1__0___0 +P2692__1__1648___0 +P1268__0.4__512___424_90 +P1786__1__824___824 +P2528__1__1469___874 +P1977__1__647___0 +P2719__1__1648___0 +P2379__1__1061___0 +P2570__1__0___0 +P1471__1__1648___824 +P1440__1__824___621 +P2197__1__0___665 +P1445__0.4__1024___0_180 +P1467__1__2920___2472 +P1257__1__0___1648 +P1268__0.4__512___0_180 +P1464__1__2472___3296 +P2655__0.4__963___0_270 +P1794__1__0___3394 +P2502__1__0___824 +P2536__1__1381___1287 +P1247__0.4__0___0 +P1519__1__1648___2472 +P1640__0.4__576___512_270 +P1458__0.4__0___0 +P1466__0.4__0___1241_90 +P2756__0.4__0___688_90 +P2800__0.4__0___0 +P0044__0.4__0___0_180 +P1903__1__824___0 +P0029__0.4__0___0_180 +P1463__1__2472___1648 +P2512__1__824___0 +P1449__1__824___824 +P2594__0.4__0___90 +P1470__0.4__1177___512_180 +P1631__1__0___2472 +P0047__1__0___0 
+P0141__0.4__707___0_270 +P1341__1__2976___824 +P1458__1__2880___3829 +P2011__0.4__0___0_90 +P2650__1__1933___974 +P2379__1__1061___824 +P1445__0.4__512___288_180 +P2438__1__0___0 +P1432__0.4__0___512_90 +P0018__1__800___0 +P1268__0.4__512___424 +P2435__1__0___1383 +P1791__1__3893___1648 +P1467__1__2472___1648 +P1445__0.4__0___288 +P1337__1__0___2472 +P1791__0.4__943___512_90 +P1268__0.4__512___424_180 +P1471__1__0___1648 +P0050__1__1535___2100 +P1432__0.4__0___512 +P0050__0.4__0___0_270 +P2368__0.4__7___0 +P0141__1__0___727 +P2595__0.4__0___0 +P2794__0.4__0___512_90 +P2257__1__0___0 +P1369__1__4120___1648 +P1638__1__2472___2472 +P1360__1__2472___1648 +P1742__1__0___1648 +P1466__0.4__1024___0_180 +P2580__1__0___0 +P2659__1__824___0 +P1740__1__1648___2976 +P1793__0.4__0___0_90 +P1432__0.4__0___0_180 +P0770__1__586___334 +P1863__0.4__0___296 +P1640__0.4__576___512_180 +P2756__0.4__512___688_90 +P1456__1__2470___0 +P2800__0.4__300___276_180 +P1639__1__1648___1648 +P2587__1__0___0 +P1458__0.4__0___0_180 +P1793__0.4__0___0_270 +P0047__0.4__0___0_90 +P1470__0.4__1024___512 +P1647__1__824___824 +P2650__0.4__159___0_270 +P1594__0.4__0___0_270 +P2754__0.4__0___465_180 +P2335__1__824___824 +P2721__1__824___824 +P2720__1__1648___0 +P2754__1__1648___1648 +P0130__1__0___0 +P2761__0.4__864___512 +P1640__1__0___3958 +P2597__1__0___0 +P2721__1__3515___1429 +P2721__1__2472___0 +P0673__0.4__0___0 +P2501__1__768___824 +P2761__0.4__864___0_270 +P1466__0.4__1024___1024 +P1467__0.4__0___510_90 +P2659__0.4__0___0 +P1471__0.4__512___0_90 +P1786__0.4__309___0_180 +P1464__1__824___0 +P1433__0.4__0___0_180 +P2655__1__824___0 +P1456__0.4__374___0_180 +P2714__1__3513___1426 +P1474__1__824___1648 +P2594__0.4__0___90_90 +P1466__1__2472___3296 +P2012__1__0___0 +P1464__0.4__0___512_180 +P0159__1__0___0 +P2585__0.4__0___0 +P1505__1__824___824 +P1475__0.4__0___0_270 +P1433__0.4__0___0_90 +P1791__0.4__943___512_180 +P1445__1__824___2257 +P1466__0.4__0___1241_270 +P1641__1__2472___824 +P1640__1__2976___3958 +P2761__0.4__864___0_90 +P1341__1__2976___0 +P2335__1__1463___1167 +P2166__0.4__0___0_180 +P2714__1__2472___1426 +P1903__1__0___0 +P1471__1__2472___824 +P1515__1__0___2472 +P1470__0.4__1024___512_270 +P2597__0.4__0___0_270 +P0673__1__91___123 +P1793__0.4__372___0_270 +P2592__1__0___443 +P1458__1__824___0 +P1791__0.4__943___0_90 +P2572__1__49___824 +P2365__1__1807___0 +P1342__1__824___824 +P2725__1__2472___0 +P2725__1__3502___0 +P2011__1__0___0 +P1594__1__1648___824 +P1432__0.4__331___690_270 +P2570__1__775___0 +P1977__1__0___735 +P1470__0.4__512___0_180 +P2754__0.4__359___465_180 +P1340__1__2472___824 +P1456__0.4__0___0_180 +P2362__0.4__0___0_90 +P2718__1__1648___1422 +P2800__1__824___824 +P2570__0.4__0___0_270 +P1238__1__2976___0 +P1470__0.4__1177___0_270 +P2716__1__824___0 +P2794__0.4__512___0_90 +P2055__1__130___418 +P2751__1__2472___2451 +P2203__0.4__0___0_180 +P2332__1__1967___824 +P1876__1__306___0 +P1463__1__2472___824 +P1633__1__824___1648 +P1470__1__3296___824 +P1475__0.4__0___0_180 +P2565__0.4__0___0_90 +P1863__0.4__0___0 +P2197__0.4__0___0_270 +P1433__1__0___0 +P0131__0.4__0___0_180 +P1440__1__824___0 +P1466__0.4__1024___512_270 +P1432__0.4__331___512 +P1458__0.4__512___0_180 +P0029__0.4__0___0_270 +P2587__1__824___824 +P2651__1__0___0 +P2595__1__0___0 +P2141__1__148___0 +P2379__1__824___824 +P2403__1__1109___824 +P0141__1__824___727 +P2590__1__0___0 +P1464__1__824___1648 +P2754__1__1648___824 +P2794__0.4__512___512_270 +P2650__0.4__159___0_90 +P1458__0.4__538___0_270 +P2714__1__2472___0 
+P1268__0.4__576___424_270 +P1861__1__0___0 +P1639__1__0___3296 +P1584__1__2472___3947 +P2251__1__247___0 +P1791__0.4__943___512_270 +P1742__1__2472___824 +P2725__1__3296___1684 +P2587__1__0___824 +P1521__1__2976___2976 +P1505__0.4__27___0_180 +P2714__1__3296___1426 +P2591__1__829___824 +P0050__0.4__0___226_90 +P2194__0.4__0___0_90 +P1786__1__0___0 +P1467__0.4__0___510 +P1459__1__824___3296 +P1466__0.4__0___1241_180 +P2594__1__589___824 +P2580__0.4__0___0_180 +P2742__0.4__0___0_270 +P1641__1__2976___0 +P1466__0.4__0___1024_270 +P1467__0.4__554___0 +P2514__1__1016___1648 +P1466__0.4__512___0 +P2598__1__824___824 +P1458__0.4__0___0_90 +P1247__0.4__0___0_180 +P1594__0.4__512___0_90 +P2332__1__1648___824 +P1863__0.4__126___0_270 +P1445__0.4__512___0 +P2121__0.4__0___0_180 +P1466__0.4__1024___512_90 +P1863__1__0___824 +P0021__0.4__0___0 +P2714__1__824___824 +P2593__1__179___0 +P2724__1__824___824 +P1341__1__824___1648 +P1676__1__3267___1648 +P2428__1__0___1587 +P2091__0.4__0___0_90 +P1458__1__2880___2472 +P2653__0.4__0___166_180 +P2756__0.4__512___688_270 +P1337__1__824___1648 +P1793__0.4__372___229_180 +P0030__1__1648___824 +P2655__0.4__512___127_180 +P0047__1__102___0 +P2754__0.4__0___465 +P1333__0.4__576___0_270 +P1863__0.4__126___296_180 +P1466__0.4__512___512 +P1458__0.4__512___0_270 +P2373__1__824___824 +P0162__1__0___439 +P2672__0.4__0___0_180 +P2756__0.4__0___688 +P1791__0.4__512___0_180 +P1432__0.4__0___690 +P1977__0.4__0___0 +P1794__1__0___3296 +P2166__1__0___0 +P2598__0.4__0___0_90 +P1340__1__2976___1648 +P1449__1__824___1648 +P2335__1__0___1167 +P2116__0.4__0___0_270 +P2501__1__768___1436 +P1594__0.4__576___512_90 +P1640__0.4__512___512_180 +P2727__0.4__0___0 +P1471__1__2472___0 +P2203__0.4__0___0_90 +P0141__0.4__512___0_90 +P1464__0.4__0___838_180 +P1456__1__1648___0 +P1463__0.4__466___0_270 +P2656__0.4__0___0 +P1736__1__3258___1648 +P2657__0.4__0___0 +P1786__0.4__0___0_90 +P2121__0.4__0___0 +P2720__1__0___824 +P1464__0.4__0___838_270 +P2428__1__824___1587 +P1471__0.4__0___0_90 +P0141__0.4__0___0 +P0029__0.4__80___0_180 +P1861__1__1020___0 +P2657__0.4__0___0_180 +P0111__0.4__0___0_270 +P2800__0.4__300___276_270 +P1585__1__2976___1648 +P0050__0.4__0___0_180 +P1432__0.4__331___0_90 +P1638__1__2472___0 +P1432__1__1648___1648 +P1594__0.4__576___0_90 +P2373__1__1196___824 +P1466__0.4__512___1024 +P1342__1__2976___0 +P1964__0.4__0___0_270 +P2392__0.4__0___0_90 +P2592__0.4__0___0_90 +P2714__1__1648___0 +P1791__0.4__512___512_180 +P1463__1__1648___2022 +P1471__0.4__0___512_270 +P1584__1__1648___3947 +P1640__0.4__576___0_90 +P2653__1__0___824 +P0018__1__800___665 +P2538__1__863___0 +P2725__1__3296___824 +P0131__0.4__0___0_270 +P1432__1__1648___824 +P2768__1__437___824 +P2761__0.4__864___0 +P0021__0.4__0___270_270 +P1752__1__2976___1648 +P2536__1__1381___824 +P2587__0.4__0___0 +P2244__1__241___49 +P2597__0.4__0___0 +P2332__1__1967___1450 +P2655__0.4__0___0_180 +P1505__1__1603___1301 +P2721__1__0___824 +P0164__1__0___0 +P2563__1__883___0 +P1793__0.4__0___229 +P2594__1__589___1762 +P2592__0.4__0___0_270 +P1458__0.4__512___917_90 +P1876__1__306___573 +P0050__0.4__0___226 +P1732__1__3258___824 +P1640__1__2976___1648 +P0164__1__423___0 +P2721__1__0___0 +P2756__0.4__512___688_180 +P0021__0.4__0___270_180 +P2724__1__824___1432 +P1650__1__824___1648 +P0018__1__0___0 +P1514__1__0___2976 +P2423__1__0___0 +P1861__0.4__0___417 +P1240__1__1648___824 +P1794__0.4__230___743 +P2655__0.4__0___127_270 +P1458__1__2880___1648 +P0050__1__1535___1648 +P2572__1__49___1171 +P1445__0.4__1024___288 +P0673__1__91___0 
+P2754__0.4__359___0_270 +P1466__0.4__512___0_180 +P1361__1__2976___2976 +P2794__0.4__0___0_270 +P2162__0.4__0___0_90 +P1471__0.4__0___994_270 +P1471__0.4__0___512_180 +P2591__0.4__0___0 +P1445__0.4__0___288_180 +P1640__0.4__576___512 +P2251__1__247___56 +P0030__0.4__80___0_270 +P2166__0.4__0___0_90 +P1466__0.4__0___512_270 +P1863__0.4__126___0_90 +P1333__1__2976___0 +P1458__1__0___824 +P1257__1__1648___0 +P2534__1__0___1593 +P1467__0.4__0___0_270 +P1432__0.4__0___512_180 +P2593__0.4__0___0 +P1633__1__0___2976 +P2203__1__446___824 +P1786__0.4__309___443_180 +P2592__1__0___0 +P2800__1__824___1648 +P1449__0.4__0___359_180 +P1794__0.4__0___512_90 +P1474__1__1648___2472 +P2734__1__1648___1648 +P1596__1__0___1648 +P2794__0.4__512___0_180 +P1464__0.4__0___512_90 +P2656__1__0___897 +P1467__1__1648___2810 +P1445__1__0___2257 +P1466__0.4__512___512_90 +P2719__1__824___0 +P1466__0.4__512___1241_270 +P1458__0.4__538___512_90 +P0021__1__1648___1648 +P2362__1__824___0 +P2721__1__3296___1429 +P1638__1__2472___1648 +P1466__0.4__0___0_270 +P2754__0.4__0___0_270 +P2141__0.4__0___0_90 +P2365__0.4__0___0_180 +P1793__0.4__0___229_270 +P2157__0.4__0___0 +P1333__0.4__512___0 +P1440__0.4__0___0_90 +P2055__1__0___418 +P2452__1__609___1136 +P1903__1__922___264 +P1585__1__2472___1648 +P0770__1__0___334 +P2598__1__824___0 +P2368__1__0___824 +P1466__1__1648___2472 +P2651__0.4__0___0_180 +P2091__0.4__0___0 +P1184__1__1570___3296 +P2538__1__824___0 +P1793__1__824___0 +P1470__0.4__1024___0_270 +P2800__0.4__0___0_90 +P1638__1__3270___2472 +P2522__1__0___0 +P2395__1__824___1439 +P2653__1__0___0 +P2800__0.4__300___0_270 +P2655__1__0___1648 +P1432__0.4__0___690_270 +P1443__1__3296___2316 +P1445__1__1648___1648 +P1268__1__2976___824 +P1445__0.4__512___288_270 +P0021__0.4__0___270 +P2595__0.4__0___0_180 +P2368__1__0___0 +P2655__0.4__963___127 +P2563__0.4__0___0 +P2720__1__1648___1422 +P1470__0.4__1177___512_90 +P2756__0.4__0___688_180 +P1732__1__0___1648 +P1794__0.4__230___512_180 +P1639__1__2472___3296 +P2651__1__641___897 +P2365__0.4__0___0 +P1333__0.4__576___0_90 +P2594__0.4__0___0_270 +P2714__1__1648___1426 +P1458__0.4__512___0_90 +P1458__1__2472___2472 +P1268__0.4__576___0_90 +P1333__0.4__576___0 +P0029__1__0___1383 +P2754__0.4__0___465_90 +P2714__1__0___824 +P2472__1__573___1348 +P2655__0.4__963___0_90 +P1466__0.4__1024___1024_180 +P2587__0.4__129___0_180 +P1340__1__2472___0 +P2593__0.4__0___0_270 +P0141__0.4__0___0_270 +P2742__1__824___1291 +P1340__1__2472___1648 +P0030__1__1735___1382 +P1791__1__3893___824 +P2724__1__2472___1432 +P1791__0.4__512___512_270 +P1458__0.4__512___512_270 +P2800__0.4__0___276_90 +P1445__1__4516___0 +P2166__0.4__0___0_270 +P2365__0.4__0___0_270 +P1640__0.4__0___969_270 +P2672__0.4__0___78_270 +P2655__1__0___0 +P2512__1__862___0 +P2375__1__0___0 +P1594__0.4__576___512_180 +P1360__1__2976___1648 +P1247__1__824___0 +P1471__1__824___1648 +P2653__1__824___0 +P2381__1__1205___0 +P0029__1__0___824 +P1449__0.4__0___0_180 +P1432__0.4__331___512_90 +P1550__1__1648___0 +P2506__1__824___824 +P1745__1__1648___1648 +P2655__1__3296___1854 +P1515__1__824___2976 +P2404__1__2472___0 +P2800__0.4__300___0_90 +P2435__1__549___1383 +P2692__0.4__50___0_180 +P1594__0.4__0___512_180 +P0021__0.4__0___0_180 +P2756__0.4__0___688_270 +P2655__0.4__512___0_270 +P2714__1__2472___824 +P1863__0.4__0___296_180 +P1466__0.4__1024___1024_270 +P0147__0.4__0___0 +P2751__1__824___1648 +P2800__1__0___0 +P1373__1__1648___2472 +P0018__1__0___665 +P2395__1__1648___1439 +P1466__1__2472___1648 +P2794__0.4__512___0_270 
+P1638__1__2472___2976 +P1639__1__2976___2472 +P1594__0.4__576___0_270 +P0021__1__1648___824 +P1743__1__0___0 +P2435__1__549___824 +P1505__1__0___1301 +P1673__1__1648___1648 +P1640__0.4__0___969_180 +P2452__1__0___824 +P1464__0.4__502___838 +P1464__1__0___0 +P1522__1__2976___0 +P2502__1__0___0 +P2598__1__1332___0 +P2716__1__2996___824 +P1470__0.4__1177___512 +P2659__0.4__55___0_270 +P2368__0.4__0___0 +P2570__0.4__0___0 +P1464__0.4__502___0_270 +P2650__1__1933___824 +P2194__1__0___824 +P2203__0.4__0___0 +P1727__1__2472___2976 +P1636__1__1648___0 +P1449__0.4__0___359_90 +P1794__0.4__0___743 +P2768__1__0___1333 +P1466__0.4__512___1241_90 +P1791__0.4__512___0 +P1594__0.4__576___512_270 +P1445__0.4__1192___288_180 +P1463__0.4__466___194_90 +P1594__0.4__0___512 +P1730__1__1648___824 +P2655__1__824___1648 +P2719__1__1648___824 +P1467__0.4__512___510_180 +P2453__1__0___824 +P1640__0.4__512___512_270 +P1594__0.4__0___512_270 +P1268__0.4__512___0_270 +P1679__1__824___2472 +P2721__1__2472___1429 +P2348__1__465___341 +P1269__1__2472___1648 +P1639__1__2472___2472 +P2563__1__824___145 +P1584__1__824___3296 +P2761__0.4__864___0_180 +P1445__0.4__0___0_270 +P1470__0.4__1177___0 +P0018__0.4__0___0_90 +P1343__1__2976___1648 +P1432__1__824___2472 +P1432__1__824___0 +P1793__1__2466___824 +P2794__0.4__512___0 +P1742__1__1648___824 +P1466__0.4__512___1024_180 +P1467__1__1648___1648 +P1467__0.4__0___510_180 +P0131__1__512___0 +P1594__0.4__0___512_90 +P0141__1__0___0 +P1449__0.4__0___0_270 +P1463__1__1648___824 +P1631__1__0___1648 +P2563__0.4__0___0_270 +P1449__1__0___1648 +P1449__1__824___0 +P1467__0.4__0___0 +P2657__1__864___824 +P0673__0.4__0___0_270 +P2794__0.4__512___512_90 +P1634__1__824___0 +P2332__1__0___1450 +P0147__1__682___0 +P1640__0.4__512___512 +P1594__0.4__512___0_270 +P2472__1__0___824 +P2657__1__864___0 +P2655__0.4__963___127_90 +P1458__0.4__538___512_180 +P1471__0.4__896___0_90 +P1633__1__824___2976 +P0030__1__824___0 +P1727__1__2976___2976 +P2734__1__1648___1675 +P1445__0.4__0___0_90 +P1977__1__647___735 +P1466__0.4__512___1024_270 +P2727__1__633___0 +P2657__1__824___824 +P2587__0.4__0___0_90 +P2656__1__641___824 +P1650__1__824___824 +P2663__1__824___0 +P0030__0.4__0___0_180 +P2716__1__2472___1420 +P1641__1__2976___824 +P0856__1__824___424 +P1471__0.4__0___512_90 +P1594__1__2472___824 +P2657__1__0___1425 +P1518__1__0___2472 +P2655__1__824___1854 +P2721__1__1648___0 +P2692__0.4__50___0_90 +P1640__0.4__576___969_180 +P0111__0.4__0___0_90 +P1863__0.4__0___0_90 +P1863__0.4__0___0_270 +P2166__0.4__0___0 +P1529__1__0___1648 +P1863__0.4__126___296 +P1747__1__2472___1648 +P2794__0.4__0___0_180 +P1466__0.4__0___512 +P2362__0.4__0___0_270 +P2381__1__824___0 +P2162__0.4__0___0_270 +P2444__1__884___0 +P1640__0.4__512___969_270 +P2395__1__824___824 +P2565__0.4__0___0 +P2725__1__3502___824 +P1640__1__2472___2472 +P2365__1__1648___0 +P2756__1__1648___3257 +P1445__1__824___1648 +P1464__1__0___1648 +P1747__1__2976___1648 +P2655__1__2472___824 +P1325__1__1648___2472 +P1470__1__2472___824 +P1247__0.4__0___0_270 +P2761__1__3696___824 +P2121__1__0___0 +P1580__1__2472___0 +P1793__1__1648___0 +P2598__1__824___1030 +P1470__0.4__1024___0_180 +P2580__0.4__0___0_270 +P0029__1__824___1383 +P2055__1__0___0 +P1341__1__1648___0 +P1268__0.4__576___424_90 +P0047__0.4__0___0_180 +P2794__0.4__512___512 +P1464__1__1648___824 +P1458__0.4__0___0_270 +P0673__0.4__0___0_90 +P2597__1__0___703 +P1679__1__0___3296 +P1861__1__1020___824 +P1598__1__2472___2472 +P1786__0.4__0___443_180 +P0050__1__824___824 +P1445__0.4__512___0_270 
+P2800__0.4__300___276_90 +P0139__1__0___0 +P2342__1__0___1350 +P1269__1__2472___824 +P2650__0.4__159___0_180 +P1791__0.4__512___512_90 +P1861__0.4__0___417_180 +P1786__0.4__309___0 +P2368__0.4__0___0_90 +P1373__1__2472___0 +P2373__1__824___0 +P0021__0.4__124___0_90 +P2725__1__0___824 +P2563__1__883___145 +P1471__1__0___2472 +P1456__0.4__374___0_270 +P1250__1__824___824 +P2754__0.4__359___0_180 +P1863__1__0___0 +P1445__0.4__512___288 +P2655__0.4__512___0 +P2244__1__0___0 +P1580__1__1648___0 +P2653__0.4__0___166_90 +P2452__1__609___824 +P2368__0.4__0___0_180 +P1463__0.4__0___194_180 +P2592__1__824___0 +P1786__0.4__309___443_90 +P2197__0.4__0___0_90 +P2655__0.4__512___127_270 +P2655__0.4__963___0 +P1464__1__824___3630 +P1791__0.4__943___512 +P0131__1__0___0 +P1247__1__0___0 +P2116__0.4__0___0_180 +P0044__1__624___708 +P1433__0.4__0___0_270 +P2720__1__2472___0 +P0770__0.4__0___0_270 +P1515__1__824___2472 +P2656__1__0___824 +P1638__1__3270___2976 +P1358__1__824___1648 +P1471__0.4__896___0_180 +P0018__0.4__0___0_270 +P0030__1__1735___0 +P1464__1__0___2472 +P1539__1__824___1648 +P0021__0.4__0___270_90 +P2116__0.4__0___0 +P2587__0.4__129___0_270 +P1458__0.4__538___917_270 +P2657__1__824___1425 +P1463__0.4__0___0_180 +P2197__0.4__0___0 +P0147__0.4__0___0_180 +P1475__0.4__0___0_90 +P2719__1__3514___0 +P2650__1__1648___974 +P1863__0.4__0___296_270 +P2663__1__1455___0 +P2756__0.4__863___688_180 +P1440__1__950___0 +P1786__0.4__0___443 +P2590__1__362___0 +P2650__0.4__0___0 +P1458__0.4__512___512_180 +P1641__1__0___1648 +P1639__1__2976___3296 +P2663__1__824___824 +P1594__0.4__576___512 +P0141__0.4__512___0 +P1794__0.4__230___512_90 +P1467__1__2472___2472 +P1556__1__2472___824 +P1445__1__4516___824 +P1325__1__1648___1648 +P1432__0.4__331___0_270 +P1793__0.4__372___229 +P1903__0.4__0___0_180 +P2794__0.4__512___512_180 +P2572__1__0___824 +P2714__1__824___1426 +P1794__0.4__0___512 +P2373__1__1196___0 +P2502__1__0___1400 +P2653__1__1091___824 +P2692__0.4__0___0_270 +P1471__0.4__0___0 +P2653__0.4__0___166_270 +P1445__0.4__1192___0_180 +P2593__0.4__0___0_180 +P1361__1__2472___2976 +P1432__0.4__0___0_90 +P1258__1__824___0 +P2719__1__3296___0 +P2692__0.4__0___0 +P0131__1__0___256 +P2362__1__824___1000 +P2157__1__0___0 +P1463__0.4__466___194_270 +P0856__0.4__0___0_180 +P1679__1__0___2472 +P2716__1__1648___1420 +P2157__1__89___0 +P2203__1__446___869 +P0770__0.4__0___0_180 +P2650__0.4__0___0_270 +P2800__0.4__0___0_180 +P2718__1__0___824 +P0856__1__0___424 +P1791__1__2472___1648 +P0141__0.4__0___0_90 +P1466__0.4__512___1241 +P1876__1__0___0 +P1466__0.4__0___1024_180 +P2162__1__205___0 +P1466__0.4__0___1241 +P1732__1__824___0 +P2368__0.4__7___0_90 +P1863__0.4__126___296_90 +P2716__1__1648___0 +P1463__1__824___1648 +P1594__0.4__512___512_180 +P1522__1__2472___0 +P1333__0.4__512___0_270 +P2591__1__824___824 +P2716__1__2996___0 +P1736__1__1648___3942 +P0050__0.4__0___226_270 +P2197__1__1225___0 +P2756__0.4__863___688_90 +P2365__0.4__108___0 +P1876__0.4__0___0_180 +P1466__0.4__0___512_180 +P2591__1__0___824 +P1977__0.4__0___0_180 +P2794__1__1648___0 +P2197__1__824___0 +P1456__1__2470___824 +P2672__0.4__0___0 +P0111__0.4__0___0_180 +P2656__1__0___0 +P1470__0.4__512___512 +P1636__1__2472___0 +P0673__1__0___0 +P2768__1__0___824 +P0144__1__0___0 +P1505__0.4__0___0 +P2672__0.4__0___78 +P2597__1__420___703 +P1793__0.4__372___229_270 +P0141__0.4__707___0 +P1793__0.4__0___0_180 +P1786__0.4__0___443_270 +P2392__0.4__0___0_180 +P1962__1__0___145 +P1540__1__824___2976 +P2597__0.4__0___0_180 +P1471__0.4__512___512_180 
+P2416__1__0___824 +P2203__0.4__0___0_270 +P2727__0.4__0___0_180 +P2595__1__0___648 +P2528__1__824___824 +P2721__1__824___1429 +P1539__1__1648___1648 +P1456__0.4__374___0 +P2754__0.4__0___0_180 +P1636__1__824___2976 +P1574__1__2976___2976 +P1464__0.4__502___838_270 +P1449__1__0___0 +P1432__0.4__0___0 +P0164__1__423___222 +P2794__0.4__0___512_180 +P1333__0.4__512___0_180 +P2692__1__1662___0 +P2800__0.4__300___276 +P1445__1__0___824 +P0021__1__1846___0 +P1463__0.4__0___0_90 +P2800__0.4__0___276 +P1732__1__824___824 +P2719__1__824___824 +P2653__0.4__0___0_270 +P1432__0.4__331___512_270 +P2403__1__1109___0 +P2565__0.4__0___0_180 +P0029__0.4__0___0_90 +P1458__0.4__512___512_90 +P2594__0.4__0___90_270 +P1224__1__0___2976 +P1640__0.4__576___0_180 +P2650__0.4__0___0_180 +P1634__1__1648___0 +P1464__0.4__0___0_180 +P2591__1__829___1003 +P2593__1__0___0 +P1463__1__1648___1648 +P1471__0.4__896___0_270 +P1640__0.4__512___969_90 +P1791__0.4__943___0_270 +P2408__1__973___1539 +P2727__1__0___427 +P1464__1__2792___3296 +P2734__1__824___1648 +P1463__0.4__466___0_180 +P2594__0.4__0___0 +P2157__1__0___680 +P1751__1__824___1648 +P1794__0.4__230___743_90 +P1615__1__2472___2472 +P1456__1__2470___1488 +P2368__0.4__0___0_270 +P1440__1__950___621 +P0018__0.4__0___0_180 +P2651__0.4__0___0_90 +P2590__0.4__0___0_180 +P1445__0.4__0___288_90 +P1466__0.4__1024___0 +P2725__1__824___1684 +P1745__1__1648___824 +P2435__1__0___824 +P1445__0.4__1024___0_270 +P2378__1__0___0 +P1962__0.4__0___0_180 +P1470__0.4__512___512_180 +P2506__1__907___910 +P1693__1__2472___0 +P1459__1__1648___3296 +P1466__0.4__1024___1241 +P2572__0.4__0___0_270 +P2663__1__824___1143 +P0047__0.4__0___0 +P1432__0.4__331___0_180 +P1268__0.4__512___424_270 +P1342__1__2472___0 +P2672__0.4__0___0_90 +P2653__0.4__0___0 +P2055__0.4__0___0_90 +P1640__1__2976___2472 +P1962__1__0___0 +P2514__1__1016___1716 +P1464__1__2792___3630 +P1903__0.4__0___0_270 +P2362__1__1320___824 +P2368__0.4__7___0_270 +P2714__1__3296___824 +P1594__0.4__512___512_90 +P1432__0.4__331___0 +P1464__0.4__0___0_90 +P0050__0.4__0___0_90 +P1240__1__824___2472 +P2656__0.4__0___0_180 +P1467__0.4__512___0_180 +P1903__1__0___264 +P2663__1__1455___1143 +P2754__0.4__0___465_270 +P1459__1__1648___3406 +P2587__0.4__129___0 +P2519__1__0___0 +P1594__1__824___1648 +P1470__1__1648___1648 +P1445__0.4__512___0_90 +P2591__1__0___1003 +P2563__1__0___145 +P2656__0.4__0___0_270 +P2594__0.4__0___0_180 +P0111__0.4__0___0 +P2693__1__1131___1475 +P2722__1__2472___0 +P2724__1__824___0 +P2594__1__0___1762 +P2725__1__1648___1648 +P1463__0.4__0___0 +P2565__1__744___1258 +P2659__0.4__0___0_90 +P1962__0.4__0___0_90 +P2572__0.4__0___0 +P2657__1__864___1425 +P1464__0.4__502___838_90 +P0021__0.4__0___0_270 +P2501__1__0___1436 +P2395__1__1648___824 +P0141__1__824___0 +P2585__0.4__0___0_270 +P0164__0.4__0___0_270 +P1445__0.4__1192___288_90 +P1903__1__824___264 +P2572__1__0___1171 +P2335__1__1463___824 +P1584__1__2472___3296 +P2365__0.4__108___0_270 +P0149__1__0___0 +P2672__0.4__0___0_270 +P1964__0.4__0___0 +P2655__0.4__963___0_180 +P2655__1__3296___1648 +P1652__1__2472___824 +P1640__1__2976___3296 +P2722__1__0___1422 +P2692__0.4__50___0_270 +P1693__1__824___2472 +P1641__1__0___2472 +P1445__0.4__1192___0_90 +P2506__1__907___824 +P2656__0.4__0___0_90 +P1445__0.4__0___0 +P1861__0.4__0___417_90 +P2663__1__1455___824 +P2725__1__1648___824 +P2659__0.4__55___0_90 +P0030__1__1735___824 +P0021__0.4__0___0_90 +P1463__0.4__466___194_180 +P0050__0.4__0___0 +P1786__0.4__309___0_90 +P2761__0.4__864___512_180 +P1268__0.4__576___0_180 
+P1794__0.4__0___743_90 +P1640__0.4__512___969_180 +P1466__0.4__512___1024_90 +P1467__0.4__512___510_270 +P1458__0.4__538___917_180 +P2542__1__0___824 +P2362__1__1320___1000 +P1432__0.4__331___690_180 +P1467__0.4__0___0_180 +P2244__1__241___0 +P2655__0.4__963___127_180 +P2428__1__824___824 +P1467__0.4__554___0_180 +P2374__1__0___686 +P2659__1__0___0 +P1739__1__2976___0 +P2721__1__1648___824 +P0044__0.4__0___0 +P1458__0.4__512___512 +P1230__1__4176___1648 +P2720__1__0___1422 +P1432__0.4__331___512_180 +P1977__1__0___0 +P1445__1__0___1648 +P2335__1__824___1167 +P2716__1__1648___824 +P1459__1__824___3406 +P2659__0.4__55___0_180 +P1432__0.4__0___690_90 +P2011__1__0___103 +P1340__1__2976___0 +P1470__0.4__1024___0 +P2379__1__0___824 +P1594__1__2976___824 +P1268__0.4__576___424 +P0021__1__1846___824 +P2722__1__0___824 +P1373__1__2976___824 +P1903__0.4__0___0 +P2655__1__2472___1854 +P0141__0.4__0___0_180 +P1505__0.4__27___0_270 +P2378__1__0___824 +P2332__1__1648___1450 +P2162__1__0___0 +P1456__0.4__0___0_270 +P1458__0.4__512___0 +P2504__1__596___0 +P1786__1__824___0 +P2650__1__1648___0 +P1467__0.4__0___0_90 +P1445__0.4__512___288_90 +P2512__1__862___824 +P1240__1__1648___1648 +P0111__1__824___720 +P1739__1__2976___824 +P2655__0.4__512___127 +P2365__0.4__108___0_90 +P2714__1__0___1426 +P1615__1__2472___824 +P0141__1__1648___727 +P1589__1__2976___2472 +P1791__0.4__512___0_90 +P0856__0.4__0___0_270 +P1640__0.4__576___969_90 +P1863__0.4__0___0_180 +P1636__1__824___0 +P1964__0.4__0___0_180 +P1636__1__1648___2472 +P1269__1__1648___824 +P1464__1__824___824 +P1445__0.4__1024___288_90 +P1467__1__2920___1648 +P2651__1__0___897 +P2716__1__2472___0 +P1458__0.4__538___512_270 +P2725__1__2472___824 +P2655__0.4__963___127_270 +P1445__0.4__512___0_180 +P1466__0.4__1024___0_270 +P2591__1__824___1003 +P1514__1__824___1648 +P1693__1__2976___0 +P1458__0.4__538___0_180 +P1594__0.4__576___0 +P1861__1__824___824 +P2692__0.4__50___0 +P1607__1__824___1648 +P1977__0.4__0___0_270 +P1636__1__824___2472 +P2332__1__0___824 +P0111__1__938___720 +P1464__0.4__0___512 +P1343__1__2472___1648 +P2650__1__1648___824 +P1333__1__2472___0 +P1342__1__824___0 +P1470__0.4__1024___512_180 +P1470__0.4__512___0 +P1471__0.4__896___512 +P2563__0.4__0___0_90 +P1505__1__1603___824 +P2597__0.4__0___0_90 +P1863__0.4__126___0 +P1471__0.4__512___512 +P1456__0.4__374___0_90 +P2598__0.4__0___0_270 +P0021__0.4__124___270_90 +P1464__0.4__0___838_90 +P0131__1__512___256 +P1231__1__1648___0 +P0673__0.4__0___0_180 +P2756__0.4__512___688 +P2472__1__573___824 +P2157__0.4__0___0_180 +P0030__0.4__80___0_180 +P1463__1__0___1648 +P1361__1__2472___2472 +P2585__0.4__0___0_90 +P1505__1__824___1301 +P1535__1__824___0 +P2721__1__3296___824 +P1470__1__2472___1648 +P1515__1__0___2976 +P1594__1__1648___0 +P1594__1__2472___0 +P2591__1__829___0 +P1470__0.4__512___0_90 +P2587__0.4__0___0_180 +P2725__1__3502___1648 +P2754__0.4__359___465_90 +P1471__0.4__896___512_90 +P2590__0.4__0___0 +P1250__1__824___1648 +P1458__0.4__538___0 +P2453__1__0___1185 +P2690__1__0___0 +P1467__0.4__554___510 +P2594__0.4__0___0_90 +P1863__1__824___0 +P1606__1__824___2976 +P1791__1__3296___1648 +P1594__1__824___2472 +P2655__0.4__512___127_90 +P2655__1__2472___0 +P1673__1__824___1648 +P1471__0.4__896___0 +P0856__1__838___424 +P2162__1__205___356 +P0021__1__1846___1648 +P2756__0.4__863___688 +P2055__0.4__0___0_180 +P0141__1__1648___0 +P1574__1__2976___1648 +P2580__0.4__0___0_90 +P1466__0.4__512___0_270 +P2197__1__1225___665 +P0141__1__2472___727 +P1519__1__1648___1648 +P2655__0.4__0___0_270 
+P1456__0.4__0___0 +P1268__0.4__576___0 +P1589__1__2472___2976 +P2655__0.4__0___127_90 +P2194__0.4__0___0_270 +P1445__0.4__1024___288_270 +P2595__0.4__0___0_270 +P2716__1__0___0 +P2585__0.4__0___0_180 +P2656__1__641___897 +P1343__1__1648___2472 +P2724__1__1648___1432 +P1467__0.4__554___510_270 +P2593__0.4__0___0_90 +P2585__1__674___724 +P1521__1__2472___2472 +P0162__1__140___439 +P1466__0.4__0___1024_90 +P1471__1__1648___0 +P2587__1__824___0 +P1445__1__1648___824 +P1471__0.4__0___0_180 +P1861__1__0___824 +P1432__1__1648___0 +P2722__1__1648___0 +P2592__1__1031___443 +P1471__0.4__0___512 +P1466__0.4__512___512_180 +P1333__0.4__576___0_180 +P2725__1__3296___1648 +P2745__1__824___0 +P1470__1__3296___0 +P1458__1__824___824 +P0021__0.4__124___0 +P0029__0.4__80___0_270 +P1464__1__0___824 +P1732__1__824___1648 +P2572__0.4__0___0_90 +P2794__0.4__0___0_90 +P1470__0.4__1177___512_270 +P1521__1__2472___2976 +P2800__0.4__300___0_180 +P2657__0.4__0___0_270 +P1343__1__1648___2976 +P2742__0.4__0___0_180 +P1463__0.4__466___0 +P0030__1__824___824 +P0021__0.4__124___270 +P1358__1__1648___824 +P2408__1__824___1539 +P2197__1__824___665 +P1471__0.4__0___994_180 +P1471__0.4__512___512_270 +P2055__1__130___0 +P2203__1__0___869 +P1343__1__2472___2472 +P1466__1__1648___1648 +P1742__1__1648___1648 +P0030__0.4__0___0 +P2720__1__824___824 +P1341__1__1648___824 +P1432__1__824___1648 +P1591__1__1648___2976 +P1736__1__2472___0 +P1876__0.4__0___0_90 +P0131__0.4__0___0 +P0856__0.4__0___0 +P2362__1__1320___0 +P1445__1__824___824 +P2716__1__2996___1420 +P1361__1__2976___2472 +P2157__0.4__0___0_90 +P1471__0.4__512___0_180 +P0021__1__1648___0 +P2091__0.4__0___0_180 +P1373__1__2472___824 +P2721__1__2472___824 +P1268__0.4__576___0_270 +P1445__0.4__1024___288_180 +P2720__1__824___1422 +P1640__0.4__0___969 +P1470__1__1648___824 +P2655__0.4__0___0 +P2800__0.4__300___0 +P2595__0.4__0___0_90 +P2512__1__824___824 +P2657__1__0___824 +P2656__1__641___0 +P1464__0.4__0___0_270 +P1463__1__2472___2022 +P2742__1__0___1291 +P2655__0.4__0___127 +P1360__1__2472___824 +P1505__1__0___824 +P1445__0.4__1024___0_90 +P1466__1__2472___2472 +P2742__0.4__0___0 +P1640__0.4__576___969 +P2743__1__0___0 +P2800__0.4__0___276_270 +P2590__0.4__0___0_270 +P1458__0.4__512___917 +P2528__1__1469___824 +P0029__1__0___0 +P1470__0.4__1177___0_180 +P1458__1__0___0 +P1471__0.4__512___512_90 +P1793__0.4__372___0_90 +P2162__0.4__0___0 +P1458__0.4__512___917_270 +P1640__0.4__512___512_90 +P1521__1__2976___2472 +P2472__1__0___1348 +P1445__0.4__1192___0 +P1470__0.4__1177___0_90 +P1445__0.4__1024___0 +P2725__1__824___824 +P2655__1__3296___824 +P0141__0.4__707___0_180 +P2141__0.4__0___0_180 +P0030__0.4__80___0 +P2365__0.4__108___0_180 +P1463__1__2701___824 +P1793__0.4__372___0 +P0021__0.4__124___270_270 +P1464__1__2472___3630 +P1466__0.4__0___1024 +P1268__0.4__512___0 +P1574__1__2472___1648 +P2725__1__1648___0 +P2651__0.4__0___0 +P2754__0.4__0___0_90 +P0164__0.4__0___0 +P1470__0.4__512___512_270 +P1340__1__2976___824 +P1521__1__824___824 +P0141__0.4__512___0_270 +P0147__1__0___225 +P2527__1__1309___0 +P1876__0.4__0___0 +P2594__0.4__0___90_180 +P2650__0.4__159___0 +P2194__0.4__0___0 +P1748__1__1648___0 +P1466__0.4__1024___0_90 +P1471__1__824___2472 +P1432__0.4__0___512_270 +P2444__1__824___0 +P1791__1__2472___824 +P2794__1__824___824 +P2657__1__824___0 +P1471__0.4__896___512_270 +P2506__1__824___910 +P2592__0.4__0___0_180 +P2761__0.4__864___512_270 +P1358__1__824___2976 +P1640__0.4__0___969_90 +P1977__0.4__0___0_90 +P1470__0.4__1024___0_90 +P2655__1__3296___0 
+P1748__1__824___0 +P1449__0.4__0___359_270 +P1793__0.4__0___0 +P1794__1__0___2472 +P1594__0.4__576___0_180 +P2754__0.4__359___0 +P1594__0.4__0___576_270 +P2379__1__824___0 +P1577__1__824___824 +P1467__0.4__512___0_90 +P2197__1__0___0 +P1470__1__2472___0 +P1743__1__824___0 +P2655__1__0___824 +P1471__0.4__0___994 +P1740__1__2472___2976 +P0856__1__824___0 +P1584__1__1648___3296 +P2768__1__437___1333 +P0770__0.4__0___0 +P2754__1__2433___1648 +P1577__1__824___0 +P0044__0.4__0___0_90 +P0044__0.4__0___0_270 +P2392__1__0___0 +P1791__0.4__512___512 +P1633__1__0___2472 +P2754__0.4__359___0_90 +P0131__0.4__0___0_90 +P0856__0.4__0___0_90 +P1638__1__1648___2472 +P2714__1__0___0 +P2650__1__824___974 +P1594__0.4__0___576_180 +P0021__0.4__124___0_180 +P2653__1__824___824 +P2725__1__1648___1684 +P2650__1__824___824 +P2592__0.4__0___0 +P1466__0.4__1024___512_180 +P2091__1__0___0 +P1339__1__0___1648 +P2563__1__0___0 +P1471__0.4__512___0 +P2742__0.4__0___0_90 +P1343__1__2472___2976 +P1514__1__0___2472 +P2365__0.4__0___0_90 +P1373__1__1648___1648 +P2655__0.4__512___0_90 +P1631__1__1648___2472 +P1962__0.4__0___0 +P1432__0.4__331___690 +P1257__1__824___1648 +P1342__1__0___0 +P1268__0.4__512___0_90 +P1551__1__2976___2472 +P1471__0.4__0___994_90 +P1445__0.4__1192___288_270 +P2662__1__1423___1909 +P2655__1__824___824 +P2011__0.4__0___0_270 +P1467__0.4__512___0 +P2458__1__0___0 +P2362__1__824___824 +P1369__1__4176___1648 +P1373__1__2976___0 +P1470__1__1648___0 +P1861__1__824___0 +P1443__1__2472___2316 +P2754__1__824___824 +P1464__1__0___3296 +P0030__0.4__0___0_90 +P0141__1__2472___0 +P2659__1__0___824 +P0673__1__0___123 +P2157__1__89___680 +P1466__0.4__1024___1241_270 +P1240__1__1648___2472 +P1224__1__824___2472 +P2121__1__0___91 +P0029__0.4__80___0_90 +P2754__0.4__359___465_270 +P2501__1__0___824 +P0147__0.4__0___0_270 +P2166__1__230___0 +P2716__1__824___1420 +P1445__1__4120___824 +P1432__0.4__0___0_270 +P1467__0.4__554___0_270 +P2672__0.4__0___78_180 +P2754__1__1648___0 +P2251__1__0___56 +P1594__0.4__512___0_180 +P1466__0.4__0___512_90 +P1791__0.4__943___0_180 +P2342__1__0___824 +P2594__1__0___824 +P2593__1__0___677 +P2375__1__541___0 +P2692__0.4__0___0_180 +P1640__0.4__576___512_90 +P1463__0.4__0___194_270 +P1467__0.4__0___510_270 +P0050__0.4__0___226_180 +P1639__1__0___3958 +P2157__0.4__0___0_270 +P1337__1__0___1648 +P1962__0.4__0___0_270 +P0164__1__0___222 +P2116__1__0___395 +P0030__0.4__0___0_270 +P1556__1__2472___0 +P1464__0.4__0___512_270 +P2197__0.4__0___0_180 +P2565__1__744___824 +P1794__0.4__230___512 +P2597__1__420___0 +P2408__1__973___824 +P1964__1__277___605 +P1903__0.4__0___0_90 +P0856__1__838___0 +P1693__1__824___1648 +P2408__1__824___824 +P2452__1__0___1136 +P0030__1__1648___0 +P1471__0.4__512___0_270 +P1529__1__0___2472 +P2590__1__0___149 +P1646__1__0___2976 +P1594__0.4__512___0 +P0147__1__682___225 +P1445__0.4__0___0_180 +P2244__1__0___49 +P2591__0.4__0___0_270 +P2659__0.4__0___0_180 +P1230__1__4120___824 +P2565__0.4__0___0_270 +P1736__1__3258___0 +P1268__1__2472___824 +P2570__1__775___441 +P1705__1__2472___1648 +P1224__1__824___2976 +P2594__1__0___1648 +P1793__1__1648___824 +P1594__0.4__0___0_180 +P2734__1__824___1675 +P1445__0.4__0___288_270 +P2659__0.4__55___0 +P2718__1__824___824 +P1640__1__0___3296 +P1456__1__1648___1488 +P2653__0.4__0___0_90 +P2727__0.4__0___0_90 +P0030__1__1648___1382 +P2598__0.4__0___0_180 +P2725__1__3502___1684 +P1631__1__1648___2976 +P2055__0.4__0___0_270 +P0141__0.4__512___0_180 +P1589__1__2472___2472 +P0141__0.4__707___0_90 +P1440__0.4__0___0_180 
+P1458__0.4__538___917 +P2751__1__1648___2451 +P1586__1__2976___1648 +P1440__0.4__0___0 +P1633__1__824___2472 +P1333__0.4__512___0_90 +P0770__0.4__0___0_90 +P1456__1__1648___824 +P1641__1__2976___3958 +P2725__1__3296___0 +P1594__0.4__0___0 +P1445__0.4__1192___288 +P1793__1__1648___1648 +P2692__0.4__0___0_90 +P1432__1__824___824 +P1794__0.4__0___743_270 +P1466__0.4__1024___1241_90 +P1861__0.4__0___0_270 +P1466__0.4__0___0 +P0044__1__624___0 +P2756__0.4__863___688_270 +P1638__1__1648___0 +P2716__1__2472___824 +P1594__0.4__0___576 +P2659__1__0___963 +P2379__1__0___0 +P2655__1__0___1854 +P2800__1__824___0 +P1466__0.4__512___1241_180 +P1342__1__1648___824 +P1791__0.4__943___0 +P1257__1__824___2472 +P1433__0.4__0___0 +P1471__0.4__896___512_180 +P2725__1__0___0 +P1463__1__824___0 +P2591__0.4__0___0_90 +P1467__0.4__554___0_90 +P1464__1__1648___0 +P1443__1__3296___1648 +P2534__1__0___824 +P2011__0.4__0___0_180 +P1466__0.4__1024___1241_180 +P2727__1__0___0 +P1343__1__1648___0 +P1449__0.4__0___0 +P1463__0.4__0___194 +P1467__0.4__554___510_180 +P1964__1__277___0 +P2144__1__0___0 +P1432__0.4__0___690_180 +P1445__0.4__1192___0_270 +P1632__1__1648___2472 +P1474__1__824___2472 +P2653__0.4__0___166 +P0164__0.4__0___0_180 +P0029__0.4__0___0 +P1464__0.4__0___838 +P1636__1__1648___824 +P2743__1__0___824 +P2091__0.4__0___0_270 +P2653__1__1091___0 +P2727__1__633___427 +P1250__1__1648___1648 +P2528__1__824___874 +P2657__0.4__0___0_90 +P2162__0.4__0___0_180 +P1636__1__824___824 +P2598__0.4__0___0 +P2374__1__0___0 +P1240__1__0___824 +P1577__1__1648___0 +P1876__0.4__0___0_270 +P1464__0.4__0___0 +P1201__1__1648___0 +P2590__0.4__0___0_90 +P1343__1__2976___2472 +P2491__1__0___824 +P2121__0.4__0___0_270 +P2587__0.4__0___0_270 +P1639__1__2472___1648 +P2587__0.4__129___0_90 +P1793__0.4__0___229_90 +P1475__0.4__0___0 +P1475__1__0___0 +P1638__1__3270___1648 +P1589__1__2976___2976 +P1325__1__824___1648 +P1551__1__2976___1648 +P1863__0.4__0___296_90 +P2721__1__3515___824 +P1505__0.4__27___0_90 +P2720__1__1648___824 +P1540__1__824___2472 +P1445__1__1648___2257 +P1505__0.4__0___0_180 +P1342__1__0___1648 +P1964__0.4__0___0_90 +P0050__1__824___1648 +P2585__1__674___0 +P1256__1__824___824 +P1247__0.4__0___0_90 +P2116__1__282___395 +P1449__1__0___824 +P1341__1__1648___1648 +P2570__1__0___441 +P2714__1__3513___824 +P1463__0.4__466___0_90 +P1793__1__2466___1648 +P0021__0.4__124___270_180 +P2141__1__0___0 +P2650__1__824___0 +P1342__1__1648___0 +P2592__1__824___443 +P1358__1__1648___1648 +P2650__0.4__0___0_90 +P2693__1__824___1475 +P2794__0.4__0___512 +P2743__1__824___824 +P1458__1__2880___3296 +P2591__1__824___0 +P1458__0.4__538___0_90 +P1786__0.4__309___0_270 +P2655__0.4__0___127_180 +P2754__0.4__359___465 +P1632__1__1648___1648 +P2651__1__641___824 +P1467__0.4__554___510_90 +P2397__1__0___0 +P1466__0.4__512___512_270 +P2141__0.4__0___0_270 +P1863__0.4__126___296_270 +P1646__1__0___1648 +P1639__1__1648___3296 +P1521__1__824___0 +P1505__0.4__0___0_270 +P1341__1__1648___2472 +P2362__0.4__0___0_180 +P1615__1__2472___2976 +P1794__0.4__0___512_180 +P0029__0.4__80___0 +P1456__0.4__0___0_90 +P2662__1__1423___1648 +P2718__1__1648___824 +P1466__0.4__512___0_90 +P1640__0.4__512___969 +P2392__0.4__0___0 +P2725__1__824___0 +P0030__0.4__80___0_90 +P1631__1__1648___1648 +P1861__0.4__0___0_90 +P1786__0.4__309___443 +P2251__1__0___0 +P2794__1__824___0 +P1794__0.4__230___743_180 +P2591__0.4__0___0_180 +P1791__1__3296___824 +P1641__1__0___3958 +P1861__0.4__0___0_180 +P2672__1__0___824 +P1786__0.4__0___0_270 +P2116__1__0___0 +P2800__0.4__0___276_180 
+P1466__0.4__1024___1024_90 +P2570__0.4__0___0_90 +P1470__0.4__1024___512_90 +P1598__1__2976___2472 +P1341__1__2472___0 +P2725__1__824___1648 +P2672__0.4__0___78_90 +P0770__1__586___0 +P1449__0.4__0___359 +P1463__0.4__0___194_90 +P2598__1__1332___1030 +P2722__1__1648___824 +P1464__0.4__502___0_90 +P1505__0.4__0___0_90 +P0147__0.4__0___0_90 +P1369__1__3296___1648 +P1458__0.4__538___917_90 +P1458__1__2472___3829 +P2141__1__148___166 +P2438__1__353___0 +P1861__0.4__0___417_270 +P2362__0.4__0___0 +P1794__0.4__230___512_270 +P0029__1__824___0 +P1474__1__1648___1648 +P1470__0.4__512___512_90 +P2598__1__1332___824 +P0856__1__0___0 +P1594__0.4__512___512 +P1577__1__1648___824 +P1647__1__824___1648 +P2663__1__0___0 +P1463__0.4__466___194 +P2800__0.4__0___0_270 +P2655__0.4__512___0_180 +P1640__0.4__576___969_270 +P2653__0.4__0___0_180 +P1594__0.4__0___576_90 +P1786__0.4__309___443_270 +P1224__1__0___2472 +P1638__1__1648___1648 +P1337__1__824___2472 +P2116__0.4__0___0_90 +P2594__1__589___1648 +P2742__1__0___824 +P1794__0.4__0___512_270 +P1458__0.4__512___917_180 +P2563__1__824___0 +P1354__1__2976___0 +P2655__1__2472___1648 +P1445__1__4120___0 +P1432__1__0___2472 +P2754__0.4__0___0 +P2491__1__0___0 +P1793__0.4__372___0_180 +P1466__0.4__0___0_90 +P2727__0.4__0___0_270 +P1464__1__824___3296 +P1594__0.4__0___0_90 +P2651__0.4__0___0_270 +P2721__1__1648___1429 +P1466__0.4__0___0_180 +P2563__0.4__0___0_180 +P1903__1__922___0 +P2659__0.4__0___0_270 +P2121__0.4__0___0_90 +P1464__0.4__502___838_180 +P2743__1__824___0 +P2055__0.4__0___0 +P2257__1__0___57 +P1640__0.4__576___0 +P1341__1__2472___824 +P0029__1__824___824 +P2794__0.4__0___0 +P1470__1__3296___1648 +P1791__0.4__512___0_270 +P1594__0.4__512___512_270 +P2725__1__2472___1684 +P2203__1__0___824 +P0164__0.4__0___0_90 +P0018__0.4__0___0 +P2368__0.4__7___0_180 +P1861__0.4__0___0 +P2593__1__179___677 +P0021__0.4__124___0_270 +P1580__1__2976___0 +P2591__1__0___0 +P2651__1__0___824 +P1794__0.4__0___743_180 +P1863__1__824___824 +P1240__1__2472___824 +P1268__0.4__576___424_180 +P1470__0.4__512___0_270 +P2663__1__0___824 +P2800__1__0___1648 +P1641__1__1648___3958 +P2162__1__0___356 +P2428__1__0___824 +P1360__1__2976___824 +P1230__1__4120___1648 +P1471__0.4__0___0_270 +P2392__0.4__0___0_270 +P1793__0.4__0___229_180 +P1551__1__2472___1648 +P1449__0.4__0___0_90 +P2794__1__1648___824 +P2580__0.4__0___0 +P2655__0.4__0___0_90 +P2657__1__0___0 +P1640__0.4__576___0_270 +P2141__0.4__0___0 +P1458__1__2472___3296 +P1467__0.4__512___0_270 +P1794__0.4__230___743_270 +P2335__1__0___824 +P2194__0.4__0___0_180 +P1793__0.4__372___229_90 +P0030__1__824___1382 +P1743__1__0___824 +P2761__0.4__864___512_90 +P0770__1__0___0 +P1786__0.4__0___0 +P1467__0.4__512___510_90 +P1458__1__2472___1648 +P1464__0.4__502___0 +P2194__1__0___1148 +P1458__0.4__538___512 +P2397__1__824___0 +P0047__0.4__0___0_270 +P1258__1__0___0 +P1464__0.4__502___0_180 +P1863__0.4__126___0_180 +P1449__0.4__0___359_270 +P2734__1__824___1648 +P1463__0.4__466___0_180 +P2594__0.4__0___0 +P2157__1__0___680 +P0082__0.4__1024___996 +P1464__0.4__502___512_180 +P2750__1__0___824 +P0368__1__0___0 +P0471__1__0___0 +P2653__1__0___0 +P0232__1__0___706 +P2754__0.4__359___0 +P2742__1__824___824 +P2382__1__824___824 +P2240__0.4__0___0_90 +P2655__1__0___1648 +P1432__0.4__0___690_270 +P1445__1__1648___1648 +P2240__0.4__0___0_270 +P0495__1__0___57 +P1432__0.4__331___690_90 +P1467__0.4__512___0_90 +P1791__0.4__0___512_180 +P1445__0.4__512___288_270 +P2659__1__1673___0 +P1445__0.4__0___288_90 +P2651__0.4__0___0_90 +P0867__0.4__0___0_270 
+P2655__0.4__963___127 +P0082__0.4__1024___512_90 +P2590__0.4__0___0_180 +P2085__1__189___0 +P2563__0.4__0___0 +P1470__0.4__1177___512_90 +P2759__0.4__1536___936 +P1463__0.4__0___0_270 +P0236__1__0___0 +P1445__0.4__1024___0_270 +P0147__1__0___0 +P0093__1__0___0 +P0856__1__824___0 +P2651__1__641___897 +P1440__0.4__0___0_270 +P1470__0.4__512___512_180 +P2768__1__437___1333 +P2365__0.4__0___0 +P0082__0.4__1172___512_90 +P2655__1__3943___1648 +P1449__1__1154___0 +P0082__0.4__1172___996_90 +P2594__0.4__0___0_270 +P1791__0.4__512___512 +P0087__0.4__0___0_90 +P2651__1__641___0 +P1458__0.4__512___0_90 +P1268__0.4__0___424_180 +P1456__1__824___824 +P0368__0.4__0___0_90 +P1247__0.4__576___512_270 +P2754__0.4__359___0_90 +P1467__0.4__512___510 +P1432__1__2363___1648 +P1459__1__1648___3296 +P1931__0.4__0___0_90 +P2168__1__824___150 +P2348__1__0___341 +P0131__0.4__0___0_90 +P1470__0.4__0___1024 +P0856__0.4__0___0_90 +P0252__1__860___0 +P1247__0.4__576___512 +P2655__0.4__963___0_90 +P1267__0.4__512___424_90 +P2587__0.4__129___0_180 +P1432__0.4__331___0_180 +P1268__0.4__512___424_270 +P2735__1__1648___1608 +P0848__1__0___0 +P2650__1__824___974 +P2653__0.4__0___0 +P1268__0.4__0___424_90 +P0141__0.4__0___0_270 +P2759__0.4__0___936_90 +P0146__0.4__52___0_270 +P2759__0.4__0___936 +P2653__1__824___824 +P1268__0.4__512___424_90 +P2650__1__824___824 +P2742__1__824___1291 +P1464__1__2792___3630 +P0124__1__839___1319 +P1241__1__824___824 +P0867__0.4__103___0_180 +P0792__1__364___64 +P1791__0.4__512___512_270 +P1458__0.4__512___512_270 +P2091__1__0___0 +P1471__0.4__512___0 +P1470__0.4__0___512_180 +P2742__0.4__0___0_90 +P1247__1__2976___824 +P1470__1__4120___0 +P1268__1__1648___2596 +P1432__0.4__331___0 +P2365__0.4__0___0_90 +P2166__0.4__0___0_270 +P2365__0.4__0___0_270 +P2655__1__3943___1854 +P1373__1__1648___1648 +P1445__0.4__1024___0_180 +P1470__0.4__512___1024_270 +P2655__0.4__512___0_90 +P2145__1__0___108 +P1464__0.4__0___0_90 +P0368__1__0___363 +P2655__1__1648___824 +P0124__0.4__0___0_90 +P2656__0.4__0___0_180 +P2655__0.4__963___0_270 +P1432__0.4__331___690 +P1467__0.4__512___0_180 +P1443__1__1648___1648 +P1445__0.4__1192___288_270 +P1791__0.4__0___0_90 +P1467__1__0___0 +P1470__0.4__0___512_270 +P2655__1__824___824 +P0664__1__350___140 +P1458__0.4__0___0 +P0082__0.4__1024___996_90 +P1440__1__0___0 +P1459__1__1648___3406 +P2087__1__0___0 +P1791__1__1648___824 +P2587__0.4__129___0 +P2653__1__824___0 +P2381__1__1205___0 +P1467__0.4__512___0 +P1449__0.4__0___0_180 +P1470__0.4__0___1024_90 +P0368__0.4__0___0_180 +P1456__1__0___824 +P2606__1__768___0 +P1432__0.4__331___512_90 +P1470__1__1648___1648 +P1443__1__2472___2316 +P1445__0.4__512___0_90 +P1463__1__2472___1648 +P2756__0.4__512___0_270 +P2759__0.4__1536___936_90 +P2656__0.4__0___0_270 +P2655__1__3296___1854 +P0242__1__0___320 +P1449__1__824___824 +P1470__0.4__512___1240_90 +P0673__1__0___123 +P2594__0.4__0___0_180 +P2594__0.4__0___90 +P0087__0.4__0___0_270 +P0809__0.4__0___0_90 +P1470__0.4__1177___512_180 +P2157__1__89___680 +P1791__0.4__0___512 +P0146__0.4__52___0 +P0124__1__824___824 +P2754__1__824___0 +P2754__0.4__359___465_270 +P0124__0.4__0___0 +P1463__0.4__0___0 +P0147__0.4__0___0_270 +P2650__1__1933___974 +P2166__1__230___0 +P1458__0.4__0___512_90 +P1432__1__2363___2472 +P1470__0.4__0___1024_180 +P2655__0.4__512___0_270 +P1445__0.4__512___288_180 +P1432__0.4__0___0_270 +P0222__1__515___416 +P1432__0.4__0___512_90 +P0687__1__0___0 +P1432__1__2363___824 +P0147__0.4__0___0 +P1268__0.4__512___424 +P2659__0.4__0___0_90 +P1443__1__2472___1648 
+P1445__0.4__0___288 +P1467__0.4__554___0_270 +P1373__1__824___2472 +P1268__0.4__512___424_180 +P0809__1__0___91 +P1373__1__1648___2472 +P1464__0.4__502___838_90 +P1471__1__0___1648 +P2251__1__0___56 +P2266__1__0___0 +P1432__0.4__0___512 +P0368__0.4__0___0 +P2659__1__1648___0 +P1267__0.4__576___424_90 +P2650__1__0___0 +P2759__0.4__2048___936_90 +P2756__0.4__512___0 +P0141__1__0___727 +P2594__1__0___824 +P0822__1__0___0 +P0845__1__0___85 +P0659__1__0___0 +P0383__0.4__0___0_270 +P0809__1__0___0 +P1474__1__2048___1648 +P0659__1__0___65 +P2044__1__0___0 +P1931__0.4__0___0 +P0455__1__118___0 +P0087__0.4__0___512_180 +P1445__1__2472___824 +P1463__0.4__0___194_270 +P1467__0.4__0___510_270 +P1470__0.4__0___0_270 +P1341__1__0___2976 +P2157__0.4__0___0_270 +P1458__1__1648___824 +P1464__0.4__502___838 +P1464__1__0___0 +P2653__1__0___1648 +P1445__0.4__1192___288_90 +P2365__0.4__108___0_270 +P0890__1__0___1648 +P2659__1__824___0 +P2759__0.4__0___936_180 +P2655__0.4__963___0_180 +P0082__0.4__1172___512_270 +P1470__0.4__1177___512 +P1432__0.4__0___0_180 +P2655__1__3296___1648 +P1467__1__824___0 +P2754__1__2433___824 +P2659__0.4__55___0_270 +P1464__0.4__0___512_270 +P2759__0.4__512___936_270 +P1464__0.4__502___0_270 +P2650__1__1933___824 +P0455__1__0___0 +P1214__1__0___824 +P2587__1__0___0 +P1474__1__2048___3296 +P1458__0.4__0___0_180 +P1445__0.4__1192___0_90 +P1470__0.4__1024___512 +P1449__0.4__0___359_90 +P1456__1__0___0 +P0642__1__0___0 +P2650__0.4__159___0_270 +P1463__1__0___824 +P2759__0.4__2048___512_180 +P2768__1__0___1333 +P2759__0.4__1536___512 +P2656__0.4__0___0_90 +P1445__0.4__0___0 +P0856__1__838___0 +P1931__1__0___0 +P1791__0.4__512___0 +P1467__1__824___824 +P0689__1__0___0 +P2107__0.4__0___0_270 +P1445__0.4__1192___288_180 +P2659__0.4__55___0_90 +P0374__1__318___0 +P2339__1__0___824 +P1463__0.4__466___194_90 +P2759__0.4__2505___0_180 +P1267__0.4__0___424_90 +P1463__0.4__466___194_180 +P2655__1__824___1648 +P1467__0.4__512___510_180 +P0087__0.4__0___0 +P1471__0.4__512___0_270 +P1467__0.4__512___510_270 +P2590__1__0___149 +P2750__1__2177___0 +P1458__0.4__538___917_180 +P2659__1__1648___963 +P0236__1__289___0 +P1432__0.4__331___690_180 +P1467__0.4__0___0_180 +P0673__0.4__0___0 +P2244__1__241___0 +P2655__0.4__963___127_180 +P1470__0.4__512___1240_180 +P1467__0.4__0___510_90 +P2348__1__465___341 +P0146__0.4__52___0_90 +P2659__0.4__0___0 +P1467__0.4__554___0_180 +P1471__0.4__512___0_90 +P1445__0.4__0___0_180 +P0147__1__682___225 +P2244__1__0___49 +P2090__1__0___0 +P2659__0.4__0___0_180 +P1433__0.4__0___0_180 +P0867__0.4__0___0 +P1268__1__1648___2472 +P2563__1__824___145 +P0792__1__0___0 +P2759__0.4__512___936_90 +P0146__1__0___0 +P2655__1__824___0 +P2655__1__1648___0 +P2759__0.4__2048___936_180 +P1456__0.4__374___0_180 +P1474__1__824___1648 +P2594__0.4__0___90_90 +P1458__0.4__512___512 +P1445__0.4__0___0_270 +P2131__1__280___397 +P0867__0.4__103___0 +P1432__0.4__331___512_180 +P1464__0.4__0___512_180 +P2089__1__0___0 +P1267__1__2472___2596 +P0383__1__0___0 +P1470__0.4__1177___0 +P2653__1__824___1648 +P1459__1__824___3406 +P1432__1__824___2472 +P1432__1__824___0 +P1445__0.4__0___288_270 +P2734__1__824___1675 +P1475__0.4__0___0_270 +P1474__1__1648___824 +P2659__0.4__55___0_180 +P2659__0.4__55___0 +P0845__1__0___0 +P1267__0.4__0___424_270 +P2756__0.4__863___0_180 +P1432__0.4__0___690_90 +P1267__0.4__576___424_270 +P2653__0.4__0___0_90 +P2727__0.4__0___0_90 +P1433__0.4__0___0_90 +P2743__1__2472___0 +P1258__1__0___0 +P1470__0.4__1024___0 +P1467__0.4__0___510_180 +P0140__1__0___0 +P1373__1__824___1648 
+P1464__0.4__502___512 +P0131__1__512___0 +P0141__1__0___0 +P0082__0.4__1024___512 +P1449__0.4__0___0_270 +P1463__1__1648___824 +P2563__0.4__0___0_270 +P2750__1__1648___0 +P1268__0.4__576___424 +P1449__1__824___0 +P1467__0.4__0___0 +P1440__0.4__0___0_180 +P0236__1__289___426 +P1458__0.4__538___917 +P1470__0.4__512___1024_180 +P0673__0.4__0___0_270 +P2166__0.4__0___0_180 +P2659__1__1673___963 +P1791__1__824___824 +P0141__0.4__0___0_180 +P1445__1__1648___0 +P1470__0.4__1024___512_270 +P1247__0.4__576___0_180 +P0381__1__0___0 +P0673__1__91___123 +P1440__0.4__0___0 +P1464__0.4__502___512_270 +P2332__1__0___1450 +P0124__1__0___824 +P0495__1__0___0 +P2759__0.4__512___936_180 +P2332__1__1648___1450 +P1791__0.4__0___0 +P2168__1__995___0 +P0147__1__682___0 +P1456__0.4__0___0_270 +P1458__0.4__512___0 +P0782__1__0___47 +P2365__1__1807___0 +P2650__1__1648___0 +P0242__1__0___0 +P1445__0.4__1192___288 +P2657__1__864___0 +P1467__0.4__0___0_90 +P1445__0.4__512___288_90 +P2655__0.4__963___127_90 +P1458__0.4__538___512_180 +P1432__0.4__331___690_270 +P2756__0.4__863___0 +P1471__0.4__896___0_90 +P0146__0.4__0___0_180 +P1247__0.4__512___512 +P1432__1__824___824 +P2125__1__0___0 +P1470__0.4__512___0_180 +P2754__0.4__359___465_180 +P1456__0.4__0___0_180 +P1440__1__0___621 +P2655__0.4__512___127 +P0374__1__318___178 +P2734__1__1648___1675 +P0236__1__0___426 +P2365__0.4__108___0_90 +P1445__0.4__0___0_90 +P2727__1__633___0 +P2587__0.4__0___0_90 +P2656__1__641___824 +P1791__0.4__0___0_270 +P0252__1__824___0 +P0368__0.4__0___0_270 +P1780__0.4__0___0_180 +P0140__1__0___96 +P1791__0.4__512___0_90 +P1470__0.4__0___0_180 +P0850__1__0___0 +P1257__1__0___2976 +P2655__1__0___1854 +P0856__0.4__0___0_270 +P0856__1__824___424 +P1471__0.4__0___512_90 +P1470__0.4__0___512_90 +P0093__1__0___115 +P0374__1__0___178 +P2655__1__824___1854 +P1470__0.4__1177___0_270 +P1470__0.4__512___1024 +P1931__1__65___0 +P1269__1__1648___824 +P1433__0.4__0___0 +P1464__1__824___824 +P1780__1__0___824 +P1267__0.4__0___424_180 +P1463__1__824___0 +P0374__1__0___0 +P2332__1__1967___824 +P0082__0.4__1172___512 +P1467__0.4__554___0_90 +P2166__0.4__0___0 +P1445__0.4__1024___288_90 +P1470__0.4__0___0 +P0140__1__480___96 +P1463__1__2472___824 +P2651__1__0___897 +P0124__1__824___1319 +P1470__1__3296___824 +P2759__0.4__1536___512_90 +P1458__0.4__538___512_270 +P1475__0.4__0___0_180 +P2655__0.4__963___127_270 +P1256__1__0___0 +P2381__1__824___0 +P2107__1__0___0 +P2058__1__0___97 +P1247__0.4__512___0_270 +P1445__0.4__512___0_180 +P0082__1__4467___3296 +P1449__0.4__0___0 +P0869__1__0___0 +P1463__0.4__0___194 +P1467__0.4__554___510_180 +P2144__1__0___0 +P1256__1__824___0 +P1433__1__0___0 +P1432__0.4__0___690_180 +P1445__0.4__1192___0_270 +P0131__0.4__0___0_180 +P1791__1__824___1648 +P1470__0.4__512___1240_270 +P2168__1__824___0 +P2653__0.4__0___166 +P1247__0.4__512___512_180 +P1463__1__0___0 +P0373__1__0___0 +P2365__1__1648___0 +P1464__1__0___1648 +P1432__0.4__331___512 +P1458__1__1648___1648 +P1458__0.4__512___0_180 +P1458__0.4__538___0_180 +P1464__0.4__0___838 +P0383__1__111___0 +P2743__1__0___824 +P1268__0.4__0___424_270 +P2332__1__0___824 +P0203__1__741___0 +P2091__0.4__0___0_270 +P1464__0.4__0___512 +P2650__1__1648___824 +P2759__0.4__2505___0_270 +P2479__1__0___824 +P2727__1__633___427 +P2651__1__0___0 +P1470__0.4__1024___512_180 +P2657__0.4__0___0_90 +P1470__0.4__512___0 +P1247__0.4__512___512_270 +P0782__1__0___0 +P2563__0.4__0___0_90 +P1467__1__0___824 +P1456__0.4__374___0_90 +P0781__1__0___0 +P2590__1__0___0 +P1931__0.4__0___0_180 +P0087__1__824___1648 
+P1247__0.4__512___512_90 +P1464__0.4__0___838_90 +P1443__1__1648___2316 +P0131__1__512___256 +P0809__1__380___0 +P2650__0.4__159___0_90 +P1458__0.4__538___0_270 +P1470__0.4__1024___0_180 +P1464__0.4__0___0 +P1470__0.4__0___1240_90 +P1268__0.4__576___424_270 +P0371__1__0___0 +P2590__0.4__0___0_90 +P2339__1__0___0 +P1467__1__2920___0 +P0673__0.4__0___0_180 +P1268__0.4__576___424_90 +P0381__1__111___0 +P2157__0.4__0___0_180 +P2587__0.4__0___0_270 +P1464__1__1648___824 +P1458__0.4__0___0_270 +P1445__1__2472___1648 +P2251__1__247___0 +P0673__0.4__0___0_90 +P2089__1__0___32 +P2335__1__824___0 +P0689__1__425___0 +P1470__0.4__0___1240_270 +P1470__0.4__0___1240 +P1373__1__1648___824 +P0082__0.4__1024___512_270 +P0833__1__617___0 +P2587__0.4__129___0_90 +P1470__1__2472___1648 +P2759__0.4__2048___512_90 +P0146__0.4__0___0 +P1470__0.4__512___1024_90 +P2168__1__995___150 +P1475__0.4__0___0 +P1445__0.4__512___0_270 +P2085__1__0___0 +P1475__1__0___0 +P1470__0.4__512___0_90 +P2587__0.4__0___0_180 +P1325__1__824___1648 +P2650__0.4__159___0_180 +P2754__0.4__359___465_90 +P2194__0.4__0___0_90 +P2335__1__1463___0 +P1791__0.4__512___512_90 +P1467__0.4__0___510 +P2590__0.4__0___0 +P1459__1__824___3296 +P1458__0.4__538___0 +P2742__0.4__0___0_270 +P1467__0.4__554___510 +P2594__0.4__0___0_90 +P2759__0.4__512___936 +P2107__0.4__0___0_90 +P1467__0.4__554___0 +P0867__0.4__103___0_90 +P1458__0.4__0___512 +P2563__1__883___145 +P1458__0.4__0___512_180 +P1256__1__824___824 +P1456__0.4__374___0_270 +P1458__0.4__0___0_90 +P2655__0.4__512___127_90 +P2754__0.4__359___0_180 +P2332__1__1648___824 +P1445__0.4__512___0 +P1445__0.4__512___288 +P2759__0.4__2048___936_270 +P2655__0.4__512___0 +P1463__0.4__466___0_90 +P2244__1__0___0 +P2735__1__824___1608 +P1471__0.4__896___0 +P1470__1__4120___824 +P0203__1__0___0 +P2653__0.4__0___166_90 +P0856__1__838___424 +P1463__0.4__0___194_180 +P2650__1__824___0 +P2091__0.4__0___0_90 +P1470__0.4__0___512 +P0082__0.4__1024___996_180 +P2756__0.4__512___0_180 +P1458__1__2880___2472 +P2653__0.4__0___166_180 +P1268__0.4__0___424 +P2655__0.4__0___0_270 +P0867__1__824___824 +P2655__0.4__512___127_270 +P1456__0.4__0___0 +P2655__0.4__963___0 +P0383__0.4__0___0 +P2194__1__692___0 +P1464__1__824___3630 +P2650__0.4__0___0_90 +P2682__1__0___0 +P2107__0.4__0___0_180 +P2759__0.4__1536___936_270 +P0778__1__585___0 +P0131__1__0___0 +P2655__0.4__512___127_180 +P1433__0.4__0___0_270 +P2759__0.4__2505___0_90 +P2656__1__0___824 +P2743__1__824___824 +P1458__0.4__512___0_270 +P0385__1__0___0 +P2655__0.4__0___127_90 +P1791__1__1648___1648 +P2665__1__0___0 +P1458__0.4__538___0_90 +P2742__1__1324___1291 +P2655__0.4__0___127_180 +P2754__0.4__359___465 +P1471__0.4__896___0_180 +P1791__0.4__512___0_180 +P2756__1__2472___0 +P2650__1__1933___0 +P1432__0.4__0___690 +P2651__1__641___824 +P2759__0.4__1536___936_180 +P1467__0.4__554___510_90 +P2759__0.4__2048___512 +P2659__1__824___824 +P2166__1__0___0 +P0781__1__0___47 +P2194__0.4__0___0_270 +P2587__0.4__129___0_270 +P0362__1__0___0 +P1445__0.4__1024___288_270 +P2335__1__0___1167 +P2742__1__1324___824 +P2659__1__1673___824 +P1458__0.4__538___917_270 +P2727__0.4__0___0 +P0642__1__0___64 +P1471__1__2472___0 +P2656__1__641___897 +P1463__1__824___824 +P1463__0.4__0___0_180 +P1467__0.4__554___510_270 +P0147__0.4__0___0_180 +P1464__0.4__0___838_180 +P1475__0.4__0___0_90 +P1456__0.4__0___0_90 +P1267__1__1648___2596 +P1267__0.4__576___424_180 +P2479__1__0___1111 +P1474__1__1648___3296 +P0833__1__617___135 +P1463__0.4__466___0_270 +P2650__1__1648___974 +P0867__1__824___0 
+P2656__0.4__0___0 +P0082__0.4__1024___512_180 +P1471__1__1648___0 +P1449__1__1154___824 +P2657__0.4__0___0 +P2756__0.4__863___0_90 +P2587__1__824___0 +P1445__1__1648___824 +P1470__1__1648___4636 +P2382__1__1472___900 +P1471__0.4__0___0_180 +P2251__1__0___0 +P0082__0.4__1024___996_270 +P1780__0.4__0___0_90 +P1432__1__1648___0 +P1247__0.4__512___0 +P1464__0.4__0___838_270 +P0809__0.4__0___0_270 +P1471__0.4__0___0_90 +P0141__0.4__0___0 +P1456__1__824___1488 +P1456__1__0___1488 +P2590__1__362___0 +P2657__0.4__0___0_180 +P1464__1__2792___0 +P2659__1__1648___824 +P1467__1__2920___824 +P0152__1__0___0 +P2650__0.4__0___0 +P0457__1__0___0 +P1432__0.4__331___0_90 +P2759__1__5768___2472 +P1470__0.4__512___1240 +P0140__1__480___0 +P0087__0.4__0___512 +P2759__0.4__2505___0 +P1471__0.4__0___512 +P1247__0.4__512___0_90 +P0371__1__0___116 +P2087__1__401___0 +P0833__1__0___0 +P1458__0.4__512___512_180 +P1470__1__3296___0 +P1458__0.4__0___512_270 +P0082__1__4467___2472 +P1458__1__824___824 +P1247__0.4__576___512_90 +P1470__0.4__1024___512_90 +P0792__1__0___64 +P1791__0.4__512___512_180 +P1463__1__1648___2022 +P1470__0.4__1177___512_270 +P2145__1__0___0 +P1471__0.4__0___512_270 +P1267__0.4__0___424 +P2657__0.4__0___0_270 +P1470__0.4__0___1240_180 +P1325__1__1648___1648 +P1449__0.4__0___359 +P2742__0.4__0___0_180 +P1463__0.4__0___194_90 +P1463__0.4__466___0 +P1432__0.4__331___0_270 +P1470__0.4__0___1024_270 +P2266__1__0___58 +P0809__0.4__0___0 +P0677__1__0___0 +P1464__0.4__502___0_90 +P2653__1__0___824 +P0147__0.4__0___0_90 +P1246__1__2472___1648 +P0131__0.4__0___0_270 +P1458__0.4__538___917_90 +P1267__0.4__512___424_180 +P1432__1__1648___824 +P2768__1__437___824 +P2382__1__824___900 +P1791__0.4__0___512_90 +P1267__0.4__512___424 +P0082__0.4__1172___996_270 +P0124__1__0___1319 +P1471__0.4__0___0 +P1247__0.4__576___0_90 +P0482__1__0___0 +P0222__1__515___0 +P2754__1__0___0 +P2382__1__1472___824 +P2244__1__241___49 +P2587__0.4__0___0 +P1432__1__824___1648 +P2332__1__1967___1450 +P1445__0.4__1192___0_180 +P2653__0.4__0___166_270 +P1458__1__824___1648 +P1474__1__1648___1648 +P1247__0.4__512___0_180 +P2655__0.4__0___0_180 +P1470__0.4__512___512_90 +P0809__0.4__0___0_180 +P0131__0.4__0___0 +P0457__1__379___0 +P2563__1__883___0 +P0450__1__0___0 +P1432__0.4__0___0_90 +P0856__0.4__0___0 +P2240__0.4__0___0 +P1458__0.4__512___917_90 +P1358__1__824___2472 +P0131__1__0___256 +P2759__0.4__1536___512_180 +P2134__1__484___0 +P2157__1__0___0 +P0882__1__0___824 +P0146__0.4__0___0_270 +P0146__0.4__0___0_90 +P1463__0.4__466___194_270 +P0856__0.4__0___0_180 +P0232__1__502___706 +P2157__0.4__0___0_90 +P1471__0.4__512___0_180 +P2157__1__89___0 +P1463__0.4__466___194 +P0082__1__4120___3296 +P2759__1__1648___3875 +P2650__0.4__0___0_270 +P1470__1__4120___1648 +P0142__1__0___0 +P2091__0.4__0___0_180 +P2655__0.4__512___0_180 +P1445__0.4__1024___288_180 +P0664__1__0___0 +P0856__1__0___424 +P1214__1__0___1648 +P2655__0.4__0___127_270 +P2653__0.4__0___0_180 +P1458__1__2880___1648 +P0141__0.4__0___0_90 +P0087__0.4__0___0_180 +P1247__0.4__576___0_270 +P0146__0.4__52___0_180 +P2655__0.4__0___0 +P1445__0.4__1024___288 +P1189__1__0___0 +P0082__0.4__1172___996_180 +P1467__1__2472___0 +P1464__0.4__502___512_90 +P0161__1__1580___1690 +P2742__1__0___824 +P2759__0.4__1536___512_270 +P1458__0.4__512___917_180 +P0673__1__91___0 +P2563__1__824___0 +P2743__1__1648___0 +P2657__1__0___824 +P2754__0.4__359___0_270 +P2240__1__0___0 +P2656__1__641___0 +P1464__0.4__0___0_270 +P0867__0.4__103___0_270 +P1463__1__2472___2022 +P2742__1__0___1291 
+P2655__0.4__0___127 +P1471__0.4__0___512_180 +P2365__0.4__108___0 +P0082__0.4__1172___512_180 +P2079__1__0___0 +P1432__1__0___2472 +P1445__0.4__0___288_180 +P2754__0.4__0___0 +P1791__0.4__0___512_270 +P2251__1__247___56 +P1445__0.4__1024___0_90 +P2166__0.4__0___0_90 +P2659__1__824___963 +P2742__0.4__0___0 +P1474__1__824___824 +P2727__0.4__0___0_270 +P1470__1__2472___2472 +P1464__1__824___3296 +P1458__1__2880___824 +P1458__0.4__512___917 +P2756__0.4__512___0_90 +P2590__0.4__0___0_270 +P2651__0.4__0___0_270 +P1470__1__1648___2472 +P2656__1__0___0 +P1470__0.4__512___512 +P1467__0.4__0___0_270 +P2563__0.4__0___0_180 +P1432__0.4__0___512_180 +P0673__1__0___0 +P2768__1__0___824 +P0809__1__380___91 +P0383__0.4__0___0_180 +P2659__0.4__0___0_270 +P0087__0.4__0___512_270 +P1464__0.4__502___838_180 +P2743__1__824___0 +P1470__0.4__1177___0_180 +P0792__1__364___0 +P1449__0.4__0___359_180 +P0082__0.4__1172___996 +P1458__0.4__512___917_270 +P0649__1__0___0 +P1470__1__3296___1648 +P1780__0.4__0___0 +P1791__0.4__512___0_270 +P0087__0.4__0___512_90 +P2240__0.4__0___0_180 +P0850__1__149___0 +P1474__1__1648___2472 +P0124__0.4__0___0_180 +P2727__0.4__0___0_180 +P1445__0.4__1192___0 +P1470__0.4__1177___0_90 +P0082__1__4120___2472 +P2107__0.4__0___0 +P1445__0.4__1024___0 +P2131__1__0___397 +P2734__1__1648___1648 +P2655__1__3296___824 +P2590__1__362___149 +P2743__1__1648___824 +P0664__1__0___140 +P1456__0.4__374___0 +P1467__1__2472___824 +P2365__0.4__108___0_180 +P1463__1__2701___824 +P1464__0.4__0___512_90 +P2656__1__0___897 +P2759__1__7416___0 +P2651__1__0___824 +P2058__1__0___0 +P2754__0.4__0___0_180 +P0867__0.4__0___0_90 +P1464__0.4__502___838_270 +P1470__1__1648___4120 +P1464__1__2472___3630 +P1247__0.4__576___0 +P2133__1__0___0 +P1268__0.4__576___424_180 +P1432__0.4__0___0 +P1458__0.4__538___512_90 +P0124__0.4__0___0_270 +P1780__0.4__0___0_270 +P2651__0.4__0___0 +P0867__0.4__0___0_180 +P2754__0.4__0___0_90 +P1470__0.4__0___0_90 +P1470__0.4__512___0_270 +P1470__0.4__512___512_270 +P2754__0.4__0___0_270 +P1241__1__1648___824 +P2759__0.4__0___936_270 +P2606__1__0___0 +P1463__0.4__0___0_90 +P0147__1__0___225 +P2365__0.4__0___0_180 +P1464__1__2472___0 +P2157__0.4__0___0 +P2655__1__3943___824 +P2594__0.4__0___90_180 +P2756__0.4__863___0_270 +P2650__0.4__159___0 +P2194__0.4__0___0 +P1440__0.4__0___0_90 +P2759__0.4__2048___512_270 +P2653__0.4__0___0_270 +P1432__0.4__331___512_270 +P1445__1__2472___0 +P0869__1__0___90 +P1791__0.4__0___0_180 +P0087__1__0___1648 +P1267__0.4__512___424_270 +P1471__0.4__0___0_270 +P2655__1__3943___0 +P1458__0.4__512___512_90 +P2594__0.4__0___90_270 +P1432__0.4__0___512_270 +P1449__0.4__0___0_90 +P0664__1__350___0 +P1247__0.4__576___512_180 +P1931__0.4__0___0_270 +P2650__0.4__0___0_180 +P2759__0.4__2048___936 +P2655__0.4__0___0_90 +P2657__1__0___0 +P0124__1__839___824 +P2653__1__0___1952 +P1464__0.4__0___0_180 +P2657__1__824___0 +P0383__0.4__0___0_90 +P0833__1__0___135 +P1467__0.4__512___0_270 +P1463__1__1648___1648 +P1246__1__1648___1648 +P2194__0.4__0___0_180 +P1471__0.4__896___0_270 +P2335__1__0___824 +P2651__0.4__0___0_180 +P2122__1__0___0 +P1358__1__824___2976 +P0161__1__1580___1648 +P0637__1__0___0 +P2759__1__7799___0 +P2091__0.4__0___0 +P1467__0.4__512___510_90 +P1464__0.4__502___0 +P1458__0.4__538___512 +P1470__0.4__1024___0_270 +P2090__1__0___65 +P1267__0.4__576___424 +P1470__0.4__1024___0_90 +P2655__1__3296___0 +P1464__0.4__502___0_180 +P1464__0.4__502___512_180 +P0368__1__0___0 +P1470__0.4__1024___1024_270 +P1382__1__824___2472 +P2742__1__824___824 +P1410__0.4__1024___1357_90 
+P1791__0.4__0___512_180 +P1466__0.4__1024___512 +P2725__1__2472___1648 +P1738__1__0___824 +P1463__0.4__0___0_270 +P2572__0.4__0___0_180 +P2721__1__0___1429 +P2578__0.4__0___0 +P1388__0.4__512___1109 +P0147__1__0___0 +P1410__0.4__1024___512_270 +P1505__0.4__27___0 +P1440__0.4__0___0_270 +P1876__1__0___573 +P2011__0.4__0___0 +P2651__1__641___0 +P1467__0.4__512___510 +P2522__1__824___0 +P1791__1__1648___3774 +P2722__1__824___824 +P1388__0.4__512___512_270 +P2285__0.4__412___0_180 +P1964__1__0___0 +P2570__0.4__0___0_180 +P0143__1__0___0 +P1410__0.4__1024___512_180 +P2692__1__1648___0 +P1977__1__647___0 +P2578__0.4__0___0_180 +P2719__1__1648___0 +P0466__1__0___0 +P2570__1__0___0 +P1471__1__1648___824 +P1440__1__824___621 +P1470__0.4__512___1024_270 +P2655__0.4__963___0_270 +P1387__0.4__1640___512_180 +P2502__1__0___824 +P0082__0.4__512___996_180 +P1458__0.4__0___0 +P1466__0.4__0___1241_90 +P0082__0.4__1024___996_90 +P2756__0.4__0___688_90 +P0146__1__1667___824 +P2800__0.4__0___0 +P0368__0.4__0___0_180 +P0044__0.4__0___0_180 +P1903__1__824___0 +P0029__0.4__0___0_180 +P2512__1__824___0 +P1449__1__824___824 +P1470__0.4__512___1240_90 +P2594__0.4__0___90 +P1738__0.4__0___0_270 +P1470__0.4__1177___512_180 +P0047__1__0___0 +P0141__0.4__707___0_270 +P1791__0.4__0___512 +P0124__0.4__0___0 +P1458__1__2880___3829 +P2011__0.4__0___0_90 +P1780__0.4__0___512_270 +P1445__0.4__512___288_180 +P2438__1__0___0 +P1432__0.4__0___512_90 +P2527__1__824___0 +P0018__1__800___0 +P1470__0.4__1024___1024 +P1467__1__2472___1648 +P1445__0.4__0___288 +P1337__1__0___2472 +P0030__1__0___1382 +P1791__0.4__943___512_90 +P1471__1__0___1648 +P1780__0.4__410___512_180 +P0050__1__1535___2100 +P1432__0.4__0___512 +P0368__0.4__0___0 +P0050__0.4__0___0_270 +P2368__0.4__7___0 +P2595__0.4__0___0 +P2257__1__0___0 +P1369__1__4120___1648 +P1387__1__4944___3206 +P1780__0.4__410___892_90 +P1466__0.4__1024___0_180 +P2580__1__0___0 +P1793__0.4__0___0_90 +P1432__0.4__0___0_180 +P1387__0.4__1640___668_270 +P1863__0.4__0___296 +P0082__0.4__0___512_90 +P1794__0.4__230___0_270 +P1388__0.4__662___0_180 +P1470__0.4__1177___1240_270 +P2756__0.4__512___688_90 +P2168__1__0___150 +P1456__1__2470___0 +P2800__0.4__300___276_180 +P1458__0.4__0___0_180 +P1793__0.4__0___0_270 +P0047__0.4__0___0_90 +P1470__0.4__1024___512 +P1467__1__2472___2810 +P2650__0.4__159___0_270 +P2579__0.4__0___0_180 +P2721__1__824___824 +P1931__1__0___0 +P2107__0.4__0___0_270 +P2720__1__1648___0 +P1380__0.4__735___854_180 +P1387__0.4__1640___512 +P0505__1__0___885 +P2597__1__0___0 +P1470__1__4120___2472 +P1591__1__0___0 +P2721__1__3515___1429 +P2721__1__2472___0 +P0673__0.4__0___0 +P1466__0.4__1024___1024 +P1467__0.4__0___510_90 +P2659__0.4__0___0 +P1794__1__2112___1648 +P1471__0.4__512___0_90 +P1464__1__824___0 +P1433__0.4__0___0_180 +P2194__1__692___1148 +P1456__0.4__374___0_180 +P2714__1__3513___1426 +P1794__1__2112___3296 +P1474__1__824___1648 +P2594__0.4__0___90_90 +P1387__0.4__1024___668_270 +P2012__1__0___0 +P1464__0.4__0___512_180 +P0466__0.4__0___0_180 +P1388__0.4__512___0 +P0159__1__0___0 +P2585__0.4__0___0 +P1505__1__824___824 +P2533__1__1380___824 +P2582__0.4__0___0_90 +P1475__0.4__0___0_270 +P2419__1__566___0 +P0146__1__1648___824 +P1433__0.4__0___0_90 +P1410__0.4__512___1357_180 +P1791__0.4__943___512_180 +P1466__0.4__0___1241_270 +P1445__1__824___2257 +P2166__0.4__0___0_180 +P2714__1__2472___1426 +P1903__1__0___0 +P1470__0.4__1024___512_270 +P0381__1__0___0 +P2597__0.4__0___0_270 +P0673__1__91___123 +P1793__0.4__372___0_270 +P2592__1__0___443 +P1458__1__824___0 
+P2572__1__49___824 +P2365__1__1807___0 +P0242__1__0___0 +P1388__0.4__662___1109_90 +P2725__1__2472___0 +P2725__1__3502___0 +P2011__1__0___0 +P2570__1__775___0 +P0466__0.4__0___0_270 +P1977__1__0___735 +P1470__0.4__512___0_180 +P1456__0.4__0___0_180 +P1780__0.4__410___892 +P2362__0.4__0___0_90 +P2718__1__1648___1422 +P2800__1__824___824 +P2570__0.4__0___0_270 +P1470__0.4__1177___0_270 +P1470__0.4__512___1024 +P2716__1__824___0 +P0204__0.4__0___0 +P2055__1__130___418 +P2203__0.4__0___0_180 +P1876__1__306___0 +P2572__1__49___0 +P1470__1__3296___824 +P1475__0.4__0___0_180 +P2578__1__24___403 +P2565__0.4__0___0_90 +P2107__1__0___0 +P1863__0.4__0___0 +P2197__0.4__0___0_270 +P1433__1__0___0 +P1598__1__1648___2472 +P0131__0.4__0___0_180 +P1440__1__824___0 +P1466__0.4__1024___512_270 +P1432__0.4__331___512 +P1458__0.4__512___0_180 +P0029__0.4__0___0_270 +P1388__0.4__512___1024 +P1794__1__2112___824 +P2651__1__0___0 +P2595__1__0___0 +P2141__1__148___0 +P0505__0.4__0___0_90 +P0141__1__824___727 +P2590__1__0___0 +P1443__1__1648___2316 +P1640__0.4__0___0_180 +P2754__1__1648___824 +P2650__0.4__159___0_90 +P0052__1__0___824 +P1458__0.4__538___0_270 +P2714__1__2472___0 +P1861__1__0___0 +P1380__1__2472___1648 +P0381__1__111___0 +P2251__1__247___0 +P1791__0.4__943___512_270 +P2725__1__3296___1684 +P1470__1__4479___3296 +P0146__0.4__0___0 +P1505__0.4__27___0_180 +P2714__1__3296___1426 +P2591__1__829___824 +P0050__0.4__0___226_90 +P2194__0.4__0___0_90 +P1467__0.4__0___510 +P1466__0.4__0___1241_180 +P2594__1__589___824 +P2580__0.4__0___0_180 +P2742__0.4__0___0_270 +P1410__0.4__1110___1357_180 +P0146__1__824___0 +P1410__0.4__1110___1357_90 +P1466__0.4__0___1024_270 +P1466__0.4__512___0 +P2598__1__824___824 +P1458__0.4__0___0_90 +P2503__1__0___1494 +P1382__0.4__0___1024_90 +P1863__0.4__126___0_270 +P1445__0.4__512___0 +P1470__1__0___4636 +P1794__1__824___2472 +P1466__0.4__1024___512_90 +P1387__0.4__1024___668_180 +P1863__1__0___824 +P0021__0.4__0___0 +P0505__0.4__0___0_180 +P2714__1__824___824 +P2593__1__179___0 +P2724__1__824___824 +P2569__1__0___0 +P1794__0.4__0___0 +P2566__0.4__0___0_180 +P2428__1__0___1587 +P0082__0.4__1024___996_180 +P1466__1__824___2472 +P2653__0.4__0___166_180 +P2756__0.4__512___688_270 +P1337__1__824___1648 +P2331__1__2250___824 +P1793__0.4__372___229_180 +P0030__1__1648___824 +P2107__0.4__0___0_180 +P1464__0.4__502___0_180 +P1410__1__2472___4120 +P2655__0.4__512___127_180 +P0047__1__102___0 +P1863__0.4__126___296_180 +P1466__0.4__512___512 +P2566__1__0___0 +P1458__0.4__512___0_270 +P0385__1__0___0 +P2781__1__2472___824 +P0162__1__0___439 +P2756__0.4__0___688 +P2578__1__0___0 +P1432__0.4__0___690 +P1640__0.4__0___0 +P1977__0.4__0___0 +P2166__1__0___0 +P2569__0.4__0___0_270 +P2598__0.4__0___0_90 +P1794__1__2112___3394 +P2116__0.4__0___0_270 +P2800__1__1648___824 +P2727__0.4__0___0 +P1463__1__824___824 +P2203__0.4__0___0_90 +P0141__0.4__512___0_90 +P1410__0.4__1024___1357_270 +P1464__0.4__0___838_180 +P1456__1__1648___0 +P0052__1__837___824 +P1463__0.4__466___0_270 +P2657__0.4__0___0 +P2720__1__0___824 +P0052__1__0___1249 +P1464__0.4__0___838_270 +P2428__1__824___1587 +P1471__0.4__0___0_90 +P0141__0.4__0___0 +P0029__0.4__80___0_180 +P1861__1__1020___2472 +P2657__0.4__0___0_180 +P0111__0.4__0___0_270 +P2800__0.4__300___276_270 +P1861__1__1020___0 +P0050__0.4__0___0_180 +P1432__0.4__331___0_90 +P1466__0.4__512___1024 +P1964__0.4__0___0_270 +P2392__0.4__0___0_90 +P2592__0.4__0___0_90 +P2714__1__1648___0 +P1791__0.4__512___512_180 +P0052__0.4__0___0_90 +P1380__0.4__512___854_180 
+P2145__1__0___0 +P1471__0.4__0___512_270 +P2547__1__824___1401 +P2653__1__0___824 +P0018__1__800___665 +P2538__1__863___0 +P2725__1__3296___824 +P0131__0.4__0___0_270 +P1410__0.4__1110___1024_270 +P1432__1__1648___824 +P0021__0.4__0___270_270 +P2244__1__241___49 +P2597__0.4__0___0 +P0082__0.4__512___512_180 +P2655__0.4__0___0_180 +P2721__1__0___824 +P0164__1__0___0 +P2563__1__883___0 +P1793__0.4__0___229 +P2594__1__589___1762 +P2592__0.4__0___0_270 +P1458__0.4__512___917_90 +P1876__1__306___573 +P0050__0.4__0___226 +P0146__0.4__0___0_270 +P0146__0.4__0___0_90 +P2582__0.4__0___0_270 +P0164__1__423___0 +P0082__1__0___0 +P2721__1__0___0 +P2756__0.4__512___688_180 +P0021__0.4__0___270_180 +P1791__0.4__943___895_90 +P2724__1__824___1432 +P0018__1__0___0 +P1861__0.4__0___417 +P1794__0.4__230___743 +P1466__1__824___4120 +P2655__0.4__0___127_270 +P1589__1__0___1648 +P1458__1__2880___1648 +P0050__1__1535___1648 +P2572__1__49___1171 +P0673__1__91___0 +P0146__1__1667___0 +P2754__0.4__359___0_270 +P1466__0.4__512___0_180 +P2162__0.4__0___0_90 +P1471__0.4__0___994_270 +P1471__0.4__0___512_180 +P2591__0.4__0___0 +P2582__1__0___0 +P1445__0.4__0___288_180 +P1791__0.4__0___512_270 +P1791__0.4__512___895 +P2251__1__247___56 +P1470__0.4__1024___1240 +P0030__0.4__80___0_270 +P1380__0.4__735___512 +P2166__0.4__0___0_90 +P0052__1__824___824 +P1466__0.4__0___512_270 +P1863__0.4__126___0_90 +P1791__0.4__0___895_180 +P1458__1__0___824 +P2582__0.4__0___0_180 +P1467__0.4__0___0_270 +P1432__0.4__0___512_180 +P0383__0.4__0___0_180 +P2593__0.4__0___0 +P1633__1__0___2976 +P1388__1__3192___824 +P2203__1__446___824 +P2592__1__0___0 +P2800__1__824___1648 +P1449__0.4__0___359_180 +P1791__1__2472___3296 +P1794__0.4__0___512_90 +P1474__1__1648___2472 +P0124__0.4__0___0_180 +P1780__0.4__410___512_90 +P2351__1__822___705 +P1470__0.4__1177___1240 +P2734__1__1648___1648 +P1596__1__0___1648 +P1410__0.4__1110___512_90 +P1464__0.4__0___512_90 +P1467__1__1648___2810 +P0082__0.4__512___996_90 +P1445__1__0___2257 +P1466__0.4__512___512_90 +P2719__1__824___0 +P1466__0.4__512___1241_270 +P1458__0.4__538___512_90 +P0021__1__1648___1648 +P0124__0.4__0___0_270 +P2362__1__824___0 +P2721__1__3296___1429 +P1466__0.4__0___0_270 +P2754__0.4__0___0_270 +P2538__1__0___1196 +P2141__0.4__0___0_90 +P2580__1__0___824 +P2365__0.4__0___0_180 +P1464__1__2472___0 +P1793__0.4__0___229_270 +P2157__0.4__0___0 +P2579__0.4__0___0_90 +P0082__0.4__0___0_180 +P1440__0.4__0___0_90 +P2055__1__0___418 +P2452__1__609___1136 +P1903__1__922___264 +P2598__1__824___0 +P2368__1__0___824 +P1388__0.4__512___1024_270 +P2262__0.4__0___0_180 +P1466__1__1648___2472 +P2651__0.4__0___0_180 +P0158__1__1648___824 +P2538__1__824___0 +P1793__1__824___0 +P1470__0.4__1024___0_270 +P2565__1__0___1258 +P1505__1__0___0 +P2800__0.4__0___0_90 +P2584__1__824___0 +P2522__1__0___0 +P2368__1__824___1281 +P2653__1__0___0 +P1791__1__3296___3296 +P2800__0.4__300___0_270 +P2240__0.4__0___0_90 +P2655__1__0___1648 +P1432__0.4__0___690_270 +P2240__0.4__0___0_270 +P1470__0.4__1024___1240_270 +P1445__0.4__512___288_270 +P0021__0.4__0___270 +P1780__0.4__410___0_270 +P2368__1__0___0 +P2595__0.4__0___0_180 +P1793__1__824___1648 +P2655__0.4__963___127 +P2563__0.4__0___0 +P2720__1__1648___1422 +P1470__0.4__1177___512_90 +P2756__0.4__0___688_180 +P1794__0.4__230___512_180 +P2651__1__641___897 +P2365__0.4__0___0 +P2594__0.4__0___0_270 +P2714__1__1648___1426 +P1458__0.4__512___0_90 +P2592__1__1031___0 +P1456__1__824___824 +P0029__1__0___1383 +P2714__1__0___824 +P0021__1__824___2210 +P1931__0.4__0___0_90 
+P1780__1__1648___824 +P1470__0.4__0___1024 +P0082__0.4__0___512_270 +P1388__0.4__512___1024_90 +P2655__0.4__963___0_90 +P1466__0.4__1024___1024_180 +P1388__0.4__512___512_90 +P2593__0.4__0___0_270 +P0141__0.4__0___0_270 +P2742__1__824___1291 +P0030__1__1735___1382 +P2724__1__2472___1432 +P1791__0.4__512___512_270 +P1458__0.4__512___512_270 +P2800__0.4__0___276_90 +P1410__0.4__0___0_270 +P1794__0.4__0___0_270 +P2781__1__2531___0 +P2166__0.4__0___0_270 +P2365__0.4__0___0_270 +P2569__0.4__0___0_90 +P0505__0.4__0___0_270 +P2512__1__862___0 +P2375__1__0___0 +P1380__0.4__735___512_180 +P2653__1__824___0 +P1861__1__0___2472 +P0029__1__0___824 +P1449__0.4__0___0_180 +P1432__0.4__331___512_90 +P1380__0.4__735___0 +P2655__1__3296___1854 +P1887__1__0___0 +P2800__0.4__300___0_90 +P2331__0.4__286___0_180 +P2692__0.4__50___0_180 +P1410__0.4__1024___1357_180 +P0021__0.4__0___0_180 +P2533__1__1380___0 +P0082__0.4__512___0 +P2756__0.4__0___688_270 +P1388__0.4__662___512_180 +P1730__1__2472___0 +P2655__0.4__512___0_270 +P0082__0.4__512___996 +P2714__1__2472___824 +P1863__0.4__0___296_180 +P1466__0.4__1024___1024_270 +P0147__0.4__0___0 +P1410__1__4120___824 +P0018__1__0___665 +P2569__0.4__0___0 +P1466__1__2472___1648 +P1387__0.4__1536___512_180 +P0021__1__1648___824 +P1466__1__1648___4120 +P0809__1__0___0 +P1466__1__824___1648 +P0082__1__2472___3296 +P0204__0.4__0___0_90 +P2452__1__0___824 +P0158__1__824___1551 +P2502__1__0___0 +P2598__1__1332___0 +P2716__1__2996___824 +P1470__0.4__1177___512 +P1410__0.4__1110___0_90 +P2659__0.4__55___0_270 +P1380__0.4__735___512_270 +P2368__0.4__0___0 +P1780__0.4__410___0_180 +P2570__0.4__0___0 +P1382__0.4__0___512_180 +P1464__0.4__502___0_270 +P1640__1__0___0 +P1380__0.4__512___512_180 +P2194__1__0___824 +P2203__0.4__0___0 +P1382__1__1648___2472 +P1388__0.4__662___1024_270 +P1449__0.4__0___359_90 +P1794__0.4__0___743 +P1466__0.4__512___1241_90 +P2579__1__0___0 +P1380__0.4__512___512_90 +P1463__0.4__466___194_90 +P1794__0.4__230___0 +P1791__0.4__512___895_180 +P2655__1__824___1648 +P2719__1__1648___824 +P1467__0.4__512___510_180 +P2453__1__0___824 +P1384__1__1648___0 +P1780__0.4__410___0 +P2721__1__2472___1429 +P2285__0.4__412___0_90 +P2538__1__0___824 +P2563__1__824___145 +P1380__0.4__735___0_180 +P1445__0.4__0___0_270 +P0665__1__0___0 +P1470__0.4__1177___0 +P1410__0.4__1110___0_270 +P0018__0.4__0___0_90 +P1382__1__1648___1648 +P1432__1__824___0 +P1793__1__2466___824 +P1410__0.4__512___0_180 +P2598__1__0___0 +P1466__0.4__512___1024_180 +P2351__1__822___0 +P1467__1__1648___1648 +P1467__0.4__0___510_180 +P0131__1__512___0 +P1449__0.4__0___0_270 +P2563__0.4__0___0_270 +P1449__1__824___0 +P1467__0.4__0___0 +P2657__1__864___824 +P0673__0.4__0___0_270 +P1382__0.4__512___512_180 +P1700__1__2976___2472 +P0021__1__824___824 +P1794__1__824___3296 +P1382__0.4__512___1024_270 +P0147__1__682___0 +P2800__1__824___2227 +P2453__1__824___824 +P2655__0.4__963___127_90 +P1458__0.4__538___512_180 +P1471__0.4__896___0_90 +P1410__0.4__1024___1024 +P1505__1__824___0 +P2734__1__1648___1675 +P1445__0.4__0___0_90 +P1977__1__647___735 +P1466__0.4__512___1024_270 +P1380__0.4__512___854_270 +P2727__1__633___0 +P2657__1__824___824 +P0030__0.4__0___0_180 +P2716__1__2472___1420 +P1380__0.4__512___512_270 +P1388__0.4__662___1024_90 +P1471__0.4__0___512_90 +P1387__0.4__1640___668 +P2655__1__824___1854 +P2721__1__1648___0 +P1931__1__65___0 +P2584__0.4__0___0_270 +P0021__1__1648___2210 +P2692__0.4__50___0_90 +P0111__0.4__0___0_90 +P2584__0.4__0___116_180 +P1863__0.4__0___0_90 +P1863__0.4__0___0_270 
+P2166__0.4__0___0 +P1863__0.4__126___296 +P1780__1__824___1648 +P1466__0.4__0___512 +P1793__1__2466___2109 +P2362__0.4__0___0_270 +P2162__0.4__0___0_270 +P1410__0.4__1024___512 +P2781__0.4__398___0 +P2331__1__1648___0 +P2565__0.4__0___0 +P1470__0.4__512___1240_270 +P1780__1__2472___3296 +P2566__1__18___0 +P2725__1__3502___824 +P2365__1__1648___0 +P2756__1__1648___3257 +P1445__1__824___1648 +P2582__1__674___0 +P2655__1__2472___824 +P1780__0.4__0___512_180 +P1470__1__2472___824 +P1793__1__1648___0 +P2598__1__824___1030 +P1931__0.4__0___0_180 +P1470__0.4__1024___0_180 +P1470__0.4__0___1240_90 +P1931__1__0___610 +P2580__0.4__0___0_270 +P1184__1__824___4944 +P0029__1__824___1383 +P2055__1__0___0 +P0047__0.4__0___0_180 +P1464__1__1648___824 +P1458__0.4__0___0_270 +P0673__0.4__0___0_90 +P2597__1__0___703 +P1861__1__1020___824 +P1470__0.4__0___1240 +P1466__1__824___824 +P2579__0.4__0___0_270 +P0082__0.4__512___996_270 +P1445__0.4__512___0_270 +P2800__0.4__300___276_90 +P0139__1__0___0 +P2585__1__0___724 +P2168__1__0___0 +P0082__0.4__0___0 +P2650__0.4__159___0_180 +P1791__0.4__512___512_90 +P1861__0.4__0___417_180 +P2368__0.4__0___0_90 +P0052__0.4__0___0 +P2547__1__824___824 +P0021__0.4__124___0_90 +P2725__1__0___824 +P2107__0.4__0___0_90 +P1861__1__824___1648 +P2563__1__883___145 +P1794__1__1648___3296 +P1471__1__0___2472 +P1456__0.4__374___0_270 +P1184__1__824___824 +P1410__0.4__0___0_90 +P2754__0.4__359___0_180 +P1738__0.4__0___0_90 +P1863__1__0___0 +P1445__0.4__512___288 +P2655__0.4__512___0 +P2244__1__0___0 +P2653__0.4__0___166_90 +P2452__1__609___824 +P2368__0.4__0___0_180 +P1463__0.4__0___194_180 +P2592__1__824___0 +P2565__1__0___824 +P2197__0.4__0___0_90 +P2655__0.4__512___127_270 +P2655__0.4__963___0 +P1791__0.4__943___512 +P0131__1__0___0 +P0044__1__624___708 +P2116__0.4__0___0_180 +P1433__0.4__0___0_270 +P2720__1__2472___0 +P1470__0.4__1024___1240_180 +P0082__0.4__512___512_270 +P1358__1__824___1648 +P1471__0.4__896___0_180 +P0018__0.4__0___0_270 +P2781__1__2531___824 +P1464__1__0___2472 +P0021__0.4__0___270_90 +P1791__1__1648___3296 +P2584__0.4__0___0_90 +P2116__0.4__0___0 +P2582__1__0___725 +P1388__0.4__662___1024_180 +P1458__0.4__538___917_270 +P2657__1__824___1425 +P1463__0.4__0___0_180 +P2197__0.4__0___0 +P0147__0.4__0___0_180 +P1410__1__3296___4120 +P1475__0.4__0___0_90 +P2719__1__3514___0 +P0082__0.4__1024___512_180 +P2650__1__1648___974 +P1863__0.4__0___296_270 +P0158__1__0___0 +P2756__0.4__863___688_180 +P1738__0.4__0___0_180 +P1440__1__950___0 +P1780__0.4__0___0_90 +P1470__1__824___4120 +P0809__0.4__0___0_270 +P1456__1__824___1488 +P1456__1__0___1488 +P2590__1__362___0 +P2285__1__2565___0 +P1780__0.4__410___0_90 +P2650__0.4__0___0 +P2331__0.4__286___0 +P1458__0.4__512___512_180 +P1738__0.4__0___0 +P1780__0.4__410___512_270 +P0141__0.4__512___0 +P1794__0.4__230___512_90 +P2547__1__0___824 +P2179__1__217___100 +P1325__1__1648___1648 +P1432__0.4__331___0_270 +P1793__0.4__372___229 +P2285__1__2472___0 +P1903__0.4__0___0_180 +P2572__1__0___824 +P1382__0.4__0___512_270 +P2179__1__0___0 +P2547__1__0___1401 +P1410__0.4__1024___512_90 +P2714__1__824___1426 +P1794__0.4__0___512 +P1791__0.4__0___512_90 +P1599__1__1648___824 +P2502__1__0___1400 +P2653__1__1091___824 +P2692__0.4__0___0_270 +P1471__0.4__0___0 +P0030__1__0___824 +P2653__0.4__0___166_270 +P2593__0.4__0___0_180 +P1432__0.4__0___0_90 +P2719__1__3296___0 +P2692__0.4__0___0 +P0131__1__0___256 +P2362__1__824___1000 +P2157__1__0___0 +P1463__0.4__466___194_270 +P0856__0.4__0___0_180 +P1794__0.4__230___0_180 +P2203__1__446___869 
+P2157__1__89___0 +P2397__1__0___824 +P2650__0.4__0___0_270 +P2800__0.4__0___0_180 +P2718__1__0___824 +P0856__1__0___424 +P0141__0.4__0___0_90 +P1466__0.4__512___1241 +P1876__1__0___0 +P1466__0.4__0___1024_180 +P2584__1__0___0 +P2162__1__205___0 +P1466__0.4__0___1241 +P2368__0.4__7___0_90 +P1863__0.4__126___296_90 +P2716__1__1648___0 +P1730__1__2976___0 +P1388__0.4__512___1024_180 +P1463__1__824___1648 +P2591__1__824___824 +P2716__1__2996___0 +P0082__1__2472___2472 +P1597__1__824___0 +P0050__0.4__0___226_270 +P2756__0.4__863___688_90 +P2365__0.4__108___0 +P1876__0.4__0___0_180 +P1466__0.4__0___512_180 +P1470__0.4__1024___1024_90 +P0466__0.4__0___0 +P2591__1__0___824 +P1977__0.4__0___0_180 +P2197__1__824___0 +P1456__1__2470___824 +P0111__0.4__0___0_180 +P1780__1__824___824 +P1470__0.4__512___512 +P0673__1__0___0 +P1410__0.4__1110___1357 +P0144__1__0___0 +P1505__0.4__0___0 +P0809__1__380___91 +P2597__1__420___703 +P1793__0.4__372___229_270 +P0146__1__824___824 +P1780__0.4__0___512_90 +P0141__0.4__707___0 +P1387__0.4__1024___512 +P1793__0.4__0___0_180 +P1470__0.4__1177___1024 +P1598__1__1648___1648 +P1410__0.4__1024___1024_270 +P2392__0.4__0___0_180 +P0146__1__1648___0 +P1780__0.4__0___0 +P0052__1__824___1249 +P2597__0.4__0___0_180 +P2543__1__0___824 +P2240__0.4__0___0_180 +P1471__0.4__512___512_180 +P2203__0.4__0___0_270 +P2582__1__674___725 +P0204__1__0___35 +P2727__0.4__0___0_180 +P1470__0.4__1177___1024_270 +P2580__1__0___895 +P2595__1__0___648 +P2107__0.4__0___0 +P2721__1__824___1429 +P1384__1__824___0 +P1456__0.4__374___0 +P2754__0.4__0___0_180 +P0082__1__824___824 +P0158__1__824___824 +P1410__0.4__512___1024_90 +P1388__0.4__512___0_90 +P1449__1__0___0 +P1432__0.4__0___0 +P0164__1__423___222 +P2692__1__1662___0 +P2800__0.4__300___276 +P1780__0.4__0___512 +P1445__1__0___824 +P0021__1__1846___0 +P1463__0.4__0___0_90 +P1380__0.4__735___0_90 +P2800__0.4__0___276 +P2719__1__824___824 +P1387__0.4__1536___512_270 +P2653__0.4__0___0_270 +P2569__0.4__0___0_180 +P1432__0.4__331___512_270 +P2403__1__1109___0 +P2565__0.4__0___0_180 +P1791__0.4__0___0_180 +P0029__0.4__0___0_90 +P1458__0.4__512___512_90 +P2594__0.4__0___90_270 +P2650__0.4__0___0_180 +P1931__0.4__0___0_270 +P1388__0.4__662___0 +P1464__0.4__0___0_180 +P2585__1__0___0 +P2591__1__829___1003 +P2593__1__0___0 +P1471__0.4__896___0_270 +P2727__1__0___427 +P2408__1__973___1539 +P1410__0.4__512___1024 +P2203__1__0___0 +P2734__1__824___1648 +P1463__0.4__466___0_180 +P2594__0.4__0___0 +P2157__1__0___680 +P1794__0.4__230___743_90 +P1791__0.4__0___895_270 +P1456__1__2470___1488 +P2368__0.4__0___0_270 +P1440__1__950___621 +P0018__0.4__0___0_180 +P2651__0.4__0___0_90 +P2590__0.4__0___0_180 +P1445__0.4__0___288_90 +P1791__1__2472___2472 +P1466__0.4__1024___0 +P1382__0.4__512___1024_180 +P2725__1__824___1684 +P1387__0.4__1024___512_270 +P2579__0.4__0___0 +P1410__1__3296___4928 +P1470__0.4__512___512_180 +P0082__0.4__1172___996_90 +P1388__0.4__512___0_270 +P1466__0.4__1024___1241 +P2462__1__0___824 +P2572__0.4__0___0_270 +P1382__0.4__512___1024_90 +P0047__0.4__0___0 +P1432__0.4__331___0_180 +P1470__0.4__1177___1024_180 +P2653__0.4__0___0 +P1387__0.4__1536___512 +P2055__0.4__0___0_90 +P1903__0.4__0___0_270 +P2362__1__1320___824 +P2368__0.4__7___0_270 +P2714__1__3296___824 +P1432__0.4__331___0 +P2145__1__0___108 +P1464__0.4__0___0_90 +P0050__0.4__0___0_90 +P0368__1__0___363 +P2800__1__1648___1648 +P1382__0.4__512___1024 +P1903__1__0___264 +P2519__1__0___0 +P1470__0.4__0___1024_90 +P1410__0.4__1110___512 +P1382__0.4__0___512_90 +P1445__0.4__512___0_90 
+P2591__1__0___1003 +P0021__1__824___1648 +P1466__1__1648___824 +P2563__1__0___145 +P2594__0.4__0___0_180 +P0111__0.4__0___0 +P0809__0.4__0___0_90 +P1380__0.4__735___0_270 +P1640__0.4__0___0_90 +P2722__1__2472___0 +P2724__1__824___0 +P0146__0.4__52___0 +P2725__1__1648___1648 +P1463__0.4__0___0 +P1470__0.4__0___1024_180 +P2565__1__744___1258 +P1410__0.4__1024___0_180 +P2659__0.4__0___0_90 +P2572__0.4__0___0 +P2657__1__864___1425 +P1410__0.4__1110___1024 +P0021__0.4__0___0_270 +P2543__1__0___1462 +P2650__1__0___0 +P2566__0.4__0___0 +P0082__0.4__0___0_90 +P1380__0.4__512___854 +P0141__1__824___0 +P2585__0.4__0___0_270 +P0164__0.4__0___0_270 +P1903__1__824___264 +P2572__1__0___1171 +P2397__1__0___971 +P2365__0.4__108___0_270 +P1410__0.4__1110___512_180 +P0149__1__0___0 +P1964__0.4__0___0 +P2655__0.4__963___0_180 +P2522__1__824___824 +P2285__1__1648___0 +P1861__1__824___2472 +P2655__1__3296___1648 +P1470__1__824___4636 +P1387__0.4__1640___512_90 +P2722__1__0___1422 +P2692__0.4__50___0_270 +P1456__1__0___0 +P1445__0.4__0___0 +P1861__0.4__0___417_90 +P2725__1__1648___824 +P2659__0.4__55___0_90 +P0030__1__1735___824 +P0021__0.4__0___0_90 +P1463__0.4__466___194_180 +P0050__0.4__0___0 +P1377__1__824___0 +P1794__0.4__0___743_90 +P1466__0.4__512___1024_90 +P1467__0.4__512___510_270 +P1458__0.4__538___917_180 +P2542__1__0___824 +P2362__1__1320___1000 +P1467__0.4__0___0_180 +P2244__1__241___0 +P2655__0.4__963___127_180 +P1387__0.4__1024___668 +P0146__0.4__52___0_90 +P2374__1__0___686 +P2659__1__0___0 +P1794__0.4__0___0_90 +P2262__0.4__0___0_90 +P2721__1__1648___824 +P0044__0.4__0___0 +P1458__0.4__512___512 +P2720__1__0___1422 +P1432__0.4__331___512_180 +P1791__0.4__0___895_90 +P1977__1__0___0 +P1700__1__2472___2472 +P1387__0.4__1024___512_180 +P1445__1__0___1648 +P0383__1__0___0 +P2716__1__1648___824 +P1382__0.4__0___1024 +P2522__1__1445___0 +P2659__0.4__55___0_180 +P0665__1__270___0 +P1432__0.4__0___690_90 +P2011__1__0___103 +P2800__1__2287___1648 +P1470__0.4__1024___0 +P1793__1__824___824 +P0082__0.4__1024___512 +P0021__1__1846___2210 +P1791__0.4__512___895_270 +P0204__0.4__0___0_270 +P0021__1__1846___824 +P2722__1__0___824 +P1903__0.4__0___0 +P2655__1__2472___1854 +P0141__0.4__0___0_180 +P1505__0.4__27___0_270 +P0124__1__0___824 +P1791__0.4__0___0 +P1382__0.4__0___1024_180 +P2162__1__0___0 +P1456__0.4__0___0_270 +P1458__0.4__512___0 +P2504__1__596___0 +P2650__1__1648___0 +P1791__1__3296___2472 +P1467__0.4__0___0_90 +P1445__0.4__512___288_90 +P2512__1__862___824 +P0146__0.4__0___0_180 +P0111__1__824___720 +P1380__0.4__735___854_270 +P1470__0.4__1024___1024_180 +P2655__0.4__512___127 +P2365__0.4__108___0_90 +P2714__1__0___1426 +P0141__1__1648___727 +P1589__1__2976___2472 +P1780__0.4__0___0_180 +P1380__0.4__512___0_180 +P0856__0.4__0___0_270 +P0466__1__274___0 +P1863__0.4__0___0_180 +P1964__0.4__0___0_180 +P1464__1__824___824 +P1380__0.4__735___512_90 +P0158__1__824___0 +P2651__1__0___897 +P2716__1__2472___0 +P1458__0.4__538___512_270 +P2725__1__2472___824 +P2655__0.4__963___127_270 +P1445__0.4__512___0_180 +P1466__0.4__1024___0_270 +P2591__1__824___1003 +P0082__0.4__512___0_180 +P1458__0.4__538___0_180 +P1861__1__824___824 +P2692__0.4__50___0 +P1977__0.4__0___0_270 +P0383__1__111___0 +P0111__1__938___720 +P1410__0.4__0___0 +P1464__0.4__0___512 +P1382__0.4__0___1024_270 +P2650__1__1648___824 +P1470__0.4__1024___512_180 +P1791__0.4__943___895 +P1470__0.4__512___0 +P1471__0.4__896___512 +P2563__0.4__0___0_90 +P2597__0.4__0___0_90 +P1863__0.4__126___0 +P1471__0.4__512___512 +P1456__0.4__374___0_90 
+P2598__0.4__0___0_270 +P0082__1__824___0 +P0021__0.4__124___270_90 +P1464__0.4__0___838_90 +P0131__1__512___256 +P1410__1__2472___4928 +P1387__1__4120___3206 +P0673__0.4__0___0_180 +P2756__0.4__512___688 +P2157__0.4__0___0_180 +P0030__0.4__80___0_180 +P2331__1__1648___824 +P2585__0.4__0___0_90 +P1388__0.4__512___1109_90 +P2721__1__3296___824 +P0082__0.4__1024___512_270 +P1470__1__2472___1648 +P1384__1__0___2472 +P1470__0.4__512___1024_90 +P2179__1__0___100 +P2591__1__829___0 +P1470__0.4__512___0_90 +P2725__1__3502___1648 +P2781__0.4__398___0_90 +P1471__0.4__896___512_90 +P2590__0.4__0___0 +P1458__0.4__538___0 +P2453__1__0___1185 +P1467__0.4__554___510 +P2594__0.4__0___0_90 +P1863__1__824___0 +P2331__0.4__286___0_90 +P2655__0.4__512___127_90 +P2788__1__0___0 +P1471__0.4__896___0 +P2351__1__0___705 +P1380__0.4__512___0_90 +P2162__1__205___356 +P0021__1__1846___1648 +P2756__0.4__863___688 +P2055__0.4__0___0_180 +P0141__1__1648___0 +P2580__0.4__0___0_90 +P1466__0.4__512___0_270 +P2572__1__0___0 +P1382__0.4__512___512 +P2655__0.4__0___0_270 +P1456__0.4__0___0 +P1589__1__2472___2976 +P2655__0.4__0___127_90 +P1794__1__2112___2472 +P2194__0.4__0___0_270 +P2595__0.4__0___0_270 +P2716__1__0___0 +P1410__0.4__1110___0 +P2585__0.4__0___0_180 +P1387__0.4__1536___512_90 +P0082__1__1648___3296 +P0504__1__946___494 +P2724__1__1648___1432 +P1467__0.4__554___510_270 +P2593__0.4__0___0_90 +P2585__1__674___724 +P0162__1__140___439 +P1466__0.4__0___1024_90 +P2800__1__2287___824 +P1471__1__1648___0 +P1471__0.4__0___0_180 +P1861__1__0___824 +P0052__0.4__0___0_270 +P1432__1__1648___0 +P2722__1__1648___0 +P2592__1__1031___443 +P1470__0.4__512___1240 +P1471__0.4__0___512 +P1466__0.4__512___512_180 +P2725__1__3296___1648 +P1470__0.4__1177___1240_180 +P1470__1__3296___0 +P1458__1__824___824 +P0021__0.4__124___0 +P0029__0.4__80___0_270 +P1464__1__0___824 +P2194__1__692___824 +P2572__0.4__0___0_90 +P1380__0.4__735___854_90 +P1470__0.4__1177___512_270 +P2800__0.4__300___0_180 +P2657__0.4__0___0_270 +P0158__1__1648___0 +P2742__0.4__0___0_180 +P1463__0.4__466___0 +P0030__1__824___824 +P1470__0.4__0___1024_270 +P1387__0.4__1640___512_270 +P0021__0.4__124___270 +P0809__0.4__0___0 +P1358__1__1648___824 +P2781__0.4__398___0_270 +P1931__1__65___610 +P2408__1__824___1539 +P1471__0.4__0___994_180 +P1471__0.4__512___512_270 +P2055__1__130___0 +P2203__1__0___869 +P1466__1__1648___1648 +P0082__0.4__1172___996_270 +P0030__0.4__0___0 +P2720__1__824___824 +P1876__0.4__0___0_90 +P0131__0.4__0___0 +P0856__0.4__0___0 +P1388__0.4__662___0_270 +P2362__1__1320___0 +P0146__1__1667___860 +P1585__1__824___824 +P1445__1__824___824 +P2716__1__2996___1420 +P2157__0.4__0___0_90 +P1471__0.4__512___0_180 +P0021__1__1648___0 +P1410__1__4312___1648 +P2721__1__2472___824 +P2720__1__824___1422 +P1214__1__0___1648 +P0146__0.4__52___0_180 +P2655__0.4__0___0 +P0082__0.4__1172___996_180 +P2800__0.4__300___0 +P1587__1__0___2976 +P1464__0.4__502___512_90 +P2595__0.4__0___0_90 +P2512__1__824___824 +P1410__0.4__1024___1024_180 +P1380__0.4__512___854_90 +P1410__0.4__512___1024_270 +P1464__0.4__0___0_270 +P2742__1__0___1291 +P2655__0.4__0___127 +P1410__0.4__1024___0 +P1505__1__0___824 +P0082__0.4__512___512 +P1791__0.4__943___895_270 +P1466__1__2472___2472 +P2742__0.4__0___0 +P1780__0.4__410___892_270 +P2351__1__0___0 +P2743__1__0___0 +P2800__0.4__0___276_270 +P2590__0.4__0___0_270 +P1458__0.4__512___917 +P1791__0.4__512___895_90 +P0029__1__0___0 +P2533__1__824___0 +P2584__0.4__0___116_270 +P0049__1__604___0 +P2800__1__1648___0 +P1470__0.4__1177___0_180 
+P2578__0.4__0___0_270 +P1458__1__0___0 +P1471__0.4__512___512_90 +P1793__0.4__372___0_90 +P1791__0.4__0___895 +P0082__0.4__1172___996 +P2162__0.4__0___0 +P1458__0.4__512___917_270 +P1470__0.4__1177___0_90 +P2725__1__824___824 +P0082__0.4__0___512 +P2655__1__3296___824 +P1410__0.4__512___0 +P2141__0.4__0___0_180 +P0141__0.4__707___0_180 +P0030__0.4__80___0 +P1388__0.4__512___1109_180 +P2650__1__0___824 +P2365__0.4__108___0_180 +P1793__0.4__372___0 +P1725__1__2976___824 +P0021__0.4__124___270_270 +P1466__0.4__0___1024 +P1780__0.4__0___0_270 +P2725__1__1648___0 +P2651__0.4__0___0 +P2754__0.4__0___0_90 +P0164__0.4__0___0 +P1470__0.4__512___512_270 +P0141__0.4__512___0_270 +P0147__1__0___225 +P2527__1__1309___0 +P1876__0.4__0___0 +P1410__0.4__1024___0_270 +P2594__0.4__0___90_180 +P1388__0.4__662___512_270 +P2650__0.4__159___0 +P2194__0.4__0___0 +P1861__1__1020___1648 +P1466__0.4__1024___0_90 +P0665__0.4__0___0_180 +P1466__1__1648___3296 +P1410__0.4__512___1357_90 +P1432__0.4__0___512_270 +P1471__0.4__896___512_270 +P0383__0.4__0___0_90 +P2592__0.4__0___0_180 +P1358__1__824___2976 +P2392__1__530___0 +P1977__0.4__0___0_90 +P1470__0.4__1024___0_90 +P1449__0.4__0___359_270 +P0082__0.4__1024___996 +P1793__0.4__0___0 +P2754__0.4__359___0 +P0082__0.4__1024___512_90 +P2197__1__0___0 +P1470__1__2472___0 +P2655__1__0___824 +P1471__0.4__0___994 +P0082__0.4__512___512_90 +P1780__0.4__410___892_180 +P2331__1__2250___0 +P0044__0.4__0___0_90 +P0044__0.4__0___0_270 +P2392__1__0___0 +P1791__0.4__512___512 +P1633__1__0___2472 +P0368__0.4__0___0_90 +P2754__0.4__359___0_90 +P1387__0.4__1640___668_90 +P1410__0.4__1024___0_90 +P1184__1__0___4944 +P0131__0.4__0___0_90 +P0856__0.4__0___0_90 +P1410__0.4__1110___512_270 +P2453__1__824___1185 +P2714__1__0___0 +P2650__1__824___974 +P0146__0.4__52___0_270 +P0021__0.4__124___0_180 +P2653__1__824___824 +P2725__1__1648___1684 +P2650__1__824___824 +P2592__0.4__0___0 +P1466__0.4__1024___512_180 +P1339__1__0___1648 +P2563__1__0___0 +P1471__0.4__512___0 +P1388__0.4__512___512_180 +P2742__0.4__0___0_90 +P0466__0.4__0___0_90 +P0665__0.4__0___0_90 +P2365__0.4__0___0_90 +P2655__0.4__512___0_90 +P1793__1__1648___2109 +P0124__0.4__0___0_90 +P1470__0.4__1177___1024_90 +P1388__0.4__662___1109_180 +P1471__0.4__0___994_90 +P1791__0.4__0___0_90 +P2655__1__824___824 +P2011__0.4__0___0_270 +P1410__1__4312___2472 +P2362__1__824___824 +P1369__1__4176___1648 +P2566__0.4__0___0_90 +P2788__1__424___0 +P1861__1__824___0 +P1456__1__0___824 +P1443__1__2472___2316 +P2754__1__824___824 +P1380__0.4__512___512 +P0082__1__0___824 +P1464__1__0___3296 +P0030__0.4__0___0_90 +P1380__0.4__512___0 +P0141__1__2472___0 +P2584__0.4__0___116 +P2659__1__0___824 +P0673__1__0___123 +P1410__0.4__0___0_180 +P2285__0.4__412___0 +P2157__1__89___680 +P1466__0.4__1024___1241_270 +P0029__0.4__80___0_90 +P2754__1__824___0 +P1410__1__4120___1648 +P0147__0.4__0___0_270 +P2166__1__230___0 +P0504__1__824___494 +P2584__0.4__0___116_90 +P1432__0.4__0___0_270 +P1432__1__2363___824 +P0809__1__0___91 +P2754__1__1648___0 +P2584__0.4__0___0 +P2251__1__0___56 +P1466__0.4__0___512_90 +P2594__1__0___824 +P2593__1__0___677 +P0383__0.4__0___0_270 +P2375__1__541___0 +P1931__0.4__0___0 +P1387__0.4__1536___668_90 +P2692__0.4__0___0_180 +P1463__0.4__0___194_270 +P1410__0.4__1110___1024_180 +P1467__0.4__0___510_270 +P0050__0.4__0___226_180 +P2462__1__0___1179 +P2157__0.4__0___0_270 +P1337__1__0___1648 +P0164__1__0___222 +P2116__1__0___395 +P1597__1__824___824 +P0030__0.4__0___0_270 +P1556__1__2472___0 +P1464__0.4__0___512_270 
+P2197__0.4__0___0_180 +P1470__1__0___4120 +P2565__1__744___824 +P1794__0.4__230___512 +P1214__1__0___824 +P2597__1__420___0 +P2579__1__0___989 +P1387__0.4__1536___668 +P1388__0.4__662___0_90 +P1410__1__3296___1648 +P1964__1__277___605 +P1903__0.4__0___0_90 +P1410__0.4__512___1357_270 +P1387__0.4__1640___668_180 +P2452__1__0___1136 +P1471__0.4__512___0_270 +P1388__1__3192___4309 +P2590__1__0___149 +P1410__0.4__1024___1024_90 +P1470__0.4__512___1240_180 +P0147__1__682___225 +P1445__0.4__0___0_180 +P2244__1__0___49 +P2591__0.4__0___0_270 +P2659__0.4__0___0_180 +P2565__0.4__0___0_270 +P2570__1__775___441 +P1793__1__1648___824 +P1445__0.4__0___288_270 +P2734__1__824___1675 +P2659__0.4__55___0 +P2718__1__824___824 +P1456__1__1648___1488 +P1725__1__2472___2472 +P2653__0.4__0___0_90 +P2727__0.4__0___0_90 +P1587__1__0___2472 +P0030__1__1648___1382 +P1464__0.4__502___512 +P2598__0.4__0___0_180 +P2725__1__3502___1684 +P2055__0.4__0___0_270 +P0141__0.4__512___0_180 +P1589__1__2472___2472 +P0141__0.4__707___0_90 +P1440__0.4__0___0_180 +P1458__0.4__538___917 +P1470__0.4__512___1024_180 +P1794__1__1648___2472 +P1861__1__0___1648 +P2566__0.4__0___0_270 +P1586__1__2976___1648 +P1440__0.4__0___0 +P1456__1__1648___824 +P1464__0.4__502___512_270 +P2725__1__3296___0 +P1387__0.4__1536___668_270 +P1794__1__1648___824 +P1793__1__1648___1648 +P2692__0.4__0___0_90 +P2262__1__824___0 +P1432__1__824___824 +P1794__0.4__0___743_270 +P1466__0.4__1024___1241_90 +P1861__0.4__0___0_270 +P1388__0.4__662___512 +P0082__0.4__0___0_270 +P1466__0.4__0___0 +P0044__1__624___0 +P2756__0.4__863___688_270 +P2716__1__2472___824 +P1791__0.4__0___0_270 +P0505__1__0___824 +P2659__1__0___963 +P0368__0.4__0___0_270 +P1470__1__4479___2472 +P2655__1__0___1854 +P2800__1__824___0 +P1466__0.4__512___1241_180 +P1589__1__0___2472 +P1433__0.4__0___0 +P1382__0.4__512___512_90 +P1471__0.4__896___512_180 +P1666__1__0___2472 +P2725__1__0___0 +P2591__0.4__0___0_90 +P1388__1__3192___4120 +P1377__1__0___0 +P1388__0.4__662___512_90 +P1791__1__1648___2472 +P1443__1__3296___1648 +P2534__1__0___824 +P2011__0.4__0___0_180 +P1466__0.4__1024___1241_180 +P2285__0.4__412___0_270 +P2331__0.4__286___0_270 +P2727__1__0___0 +P1384__1__1648___824 +P1387__0.4__1024___512_90 +P1449__0.4__0___0 +P1388__0.4__512___0_180 +P1463__0.4__0___194 +P1467__0.4__554___510_180 +P1964__1__277___0 +P0082__1__1648___2472 +P1432__0.4__0___690_180 +P1791__1__824___1648 +P1410__0.4__512___1024_180 +P1474__1__824___2472 +P2653__0.4__0___166 +P0164__0.4__0___0_180 +P0029__0.4__0___0 +P1464__0.4__0___838 +P2262__0.4__0___0_270 +P2743__1__0___824 +P2653__1__1091___0 +P2727__1__633___427 +P2657__0.4__0___0_90 +P2162__0.4__0___0_180 +P2598__0.4__0___0 +P2374__1__0___0 +P2584__0.4__0___0_180 +P0809__1__380___0 +P1876__0.4__0___0_270 +P1464__0.4__0___0 +P2590__0.4__0___0_90 +P1410__1__4312___824 +P2491__1__0___824 +P2781__0.4__398___0_180 +P1470__0.4__0___1240_270 +P1793__0.4__0___229_90 +P1475__0.4__0___0 +P1475__1__0___0 +P1589__1__2976___2976 +P1863__0.4__0___296_90 +P2721__1__3515___824 +P1505__0.4__27___0_90 +P2720__1__1648___824 +P2141__1__0___166 +P1470__0.4__1024___1240_90 +P1505__0.4__0___0_180 +P1470__1__4120___3296 +P1382__0.4__0___512 +P1964__0.4__0___0_90 +P0146__1__824___860 +P0050__1__824___1648 +P2585__1__674___0 +P2116__1__282___395 +P2582__0.4__0___0 +P1449__1__0___824 +P1341__1__1648___1648 +P2578__1__24___0 +P1387__0.4__1536___668_180 +P0665__0.4__0___0_270 +P2570__1__0___441 +P2714__1__3513___824 +P1463__0.4__466___0_90 +P1793__1__2466___1648 +P0021__0.4__124___270_180 
+P1794__0.4__230___0_90 +P2141__1__0___0 +P2650__1__824___0 +P1410__0.4__512___0_270 +P1342__1__1648___0 +P2592__1__824___443 +P1358__1__1648___1648 +P1738__1__0___0 +P1388__0.4__512___1109_270 +P0383__0.4__0___0 +P2650__0.4__0___0_90 +P1725__1__2472___2976 +P2743__1__824___824 +P1458__1__2880___3296 +P2591__1__824___0 +P1458__0.4__538___0_90 +P0204__1__0___0 +P2655__0.4__0___127_180 +P2578__1__0___403 +P2533__1__824___824 +P2651__1__641___824 +P1467__0.4__554___510_90 +P2397__1__0___0 +P1466__0.4__512___512_270 +P2141__0.4__0___0_270 +P1863__0.4__126___296_270 +P1505__0.4__0___0_270 +P2569__1__0___643 +P2578__0.4__0___0_90 +P2362__0.4__0___0_180 +P1793__1__2466___0 +P1794__1__1648___3394 +P1794__0.4__0___512_180 +P0029__0.4__80___0 +P1456__0.4__0___0_90 +P2718__1__1648___824 +P1466__0.4__512___0_90 +P2392__0.4__0___0 +P2725__1__824___0 +P0082__1__3296___3296 +P0030__0.4__80___0_90 +P1861__0.4__0___0_90 +P2251__1__0___0 +P0082__0.4__1024___996_270 +P1794__0.4__230___743_180 +P2591__0.4__0___0_180 +P1861__0.4__0___0_180 +P1464__1__2792___0 +P2116__1__0___0 +P0152__1__0___0 +P2800__0.4__0___276_180 +P1466__0.4__1024___1024_90 +P1780__0.4__410___512 +P2570__0.4__0___0_90 +P1470__0.4__1024___512_90 +P0082__0.4__512___0_270 +P2725__1__824___1648 +P1470__0.4__0___1240_180 +P1449__0.4__0___359 +P1463__0.4__0___194_90 +P2598__1__1332___1030 +P2722__1__1648___824 +P1464__0.4__502___0_90 +P1505__0.4__0___0_90 +P2179__1__217___0 +P0147__0.4__0___0_90 +P1369__1__3296___1648 +P2409__1__540___1174 +P1458__0.4__538___917_90 +P1458__1__2472___3829 +P1725__1__2976___2976 +P1791__0.4__943___895_180 +P1410__0.4__1110___1357_270 +P2141__1__148___166 +P1388__0.4__512___512 +P1861__0.4__0___417_270 +P2362__0.4__0___0 +P1794__0.4__230___512_270 +P0029__1__824___0 +P1474__1__1648___1648 +P2262__1__1267___0 +P1470__0.4__512___512_90 +P0809__0.4__0___0_180 +P2598__1__1332___824 +P2800__1__1648___2227 +P0204__0.4__0___0_180 +P2240__0.4__0___0 +P1358__1__824___2472 +P1556__1__824___0 +P1456__1__824___0 +P1463__0.4__466___194 +P0082__0.4__512___0_90 +P2800__0.4__0___0_270 +P2655__0.4__512___0_180 +P0082__0.4__0___512_180 +P2653__0.4__0___0_180 +P1388__0.4__662___1109 +P1184__1__0___5200 +P1337__1__824___2472 +P2240__1__0___573 +P2116__0.4__0___0_90 +P1640__0.4__0___0_270 +P2594__1__589___1648 +P2742__1__0___824 +P1794__0.4__0___512_270 +P1458__0.4__512___917_180 +P2563__1__824___0 +P1388__1__3192___1648 +P2655__1__2472___1648 +P1432__1__0___2472 +P0146__1__1648___860 +P2754__0.4__0___0 +P2491__1__0___0 +P1410__1__4120___2472 +P1184__1__824___5200 +P1793__0.4__372___0_180 +P1466__0.4__0___0_90 +P2727__0.4__0___0_270 +P2368__1__824___824 +P2651__0.4__0___0_270 +P1725__1__2976___2472 +P2721__1__1648___1429 +P1466__0.4__0___0_180 +P1964__1__0___605 +P2563__0.4__0___0_180 +P1903__1__922___0 +P2659__0.4__0___0_270 +P2257__1__0___57 +P2743__1__824___0 +P2055__0.4__0___0 +P1382__0.4__512___512_270 +P1794__1__824___3394 +P0029__1__824___824 +P1410__1__824___0 +P1470__1__3296___1648 +P2781__1__2472___0 +P2725__1__2472___1684 +P2203__1__0___824 +P0164__0.4__0___0_90 +P0018__0.4__0___0 +P2368__0.4__7___0_180 +P1388__0.4__662___1024 +P1861__0.4__0___0 +P2593__1__179___677 +P0021__0.4__124___0_270 +P2590__1__362___149 +P2651__1__0___824 +P1794__0.4__0___743_180 +P1863__1__824___824 +P2262__0.4__0___0 +P1380__0.4__512___0_270 +P1410__0.4__512___0_90 +P1410__0.4__512___1357 +P0665__0.4__0___0 +P1470__0.4__512___0_270 +P1388__0.4__662___1109_270 +P1791__1__0___2472 +P2162__1__0___356 +P0505__0.4__0___0 +P1410__1__3296___2472 
+P1380__0.4__735___854 +P2541__1__1590___824 +P1471__0.4__0___0_270 +P1410__0.4__1110___0_180 +P2392__0.4__0___0_270 +P1793__0.4__0___229_180 +P1449__0.4__0___0_90 +P2580__0.4__0___0 +P2655__0.4__0___0_90 +P1794__0.4__0___0_180 +P2141__0.4__0___0 +P1458__1__2472___3296 +P0052__0.4__0___0_180 +P1794__0.4__230___743_270 +P2194__0.4__0___0_180 +P1793__0.4__372___229_90 +P0030__1__824___1382 +P1470__0.4__1177___1240_90 +P1410__0.4__1110___1024_90 +P1467__0.4__512___510_90 +P1458__1__2472___1648 +P1387__0.4__1024___668_90 +P1464__0.4__502___0 +P2194__1__0___1148 +P1458__0.4__538___512 +P0047__0.4__0___0_270 +P1410__0.4__1024___1357 +P1863__0.4__126___0_180 +P1464__0.4__502___512_180 +P0368__1__0___0 +P1470__0.4__1024___1024_270 +P1382__1__824___2472 +P2742__1__824___824 +P1410__0.4__1024___1357_90 +P1791__0.4__0___512_180 +P1466__0.4__1024___512 +P2725__1__2472___1648 +P1738__1__0___824 +P1463__0.4__0___0_270 +P2572__0.4__0___0_180 +P2721__1__0___1429 +P2578__0.4__0___0 +P1388__0.4__512___1109 +P0147__1__0___0 +P1410__0.4__1024___512_270 +P1505__0.4__27___0 +P1440__0.4__0___0_270 +P1876__1__0___573 +P2011__0.4__0___0 +P2651__1__641___0 +P1467__0.4__512___510 +P2522__1__824___0 +P1791__1__1648___3774 +P2722__1__824___824 +P1388__0.4__512___512_270 +P2285__0.4__412___0_180 +P1964__1__0___0 +P2570__0.4__0___0_180 +P0143__1__0___0 +P1410__0.4__1024___512_180 +P2692__1__1648___0 +P1977__1__647___0 +P2578__0.4__0___0_180 +P2719__1__1648___0 +P0466__1__0___0 +P2570__1__0___0 +P1471__1__1648___824 +P1440__1__824___621 +P1470__0.4__512___1024_270 +P2655__0.4__963___0_270 +P1387__0.4__1640___512_180 +P2502__1__0___824 +P0082__0.4__512___996_180 +P1458__0.4__0___0 +P1466__0.4__0___1241_90 +P0082__0.4__1024___996_90 +P2756__0.4__0___688_90 +P0146__1__1667___824 +P2800__0.4__0___0 +P0368__0.4__0___0_180 +P0044__0.4__0___0_180 +P1903__1__824___0 +P0029__0.4__0___0_180 +P2512__1__824___0 +P1449__1__824___824 +P1470__0.4__512___1240_90 +P2594__0.4__0___90 +P1738__0.4__0___0_270 +P1470__0.4__1177___512_180 +P0047__1__0___0 +P0141__0.4__707___0_270 +P1791__0.4__0___512 +P0124__0.4__0___0 +P1458__1__2880___3829 +P2011__0.4__0___0_90 +P1780__0.4__0___512_270 +P1445__0.4__512___288_180 +P2438__1__0___0 +P1432__0.4__0___512_90 +P2527__1__824___0 +P0018__1__800___0 +P1470__0.4__1024___1024 +P1467__1__2472___1648 +P1445__0.4__0___288 +P1337__1__0___2472 +P0030__1__0___1382 +P1791__0.4__943___512_90 +P1471__1__0___1648 +P1780__0.4__410___512_180 +P0050__1__1535___2100 +P1432__0.4__0___512 +P0368__0.4__0___0 +P0050__0.4__0___0_270 +P2368__0.4__7___0 +P2595__0.4__0___0 +P2257__1__0___0 +P1369__1__4120___1648 +P1387__1__4944___3206 +P1780__0.4__410___892_90 +P1466__0.4__1024___0_180 +P2580__1__0___0 +P1793__0.4__0___0_90 +P1432__0.4__0___0_180 +P1387__0.4__1640___668_270 +P1863__0.4__0___296 +P0082__0.4__0___512_90 +P1794__0.4__230___0_270 +P1388__0.4__662___0_180 +P1470__0.4__1177___1240_270 +P2756__0.4__512___688_90 +P2168__1__0___150 +P1456__1__2470___0 +P2800__0.4__300___276_180 +P1458__0.4__0___0_180 +P1793__0.4__0___0_270 +P0047__0.4__0___0_90 +P1470__0.4__1024___512 +P1467__1__2472___2810 +P2650__0.4__159___0_270 +P2579__0.4__0___0_180 +P2721__1__824___824 +P1931__1__0___0 +P2107__0.4__0___0_270 +P2720__1__1648___0 +P1380__0.4__735___854_180 +P1387__0.4__1640___512 +P0505__1__0___885 +P2597__1__0___0 +P1470__1__4120___2472 +P1591__1__0___0 +P2721__1__3515___1429 +P2721__1__2472___0 +P0673__0.4__0___0 +P1466__0.4__1024___1024 +P1467__0.4__0___510_90 +P2659__0.4__0___0 +P1794__1__2112___1648 +P1471__0.4__512___0_90 
+P1464__1__824___0 +P1433__0.4__0___0_180 +P2194__1__692___1148 +P1456__0.4__374___0_180 +P2714__1__3513___1426 +P1794__1__2112___3296 +P1474__1__824___1648 +P2594__0.4__0___90_90 +P1387__0.4__1024___668_270 +P2012__1__0___0 +P1464__0.4__0___512_180 +P0466__0.4__0___0_180 +P1388__0.4__512___0 +P0159__1__0___0 +P2585__0.4__0___0 +P1505__1__824___824 +P2533__1__1380___824 +P2582__0.4__0___0_90 +P1475__0.4__0___0_270 +P2419__1__566___0 +P0146__1__1648___824 +P1433__0.4__0___0_90 +P1410__0.4__512___1357_180 +P1791__0.4__943___512_180 +P1466__0.4__0___1241_270 +P1445__1__824___2257 +P2166__0.4__0___0_180 +P2714__1__2472___1426 +P1903__1__0___0 +P1470__0.4__1024___512_270 +P0381__1__0___0 +P2597__0.4__0___0_270 +P0673__1__91___123 +P1793__0.4__372___0_270 +P2592__1__0___443 +P1458__1__824___0 +P2572__1__49___824 +P2365__1__1807___0 +P0242__1__0___0 +P1388__0.4__662___1109_90 +P2725__1__2472___0 +P2725__1__3502___0 +P2011__1__0___0 +P2570__1__775___0 +P0466__0.4__0___0_270 +P1977__1__0___735 +P1470__0.4__512___0_180 +P1456__0.4__0___0_180 +P1780__0.4__410___892 +P2362__0.4__0___0_90 +P2718__1__1648___1422 +P2800__1__824___824 +P2570__0.4__0___0_270 +P1470__0.4__1177___0_270 +P1470__0.4__512___1024 +P2716__1__824___0 +P0204__0.4__0___0 +P2055__1__130___418 +P2203__0.4__0___0_180 +P1876__1__306___0 +P2572__1__49___0 +P1470__1__3296___824 +P1475__0.4__0___0_180 +P2578__1__24___403 +P2565__0.4__0___0_90 +P2107__1__0___0 +P1863__0.4__0___0 +P2197__0.4__0___0_270 +P1433__1__0___0 +P1598__1__1648___2472 +P0131__0.4__0___0_180 +P1440__1__824___0 +P1466__0.4__1024___512_270 +P1432__0.4__331___512 +P1458__0.4__512___0_180 +P0029__0.4__0___0_270 +P1388__0.4__512___1024 +P1794__1__2112___824 +P2651__1__0___0 +P2595__1__0___0 +P2141__1__148___0 +P0505__0.4__0___0_90 +P0141__1__824___727 +P2590__1__0___0 +P1443__1__1648___2316 +P1640__0.4__0___0_180 +P2754__1__1648___824 +P2650__0.4__159___0_90 +P0052__1__0___824 +P1458__0.4__538___0_270 +P2714__1__2472___0 +P1861__1__0___0 +P1380__1__2472___1648 +P0381__1__111___0 +P2251__1__247___0 +P1791__0.4__943___512_270 +P2725__1__3296___1684 +P1470__1__4479___3296 +P0146__0.4__0___0 +P1505__0.4__27___0_180 +P2714__1__3296___1426 +P2591__1__829___824 +P0050__0.4__0___226_90 +P2194__0.4__0___0_90 +P1467__0.4__0___510 +P1466__0.4__0___1241_180 +P2594__1__589___824 +P2580__0.4__0___0_180 +P2742__0.4__0___0_270 +P1410__0.4__1110___1357_180 +P0146__1__824___0 +P1410__0.4__1110___1357_90 +P1466__0.4__0___1024_270 +P1466__0.4__512___0 +P2598__1__824___824 +P1458__0.4__0___0_90 +P2503__1__0___1494 +P1382__0.4__0___1024_90 +P1863__0.4__126___0_270 +P1445__0.4__512___0 +P1470__1__0___4636 +P1794__1__824___2472 +P1466__0.4__1024___512_90 +P1387__0.4__1024___668_180 +P1863__1__0___824 +P0021__0.4__0___0 +P0505__0.4__0___0_180 +P2714__1__824___824 +P2593__1__179___0 +P2724__1__824___824 +P2569__1__0___0 +P1794__0.4__0___0 +P2566__0.4__0___0_180 +P2428__1__0___1587 +P0082__0.4__1024___996_180 +P1466__1__824___2472 +P2653__0.4__0___166_180 +P2756__0.4__512___688_270 +P1337__1__824___1648 +P2331__1__2250___824 +P1793__0.4__372___229_180 +P0030__1__1648___824 +P2107__0.4__0___0_180 +P1464__0.4__502___0_180 +P1410__1__2472___4120 +P2655__0.4__512___127_180 +P0047__1__102___0 +P1863__0.4__126___296_180 +P1466__0.4__512___512 +P2566__1__0___0 +P1458__0.4__512___0_270 +P0385__1__0___0 +P2781__1__2472___824 +P0162__1__0___439 +P2756__0.4__0___688 +P2578__1__0___0 +P1432__0.4__0___690 +P1640__0.4__0___0 +P1977__0.4__0___0 +P2166__1__0___0 +P2569__0.4__0___0_270 +P2598__0.4__0___0_90 
+P1794__1__2112___3394 +P2116__0.4__0___0_270 +P2800__1__1648___824 +P2727__0.4__0___0 +P1463__1__824___824 +P2203__0.4__0___0_90 +P0141__0.4__512___0_90 +P1410__0.4__1024___1357_270 +P1464__0.4__0___838_180 +P1456__1__1648___0 +P0052__1__837___824 +P1463__0.4__466___0_270 +P2657__0.4__0___0 +P2720__1__0___824 +P0052__1__0___1249 +P1464__0.4__0___838_270 +P2428__1__824___1587 +P1471__0.4__0___0_90 +P0141__0.4__0___0 +P0029__0.4__80___0_180 +P1861__1__1020___2472 +P2657__0.4__0___0_180 +P0111__0.4__0___0_270 +P2800__0.4__300___276_270 +P1861__1__1020___0 +P0050__0.4__0___0_180 +P1432__0.4__331___0_90 +P1466__0.4__512___1024 +P1964__0.4__0___0_270 +P2392__0.4__0___0_90 +P2592__0.4__0___0_90 +P2714__1__1648___0 +P1791__0.4__512___512_180 +P0052__0.4__0___0_90 +P1380__0.4__512___854_180 +P2145__1__0___0 +P1471__0.4__0___512_270 +P2547__1__824___1401 +P2653__1__0___824 +P0018__1__800___665 +P2538__1__863___0 +P2725__1__3296___824 +P0131__0.4__0___0_270 +P1410__0.4__1110___1024_270 +P1432__1__1648___824 +P0021__0.4__0___270_270 +P2244__1__241___49 +P2597__0.4__0___0 +P0082__0.4__512___512_180 +P2655__0.4__0___0_180 +P2721__1__0___824 +P0164__1__0___0 +P2563__1__883___0 +P1793__0.4__0___229 +P2594__1__589___1762 +P2592__0.4__0___0_270 +P1458__0.4__512___917_90 +P1876__1__306___573 +P0050__0.4__0___226 +P0146__0.4__0___0_270 +P0146__0.4__0___0_90 +P2582__0.4__0___0_270 +P0164__1__423___0 +P0082__1__0___0 +P2721__1__0___0 +P2756__0.4__512___688_180 +P0021__0.4__0___270_180 +P1791__0.4__943___895_90 +P2724__1__824___1432 +P0018__1__0___0 +P1861__0.4__0___417 +P1794__0.4__230___743 +P1466__1__824___4120 +P2655__0.4__0___127_270 +P1589__1__0___1648 +P1458__1__2880___1648 +P0050__1__1535___1648 +P2572__1__49___1171 +P0673__1__91___0 +P0146__1__1667___0 +P2754__0.4__359___0_270 +P1466__0.4__512___0_180 +P2162__0.4__0___0_90 +P1471__0.4__0___994_270 +P1471__0.4__0___512_180 +P2591__0.4__0___0 +P2582__1__0___0 +P1445__0.4__0___288_180 +P1791__0.4__0___512_270 +P1791__0.4__512___895 +P2251__1__247___56 +P1470__0.4__1024___1240 +P0030__0.4__80___0_270 +P1380__0.4__735___512 +P2166__0.4__0___0_90 +P0052__1__824___824 +P1466__0.4__0___512_270 +P1863__0.4__126___0_90 +P1791__0.4__0___895_180 +P1458__1__0___824 +P2582__0.4__0___0_180 +P1467__0.4__0___0_270 +P1432__0.4__0___512_180 +P0383__0.4__0___0_180 +P2593__0.4__0___0 +P1633__1__0___2976 +P1388__1__3192___824 +P2203__1__446___824 +P2592__1__0___0 +P2800__1__824___1648 +P1449__0.4__0___359_180 +P1791__1__2472___3296 +P1794__0.4__0___512_90 +P1474__1__1648___2472 +P0124__0.4__0___0_180 +P1780__0.4__410___512_90 +P2351__1__822___705 +P1470__0.4__1177___1240 +P2734__1__1648___1648 +P1596__1__0___1648 +P1410__0.4__1110___512_90 +P1464__0.4__0___512_90 +P1467__1__1648___2810 +P0082__0.4__512___996_90 +P1445__1__0___2257 +P1466__0.4__512___512_90 +P2719__1__824___0 +P1466__0.4__512___1241_270 +P1458__0.4__538___512_90 +P0021__1__1648___1648 +P0124__0.4__0___0_270 +P2362__1__824___0 +P2721__1__3296___1429 +P1466__0.4__0___0_270 +P2754__0.4__0___0_270 +P2538__1__0___1196 +P2141__0.4__0___0_90 +P2580__1__0___824 +P2365__0.4__0___0_180 +P1464__1__2472___0 +P1793__0.4__0___229_270 +P2157__0.4__0___0 +P2579__0.4__0___0_90 +P0082__0.4__0___0_180 +P1440__0.4__0___0_90 +P2055__1__0___418 +P2452__1__609___1136 +P1903__1__922___264 +P2598__1__824___0 +P2368__1__0___824 +P1388__0.4__512___1024_270 +P2262__0.4__0___0_180 +P1466__1__1648___2472 +P2651__0.4__0___0_180 +P0158__1__1648___824 +P2538__1__824___0 +P1793__1__824___0 +P1470__0.4__1024___0_270 +P2565__1__0___1258 
+P1505__1__0___0 +P2800__0.4__0___0_90 +P2584__1__824___0 +P2522__1__0___0 +P2368__1__824___1281 +P2653__1__0___0 +P1791__1__3296___3296 +P2800__0.4__300___0_270 +P2240__0.4__0___0_90 +P2655__1__0___1648 +P1432__0.4__0___690_270 +P2240__0.4__0___0_270 +P1470__0.4__1024___1240_270 +P1445__0.4__512___288_270 +P0021__0.4__0___270 +P1780__0.4__410___0_270 +P2368__1__0___0 +P2595__0.4__0___0_180 +P1793__1__824___1648 +P2655__0.4__963___127 +P2563__0.4__0___0 +P2720__1__1648___1422 +P1470__0.4__1177___512_90 +P2756__0.4__0___688_180 +P1794__0.4__230___512_180 +P2651__1__641___897 +P2365__0.4__0___0 +P2594__0.4__0___0_270 +P2714__1__1648___1426 +P1458__0.4__512___0_90 +P2592__1__1031___0 +P1456__1__824___824 +P0029__1__0___1383 +P2714__1__0___824 +P0021__1__824___2210 +P1931__0.4__0___0_90 +P1780__1__1648___824 +P1470__0.4__0___1024 +P0082__0.4__0___512_270 +P1388__0.4__512___1024_90 +P2655__0.4__963___0_90 +P1466__0.4__1024___1024_180 +P1388__0.4__512___512_90 +P2593__0.4__0___0_270 +P0141__0.4__0___0_270 +P2742__1__824___1291 +P0030__1__1735___1382 +P2724__1__2472___1432 +P1791__0.4__512___512_270 +P1458__0.4__512___512_270 +P2800__0.4__0___276_90 +P1410__0.4__0___0_270 +P1794__0.4__0___0_270 +P2781__1__2531___0 +P2166__0.4__0___0_270 +P2365__0.4__0___0_270 +P2569__0.4__0___0_90 +P0505__0.4__0___0_270 +P2512__1__862___0 +P2375__1__0___0 +P1380__0.4__735___512_180 +P2653__1__824___0 +P1861__1__0___2472 +P0029__1__0___824 +P1449__0.4__0___0_180 +P1432__0.4__331___512_90 +P1380__0.4__735___0 +P2655__1__3296___1854 +P1887__1__0___0 +P2800__0.4__300___0_90 +P2331__0.4__286___0_180 +P2692__0.4__50___0_180 +P1410__0.4__1024___1357_180 +P0021__0.4__0___0_180 +P2533__1__1380___0 +P0082__0.4__512___0 +P2756__0.4__0___688_270 +P1388__0.4__662___512_180 +P1730__1__2472___0 +P2655__0.4__512___0_270 +P0082__0.4__512___996 +P2714__1__2472___824 +P1863__0.4__0___296_180 +P1466__0.4__1024___1024_270 +P0147__0.4__0___0 +P1410__1__4120___824 +P0018__1__0___665 +P2569__0.4__0___0 +P1466__1__2472___1648 +P1387__0.4__1536___512_180 +P0021__1__1648___824 +P1466__1__1648___4120 +P0809__1__0___0 +P1466__1__824___1648 +P0082__1__2472___3296 +P0204__0.4__0___0_90 +P2452__1__0___824 +P0158__1__824___1551 +P2502__1__0___0 +P2598__1__1332___0 +P2716__1__2996___824 +P1470__0.4__1177___512 +P1410__0.4__1110___0_90 +P2659__0.4__55___0_270 +P1380__0.4__735___512_270 +P2368__0.4__0___0 +P1780__0.4__410___0_180 +P2570__0.4__0___0 +P1382__0.4__0___512_180 +P1464__0.4__502___0_270 +P1640__1__0___0 +P1380__0.4__512___512_180 +P2194__1__0___824 +P2203__0.4__0___0 +P1382__1__1648___2472 +P1388__0.4__662___1024_270 +P1449__0.4__0___359_90 +P1794__0.4__0___743 +P1466__0.4__512___1241_90 +P2579__1__0___0 +P1380__0.4__512___512_90 +P1463__0.4__466___194_90 +P1794__0.4__230___0 +P1791__0.4__512___895_180 +P2655__1__824___1648 +P2719__1__1648___824 +P1467__0.4__512___510_180 +P2453__1__0___824 +P1384__1__1648___0 +P1780__0.4__410___0 +P2721__1__2472___1429 +P2285__0.4__412___0_90 +P2538__1__0___824 +P2563__1__824___145 +P1380__0.4__735___0_180 +P1445__0.4__0___0_270 +P0665__1__0___0 +P1470__0.4__1177___0 +P1410__0.4__1110___0_270 +P0018__0.4__0___0_90 +P1382__1__1648___1648 +P1432__1__824___0 +P1793__1__2466___824 +P1410__0.4__512___0_180 +P2598__1__0___0 +P1466__0.4__512___1024_180 +P2351__1__822___0 +P1467__1__1648___1648 +P1467__0.4__0___510_180 +P0131__1__512___0 +P1449__0.4__0___0_270 +P2563__0.4__0___0_270 +P1449__1__824___0 +P1467__0.4__0___0 +P2657__1__864___824 +P0673__0.4__0___0_270 +P1382__0.4__512___512_180 
+P1700__1__2976___2472 +P0021__1__824___824 +P1794__1__824___3296 +P1382__0.4__512___1024_270 +P0147__1__682___0 +P2800__1__824___2227 +P2453__1__824___824 +P2655__0.4__963___127_90 +P1458__0.4__538___512_180 +P1471__0.4__896___0_90 +P1410__0.4__1024___1024 +P1505__1__824___0 +P2734__1__1648___1675 +P1445__0.4__0___0_90 +P1977__1__647___735 +P1466__0.4__512___1024_270 +P1380__0.4__512___854_270 +P2727__1__633___0 +P2657__1__824___824 +P0030__0.4__0___0_180 +P2716__1__2472___1420 +P1380__0.4__512___512_270 +P1388__0.4__662___1024_90 +P1471__0.4__0___512_90 +P1387__0.4__1640___668 +P2655__1__824___1854 +P2721__1__1648___0 +P1931__1__65___0 +P2584__0.4__0___0_270 +P0021__1__1648___2210 +P2692__0.4__50___0_90 +P0111__0.4__0___0_90 +P2584__0.4__0___116_180 +P1863__0.4__0___0_90 +P1863__0.4__0___0_270 +P2166__0.4__0___0 +P1863__0.4__126___296 +P1780__1__824___1648 +P1466__0.4__0___512 +P1793__1__2466___2109 +P2362__0.4__0___0_270 +P2162__0.4__0___0_270 +P1410__0.4__1024___512 +P2781__0.4__398___0 +P2331__1__1648___0 +P2565__0.4__0___0 +P1470__0.4__512___1240_270 +P1780__1__2472___3296 +P2566__1__18___0 +P2725__1__3502___824 +P2365__1__1648___0 +P2756__1__1648___3257 +P1445__1__824___1648 +P2582__1__674___0 +P2655__1__2472___824 +P1780__0.4__0___512_180 +P1470__1__2472___824 +P1793__1__1648___0 +P2598__1__824___1030 +P1931__0.4__0___0_180 +P1470__0.4__1024___0_180 +P1470__0.4__0___1240_90 +P1931__1__0___610 +P2580__0.4__0___0_270 +P1184__1__824___4944 +P0029__1__824___1383 +P2055__1__0___0 +P0047__0.4__0___0_180 +P1464__1__1648___824 +P1458__0.4__0___0_270 +P0673__0.4__0___0_90 +P2597__1__0___703 +P1861__1__1020___824 +P1470__0.4__0___1240 +P1466__1__824___824 +P2579__0.4__0___0_270 +P0082__0.4__512___996_270 +P1445__0.4__512___0_270 +P2800__0.4__300___276_90 +P0139__1__0___0 +P2585__1__0___724 +P2168__1__0___0 +P0082__0.4__0___0 +P2650__0.4__159___0_180 +P1791__0.4__512___512_90 +P1861__0.4__0___417_180 +P2368__0.4__0___0_90 +P0052__0.4__0___0 +P2547__1__824___824 +P0021__0.4__124___0_90 +P2725__1__0___824 +P2107__0.4__0___0_90 +P1861__1__824___1648 +P2563__1__883___145 +P1794__1__1648___3296 +P1471__1__0___2472 +P1456__0.4__374___0_270 +P1184__1__824___824 +P1410__0.4__0___0_90 +P2754__0.4__359___0_180 +P1738__0.4__0___0_90 +P1863__1__0___0 +P1445__0.4__512___288 +P2655__0.4__512___0 +P2244__1__0___0 +P2653__0.4__0___166_90 +P2452__1__609___824 +P2368__0.4__0___0_180 +P1463__0.4__0___194_180 +P2592__1__824___0 +P2565__1__0___824 +P2197__0.4__0___0_90 +P2655__0.4__512___127_270 +P2655__0.4__963___0 +P1791__0.4__943___512 +P0131__1__0___0 +P0044__1__624___708 +P2116__0.4__0___0_180 +P1433__0.4__0___0_270 +P2720__1__2472___0 +P1470__0.4__1024___1240_180 +P0082__0.4__512___512_270 +P1358__1__824___1648 +P1471__0.4__896___0_180 +P0018__0.4__0___0_270 +P2781__1__2531___824 +P1464__1__0___2472 +P0021__0.4__0___270_90 +P1791__1__1648___3296 +P2584__0.4__0___0_90 +P2116__0.4__0___0 +P2582__1__0___725 +P1388__0.4__662___1024_180 +P1458__0.4__538___917_270 +P2657__1__824___1425 +P1463__0.4__0___0_180 +P2197__0.4__0___0 +P0147__0.4__0___0_180 +P1410__1__3296___4120 +P1475__0.4__0___0_90 +P2719__1__3514___0 +P0082__0.4__1024___512_180 +P2650__1__1648___974 +P1863__0.4__0___296_270 +P0158__1__0___0 +P2756__0.4__863___688_180 +P1738__0.4__0___0_180 +P1440__1__950___0 +P1780__0.4__0___0_90 +P1470__1__824___4120 +P0809__0.4__0___0_270 +P1456__1__824___1488 +P1456__1__0___1488 +P2590__1__362___0 +P2285__1__2565___0 +P1780__0.4__410___0_90 +P2650__0.4__0___0 +P2331__0.4__286___0 +P1458__0.4__512___512_180 
+P1738__0.4__0___0 +P1780__0.4__410___512_270 +P0141__0.4__512___0 +P1794__0.4__230___512_90 +P2547__1__0___824 +P2179__1__217___100 +P1325__1__1648___1648 +P1432__0.4__331___0_270 +P1793__0.4__372___229 +P2285__1__2472___0 +P1903__0.4__0___0_180 +P2572__1__0___824 +P1382__0.4__0___512_270 +P2179__1__0___0 +P2547__1__0___1401 +P1410__0.4__1024___512_90 +P2714__1__824___1426 +P1794__0.4__0___512 +P1791__0.4__0___512_90 +P1599__1__1648___824 +P2502__1__0___1400 +P2653__1__1091___824 +P2692__0.4__0___0_270 +P1471__0.4__0___0 +P0030__1__0___824 +P2653__0.4__0___166_270 +P2593__0.4__0___0_180 +P1432__0.4__0___0_90 +P2719__1__3296___0 +P2692__0.4__0___0 +P0131__1__0___256 +P2362__1__824___1000 +P2157__1__0___0 +P1463__0.4__466___194_270 +P0856__0.4__0___0_180 +P1794__0.4__230___0_180 +P2203__1__446___869 +P2157__1__89___0 +P2397__1__0___824 +P2650__0.4__0___0_270 +P2800__0.4__0___0_180 +P2718__1__0___824 +P0856__1__0___424 +P0141__0.4__0___0_90 +P1466__0.4__512___1241 +P1876__1__0___0 +P1466__0.4__0___1024_180 +P2584__1__0___0 +P2162__1__205___0 +P1466__0.4__0___1241 +P2368__0.4__7___0_90 +P1863__0.4__126___296_90 +P2716__1__1648___0 +P1730__1__2976___0 +P1388__0.4__512___1024_180 +P1463__1__824___1648 +P2591__1__824___824 +P2716__1__2996___0 +P0082__1__2472___2472 +P1597__1__824___0 +P0050__0.4__0___226_270 +P2756__0.4__863___688_90 +P2365__0.4__108___0 +P1876__0.4__0___0_180 +P1466__0.4__0___512_180 +P1470__0.4__1024___1024_90 +P0466__0.4__0___0 +P2591__1__0___824 +P1977__0.4__0___0_180 +P2197__1__824___0 +P1456__1__2470___824 +P0111__0.4__0___0_180 +P1780__1__824___824 +P1470__0.4__512___512 +P0673__1__0___0 +P1410__0.4__1110___1357 +P0144__1__0___0 +P1505__0.4__0___0 +P0809__1__380___91 +P2597__1__420___703 +P1793__0.4__372___229_270 +P0146__1__824___824 +P1780__0.4__0___512_90 +P0141__0.4__707___0 +P1387__0.4__1024___512 +P1793__0.4__0___0_180 +P1470__0.4__1177___1024 +P1598__1__1648___1648 +P1410__0.4__1024___1024_270 +P2392__0.4__0___0_180 +P0146__1__1648___0 +P1780__0.4__0___0 +P0052__1__824___1249 +P2597__0.4__0___0_180 +P2543__1__0___824 +P2240__0.4__0___0_180 +P1471__0.4__512___512_180 +P2203__0.4__0___0_270 +P2582__1__674___725 +P0204__1__0___35 +P2727__0.4__0___0_180 +P1470__0.4__1177___1024_270 +P2580__1__0___895 +P2595__1__0___648 +P2107__0.4__0___0 +P2721__1__824___1429 +P1384__1__824___0 +P1456__0.4__374___0 +P2754__0.4__0___0_180 +P0082__1__824___824 +P0158__1__824___824 +P1410__0.4__512___1024_90 +P1388__0.4__512___0_90 +P1449__1__0___0 +P1432__0.4__0___0 +P0164__1__423___222 +P2692__1__1662___0 +P2800__0.4__300___276 +P1780__0.4__0___512 +P1445__1__0___824 +P0021__1__1846___0 +P1463__0.4__0___0_90 +P1380__0.4__735___0_90 +P2800__0.4__0___276 +P2719__1__824___824 +P1387__0.4__1536___512_270 +P2653__0.4__0___0_270 +P2569__0.4__0___0_180 +P1432__0.4__331___512_270 +P2403__1__1109___0 +P2565__0.4__0___0_180 +P1791__0.4__0___0_180 +P0029__0.4__0___0_90 +P1458__0.4__512___512_90 +P2594__0.4__0___90_270 +P2650__0.4__0___0_180 +P1931__0.4__0___0_270 +P1388__0.4__662___0 +P1464__0.4__0___0_180 +P2585__1__0___0 +P2591__1__829___1003 +P2593__1__0___0 +P1471__0.4__896___0_270 +P2727__1__0___427 +P2408__1__973___1539 +P1410__0.4__512___1024 +P2203__1__0___0 +P2734__1__824___1648 +P1463__0.4__466___0_180 +P2594__0.4__0___0 +P2157__1__0___680 +P1794__0.4__230___743_90 +P1791__0.4__0___895_270 +P1456__1__2470___1488 +P2368__0.4__0___0_270 +P1440__1__950___621 +P0018__0.4__0___0_180 +P2651__0.4__0___0_90 +P2590__0.4__0___0_180 +P1445__0.4__0___288_90 +P1791__1__2472___2472 
+P1466__0.4__1024___0 +P1382__0.4__512___1024_180 +P2725__1__824___1684 +P1387__0.4__1024___512_270 +P2579__0.4__0___0 +P1410__1__3296___4928 +P1470__0.4__512___512_180 +P0082__0.4__1172___996_90 +P1388__0.4__512___0_270 +P1466__0.4__1024___1241 +P2462__1__0___824 +P2572__0.4__0___0_270 +P1382__0.4__512___1024_90 +P0047__0.4__0___0 +P1432__0.4__331___0_180 +P1470__0.4__1177___1024_180 +P2653__0.4__0___0 +P1387__0.4__1536___512 +P2055__0.4__0___0_90 +P1903__0.4__0___0_270 +P2362__1__1320___824 +P2368__0.4__7___0_270 +P2714__1__3296___824 +P1432__0.4__331___0 +P2145__1__0___108 +P1464__0.4__0___0_90 +P0050__0.4__0___0_90 +P0368__1__0___363 +P2800__1__1648___1648 +P1382__0.4__512___1024 +P1903__1__0___264 +P2519__1__0___0 +P1470__0.4__0___1024_90 +P1410__0.4__1110___512 +P1382__0.4__0___512_90 +P1445__0.4__512___0_90 +P2591__1__0___1003 +P0021__1__824___1648 +P1466__1__1648___824 +P2563__1__0___145 +P2594__0.4__0___0_180 +P0111__0.4__0___0 +P0809__0.4__0___0_90 +P1380__0.4__735___0_270 +P1640__0.4__0___0_90 +P2722__1__2472___0 +P2724__1__824___0 +P0146__0.4__52___0 +P2725__1__1648___1648 +P1463__0.4__0___0 +P1470__0.4__0___1024_180 +P2565__1__744___1258 +P1410__0.4__1024___0_180 +P2659__0.4__0___0_90 +P2572__0.4__0___0 +P2657__1__864___1425 +P1410__0.4__1110___1024 +P0021__0.4__0___0_270 +P2543__1__0___1462 +P2650__1__0___0 +P2566__0.4__0___0 +P0082__0.4__0___0_90 +P1380__0.4__512___854 +P0141__1__824___0 +P2585__0.4__0___0_270 +P0164__0.4__0___0_270 +P1903__1__824___264 +P2572__1__0___1171 +P2397__1__0___971 +P2365__0.4__108___0_270 +P1410__0.4__1110___512_180 +P0149__1__0___0 +P1964__0.4__0___0 +P2655__0.4__963___0_180 +P2522__1__824___824 +P2285__1__1648___0 +P1861__1__824___2472 +P2655__1__3296___1648 +P1470__1__824___4636 +P1387__0.4__1640___512_90 +P2722__1__0___1422 +P2692__0.4__50___0_270 +P1456__1__0___0 +P1445__0.4__0___0 +P1861__0.4__0___417_90 +P2725__1__1648___824 +P2659__0.4__55___0_90 +P0030__1__1735___824 +P0021__0.4__0___0_90 +P1463__0.4__466___194_180 +P0050__0.4__0___0 +P1377__1__824___0 +P1794__0.4__0___743_90 +P1466__0.4__512___1024_90 +P1467__0.4__512___510_270 +P1458__0.4__538___917_180 +P2542__1__0___824 +P2362__1__1320___1000 +P1467__0.4__0___0_180 +P2244__1__241___0 +P2655__0.4__963___127_180 +P1387__0.4__1024___668 +P0146__0.4__52___0_90 +P2374__1__0___686 +P2659__1__0___0 +P1794__0.4__0___0_90 +P2262__0.4__0___0_90 +P2721__1__1648___824 +P0044__0.4__0___0 +P1458__0.4__512___512 +P2720__1__0___1422 +P1432__0.4__331___512_180 +P1791__0.4__0___895_90 +P1977__1__0___0 +P1700__1__2472___2472 +P1387__0.4__1024___512_180 +P1445__1__0___1648 +P0383__1__0___0 +P2716__1__1648___824 +P1382__0.4__0___1024 +P2522__1__1445___0 +P2659__0.4__55___0_180 +P0665__1__270___0 +P1432__0.4__0___690_90 +P2011__1__0___103 +P2800__1__2287___1648 +P1470__0.4__1024___0 +P1793__1__824___824 +P0082__0.4__1024___512 +P0021__1__1846___2210 +P1791__0.4__512___895_270 +P0204__0.4__0___0_270 +P0021__1__1846___824 +P2722__1__0___824 +P1903__0.4__0___0 +P2655__1__2472___1854 +P0141__0.4__0___0_180 +P1505__0.4__27___0_270 +P0124__1__0___824 +P1791__0.4__0___0 +P1382__0.4__0___1024_180 +P2162__1__0___0 +P1456__0.4__0___0_270 +P1458__0.4__512___0 +P2504__1__596___0 +P2650__1__1648___0 +P1791__1__3296___2472 +P1467__0.4__0___0_90 +P1445__0.4__512___288_90 +P2512__1__862___824 +P0146__0.4__0___0_180 +P0111__1__824___720 +P1380__0.4__735___854_270 +P1470__0.4__1024___1024_180 +P2655__0.4__512___127 +P2365__0.4__108___0_90 +P2714__1__0___1426 +P0141__1__1648___727 +P1589__1__2976___2472 
+P1780__0.4__0___0_180 +P1380__0.4__512___0_180 +P0856__0.4__0___0_270 +P0466__1__274___0 +P1863__0.4__0___0_180 +P1964__0.4__0___0_180 +P1464__1__824___824 +P1380__0.4__735___512_90 +P0158__1__824___0 +P2651__1__0___897 +P2716__1__2472___0 +P1458__0.4__538___512_270 +P2725__1__2472___824 +P2655__0.4__963___127_270 +P1445__0.4__512___0_180 +P1466__0.4__1024___0_270 +P2591__1__824___1003 +P0082__0.4__512___0_180 +P1458__0.4__538___0_180 +P1861__1__824___824 +P2692__0.4__50___0 +P1977__0.4__0___0_270 +P0383__1__111___0 +P0111__1__938___720 +P1410__0.4__0___0 +P1464__0.4__0___512 +P1382__0.4__0___1024_270 +P2650__1__1648___824 +P1470__0.4__1024___512_180 +P1791__0.4__943___895 +P1470__0.4__512___0 +P1471__0.4__896___512 +P2563__0.4__0___0_90 +P2597__0.4__0___0_90 +P1863__0.4__126___0 +P1471__0.4__512___512 +P1456__0.4__374___0_90
+P1274__1__2976___2976 +P0095__0.4__0___0 +P2422__1__1518___0 +P2278__1__0___11 +P1281__1__2472___1648 +P0099__0.4__0___0_90 +P0232__1__0___706 +P1281__1__1648___0 +P1594__0.4__0___576_270 +P1382__1__824___2472 +P1594__0.4__576___0_180 +P0095__0.4__0___0_180 +P0162__1__140___0 +P2283__0.4__0___0 +P1387__0.4__512___512 +P0098__0.4__0___0 +P2509__1__0___1211 +P1672__0.4__576___512_90 +P0898__0.4__0___512_90 +P2421__1__824___1216 +P1752__1__2976___824 +P2689__0.4__0___0_270 +P0100__1__0___0 +P0087__0.4__752___1031_270 +P2769__1__1662___1648 +P2522__1__1445___824 +P1382__0.4__512___1024_180 +P2644__0.4__512___350 +P1581__1__0___2976 +P1751__1__2472___2472 +P1748__1__2976___824 +P1747__1__1648___1648 +P1558__1__824___2472 +P1586__1__2472___1648 +P1387__0.4__1024___512_270 +P1745__1__1648___824 +P2507__1__1358___824 +P1672__0.4__576___0_90 +P2507__1__824___824 +P0095__0.4__0___0_270 +P0861__0.4__0___0_90 +P1746__1__0___2472 +P1462__0.4__512___512_180 +P2467__1__0___962 +P1770__1__824___1648 +P1388__0.4__512___1109 +P1410__0.4__1024___512_270 +P1757__1__1648___0 +P0087__0.4__752___0_180 +P0103__0.4__0___0 +P2413__1__824___0 +P0262__0.4__0___0 +P1387__0.4__0___0_180 +P1809__1__3296___5735 +P1380__1__1648___0 +P1746__0.4__0___576_180 +P0087__0.4__0___0_90 +P0098__0.4__0___0_90 +P1462__0.4__0___512 +P2479__1__610___1111 +P0085__0.4__0___0 +P1693__1__2472___0 +P1619__1__2472___824 +P1401__1__3425___824 +P1746__1__0___2976 +P0087__0.4__752___1024 +P1388__0.4__512___0_270 +P2522__1__824___0 +P1765__1__1648___824 +P1410__0.4__1024___0_90 +P1765__1__0___3935 +P2275__1__824___1303 +P2455__1__847___0 +P2466__1__683___0 +P2689__0.4__1024___342_270 +P1388__1__0___3296 +P2548__1__824___0 +P1752__1__1648___1648 +P1410__0.4__1110___512_270 +P1693__1__2472___1648 +P1388__0.4__512___1024_90 +P1388__0.4__512___512_270 +P2446__1__824___1289 +P0100__0.4__0___0_90 +P0898__0.4__76___0 +P0085__0.4__22___0_180 +P2460__1__0___824 +P1382__0.4__512___1024_90 +P0232__1__502___0 +P2509__1__0___824 +P2802__0.4__0___0_270 +P1388__0.4__512___512_90
+P0109__0.4__0___0_270 +P1594__0.4__0___576_180 +P2331__0.4__0___0_90 +P1745__1__2472___1648 +P1812__1__417___628 +P1410__0.4__1024___512_180 +P1387__0.4__0___512_90 +P1688__1__1648___824 +P0095__1__0___0 +P2689__1__4120___0 +P1380__1__1648___1648 +P0898__0.4__0___512 +P2532__1__0___824 +P1388__0.4__512___512_180 +P0094__1__0___208 +P1770__1__1648___824 +P0085__1__824___1482 +P0067__0.4__0___0_180 +P2714__1__3296___824 +P0099__0.4__0___0 +P1752__1__2472___1648 +P1594__0.4__512___512_90 +P2689__0.4__0___342_270 +P2415__1__0___1468 +P2464__1__0___0 +P1591__1__0___2472 +P1607__1__2976___824 +P0868__0.4__291___0_270 +P2689__0.4__512___0_90 +P1344__1__2472___2976 +P1747__1__824___2472 +P0098__0.4__0___0_180 +P1599__1__0___0 +P0087__0.4__752___512_270 +P0070__0.4__0___0 +P1594__0.4__512___576 +P0087__0.4__512___512_270 +P1765__1__2976___2472 +P2275__1__824___824 +P0071__0.4__0___0_90 +P2375__1__0___0 +P0099__0.4__0___0_180 +P2802__1__2472___824 +P0094__0.4__0___0 +P1388__0.4__662___1109_180 +P1240__1__2472___1648 +P0070__1__0___0 +P2331__1__824___824 +P1672__1__2976___1648 +P2460__1__824___924 +P1594__0.4__576___512_180 +P0104__1__0___0 +P1382__0.4__512___1024 +P0898__0.4__76___512_270 +P1700__1__2472___2976 +P1672__0.4__576___512_270 +P0898__0.4__0___528 +P1586__1__1648___824 +P2802__0.4__0___0_90 +P0087__1__3296___1648 +P0068__0.4__0___0 +P1380__0.4__735___512_180 +P0898__0.4__0___0_90 +P1748__1__2976___2472 +P2283__0.4__0___0_180 +P0262__0.4__0___0_270 +P1369__1__4176___1648 +P2496__1__824___824 +P2467__1__0___824 +P1410__0.4__1110___512 +P1594__1__824___1648 +P0861__1__0___0 +P2691__1__4855___2472 +P1587__1__1648___2976 +P1669__1__824___2976 +P1382__0.4__0___512_90 +P1598__1__1648___824 +P2482__1__824___824 +P1380__0.4__735___0 +P1380__0.4__512___512 +P1594__0.4__512___576_90 +P2444__1__0___953 +P1380__0.4__0___0 +P2471__1__620___0 +P2600__0.4__0___0_270 +P1380__0.4__512___0 +P1380__1__3373___2472 +P2802__0.4__1351___0 +P2464__1__889___0 +P2449__1__0___0 +P1688__1__2472___2472 +P2600__0.4__0___0 +P1759__1__2472___2976 +P1700__1__1648___2976 +P2275__1__1648___824 +P1387__0.4__0___668_270 +P0087__0.4__0___0_270 +P2532__1__0___0 +P1672__0.4__0___0 +P1380__0.4__735___0_270 +P0163__1__0___0 +P2477__1__0___824 +P1587__1__824___2472 +P1380__1__2472___2472 +P2467__1__1464___0 +P1765__1__2472___3935 +P2331__0.4__286___0_180 +P1594__0.4__0___512_180 +P2600__0.4__0___0_180 +P2416__1__825___824 +P2419__1__566___824 +P1672__0.4__512___512_180 +P1408__0.4__512___0_180 +P1281__1__1648___1648 +P1387__0.4__512___512_180 +P0087__1__3415___0 +P1765__1__0___2472 +P1585__1__1648___824 +P2430__1__0___0 +P1591__1__0___1648 +P1388__0.4__662___512_180 +P1689__1__1648___1648 +P2495__1__0___1661 +P0087__0.4__512___1024_90 +P1382__0.4__0___0_90 +P1387__0.4__512___512_270 +P0113__1__1648___3116 +P1845__1__2894___0 +P1745__1__1648___0 +P2438__1__0___0 +P2644__0.4__816___350_180 +P1410__0.4__1024___0_180 +P1462__0.4__512___512_90 +P0861__0.4__0___88 +P2196__1__246___56 +P0085__1__1591___824 +P2454__1__0___824 +P1748__1__0___824 +P0070__1__154___0 +P0898__1__1648___824 +P1672__0.4__576___0_180 +P0087__1__824___4113 +P0872__0.4__0___0_180 +P1751__1__2976___0 +P2446__1__991___1289 +P0898__0.4__0___0_180 +P2543__1__1082___1462 +P1387__0.4__512___668_90 +P0094__1__156___208 +P1705__1__824___0 +P2213__1__756___0 +P1380__1__824___0 +P2689__1__3296___0 +P1594__0.4__512___0_180 +P2444__1__884___953 +P1672__0.4__0___0_270 +P0096__0.4__0___0 +P2196__1__0___0 +P1382__0.4__0___1536 +P1277__1__824___824 +P1856__1__824___824 
+P1594__0.4__512___576_180 +P1387__1__0___3206 +P1594__0.4__576___0_270 +P2417__1__0___824 +P2419__1__566___1122 +P1387__0.4__512___668_270 +P2689__0.4__1439___0_90 +P1258__1__1648___824 +P0087__0.4__512___0_180 +P1704__1__3269___2976 +P2375__1__541___0 +P1669__1__1648___824 +P1358__1__824___824 +P0898__0.4__0___528_270 +P1589__1__2976___0 +P2689__1__2472___0 +P1369__1__4120___1648 +P2275__1__824___0 +P0087__0.4__0___512_180 +P1380__0.4__512___854 +P1401__1__3296___824 +P2529__1__0___0 +P1597__1__0___3954 +P2464__1__824___0 +P2450__1__0___0 +P2671__1__0___1899 +P1649__1__1648___2472 +P1673__1__1648___1648 +P0164__0.4__0___0_270 +P1669__1__824___824 +P1751__1__0___0 +P1408__0.4__0___512 +P0086__0.4__0___0_270 +P1387__0.4__0___668_180 +P1462__1__0___824 +P2454__1__911___824 +P2689__0.4__1439___0_180 +P2689__0.4__0___0_180 +P2438__1__353___824 +P1751__1__2472___824 +P1770__1__824___824 +P1759__1__2976___2976 +P2802__1__1648___0 +P2283__1__0___0 +P1446__1__1022___1648 +P0158__1__2116___824 +P2531__1__0___1282 +P1410__0.4__1110___512_180 +P0087__1__824___0 +P1597__1__824___3954 +P0164__1__0___222 +P2213__1__756___333 +P1598__1__2976___0 +P1670__1__0___2472 +P0087__0.4__512___0_270 +P0872__0.4__0___0_90 +P2282__0.4__0___213_90 +P2689__0.4__512___0 +P0087__1__0___3296 +P1382__1__1648___824 +P1410__0.4__1110___0_90 +P1380__0.4__735___512_270 +P1673__1__2472___2472 +P1747__1__2472___2976 +P1387__0.4__0___668_90 +P0087__0.4__752___1024_270 +P1382__0.4__0___512_180 +P1277__1__0___824 +P1388__0.4__662___0_180 +P1380__0.4__512___512_180 +P2477__1__600___824 +P1673__1__2976___2472 +P2203__0.4__0___0 +P1370__1__2976___824 +P2428__1__1540___824 +P1344__1__3296___2472 +P0087__0.4__0___1024_90 +P1591__1__3269___2472 +P1594__1__824___824 +P1579__1__2976___1648 +P0109__0.4__0___0_180 +P1388__0.4__662___1024_270 +P1670__1__0___0 +P2517__1__1194___941 +P2497__1__0___1465 +P1388__0.4__662___0_90 +P2421__1__1068___0 +P1410__1__3296___1648 +P1594__0.4__0___0_270 +P0136__0.4__138___0_180 +P1384__1__0___1648 +P2282__1__824___2069 +P2802__0.4__1351___0_180 +P1277__1__824___1648 +P2689__0.4__512___342_90 +P2413__1__1159___0 +P1594__0.4__576___512_270 +P2444__1__0___824 +P1380__0.4__512___512_90 +P2416__1__824___824 +P1586__1__0___2472 +P0136__0.4__138___0_270 +P1380__0.4__735___854_180 +P1599__1__2472___0 +P2600__1__0___0 +P1594__0.4__0___512 +P0070__1__0___208 +P2479__1__610___824 +P0136__0.4__0___7_180 +P1688__1__2472___2976 +P2464__1__889___824 +P0087__0.4__0___0 +P2539__1__824___824 +P1344__1__2472___2472 +P2450__1__656___0 +P2446__1__991___824 +P0087__1__3415___1648 +P2275__1__2053___824 +P1594__0.4__0___512_270 +P2271__1__0___0 +P1594__0.4__576___576_90 +P1591__1__0___0 +P1672__0.4__512___512_90 +P0872__0.4__0___0_270 +P1669__1__0___2472 +P0087__0.4__512___1031_270 +P2485__1__0___824 +P0081__0.4__0___0_180 +P2538__1__824___824 +P1752__1__0___824 +P1594__0.4__512___0 +P2606__1__0___1630 +P1387__0.4__1024___668 +P1672__0.4__576___512 +P1672__0.4__576___512_180 +P2769__1__1662___2472 +P0068__1__0___0 +P1388__1__824___4120 +P0081__1__0___0 +P1408__0.4__512___0_270 +P0136__0.4__0___7_270 +P2532__1__824___824 +P1672__1__0___0 +P1660__1__0___2472 +P2538__1__0___824 +P1693__1__2472___824 +P2417__1__337___1075 +P2279__1__1648___824 +P1598__1__2976___1648 +P0085__0.4__0___0_180 +P1724__1__0___1648 +P2644__0.4__512___350_180 +P2532__1__824___0 +P2416__1__825___0 +P1672__1__824___0 +P1599__1__824___3296 +P1841__1__824___0 +P2279__1__1648___0 +P1746__0.4__0___576 +P1747__1__824___2976 +P1388__0.4__0___512 
+P2714__1__3513___1426 +P2331__1__824___935 +P2721__1__1648___824 +P2282__0.4__0___213 +P1756__1__1648___824 +P2420__1__0___0 +P1705__1__2472___1648 +P1380__0.4__735___0_180 +P1589__1__1648___1648 +P1656__1__2976___1648 +P1757__1__1648___2976 +P1387__0.4__1024___668_270 +P0087__1__2472___2472 +P2802__0.4__512___0_270 +P1700__1__2472___2472 +P2505__1__0___0 +P2442__1__0___0 +P1387__0.4__1024___512_180 +P1388__0.4__512___0 +P0136__0.4__0___7_90 +P0087__0.4__752___512_180 +P2802__0.4__1024___0_180 +P1387__0.4__0___512 +P1382__0.4__0___1024 +P2644__0.4__512___350_90 +P0086__1__133___0 +P1410__0.4__1110___0_270 +P1388__0.4__0___1109_90 +P1462__0.4__512___512_270 +P1589__1__2472___0 +P0898__1__1725___824 +P1594__0.4__0___0_180 +P2507__1__1358___1251 +P1748__1__824___2472 +P0262__0.4__0___0_180 +P0136__0.4__138___7_180 +P0898__0.4__76___0_270 +P1747__1__824___824 +P2422__1__824___0 +P1388__1__824___2472 +P1589__1__1648___0 +P0104__0.4__0___0_90 +P1462__1__824___824 +P1598__1__824___1648 +P1700__1__2976___2976 +P1587__1__0___2472 +P1462__0.4__512___0_180 +P2609__1__2472___3836 +P2545__1__0___824 +P2802__0.4__1024___0_90 +P0898__0.4__76___0_90 +P1746__0.4__0___512 +P1594__0.4__0___512_90 +P1594__1__2472___2976 +P0087__0.4__512___1031_180 +P1748__1__2976___2976 +P1380__1__3296___1648 +P1702__1__824___2976 +P1594__1__2976___824 +P1598__1__0___2472 +P1600__1__2472___1648 +P0071__0.4__0___0_180 +P1746__0.4__0___512_90 +P1274__1__2976___2472 +P2609__1__1648___3836 +P0898__0.4__76___528 +P1382__0.4__512___512_180 +P1598__1__2976___824 +P1757__1__1648___824 +P2282__1__0___2069 +P2416__1__0___1164 +P1700__1__2976___2472 +P2454__1__911___928 +P2468__1__0___824 +P1382__0.4__512___0_90 +P1598__1__824___3954 +P0868__0.4__0___0 +P0136__0.4__138___7 +P1720__1__824___2472 +P1382__0.4__512___1024_270 +P2427__1__824___824 +P0868__0.4__0___0_180 +P0085__0.4__0___0_270 +P2475__1__866___824 +P1382__0.4__0___1024_180 +P0861__0.4__0___0_270 +P0099__1__133___460 +P2283__0.4__0___0_270 +P1594__0.4__512___0_270 +P1594__0.4__0___0 +P2472__1__0___824 +P2689__0.4__1024___0_90 +P1765__1__824___2472 +P2802__1__3296___824 +P0898__0.4__0___0_270 +P2504__1__596___0 +P1558__1__1648___2472 +P0096__0.4__0___0_270 +P2417__1__337___824 +P1388__0.4__662___1109_90 +P0868__0.4__291___0 +P2691__1__4855___824 +P1388__0.4__0___1024_90 +P1594__1__1648___824 +P0136__0.4__138___7_90 +P1856__1__0___1395 +P2523__1__767___1648 +P1382__0.4__0___0 +P1382__1__824___3296 +P0103__0.4__0___0_90 +P1380__0.4__735___854_270 +P1754__1__0___0 +P0087__0.4__752___1024_180 +P2467__1__824___824 +P2689__0.4__0___342_90 +P0087__1__0___4113 +P1599__1__824___3954 +P1719__1__0___2976 +P2689__0.4__1024___342_90 +P2418__1__517___824 +P2548__1__1138___0 +P2547__1__0___0 +P1388__0.4__662___512 +P2543__1__824___1462 +P0898__0.4__0___512_270 +P1608__1__2472___2976 +P1669__1__1648___1648 +P1388__1__0___4120 +P0087__1__2472___824 +P1768__1__0___2472 +P1669__1__0___2976 +P1380__0.4__512___854_270 +P2689__0.4__0___342 +P1672__1__2976___824 +P2507__1__824___1251 +P2471__1__0___824 +P1594__0.4__0___576 +P1380__1__1648___2472 +P2802__0.4__1351___0_90 +P0087__1__0___0 +P1388__1__1648___1648 +P1380__0.4__512___512_270 +P1747__1__2976___2976 +P1380__0.4__512___0_180 +P0098__0.4__0___0_270 +P1746__0.4__0___576_90 +P0162__1__0___0 +P2507__1__0___824 +P1388__0.4__662___1024_90 +P1382__0.4__0___1536_270 +P1594__1__2472___824 +P1747__1__0___2976 +P2802__1__3296___0 +P2802__0.4__512___0 +P2468__1__0___856 +P0800__0.4__0___0 +P1408__0.4__0___0_270 +P1563__1__0___2472 +P1589__1__2472___1648 
+P1751__1__1648___0 +P1408__0.4__0___0_180 +P1380__1__3296___3296 +P1388__1__0___4309 +P2802__1__2472___0 +P1672__0.4__512___0 +P2454__1__824___928 +P1382__0.4__512___512_90 +P2689__0.4__1024___0_180 +P1672__0.4__0___0_180 +P2496__1__900___0 +P1408__0.4__0___0_90 +P1809__1__4120___3296 +P2471__1__0___1104 +P1841__1__936___721 +P0068__0.4__0___0_90 +P0136__1__1648___1554 +P2802__0.4__1351___0_270 +P1700__1__824___2976 +P2203__0.4__0___0_180 +P1666__1__824___824 +P1387__0.4__512___512_90 +P1380__0.4__0___512_90 +P0868__0.4__0___0_270 +P2430__1__0___824 +P1688__1__1648___0 +P1758__1__0___2472 +P1703__1__2472___1648 +P1380__1__1648___824 +P1388__0.4__662___512_90 +P1380__0.4__735___512_90 +P2529__1__924___0 +P1747__1__2472___1648 +P0085__1__1591___0 +P2542__1__0___0 +P1257__1__2976___1648 +P0096__1__0___0 +P0800__1__0___824 +P1462__0.4__0___512_270 +P2485__1__548___824 +P0085__1__824___0 +P2331__0.4__286___0_270 +P1599__1__824___0 +P1586__1__0___2976 +P1387__0.4__0___512_270 +P1384__1__1648___824 +P2689__0.4__512___0_180 +P1387__1__2472___2472 +P1672__0.4__512___512 +P1410__0.4__1024___512 +P2522__1__1445___1007 +P1387__0.4__1024___512_90 +P2444__1__824___953 +P1768__1__2976___824 +P1388__0.4__512___0_180 +P0094__0.4__0___0_270 +P1388__0.4__0___0_270 +P1841__1__824___721 +P0067__1__132___459 +P0081__0.4__0___0 +P1387__0.4__0___668 +P2520__1__0___0 +P1747__1__2976___2472 +P1809__1__2472___3296 +P1598__1__1648___2472 +P1748__1__824___2976 +P1382__0.4__512___0_180 +P1560__1__2976___2976 +P0087__0.4__752___1024_90 +P1693__1__2976___0 +P2482__1__0___824 +P1408__0.4__512___512_90 +P0164__0.4__0___0_180 +P1747__1__1648___824 +P2416__1__824___0 +P0067__0.4__0___0_90 +P0087__0.4__752___0 +P1594__0.4__576___0 +P1380__0.4__0___512_180 +P1751__1__0___824 +P2496__1__0___824 +P2282__0.4__0___213_270 +P2495__1__0___1648 +P1589__1__0___824 +P2644__1__2472___2412 +P1758__1__0___2976 +P2689__0.4__1439___0_270 +P2689__1__2472___824 +P2531__1__0___824 +P1382__0.4__0___1024_270 +P2477__1__0___0 +P1757__1__2472___824 +P2417__1__0___1075 +P1388__0.4__512___1024 +P1388__0.4__0___1109_270 +P1462__1__0___1648 +P2479__1__0___824 +P1462__0.4__0___0 +P1382__1__0___4120 +P0085__1__824___824 +P2517__1__1194___824 +P2494__1__788___0 +P2689__0.4__512___342_270 +P0158__1__0___1551 +P0087__0.4__512___512_180 +P1380__1__824___824 +P2644__0.4__816___350 +P1757__1__2472___2976 +P2418__1__0___824 +P2457__1__0___824 +P0232__1__0___0 +P1388__0.4__0___1109 +P1763__1__1648___0 +P2496__1__900___904 +P1649__1__1648___2976 +P1382__0.4__0___1536_180 +P1672__0.4__512___0_90 +P2375__1__0___824 +P1673__1__824___2472 +P1408__0.4__0___512_270 +P0087__0.4__0___1031_270 +P1670__1__0___824 +P1382__0.4__512___0_270 +P1609__1__0___0 +P1660__1__0___2976 +P1809__1__3296___4944 +P2488__1__824___824 +P1462__0.4__0___0_270 +P0071__0.4__0___0 +P0068__0.4__0___0_270 +P0099__1__0___0 +P0095__0.4__0___0_90 +P1388__0.4__0___0_90 +P1607__1__2976___0 +P0100__0.4__0___0 +P2413__1__0___0 +P2478__1__727___0 +P2689__0.4__1024___342 +P2282__0.4__206___213_270 +P1594__1__2976___2472 +P2446__1__991___0 +P1388__0.4__512___1109_90 +P0085__1__0___1482 +P1384__1__824___4611 +P0800__1__0___888 +P1598__1__2472___2472 +P2432__1__0___0 +P0086__0.4__0___0_180 +P1594__1__1648___0 +P1751__1__2976___2472 +P1745__1__824___824 +P0087__0.4__0___1024_180 +P2478__1__727___598 +P0898__0.4__76___528_270 +P1283__1__824___0 +P2802__0.4__0___0_180 +P1380__0.4__0___512_270 +P1594__1__2472___0 +P2275__1__2053___1303 +P0087__0.4__512___0_90 +P1387__1__2472___3206 +P2714__1__3296___1426 
+P1551__1__2976___1648 +P1754__1__2472___0 +P1597__1__1648___1648 +P2542__1__824___1463 +P0087__0.4__0___1024_270 +P2523__1__767___0 +P1179__1__3296___2472 +P2517__1__824___824 +P0094__0.4__0___0_180 +P1763__1__824___2472 +P2427__1__0___0 +P1589__1__0___2976 +P0103__0.4__0___0_270 +P2609__1__1648___3296 +P1388__1__824___1648 +P2547__1__824___824 +P2466__1__0___0 +P1387__0.4__0___512_180 +P1748__1__824___824 +P1446__1__824___1648 +P1666__1__0___0 +P0898__0.4__0___528_90 +P1770__1__2472___824 +P0094__0.4__0___0_90 +P1581__1__0___1648 +P1809__1__2472___2472 +P1382__0.4__0___512 +P2496__1__824___0 +P2331__0.4__286___0_90 +P0067__0.4__0___0 +P0800__0.4__0___0_180 +P2282__0.4__206___213_180 +P1673__1__1648___2472 +P2689__0.4__1439___342_90 +P1462__0.4__512___512 +P2430__1__824___0 +P1591__1__2472___1648 +P2689__0.4__1439___342_270 +P2475__1__824___0 +P1380__0.4__0___854_180 +P2538__1__863___824 +P1765__1__824___824 +P1382__0.4__0___1024_90 +P0872__1__0___0 +P1594__0.4__512___0_90 +P1809__1__3296___3296 +P0109__1__0___0 +P2468__1__679___856 +P2714__1__3513___824 +P2507__1__0___1251 +P1594__0.4__576___576 +P1387__0.4__1024___668_180 +P1589__1__0___1648 +P1401__1__3425___0 +P1589__1__2976___824 +P1380__0.4__512___0_90 +P1602__1__0___0 +P2452__1__609___824 +P0136__1__1648___824 +P1672__0.4__512___0_180 +P1384__1__4120___1648 +P2496__1__900___824 +P1756__1__1648___0 +P1380__1__3296___2472 +P2418__1__517___0 +P1388__1__2472___2472 +P2689__0.4__1024___0 +P1242__1__2976___2976 +P0136__1__1880___1554 +P2689__0.4__1024___342_180 +P1574__1__2976___1648 +P2538__1__0___0 +P2724__1__3517___0 +P2421__1__824___824 +P2485__1__548___0 +P2331__0.4__0___0 +P1358__1__1648___1648 +P1380__0.4__0___0_180 +P0086__1__0___0 +P1382__0.4__512___512 +P1388__0.4__0___1109_180 +P2283__1__824___0 +P0075__1__0___0 +P1619__1__2976___1648 +P2689__0.4__0___342_180 +P1388__0.4__512___1109_270 +P1745__1__824___2976 +P2518__1__823___824 +P1608__1__0___2976 +P0800__0.4__0___0_270 +P0086__1__133___459 +P1720__1__824___1648 +P1610__1__824___0 +P2522__1__0___1007 +P0087__0.4__512___512 +P2203__1__446___0 +P2468__1__679___824 +P1462__0.4__512___0_90 +P0800__1__0___0 +P0087__0.4__512___1031_90 +P1387__1__0___2472 +P1591__1__1648___1648 +P0079__1__0___0 +P2547__1__2704___0 +P2644__0.4__816___350_90 +P0086__0.4__0___0_90 +P1765__1__0___0 +P0098__1__0___0 +P0872__0.4__0___0 +P1609__1__0___1648 +P1704__1__3269___2472 +P0162__1__0___439 +P2689__1__3296___824 +P1384__1__0___3296 +P1747__1__2472___824 +P1754__1__1648___0 +P1600__1__2472___2472 +P1358__1__824___1648 +P0898__0.4__76___528_90 +P0085__0.4__22___0 +P2689__0.4__512___342 +P0086__0.4__0___0 +P2496__1__0___0 +P2397__1__0___0 +P1758__1__824___2472 +P1594__1__2976___0 +P1380__1__3373___1648 +P1384__1__824___2472 +P0136__1__1880___824 +P0087__0.4__512___512_90 +P0067__0.4__0___0_270 +P0096__0.4__0___0_180 +P0096__1__0___163 +P1594__1__1648___2976 +P1591__1__3269___1648 +P1589__1__2976___1648 +P2542__1__1082___1463 +P1598__1__0___0 +P2454__1__0___928 +P1388__1__824___4309 +P1410__0.4__1110___0 +P1370__1__2472___824 +P1384__1__4840___2472 +P1594__0.4__576___512_90 +P1594__1__1648___1648 +P2523__1__0___0 +P2282__0.4__0___213_180 +P0861__0.4__0___88_180 +P1388__0.4__662___1024_180 +P1653__1__0___824 +P1608__1__824___2976 +P2457__1__0___0 +P1388__1__0___1648 +P0070__0.4__0___0_270 +P2203__0.4__0___0_90 +P1598__1__824___2472 +P1462__0.4__0___512_180 +P2275__1__0___1303 +P0087__0.4__752___512_90 +P0136__0.4__0___0_90 +P2513__1__807___0 +P1388__0.4__0___512_90 +P0136__0.4__0___0_180 
+P0099__1__0___460 +P2421__1__1068___1216 +P2460__1__921___824 +P0898__0.4__0___528_180 +P2479__1__0___1111 +P0087__0.4__752___1031 +P0109__0.4__0___0 +P1763__1__824___1648 +P2689__0.4__0___0_90 +P0162__1__140___439 +P0136__0.4__0___0 +P0158__1__0___0 +P1598__1__2472___824 +P0136__0.4__138___0 +P0087__1__824___824 +P0087__0.4__512___1024 +P1765__1__824___0 +P2606__1__0___824 +P1751__1__2976___1648 +P1757__1__2472___1648 +P1765__1__824___1648 +P0085__0.4__22___0_270 +P1672__0.4__512___512_270 +P1277__1__0___1648 +P1594__0.4__576___576_180 +P1387__0.4__0___0_270 +P0087__0.4__512___1024_180 +P0103__0.4__0___0_180 +P2608__1__0___1427 +P0158__1__0___824 +P0085__0.4__0___0_90 +P1408__1__1648___824 +P2438__1__0___824 +P1765__1__2472___3296 +P2522__1__0___824 +P0868__0.4__0___0_90 +P1382__0.4__0___0_180 +P1380__0.4__0___0_90 +P2211__1__824___824 +P1586__1__824___824 +P2416__1__824___1164 +P0087__1__0___824 +P1673__1__0___0 +P2600__0.4__0___0_90 +P1408__0.4__512___512_180 +P2417__1__337___0 +P1594__1__1648___2472 +P1580__1__0___1648 +P1401__1__3296___0 +P0087__0.4__0___512 +P2331__0.4__286___0 +P1598__1__2472___1648 +P0136__0.4__138___0_90 +P1594__1__0___824 +P0087__0.4__752___0_90 +P1746__0.4__0___576_270 +P0087__0.4__512___0 +P1594__0.4__576___0_90 +P2689__0.4__1439___342_180 +P0099__0.4__0___0_270 +P1274__1__2472___2976 +P1594__0.4__576___512 +P0136__1__0___1554 +P1672__1__2976___0 +P1672__1__1648___0 +P1747__1__0___1648 +P2275__1__0___824 +P2547__1__0___824 +P2279__1__824___0 +P0087__1__2472___1648 +P2689__1__1648___824 +P1581__1__0___2472 +P1384__1__0___4611 +P1594__1__2976___2976 +P2691__1__3296___3296 +P2435__1__0___0 +P1765__1__2472___2472 +P2644__0.4__512___350_270 +P0086__1__0___459 +P1598__1__1648___3954 +P2689__0.4__512___0_270 +P2467__1__1464___962 +P2415__1__1022___824 +P1380__0.4__735___854_90 +P1380__0.4__512___854_180 +P1240__1__2472___2472 +P0158__1__1648___0 +P0070__0.4__0___0_90 +P1589__1__1648___2472 +P0071__0.4__0___0_270 +P2547__1__824___1401 +P0087__1__824___3296 +P1752__1__2976___0 +P0898__0.4__76___512 +P1700__1__0___2976 +P2644__1__3296___2412 +P2467__1__1464___824 +P2689__0.4__1439___0 +P1388__1__1648___3296 +P0087__0.4__0___1031 +P2545__1__0___1293 +P1387__0.4__0___0_90 +P2538__1__863___0 +P1752__1__2472___824 +P0109__1__155___0 +P2331__0.4__0___0_180 +P1382__0.4__0___512_270 +P2455__1__824___0 +P2446__1__0___0 +P1619__1__2976___824 +P1704__1__1648___1648 +P2203__1__0___869 +P1408__0.4__0___512_180 +P1410__0.4__1024___512_90 +P1589__1__2472___824 +P2518__1__823___0 +P0898__0.4__76___0_180 +P1765__1__1648___2472 +P0099__1__133___0 +P2547__1__0___1401 +P2802__0.4__1024___0_270 +P2331__0.4__0___0_270 +P0094__1__0___0 +P1388__0.4__512___512 +P2283__1__1283___0 +P0087__0.4__752___1031_90 +P2438__1__353___0 +P2427__1__824___1587 +P1560__1__0___0 +P1384__1__0___4120 +P2279__1__824___824 +P1752__1__2976___1648 +P1745__1__0___2976 +P2644__0.4__816___350_270 +P0898__0.4__76___512_180 +P0087__0.4__0___1031_180 +P0087__0.4__752___512 +P0087__0.4__0___1024 +P2802__0.4__512___0_90 +P1408__0.4__512___0_90 +P2485__1__548___1360 +P1672__0.4__576___0 +P0164__1__0___0 +P2446__1__824___0 +P0087__1__1648___2472 +P0904__1__31___0 +P1340__1__824___824 +P1388__0.4__662___0_270 +P0087__1__1648___1648 +P2467__1__824___962 +P1666__1__824___0 +P0094__1__156___0 +P2446__1__824___824 +P1408__0.4__512___512 +P1585__1__824___824 +P0085__1__0___824 +P2465__1__0___0 +P2512__1__0___0 +P1598__1__1648___3296 +P1753__1__824___824 +P0898__0.4__0___512_180 +P1594__0.4__512___512 +P0898__0.4__76___512_90 
+P2689__0.4__1024___0_270 +P1746__0.4__0___512_270 +P2477__1__600___0 +P0232__1__502___706 +P1672__0.4__576___0_270 +P1587__1__824___2976 +P0164__1__423___0 +P2203__1__446___869 +P2211__1__824___0 +P2609__1__2472___3296 +P2282__1__0___1648 +P1382__0.4__0___1536_90 +P2523__1__767___1687 +P0136__0.4__138___7_270 +P1669__1__824___1648 +P0068__0.4__0___0_180 +P1069__1__824___2323 +P2444__1__824___824 +P1388__0.4__0___0_180 +P1380__0.4__0___0_270 +P0262__1__0___0 +P1240__1__1648___824 +P1672__1__2472___1648 +P2436__1__400___845 +P2802__0.4__1024___0 +P0100__0.4__0___0_180 +P2437__1__824___0 +P1462__1__824___1648 +P1594__0.4__0___576_90 +P0158__1__1648___1551 +P1768__1__2976___1648 +P1274__1__2472___2472 +P0104__0.4__0___0_270 +P1388__0.4__662___1109 +P1388__0.4__0___1024_270 +P0087__0.4__0___0_180 +P1765__1__824___3296 +P1380__1__2472___0 +P0087__0.4__512___1024_270 +P1669__1__3270___824 +P1594__0.4__576___576_270 +P1699__1__0___824 +P2541__1__1590___0 +P2518__1__0___824 +P1388__1__824___3296 +P0262__0.4__0___0_90 +P2539__1__0___824 +P1563__1__0___1648 +P1599__1__3269___2472 +P2432__1__0___767 +P1580__1__0___2472 +P2689__1__1648___0 +P2482__1__1050___824 +P0160__1__0___0 +P0868__0.4__291___0_90 +P1388__0.4__512___1024_180 +P0861__0.4__0___88_90 +P0070__1__154___208 +P1754__1__0___3921 +P1672__0.4__512___0_270 +P1594__0.4__512___512_180 +P1380__0.4__512___854_90 +P2282__1__824___1648 +P2735__1__0___824 +P2460__1__0___924 +P1455__1__824___0 +P2416__1__825___1164 +P1719__1__0___2472 +P1384__1__4840___1648 +P2689__1__824___824 +P0071__1__0___0 +P1384__1__824___3296 +P1672__0.4__0___0_90 +P0087__1__3415___824 +P2769__1__1648___2472 +P1765__1__0___824 +P1587__1__824___1648 +P1601__1__1648___824 +P0868__0.4__291___0_180 +P1751__1__824___0 +P2504__1__0___0 +P0087__0.4__0___1031_90 +P1768__1__0___1648 +P1669__1__824___2472 +P2283__0.4__0___0_90 +P1410__0.4__1024___0 +P0087__1__3296___824 +P2196__1__0___56 +P1587__1__1648___2472 +P0109__1__0___208 +P0087__1__3296___0 +P2460__1__921___924 +P1746__0.4__0___512_180 +P2426__1__0___824 +P0872__1__0___152 +P0861__0.4__0___0_180 +P0070__0.4__0___0_180 +P1380__0.4__735___512 +P2485__1__0___0 +P1714__1__3270___2976 +P1747__1__1648___2472 +P1366__1__4120___824 +P0100__0.4__0___0_270 +P1370__1__1648___824 +P1594__0.4__0___0_90 +P2464__1__0___824 +P0109__0.4__0___0_90 +P1745__1__2976___1648 +P2520__1__0___824 +P0868__1__824___1314 +P1408__1__824___824 +P2802__0.4__512___0_180 +P0081__0.4__0___0_90 +P1366__1__3296___824 +P0136__0.4__0___0_270 +P2375__1__541___824 +P0085__0.4__22___0_90 +P2689__0.4__512___342_180 +P1462__0.4__512___0 +P2415__1__824___1468 +P1344__1__3296___2976 +P1751__1__2976___824 +P2802__0.4__0___0 +P1408__0.4__512___0 +P2476__1__0___0 +P2211__1__0___824 +P0109__1__155___208 +P0087__1__2472___0 +P1560__1__0___824 +P2203__1__446___824 +P0087__0.4__0___512_270 +P1382__1__824___824 +P0861__0.4__0___88_270 +P1387__0.4__0___0 +P0095__1__0___163 +P0158__1__2116___0 +P0078__1__0___0 +P1382__0.4__512___512_270 +P2689__1__4120___824 +P1757__1__1648___1648 +P1462__0.4__0___0_180 +P1387__0.4__1024___512 +P1748__1__2472___824 +P2454__1__824___824 +P1753__1__0___824 +P1703__1__2472___2976 +P1594__0.4__512___512_270 +P1462__0.4__512___0_270 +P1382__0.4__0___0_270 +P1388__0.4__0___0 +P0087__0.4__0___512_90 +P0087__0.4__752___0_270 +P2472__1__0___1348 +P2475__1__866___0 +P0800__0.4__0___0_90 +P2416__1__0___824 +P1380__0.4__0___854 +P2203__0.4__0___0_270 +P1453__1__1298___718 +P2769__1__1648___1648 +P2203__1__0___824 +P0164__0.4__0___0_90 +P2282__0.4__206___213 
+P2724__1__3296___0 +P2429__1__0___1796 +P0081__0.4__0___0_270 +P2464__1__824___824 +P1388__0.4__662___1024 +P2196__1__246___0 +P0074__1__0___0 +P0104__0.4__0___0_180 +P1388__0.4__0___512_270 +P1388__0.4__512___1109_180 +P0067__1__0___459 +P1591__1__3269___2976 +P1841__1__936___0 +P0076__1__0___0 +P1410__0.4__1110___512_90 +P2421__1__1068___824 +P1752__1__1648___2472 +P0067__1__0___0 +P1753__1__0___2472 +P1388__0.4__0___1024 +P2529__1__824___0 +P2415__1__1022___1468 +P1758__1__824___2976 +P1380__0.4__512___0_270 +P1607__1__2472___824 +P2436__1__400___824 +P1388__0.4__0___512_180 +P2471__1__0___0 +P1388__0.4__512___0_90 +P1574__1__2472___1648 +P0164__1__423___222 +P2517__1__0___0 +P2278__1__0___0 +P0898__1__1725___1648 +P0164__0.4__0___0 +P1705__1__2976___1648 +P1702__1__824___2472 +P1602__1__0___824 +P0898__1__1648___1648 +P2689__0.4__0___0 +P1387__0.4__512___668_180 +P0104__0.4__0___0 +P1763__1__824___2976 +P2437__1__0___0 +P1388__0.4__662___1109_270 +P1380__1__3373___3296 +P1380__0.4__735___0_90 +P2512__1__0___824 +P1410__0.4__1024___0_270 +P1560__1__2976___2472 +P2444__1__884___824 +P1751__1__0___1648 +P2517__1__824___941 +P2275__1__0___0 +P1380__0.4__0___854_270 +P1388__0.4__662___512_270 +P1702__1__0___2976 +P0898__0.4__0___0 +P1387__1__824___1648 +P1388__0.4__0___1024_180 +P2452__1__609___1136 +P1689__1__2472___1648 +P0087__0.4__752___1031_180 +P1380__0.4__735___854 +P1856__1__824___1395 +P2417__1__0___0 +P0067__1__132___0 +P1408__0.4__0___0 +P2475__1__824___824 +P2446__1__0___824 +P2485__1__0___1360 +P2496__1__0___904 +P1462__0.4__0___512_90 +P0898__0.4__76___528_180 +P1410__0.4__1110___0_180 +P2430__1__0___1192 +P1462__0.4__0___0_90 +P0096__0.4__0___0_90 +P1598__1__2472___0 +P1551__1__2472___1648 +P1751__1__2472___0 +P1589__1__824___0 +P1380__0.4__0___512 +P1388__0.4__662___0 +P1753__1__0___1648 +P1380__0.4__0___854_90 +P1387__0.4__512___668 +P1752__1__0___2472 +P0136__1__0___824 +P1765__1__0___3296 +P1753__1__824___1648 +P1693__1__1648___1648 +P1388__0.4__512___1024_270 +P2282__0.4__206___213_90 +P0126__1__0___0 +P2427__1__824___0 +P0087__0.4__512___1031 +P2539__1__824___1196 +P2429__1__0___1648 +P2539__1__0___1196 +P1382__0.4__512___0 +P1752__1__824___1648 +P2488__1__1298___824 +P1809__1__4120___2472 +P0103__1__0___0 +P0904__1__0___0 +P2496__1__824___904 +P0136__0.4__0___7 +P1765__1__1648___1648 +P1747__1__824___1648 +P2211__1__0___0 +P2426__1__0___1189 +P0861__0.4__0___0 +P2689__0.4__1439___342 +P0098__1__0___163 +P2430__1__835___0 +P2538__1__824___0 +P0158__1__1648___824 +P1387__1__0___1648 +P1408__0.4__512___512_270 +P1387__0.4__1024___668_90 +P1408__0.4__0___512_90 +P0104__1__0___163 +P1581__1__824___2976 +P1699__1__0___0 +P1594__0.4__512___576_270 +P2460__1__824___824 +P1586__1__0___0 +P2203__1__0___0 +P2791__1__3988___3296 +P1139__0.4__1536___512_270 +P1098__1__1316___824 +P1098__1__824___0 +P1150__1__1648___2199 +P1483__1__693___875 +P1149__0.4__0___622_180 +P1159__1__1648___1648 +P1170__1__2472___1648 +P2235__1__0___0 +P1393__1__824___0 +P1149__0.4__224___622 +P1149__0.4__224___0_270 +P1139__0.4__1536___0_90 +P1393__1__1648___2472 +P1149__1__1648___824 +P1479__1__824___1042 +P1393__1__3296___1648 +P1139__0.4__1024___1024_90 +P1131__0.4__656___0_90 +P1139__0.4__1536___1024 +P1139__0.4__1024___512 +P1139__0.4__1536___0 +P0169__1__2472___1648 +P1131__0.4__656___512_90 +P1486__1__0___0 +P1139__0.4__1024___512_90 +P1131__0.4__512___0_270 +P1508__1__0___0 +P2226__1__3296___0 +P1159__1__824___1648 +P1393__1__3296___824 +P1150__1__0___2199 +P1098__1__0___1125 
+P1394__1__1648___824 +P1150__0.4__0___0_90 +P1150__0.4__0___265 +P1150__0.4__114___0 +P1393__1__2472___824 +P1139__0.4__1024___1024_270 +P1397__1__824___2442 +P1487__1__717___361 +P2269__1__205___0 +P0223__1__0___0 +P1149__1__824___2472 +P1150__0.4__0___265_180 +P1181__1__2472___1648 +P1393__1__0___0 +P1149__0.4__224___512 +P1139__0.4__1024___1024_180 +P1181__1__2663___824 +P2242__1__0___0 +P0169__1__3296___824 +P1391__1__3296___1648 +P1150__1__1821___2199 +P0169__1__1648___0 +P1150__0.4__114___265_90 +P1391__1__0___2916 +P1139__0.4__1024___512_270 +P1179__1__2472___824 +P0255__1__0___0 +P0169__1__3296___3072 +P1149__0.4__0___0_90 +P1394__1__2472___824 +P1159__1__2384___2472 +P1139__0.4__1536___512 +P1159__1__1648___824 +P1139__0.4__1536___0_270 +P1390__1__4120___2074 +P1149__0.4__224___512_180 +P1150__0.4__0___265_270 +P1483__1__0___875 +P1149__1__824___3090 +P1131__0.4__512___0 +P1149__0.4__224___622_270 +P1150__1__824___2199 +P1508__1__0___1648 +P1508__1__548___1648 +P1391__1__0___2472 +P1487__1__0___0 +P1131__0.4__656___512 +P1149__1__2097___0 +P1149__0.4__224___512_270 +P0220__1__0___182 +P1131__0.4__656___0 +P1150__0.4__0___0 +P0168__1__1280___768 +P2226__1__2472___178 +P1395__1__3296___824 +P1131__0.4__512___512_270 +P1181__1__2663___1648 +P1149__0.4__224___622_90 +P1149__1__1648___0 +P1401__1__3425___0 +P1483__1__0___824 +P1139__0.4__1536___512_180 +P1149__0.4__0___512 +P1150__1__824___1648 +P0255__1__277___0 +P1139__1__4944___1648 +P1401__1__1648___0 +P1486__1__472___88 +P1393__1__0___824 +P2791__1__3988___4120 +P1150__0.4__114___265_180 +P1181__1__1648___2472 +P1487__1__717___0 +P2226__1__824___0 +P1149__1__824___1648 +P1139__1__4944___2472 +P1181__1__1648___1648 +P1131__0.4__512___0_180 +P1392__1__1648___0 +P1179__1__1648___1648 +P1098__1__1316___1125 +P1139__0.4__1024___0_90 +P2600__1__0___824 +P1479__1__1380___824 +P1486__1__0___88 +P1401__1__1648___824 +P1139__0.4__1024___0_180 +P1158__1__3296___824 +P1392__1__824___0 +P1391__1__1648___2472 +P1131__0.4__656___0_270 +P1150__0.4__0___0_180 +P1131__1__2472___824 +P2269__1__0___0 +P1098__1__824___1125 +P1508__1__0___2707 +P1393__1__3296___3296 +P2236__1__234___0 +P1131__0.4__656___512_180 +P1150__0.4__114___0_180 +P1508__1__0___2472 +P1393__1__2472___2472 +P1179__1__824___1648 +P2271__1__1648___824 +P1139__0.4__1536___0_180 +P1149__0.4__224___0_180 +P2242__1__0___142 +P2600__0.4__0___0_270 +P2600__0.4__0___0 +P1149__0.4__224___622_180 +P1508__1__548___2707 +P1154__1__0___2472 +P2271__1__824___824 +P1150__0.4__0___0_270 +P1150__0.4__114___265 +P1158__1__3296___1648 +P1401__1__2472___0 +P1395__1__3945___824 +P0169__1__4120___824 +P0220__1__0___0 +P1393__1__824___824 +P1393__1__2472___3296 +P1159__1__2384___1648 +P1149__0.4__0___512_90 +P1391__1__0___1648 +P2600__0.4__0___0_180 +P1508__1__548___824 +P1131__0.4__512___512_180 +P1139__0.4__1024___1024 +P1393__1__2472___0 +P1184__1__824___4120 +P1150__1__1821___824 +P1508__1__548___2472 +P1149__0.4__224___0_90 +P0169__1__4120___0 +P1131__1__3177___824 +P1479__1__1380___1042 +P2600__1__0___1356 +P1391__1__1648___2916 +P1139__1__4120___2472 +P2269__1__0___357 +P1131__0.4__656___512_270 +P1139__0.4__1536___1024_270 +P0168__1__1280___0 +P2210__1__2472___0 +P1392__1__2472___0 +P1393__1__2472___1648 +P1098__1__0___824 +P1149__0.4__0___622_90 +P1181__1__2472___824 +P1149__0.4__0___622_270 +P1397__1__1648___2442 +P1393__1__1648___824 +P1173__1__0___0 +P1181__1__824___1648 +P1139__0.4__1024___0 +P1149__0.4__224___512_90 +P1150__1__1648___824 +P1483__1__693___824 +P1487__1__0___361 
+P1159__1__2384___824 +P1139__0.4__1024___0_270 +P1131__0.4__512___512_90 +P1184__1__824___3296 +P2173__1__1648___824 +P1149__0.4__0___622 +P2247__1__1774___143 +P0169__1__3296___1648 +P1154__1__0___2704 +P1098__1__0___0 +P2226__1__2472___0 +P1139__1__4120___1648 +P1391__1__1648___824 +P1139__1__3296___2472 +P1486__1__472___0 +P2600__0.4__0___0_90 +P1401__1__3296___0 +P1384__1__1648___824 +P1159__1__1648___2472 +P1139__0.4__1536___1024_90 +P1412__1__3296___0 +P1150__0.4__114___0_270 +P1479__1__824___824 +P1150__0.4__0___265_90 +P1150__0.4__114___265_270 +P0168__1__824___0 +P1393__1__3296___2472 +P1390__1__3296___2074 +P1391__1__2472___1648 +P1149__0.4__0___512_180 +P1179__1__1648___824 +P1393__1__1648___0 +P1170__1__824___4120 +P2173__1__824___824 +P1098__1__1316___0 +P1149__1__2097___824 +P1508__1__0___824 +P1098__1__824___824 +P1149__1__824___824 +P1150__0.4__114___0_90 +P2226__1__824___178 +P1139__0.4__1536___1024_180 +P1139__0.4__1024___512_180 +P1508__1__548___0 +P1149__0.4__0___512_270 +P1149__0.4__0___0_180 +P1139__0.4__1536___512_90 +P1392__1__3129___0 +P1149__0.4__0___0_270 +P0223__1__0___441 +P0168__1__824___768 +P1131__0.4__656___0_180 +P1131__0.4__512___512 +P1170__1__1648___1648 +P2226__1__3296___178 +P1131__0.4__512___0_90 +P1150__1__1648___1648 +P2210__1__2472___453 +P1149__0.4__0___0 +P1150__1__1821___1648 +P1393__1__3296___0 +P1149__0.4__224___0 \ No newline at end of file diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..e910ab4 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,4 @@ +Cython +EasyDict +opencv-python +mxnet-cu80==0.12.0b20171027 diff --git a/test_dota_light_RoITransformer.sh b/test_dota_light_RoITransformer.sh new file mode 100644 index 0000000..519ed71 --- /dev/null +++ b/test_dota_light_RoITransformer.sh @@ -0,0 +1 @@ +python experiments/faster_rcnn/test_poly.py --cfg experiments/faster_rcnn/cfgs/resnet_v1_101_dota_RoITransformer_trainval_rcnn_end2end.yaml diff --git a/test_dota_light_fpn_RoITransformer.sh b/test_dota_light_fpn_RoITransformer.sh new file mode 100644 index 0000000..feefefe --- /dev/null +++ b/test_dota_light_fpn_RoITransformer.sh @@ -0,0 +1 @@ +python experiments/fpn/fpn_test_poly.py --cfg experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_RoITransformer_trainval_fpn_end2end.yaml diff --git a/train_dota_deformable_psroi.sh b/train_dota_deformable_psroi.sh new file mode 100644 index 0000000..482b412 --- /dev/null +++ b/train_dota_deformable_psroi.sh @@ -0,0 +1 @@ +python experiments/faster_rcnn/rcnn_end2end_train_test_poly.py --cfg experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_deformpsroi_trainval_rcnn_end2end.yaml \ No newline at end of file diff --git a/train_dota_light_RoITransformer.sh b/train_dota_light_RoITransformer.sh new file mode 100644 index 0000000..9bb02c2 --- /dev/null +++ b/train_dota_light_RoITransformer.sh @@ -0,0 +1 @@ +python experiments/faster_rcnn/rcnn_end2end_train_test_RoITransformer.py --cfg experiments/faster_rcnn/cfgs/resnet_v1_101_dota_RoITransformer_trainval_rcnn_end2end.yaml diff --git a/train_dota_light_fpn_RoITransformer.sh b/train_dota_light_fpn_RoITransformer.sh new file mode 100644 index 0000000..9fe883f --- /dev/null +++ b/train_dota_light_fpn_RoITransformer.sh @@ -0,0 +1 @@ +python experiments/fpn/fpn_end2end_train_test_RoITransformer.py --cfg experiments/fpn/cfgs/resnet_v1_101_dota_rotbox_light_head_RoITransformer_trainval_fpn_end2end.yaml diff --git a/train_dota_light_obb.sh b/train_dota_light_obb.sh new file mode 100644 index 0000000..70b24ec --- 
/dev/null +++ b/train_dota_light_obb.sh @@ -0,0 +1 @@ +python experiments/faster_rcnn/rcnn_end2end_train_test_poly.py --cfg experiments/faster_rcnn/cfgs/resnet_v1_101_dota_light_head_trainval_rcnn_end2end.yaml
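The image-list entries added above (e.g. `P2714__1__3513___1426`, `P1380__0.4__735___0_180`) appear to follow the usual DOTA split-and-augment naming pattern: source tile id, resize rate, crop offsets, and an optional rotation suffix. The sketch below is not part of this patch; the pattern and the helper are assumptions inferred from the entries themselves, shown only as one plausible way to decode such names.

```python
# -*- coding: utf-8 -*-
"""Hypothetical helper (not in this repository): decode image-list entries.

Assumed pattern, inferred from the list above:
    <tile>__<rate>__<left>___<up>[_<angle>]
e.g. P1380__0.4__735___0_180 -> tile P1380, rate 0.4, crop at (735, 0), rotated 180.
"""
import re

# Regex is an assumption based on the observed entries; note the triple
# underscore between the two crop offsets and the optional 90/180/270 suffix.
_PATCH_RE = re.compile(
    r'^(?P<tile>P\d+)'                # source image id, e.g. P2714
    r'__(?P<rate>[\d.]+)'             # resize rate, e.g. 1 or 0.4
    r'__(?P<left>\d+)'                # x offset of the crop
    r'___(?P<up>\d+)'                 # y offset of the crop
    r'(?:_(?P<angle>90|180|270))?$'   # optional rotation-augmentation suffix
)


def parse_patch_name(name):
    """Return a dict describing one image-list entry, or None if it does not match."""
    m = _PATCH_RE.match(name)
    if m is None:
        return None
    return {
        'tile': m.group('tile'),
        'rate': float(m.group('rate')),
        'left': int(m.group('left')),
        'up': int(m.group('up')),
        'angle': int(m.group('angle')) if m.group('angle') else 0,
    }


if __name__ == '__main__':
    print(parse_patch_name('P1380__0.4__735___0_180'))
    # {'tile': 'P1380', 'rate': 0.4, 'left': 735, 'up': 0, 'angle': 180}
```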