Parking Slot Detection Deep Learning
December 2020
In a learning-based parking-slot detection approach, given a test image, the marking points are detected first and the valid parking slots are then inferred from them; this approach has been validated on the Tongji Parking-slot Dataset 1.0. In a smart parking system, information about vacant parking spaces is important; a simple way to estimate how many vacant spaces are available is to subtract the number of vehicles currently in the lot from the parking lot's capacity.
tl;dr: Parking slot detection by detecting marking points with a CenterNet-like algorithm.
Overall impression
For my future self: Dataset is super important. Your algorithm is only going to evolve to the level your dataset KPI requires it to.
The algorithm focuses only on marking point detection and does not say much about the post-processing needed to combine the marking points into parking slots. It is more general in that it can detect more than T/L-shaped marking points.
The paper is very poorly written, with a lot of sloppy notation and non-standard terminology.
Key ideas
- A coarse-to-fine marking point detection algorithm. Very much like CenterNet.
- The regression branch also predicts the “vertex paradigm”, i.e. the pattern of connectivity among the marking points; a decoding sketch follows this list.
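The decoding step is not spelled out in these notes, so here is a minimal sketch of CenterNet-style peak extraction from a marking-point heatmap, assuming a sigmoid heatmap plus a 2-channel sub-pixel offset map; the function name, tensor layout, and threshold are my own choices, not the paper's code.

```python
import torch
import torch.nn.functional as F

def decode_marking_points(heatmap, offsets, score_thresh=0.3):
    """Minimal CenterNet-style decoding sketch (not the paper's exact procedure).

    heatmap: (1, H, W) tensor of marking-point confidences in [0, 1].
    offsets: (2, H, W) tensor of regressed sub-pixel x/y offsets per cell.
    Returns a list of (x, y, score) marking-point candidates in feature-map coordinates.
    """
    # 3x3 max-pooling acts as a cheap NMS: keep only cells that are local maxima.
    pooled = F.max_pool2d(heatmap.unsqueeze(0), kernel_size=3, stride=1, padding=1).squeeze(0)
    peaks = (heatmap == pooled) & (heatmap > score_thresh)

    points = []
    ys, xs = torch.nonzero(peaks[0], as_tuple=True)
    for y, x in zip(ys.tolist(), xs.tolist()):
        # Coarse cell location refined by the regressed sub-pixel offset.
        px = x + offsets[0, y, x].item()
        py = y + offsets[1, y, x].item()
        points.append((px, py, heatmap[0, y, x].item()))
    return points
```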
Technical details
- Annotated a dataset (~15k images), slightly bigger than the PS2.0 dataset (~12k images).
- The paper uses an L2 loss to supervise both the heatmaps and the attributes. This is a bit unusual, as most studies use a focal loss for heatmap prediction and an L1 loss for attribute regression; a reference focal-loss formulation is sketched below.
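For contrast with the L2 supervision above, the common choice for Gaussian-splatted heatmaps is the penalty-reduced focal loss from CornerNet/CenterNet. The sketch below is that standard formulation with the usual alpha=2, beta=4 defaults, not something taken from this paper.

```python
import torch

def centernet_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    """Penalty-reduced pixel-wise focal loss (CornerNet/CenterNet style).

    pred: predicted heatmap after sigmoid, same shape as gt.
    gt:   ground-truth heatmap with Gaussian-splatted peaks (exactly 1 at keypoints).
    """
    pred = pred.clamp(eps, 1 - eps)
    pos = gt.eq(1).float()   # exact keypoint locations
    neg = 1.0 - pos

    pos_loss = -((1 - pred) ** alpha) * torch.log(pred) * pos
    # (1 - gt)^beta down-weights negatives that lie close to a peak.
    neg_loss = -((1 - gt) ** beta) * (pred ** alpha) * torch.log(1 - pred) * neg

    num_pos = pos.sum().clamp(min=1.0)
    return (pos_loss.sum() + neg_loss.sum()) / num_pos
```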
Notes
- Questions and notes on how to improve/revise the current work
Dataset Download
You can download CNRPark+EXT using the following links:

CNRPark+EXT.csv (18.1 MB)
CSV collecting metadata for each patch of both the CNRPark and CNR-EXT datasets.
CNRPark-Patches-150x150.zip (36.6 MB)
segmented images (patches) of parking spaces belonging to the CNRPark preliminary subset.
Files follow this organization: <CAMERA>/<CLASS>/YYYYMMDD_HHMM_<SLOT_ID>.jpg, where:
- <CAMERA> can be A or B,
- <CLASS> can be free or busy,
- YYYYMMDD_HHMM is the zero-padded 24-hour capture datetime,
- <SLOT_ID> is a local ID given to the slot for that particular camera.
E.g.: A/busy/20150703_1425_32.jpg
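For convenience, here is a small sketch of how such a patch path could be parsed in Python; the helper and its field names are my own, not part of the dataset release.

```python
from datetime import datetime
from pathlib import Path

def parse_cnrpark_patch(path):
    """Parse a CNRPark patch path like 'A/busy/20150703_1425_32.jpg'."""
    p = Path(path)
    camera = p.parts[-3]                         # 'A' or 'B'
    label = p.parts[-2]                          # 'free' or 'busy'
    date_str, time_str, slot_id = p.stem.split("_")
    captured = datetime.strptime(date_str + time_str, "%Y%m%d%H%M")
    return {"camera": camera, "label": label,
            "captured_at": captured, "slot_id": int(slot_id)}

print(parse_cnrpark_patch("A/busy/20150703_1425_32.jpg"))
```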
CNR-EXT-Patches-150x150.zip (449.5 MB)
segmented images (patches) of parking spaces belonging to the CNR-EXT subset.
Files follow this organization: PATCHES/<WEATHER>/<CAPTURE_DATE>/camera<CAM_ID>/<W_ID>_<CAPTURE_DATE>_<CAPTURE_TIME>_C0<CAM_ID>_<SLOT_ID>.jpg, where:
- <WEATHER> can be SUNNY, OVERCAST or RAINY,
- <CAPTURE_DATE> is the zero-padded YYYY-MM-DD formatted capture date,
- <CAM_ID> is the number of the camera, ranging 1-9,
- <W_ID> is a weather identifier that can be S, O or R,
- <CAPTURE_TIME> is the zero-padded 24-hour HH.MM formatted capture time,
- <SLOT_ID> is a global ID given to the monitored slot; it can be used to uniquely identify a slot in the CNR-EXT dataset.
E.g.: PATCHES/SUNNY/2015-11-22/camera6/S_2015-11-22_09.47_C06_205.jpg
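Similarly, a CNR-EXT patch path can be decomposed as sketched below; again, the helper and its returned field names are hypothetical, not part of the official tooling.

```python
from pathlib import Path

WEATHER_CODES = {"S": "SUNNY", "O": "OVERCAST", "R": "RAINY"}

def parse_cnrext_patch(path):
    """Parse a CNR-EXT patch path like
    'PATCHES/SUNNY/2015-11-22/camera6/S_2015-11-22_09.47_C06_205.jpg'."""
    stem = Path(path).stem                       # 'S_2015-11-22_09.47_C06_205'
    w_id, date_str, time_str, cam, slot_id = stem.split("_")
    return {
        "weather": WEATHER_CODES[w_id],
        "date": date_str,                        # YYYY-MM-DD
        "time": time_str.replace(".", ":"),      # HH.MM -> HH:MM
        "camera": int(cam.lstrip("C")),          # 'C06' -> 6
        "slot_id": int(slot_id),                 # global slot ID
    }

print(parse_cnrext_patch("PATCHES/SUNNY/2015-11-22/camera6/S_2015-11-22_09.47_C06_205.jpg"))
```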
The LABELS folder contains a list file for each split of the dataset used in our experiments. Each line in the list files follows this format: <IMAGE_PATH> <LABEL>, where:
- <IMAGE_PATH> is the path to a slot image,
- <LABEL> is 0 for free, 1 for busy.
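A minimal sketch of loading one of these split files into (path, label) pairs; the file name in the usage comment is hypothetical, so substitute an actual list file from the LABELS folder.

```python
def load_split(list_path):
    """Load a LABELS list file where each line is '<IMAGE_PATH> <LABEL>'
    and the label is 0 for free, 1 for busy."""
    samples = []
    with open(list_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            image_path, label = line.rsplit(" ", 1)
            samples.append((image_path, int(label)))
    return samples

# Hypothetical usage; use the actual file names inside LABELS/.
# samples = load_split("LABELS/train.txt")
```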
CNR-EXT_FULL_IMAGE_1000x750.tar (1.1 GB)
full frames of the cameras belonging to the CNR-EXT subset. Images have been downsampled from 2592x1944 to 1000x750 due to privacy issues.
Files follow this organization: FULL_IMAGE_1000x750/<WEATHER>/<CAPTURE_DATE>/camera<CAM_ID>/<CAPTURE_DATE>_<CAPTURE_TIME>.jpg, where:
- <WEATHER> can be SUNNY, OVERCAST or RAINY,
- <CAPTURE_DATE> is the zero-padded YYYY-MM-DD formatted capture date,
- <CAM_ID> is the number of the camera, ranging 1-9,
- <CAPTURE_TIME> is the zero-padded 24-hour HHMM formatted capture time.
The archive also contains 9 CSV files (one per camera) with the bounding boxes of the parking spaces used to segment the patches. Pixel coordinates of the bounding boxes refer to the 2592x1944 version of the images and need to be rescaled to match the 1000x750 version (a rescaling sketch follows).
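A small sketch of reading one per-camera CSV and rescaling the boxes to the 1000x750 frames; the column names below are an assumption about the CSV header and may need to be adjusted to the released files.

```python
import csv

SCALE_X = 1000 / 2592
SCALE_Y = 750 / 1944

def load_scaled_boxes(csv_path):
    """Read parking-space bounding boxes for one camera and rescale them
    from the original 2592x1944 frames to the released 1000x750 images.

    Assumed columns: SlotId, X, Y, W, H (adjust to the actual header).
    """
    boxes = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            boxes[int(row["SlotId"])] = (
                int(float(row["X"]) * SCALE_X),
                int(float(row["Y"]) * SCALE_Y),
                int(float(row["W"]) * SCALE_X),
                int(float(row["H"]) * SCALE_Y),
            )
    return boxes
```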
splits.zip (27.2 MB)
all the splits used in our experiments. These splits combine our datasets with third-party datasets (such as PKLot).