References
1. Hastie, T., R. Tibshirani, and J.H. Friedman, The elements of statistical learning: data mining, inference, and prediction. 2nd ed. Springer Series in Statistics. 2009, New York, NY: Springer. xxii, 745 p.
2. Bishop, C.M., Pattern recognition and machine learning. 2006, New York: Springer.
3. Schapire, R.E. and Y. Freund, Boosting: Foundations and Algorithms. 2012, Cambridge, MA: MIT Press.
4. Zhang, Y. and Y. Xie, Travel mode choice modeling with support vector machines. Transportation Research Record: Journal of the Transportation Research Board, 2008. 2076: p. 141-150.
5. Awad, M. and R. Khanna, Support Vector Regression, in Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers, M. Awad and R. Khanna, Editors. 2015, Apress: Berkeley, CA. p. 67-80.
6. Ben-Hur, A., et al., Support Vector Clustering. Journal of Machine Learning Research, 2001. 2: p. 125-137.
7. Yuan, F. and R.L. Cheu, Incident detection using support vector machines. Transportation Research Part C: Emerging Technologies, 2003. 11(3-4): p. 309-328.
8. Zhang, Y.L. and Y.C. Xie, Forecasting of short-term freeway volume with v-support vector machines. Transportation Research Record, 2007. 2024: p. 92-99.
9. Jahangiri, A. and H.A. Rakha, Applying Machine Learning Techniques to Transportation Mode Recognition Using Mobile Phone Sensor Data. IEEE Transactions on Intelligent Transportation Systems, 2015. 16(5): p. 2406-2417.
10. Cetin, M., I. Ustun, and O. Sahin. Classification Algorithms for Detecting Vehicle Stops from Smartphone Accelerometer Data. in The 95th Annual meeting of the Transportation Research Board. 2016. Washington DC: Transportation Research Board.
11. Yang, B., et al. A Data Imputation Method with Support Vector Machines for Activity-Based Transportation Models. in Foundations of Intelligent Systems. 2012. Berlin, Heidelberg: Springer Berlin Heidelberg.
12. Li, X., et al., Predicting motor vehicle crashes using Support Vector Machine models. Accident Analysis and Prevention, 2008. 40(4): p. 1611-1618.
13. Theofilatos, A., C. Chen, and C. Antoniou, Comparing Machine Learning and Deep Learning Methods for Real-Time Crash Prediction. Transportation Research Record, 2019. 2673(8): p. 169-178.
14. Zhang, Y. and Y. Xie, Travel Mode Choice Modeling with Support Vector Machines. Transportation Research Record, 2008. 2076(1): p. 141-150.
15. Principe, J.C., N.R. Euliano, and W.C. Lefebvre, Neural and adaptive systems: fundamentals through simulations. Vol. 672. 2000: John Wiley and Sons, Inc., New York.
16. Duda, R.O., P.E. Hart, and D.G. Stork, Pattern classification. 2001: John Wiley & Sons, Inc., New York.
17. Ham, F.M. and I. Kostanic, Principles of neurocomputing for science and engineering. 2001: McGraw-Hill Higher Education.
18. Pal, S.K., T.S. Dillon, and D.S. Yeung, Soft computing in case based reasoning. 2001: Springer-Verlag London Limited, Great Britain. 1-28.
19. Jang, J.-S.R., C.-T. Sun, and E. Mizutani, Neuro-fuzzy and soft computing-a computational approach to learning and machine intelligence. 1997: Prentice-Hall, N.J.
20. Ablameyko, S., L. Goras, M. Gori, and V. Piuri, Neural Networks for Instrumentation, Measurement, and Related Industrial Applications. Series III: Computer and Systems Sciences. Vol. 185. 2003: IOS Press.
21. Lefebvre, C., Neuro Solutions, Version 4.10. NeuroDimension, Inc., Gainesville, FL, 2001.
22. Hopfield, J.J., Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 1982. 79(8): p. 2554-2558.
23. Rumelhart, D.E., G.E. Hinton, and R.J. Williams, Learning representations by back-propagating errors. Cognitive modeling, 1988. 5(3): p. 1.
24. Jordan, M., Serial order: a parallel distributed processing approach. Technical report, June 1985-March 1986. 1986, California Univ., San Diego, La Jolla (USA). Inst. for Cognitive Science.
25. Elman, J.L., Finding structure in time. Cognitive science, 1990. 14(2): p. 179-211.
26. Dorffner, G. Neural networks for time series processing. in Neural network world. 1996. Citeseer.
27. Malhotra, P., et al. Long short term memory networks for anomaly detection in time series. in Proceedings. 2015. Presses universitaires de Louvain.
28. Lipton, Z.C., J. Berkowitz, and C. Elkan, A critical review of recurrent neural networks for sequence learning. arXiv preprint arXiv:1506.00019, 2015.
29. Werbos, P.J., Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 1990. 78(10): p. 1550-1560.
30. Haykin, S., Neural networks: a comprehensive foundation. 1994: Macmillan College Publishing Company, Inc., Prentice Hall PTR.
32. Nielsen, M.A., Neural networks and deep learning. Vol. 25. 2015: Determination Press, San Francisco, CA, USA.
33. Hubel, D.H. and T.N. Wiesel, Receptive fields and functional architecture of monkey striate cortex. The Journal of physiology, 1968. 195(1): p. 215-243.
34. Abdoli, S., P. Cardinal, and A.L. Koerich, End-to-end environmental sound classification using a 1d convolutional neural network. Expert Systems with Applications, 2019. 136: p. 252-263.
35. Zeiler, M.D., G.W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. in 2011 International Conference on Computer Vision. 2011. IEEE.
36. Sobel, I. and G. Feldman, An isotropic 3×3 image gradient operator. Machine vision for three-dimensional scenes, 1990: p. 376-379.
39. Dai, J., et al. Deformable convolutional networks. in Proceedings of the IEEE international conference on computer vision. 2017.
40. Chen, L.-C., et al., Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 2017. 40(4): p. 834-848.
41. He, K., et al. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. in Proceedings of the IEEE international conference on computer vision. 2015.
42. Clevert, D.-A., T. Unterthiner, and S. Hochreiter, Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
43. Stanford University, Convolutional Neural Networks for Visual Recognition. n.d. [Online]
44. Santurkar, S., et al. How does batch normalization help optimization? in Advances in Neural Information Processing Systems. 2018.
45. Srivastava, N., et al., Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 2014. 15(1): p. 1929-1958.
46. Ioffe, S. and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
48. Simonyan, K. and A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
49. Neurohive, VGG16 – Convolutional Network for Classification and Detection. n.d. [Online]
50. Krizhevsky, A., I. Sutskever, and G.E. Hinton. Imagenet classification with deep convolutional neural networks. in Advances in neural information processing systems. 2012.
51. He, K., et al. Deep residual learning for image recognition. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
52. Zeiler, M.D. and R. Fergus. Visualizing and understanding convolutional networks. in European conference on computer vision. 2014. Springer.
53. Szegedy, C., et al. Going deeper with convolutions. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
54. Li, H., et al. Visualizing the loss landscape of neural nets. in Advances in Neural Information Processing Systems. 2018.
55. Ronneberger, O., P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. in International Conference on Medical image computing and computer-assisted intervention. 2015. Springer.
56. Girshick, R., et al. Rich feature hierarchies for accurate object detection and semantic segmentation. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2014.
57. Girshick, R. Fast R-CNN. in Proceedings of the IEEE international conference on computer vision. 2015.
58. Ren, S., et al. Faster R-CNN: Towards real-time object detection with region proposal networks. in Advances in neural information processing systems. 2015.
59. He, K., et al. Mask R-CNN. in Proceedings of the IEEE international conference on computer vision. 2017.
60. Liu, W., et al. SSD: Single shot multibox detector. in European conference on computer vision. 2016. Springer.
61. Redmon, J., et al. You only look once: Unified, real-time object detection. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
62. Redmon, J. and A. Farhadi. YOLO9000: better, faster, stronger. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
63. Redmon, J. and A. Farhadi, Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
64. Lin, T.-Y., et al. Focal loss for dense object detection. in Proceedings of the IEEE international conference on computer vision. 2017.
65. Howard, A.G., et al., Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
66. Zhou, X., D. Wang, and P. Krähenbühl, Objects as points. arXiv preprint arXiv:1904.07850, 2019.
67. Sharma, A., et al., Portable Multi-Sensor System for Intersection Safety Performance Assessment. 2018, Ames: Iowa Department of Transportation.
68. Sharma, A., P. Chakraborty, N. Hawkins, and S. Knickerbocker, Automating Near Miss Crash Detection using Existing Traffic Cameras. 2019, Ames: Iowa Department of Transportation.
69. He, P., et al., Truck Taxonomy and Classification Using Video and Weigh-In Motion (WIM) Technology. 2019, Florida Department of Transportation: Tallahassee.
70. Yang, J.J., Y. Wang, and C.-C. Hung, Monitoring and Assessing Traffic Safety at Signalized Intersections Using Live Video Images. 2018, Atlanta: Georgia Department of Transportation.
71. Ma, X., et al., Learning traffic as images: a deep convolutional neural network for large-scale transportation network speed prediction. Sensors, 2017. 17(4): p. 818.
72. Wang, K., et al. Deep learning for asphalt pavement cracking recognition using convolutional neural network. in Proc. Int. Conf. Airfield Highway Pavements. 2017.
73. Fan, Z., et al., Automatic pavement crack detection based on structured prediction with the convolutional neural network. arXiv preprint arXiv:1802.02208, 2018.
74. Li, S. and X. Zhao, Image-Based Concrete Crack Detection Using Convolutional Neural Network and Exhaustive Search Technique. Advances in Civil Engineering, 2019. 2019: p. 1-12.
75. Dabiri, S. and K. Heaslip, Inferring transportation modes from GPS trajectories using a convolutional neural network. Transportation research part C: emerging technologies, 2018. 86: p. 360-371.
76. Xu, J., et al., Real-time prediction of taxi demand using recurrent neural networks. IEEE Transactions on Intelligent Transportation Systems, 2017. 19(8): p. 2572-2581.
77. Rahman, R., Applications of Deep Learning Models for Traffic Prediction Problems. 2019, University of Central Florida.
78. Gers, F.A., J. Schmidhuber, and F. Cummins, Learning to forget: continual prediction with LSTM, in 1999 Ninth International Conference on Artificial Neural Networks ICANN 99 (Conf. Publ. No. 470). 1999. p. 1-19.
79. Hochreiter, S. and J. Schmidhuber, Long short-term memory. Neural computation, 1997. 9(8): p. 1735-1780.
80. Ma, X., et al., Long short-term memory neural network for traffic speed prediction using remote microwave sensor data. Transportation Research Part C: Emerging Technologies, 2015. 54: p. 187-197.
81. Rahman, R. and S. Hasan. Short-Term Traffic Speed Prediction for Freeways During Hurricane Evacuation: A Deep Learning Approach. in 2018 21st International Conference on Intelligent Transportation Systems (ITSC). 2018. IEEE.
82. Polson, N.G. and V.O. Sokolov, Deep learning for short-term traffic flow prediction. Transportation Research Part C: Emerging Technologies, 2017. 79: p. 1-17.
83. Cui, Z., et al., Deep bidirectional and unidirectional LSTM recurrent neural network for network-wide traffic speed prediction. arXiv preprint arXiv:1801.02143, 2018: p. 22-25.
84. Epelbaum, T., et al., Deep Learning applied to Road Traffic Speed forecasting. arXiv preprint arXiv:1710.08266, 2017.
85. Duan, Y., Y. Lv, and F.-Y. Wang. Travel time prediction with LSTM neural network. in 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC). 2016. IEEE.
86. Luo, X., et al., Spatiotemporal traffic flow prediction with KNN and LSTM. Journal of Advanced Transportation, 2019. 2019.
87. Yang, B., et al., Traffic flow prediction using LSTM with feature enhancement. Neurocomputing, 2019. 332: p. 320-327.
88. Lee, S., et al., An advanced deep learning approach to real-time estimation of lane-based queue lengths at a signalized junction. Transportation research part C: emerging technologies, 2019. 109: p. 117-136.
89. Rahman, R. and S. Hasan, Real-time Signal Queue Length Prediction Using Long Short-Term Memory Neural Network, in Transportation Research Board 98th Annual Meeting. 2019: Transportation Research Board.
90. Yu, R., et al. Deep learning: A generic approach for extreme condition traffic forecasting. in Proceedings of the 2017 SIAM international Conference on Data Mining. 2017. SIAM.
91. Yuan, J., et al., Real-time crash risk prediction using long short-term memory recurrent neural network. Transportation research record, 2019. 2673(4): p. 314-326.
92. Sameen, M.I. and B. Pradhan, Severity prediction of traffic accidents with recurrent neural networks. Applied Sciences (Switzerland), 2017. 7(6): p. 476.
93. Ren, H., et al. A deep learning approach to the citywide traffic accident risk prediction. in 21st International Conference on Intelligent Transportation Systems (ITSC). 2018. IEEE.
94. Do, L.N.N., N. Taherifar, and H.L. Vu, Survey of neural network-based models for short-term traffic state prediction. WIREs Data Mining and Knowledge Discovery, 2019. 9(1): p. e1285.