Smart Cities Using Machine Learning and Intelligent Applications

Authors

  • Anggy Giri Prawiyogi, University of Buana Perjuangan Karawang
  • Suryari Purnama, University of Esa Unggul
  • Lista Meria, University of Esa Unggul

DOI:

https://doi.org/10.33050/italic.v1i1.204

Keywords:

Smart Cities, Machine Learning, Cyber Security, Smart Grids, Artificial Intelligence

Abstract

The goal of smart cities is to properly manage expanding urbanization, reduce energy usage, enhance the economy and quality of life of residents while preserving environmental conditions, and improve people's ability to use and adapt to modern information and communication technology (ICT) efficiently. ICT is central to the smart-city concept because it facilitates policy development, decision-making, implementation, and, ultimately, the delivery of useful services. The main objective of this review is to examine how machine learning, deep reinforcement learning (DRL), and artificial intelligence are advancing smart cities. These techniques are effectively employed to derive near-optimal policies for a number of challenging smart-city problems. This survey thoroughly discusses their applications in intelligent transportation systems (ITSs), cybersecurity, energy efficiency in smart grids (SGs), the provision of reliable 5G and beyond-5G (B5G) networking services, and smart health systems, including the effective use of unmanned aerial vehicles (UAVs). Finally, we list a number of open research problems and potential future directions in which these approaches can be extremely helpful in bringing the idea of a smart city to life.
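
As a concrete illustration of the policy-learning framing the review surveys, the sketch below trains a tabular Q-learning agent on a toy intersection-control task: the state is a pair of discretized queue lengths, the action selects which direction gets a green light, and the reward penalizes the total number of queued vehicles. This is a minimal sketch under our own assumptions; the environment dynamics and every name and parameter (N_LEVELS, alpha, gamma, eps) are illustrative, not taken from the paper, and DRL methods replace the Q-table with a neural network to handle realistic state spaces.

import numpy as np

# Toy intersection controller, a minimal stand-in for the intelligent
# transportation systems (ITS) use case discussed in the survey.
# All names and parameters are illustrative assumptions, not the paper's.

rng = np.random.default_rng(0)

N_LEVELS = 5              # queue length per road, discretized to 0..4
N_STATES = N_LEVELS ** 2  # state = (north-south queue, east-west queue)
N_ACTIONS = 2             # 0: green for NS, 1: green for EW

def step(state, action):
    """Simulate one signal cycle; return (next_state, reward)."""
    ns, ew = divmod(state, N_LEVELS)
    ns = min(ns + rng.integers(0, 2), N_LEVELS - 1)  # random arrivals
    ew = min(ew + rng.integers(0, 2), N_LEVELS - 1)
    if action == 0:
        ns = max(ns - 2, 0)   # green direction discharges up to 2 cars
    else:
        ew = max(ew - 2, 0)
    return ns * N_LEVELS + ew, -(ns + ew)  # reward: minus total queue

# Tabular Q-learning: the simplest member of the value-based RL family
# that DRL scales up with neural function approximators.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
state = 0
for _ in range(50_000):
    # epsilon-greedy exploration
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = step(state, a)
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

# Greedy policy per (NS queue, EW queue) cell: 0 = NS green, 1 = EW green
print(Q.argmax(axis=1).reshape(N_LEVELS, N_LEVELS))

Running this prints a 5x5 grid of greedy actions; with these arrival and discharge rates the learned policy tends to give the green light to whichever direction has the longer queue, which is the behavior the queue-penalty reward encodes.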

Published

2022-11-27