Exploring AI’s Potential and Challenges in Software and Wireless System Design
October 14, 2025
From its origins in 1950s research discussions to today’s large language models and AI-driven network management, artificial intelligence has come a long way. What started as a theoretical concept is now a functional tool solving complex problems across industries.
At Magister, we’re especially interested in AI’s emerging role and benefits in software development, wireless system design, and satellite constellation planning.
As AI becomes more deeply embedded into these and other domains, it introduces new questions around management, security, and sustainability.
In this blog post, we explore the evolution of AI, its current applications, and the opportunities and challenges it presents.
Artificial intelligence (AI) refers to computer systems that can perform tasks requiring human-like reasoning, such as learning, problem-solving, and decision-making. The term “artificial intelligence” was coined in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence.
Progress in AI was slow for decades due to limited technology, leading to a so-called “AI winter” that lasted until the early 2000s. The pace of AI development then picked up thanks to two major breakthroughs: increased computing power and the rise of the internet and social media.
The AI revolution is happening now because we have access to massive amounts of data and the computing power needed to process it.
Although AI has been a research topic since the 1950s, its recent boom has taken most people by surprise. Tools like ChatGPT, Claude and Microsoft Copilot have made AI familiar to the general public. Despite AI’s potential and benefits, its widespread adoption has also raised ethical concerns around security, sustainability, and accountability, among others.
There are many open questions about AI right now. For example, where does Europe – and Finland, Magister’s home country – stand in AI development? What are organizations and governments doing to stay ahead?
AI’s fast growth means that companies are at various points of adopting it. Some are just beginning to explore its potential, while others are already taking the lead in development.
As a company specializing in simulator development, we naturally approach AI through the lens of software. So, what impacts might AI have on the way we do things in the future?
Traditionally, software development has been deterministic. This means that when given a specific input, an algorithm will always produce the same output, with no randomness involved. Regular regression tests are run to confirm this consistency.
However, introducing AI into the mix brings up new questions. AI applies both deterministic and probabilistic approaches to decision-making – each suited to different types of tasks. Therefore, it’s important to understand how these models work to be able to evaluate which approach best fits a certain use case.
Deterministic AI produces consistent outputs for the same input. This makes it well-suited for simulator development, where repeatability and reliability are critical. Probabilistic AI, on the other hand, makes decisions based on probabilities and likelihoods. It can produce different outputs even when given the same parameters.
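The difference can be illustrated with a small Python sketch (the function and cell names below are ours, purely for illustration): the deterministic version always maps the same input to the same output, while the probabilistic version samples its decision, so repeated calls with identical parameters can differ.

```python
import random

def deterministic_route(load: float) -> str:
    # Same input always yields the same output: repeatable and testable,
    # which is why this style suits simulator development.
    return "cell_A" if load < 0.5 else "cell_B"

def probabilistic_route(load: float, rng: random.Random) -> str:
    # The decision is sampled from a distribution: the same load can
    # yield different cells on different calls.
    return "cell_A" if rng.random() > load else "cell_B"

# Determinism is easy to verify with a regression-style check:
assert deterministic_route(0.3) == deterministic_route(0.3)

# The probabilistic variant can only be made repeatable by fixing the seed:
rng = random.Random(42)
print([probabilistic_route(0.3, rng) for _ in range(5)])
```

Note that seeding the random generator is what makes a probabilistic model testable at all: with a fixed seed, the sequence of decisions becomes reproducible even though each individual decision is sampled.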
The role of AI in software development is evolving. In many ways, it still remains a bit fuzzy and complex. For example, there’s uncertainty around how well we can detect improvements or bugs caused by AI. If an AI model’s decision-making changes and a new version is made, how can we be sure that nothing has broken? And if we cannot test these things, how do we maintain them?
There are also open questions around AI in software architecture design. A classic issue in optimization is that improving one aspect may weaken another.
For example, AI might be able to optimize an individual component but not those around it. There are also limits to how much can be optimized and whether these AI-based optimizations are actually beneficial to users in practice.
The effectiveness and reliability of AI models depend heavily on how they are designed and trained. Careful consideration during these stages supports more accurate outcomes in different real-world applications.
Poor training, on the other hand, can lead to undesirable AI model behavior such as overfitting – where the model makes accurate predictions on its training data but fails to generalize to new data. This can happen when the model is too complex for the data or when there’s not enough data to learn from.
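A minimal sketch of this effect, using polynomial curve fitting as a stand-in for model training (the data, noise level, and polynomial degrees are all invented for illustration): fitting an over-complex model to a handful of noisy samples drives the training error toward zero, while the error on unseen data remains high.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small, noisy training set sampled from a sine curve.
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=x_train.size)

# A dense, noise-free test set from the same underlying curve.
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

errors = {}
for degree in (3, 7):  # modest vs. over-complex model
    coeffs = np.polyfit(x_train, y_train, degree)
    errors[degree] = (
        float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)),  # train MSE
        float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)),    # test MSE
    )
    print(f"degree {degree}: train MSE {errors[degree][0]:.4f}, "
          f"test MSE {errors[degree][1]:.4f}")
```

The degree-7 polynomial passes almost exactly through all eight noisy points (near-zero training error) but chases the noise between them, so its test error is far worse than its training error – the textbook overfitting signature.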
Issues can also arise when AI optimizes aspects that people don’t understand. That is why it’s important to define guidelines when integrating AI into an organization’s operations. This could include, for example, ensuring that AI-driven optimizations align with the organization’s strategy and that problems are detected if things start going wrong.
The 3GPP (3rd Generation Partnership Project) is actively working towards integrating AI and machine learning (ML) into the 5G network. AI topics have been covered in various technical specification documents for Rel-18 and Rel-19.
As part of Rel-18 accomplishments, frameworks have been defined for AI/ML lifecycle management and potential operational contexts have been described for AI/ML inference functions within the 5G system.
Themes around security are at the center of AI research. The question is: how can AI be implemented in a secure and viable way? For example, AI needs large amounts of data to optimize systems, but not all kinds of data can be collected from people or devices. The 3GPP is investigating how growing data collection can be controlled, reinforcing the notion that data privacy, anonymity, and user consent should be respected when dealing with AI.
Another ethical consideration in AI is sustainability. While AI systems can consume significant amounts of energy, they also offer great potential for improving energy efficiency within networks. Both aspects should be taken into account. A network might become more energy-efficient through AI, but if increased AI usage drives up overall energy consumption, sustainability goals can still fall short.
Since Release 10 in 2011, 3GPP has been monitoring the energy efficiency of its systems, and recent Releases have made it an even more central design criterion. Consequently, sustainability is a key factor in the ongoing Rel-19 normative work for 5G and in planning for 6G. These systems are expected to incorporate sustainable AI and ML practices, including energy-efficient mechanisms to enable resource-aware model development.
The International Telecommunication Union (ITU) has set an objective for the information and communication technology (ICT) industry to reduce greenhouse gas emissions by 45% between 2020 and 2030. Mobile telecommunications accounts for about 9% of total ICT CO2 emissions, which is around 100 million metric tons per year.
Reducing power consumption in telecommunications not only supports these global emission targets but also allows operators to lower operational expenses. For end-users, energy savings can translate into more stable service costs, longer device battery life, and greener networks – with AI helping to maintain service quality by predicting traffic patterns and managing resources efficiently.
Telecommunication networks are evolving rapidly with the advancements of 5G and 5G-Advanced and the early development of 6G. As networks become more dynamic and complex, they must support a growing number of users, devices, applications, and data-intensive services, all while maintaining high performance and reliability.
To meet these demands, AI is increasingly being examined as a tool in network management. It offers promising capabilities – from intelligent optimization and predictive analytics to enhanced security and autonomous infrastructure control – helping operators manage complexity, reduce operational costs, and maintain reliable network performance.
Network operators are investigating and deploying AI applications at a rapid pace. According to Nokia’s 2025 Industrial Digitalization Report with GlobalData, 70% of industrial enterprises using private wireless and on-premise edge technologies are already leveraging AI-driven use cases, such as predictive maintenance, real-time monitoring, and digital twins.
AI’s capabilities aren’t only being explored for traditional terrestrial networks but also for those operating in space. In satellite communications, AI could support mobility management across terrestrial (TN) and non-terrestrial (NTN) networks, helping ensure seamless connectivity as users move between coverage areas.
AI could also bring several benefits in satellite constellation design, helping determine optimal satellite placement and beam configurations to maximize coverage and efficiency. As satellites move – for example, towards the North Pole – overlapping beams must be carefully managed to avoid interference. AI could assist in dynamically switching beams on and off, taking into account beam width, power limitations, and battery constraints.
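As a toy illustration of that kind of decision logic – not any standardized or operator algorithm, with all beam parameters invented – a greedy beam-activation sketch in Python might look like the following. It activates beams in order of coverage gained per watt, keeps fully overlapping beams switched off to avoid interference, and stops adding beams once the power budget is spent.

```python
from dataclasses import dataclass

@dataclass
class Beam:
    beam_id: int
    power_w: float   # transmit power needed to keep this beam on
    coverage: set    # ground cells this beam would cover

def select_beams(beams, power_budget_w):
    """Greedy sketch: activate beams by new-coverage-per-watt, skip beams
    whose cells are already covered (overlap), respect the power budget."""
    active, covered, used_w = [], set(), 0.0
    for beam in sorted(beams, key=lambda b: len(b.coverage) / b.power_w,
                       reverse=True):
        new_cells = beam.coverage - covered
        if not new_cells:
            continue  # fully overlapping beam: keep off to avoid interference
        if used_w + beam.power_w > power_budget_w:
            continue  # battery / power constraint
        active.append(beam.beam_id)
        covered |= new_cells
        used_w += beam.power_w
    return active, covered

beams = [
    Beam(1, 10.0, {"a", "b"}),
    Beam(2, 5.0, {"b"}),         # fully overlaps beam 1
    Beam(3, 8.0, {"c", "d"}),
]
print(select_beams(beams, power_budget_w=20.0))
```

A real system would of course optimize over moving satellites, beam widths, and time-varying battery state; the point of the sketch is only that beam on/off decisions can be framed as constrained selection, which is exactly the kind of problem AI and heuristic optimizers are suited to.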
Another way AI could contribute is by considering operational restrictions, such as satellite movement patterns, transmission power, and energy consumption. These aspects are especially critical in NTNs, where sustainability and energy efficiency are key concerns.
Given the high costs of satellite deployment, AI combined with simulations and predictive models could enable smarter decision-making before satellites are launched, helping ensure better performance and cost-effectiveness.
At Magister, we’re currently exploring AI themes through the NexaSphere project, which focuses on designing a unified 3D communication network. Key themes of the project include TN/NTN interoperability, 6G, network orchestration aided by AI, and supporting connectivity across aviation, automotive, and rail sectors. We’re responsible for the performance validation and simulation of 3D networks.
It’s fascinating to follow the evolving AI landscape, especially its growing role in wireless system development. These advancements also open exciting possibilities for our own simulators in the near future. Our Magister SimLab simulation platform already comes with built-in integrations to ML platforms, allowing users to develop AI/ML algorithms and benchmark solutions.
One thing is clear: AI is here to stay. As we move forward, it’s important to balance AI innovation with responsibility. The key is to stay informed about the latest developments in AI and learn to work with it in beneficial ways.
In writing this article, we consulted the following Magisterians: Vesa Hytönen (Principal Scientist), Hanna-Liisa Tiri (Senior Researcher), Verneri Rönty (Researcher).