From biology to business: Neuromorphic computing pathways to intelligent innovation


Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

From algorithms to AI: Promise, power, and the pressing challenges ahead

Large-scale datasets and information-processing requirements in complex environments are reaching unprecedented levels of sophistication, especially with the advent of artificial intelligence and other emerging technologies, intensifying competition for available, scalable, and adaptable computing resources.

The magnitude and complexity of modern workloads demand novel computing approaches that can perform multiple computational tasks in a fast, interoperable, cost-effective, and energy-efficient manner, while remaining resilient against cyber threats and attacks. At the same time, the wide variety of these workloads is driving the need for multiple compute options capable of handling such data-intensive, high-performance operations.

Global data centre infrastructure capital expenditures (excluding IT hardware) are expected to exceed $1.7 trillion by 2030. Moreover, an estimated $1 trillion worth of data centres would need to be built in the next several years to support the deployment of generative AI capabilities. Data centres and transmission networks are responsible for around 1% of global energy-related greenhouse gas emissions. The electricity consumed by data centres globally could more than double by 2026, to over 1,000 TWh. Continued advances in artificial intelligence could see data centres consuming 4.5% of globally generated energy by 2030. Since 2017, global electricity usage by data centres has grown by around 12% annually, more than four times the rate of overall electricity consumption. This new reality projects a data and computational resources tetralemma spanning four crucial elements: decarbonisation, affordability, accessibility, and reliability.

Groundbreaking innovations in the design of machines and algorithms have opened new doors for integrating fundamental principles of biology, humanities, neuroscience, and cognitive science, towards the development of human-like artificial intelligence.

When computers think like neurons: The rise of neuromorphic technology

Neuromorphic computing, an emerging computer engineering concept, draws inspiration from biology, and specifically from the architecture and functioning of the biological brain. The human brain, consuming less than 20 watts while performing approximately 1,000 trillion operations per second, remains the most complex, efficient, and powerful known structure in our world. It outperforms state-of-the-art supercomputers in terms of energy and volume, making it a more versatile, more energy-efficient, and more adaptable information processor. Positioned at the nexus of biology and neuroscience, mathematics and physics, and computer science and electronic engineering, neuromorphic computing aims to open a new and exciting chapter in computational growth beyond the physical limitations of Moore’s Law, the observation made by Gordon Moore in 1965. According to Moore’s Law, the number of transistors on an integrated circuit (and thus computing power) tends to double approximately every two years, while the cost per transistor decreases. This trend has driven rapid growth in computing performance, but it has also created a major challenge: keeping up requires making transistors smaller and more complex, which is becoming harder, more expensive, and closer to physical limits.
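The exponential growth described by Moore's Law can be sketched in a few lines; the starting transistor count below is purely illustrative:

```python
# Illustrative projection of Moore's Law: transistor count doubling
# roughly every two years. The starting count is hypothetical.

def transistor_count(initial: int, years: float, doubling_period: float = 2.0) -> float:
    """Project transistor count after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# A chip with 1 billion transistors, projected 10 years ahead:
projected = transistor_count(1_000_000_000, 10)
print(f"{projected:,.0f}")  # 2^5 = 32x growth -> 32,000,000,000
```

Five doublings in a decade yield a 32-fold increase, which illustrates why sustaining this pace pushes against physical limits.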

Neuromorphic computing endeavours to emulate the human brain's processing capabilities. By mimicking the structure and operation of biological neural networks, it aims to replicate the human brain’s efficiency, adaptability, synaptic plasticity, and learning capabilities, with calculations performed directly in memory.

Neuromorphic computing systems use electronic circuits to simulate neurons and synapses, enabling them to process information in ways that are fundamentally different from traditional computers. This parallel processing of data enables enhanced performance and energy efficiency.
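A common building block of such simulated neurons is the leaky integrate-and-fire model: the neuron accumulates input, leaks charge over time, and emits a spike only when a threshold is crossed. The sketch below is illustrative, with made-up constants not tied to any specific neuromorphic chip:

```python
# A minimal leaky integrate-and-fire (LIF) neuron. The leak factor and
# threshold are illustrative values, not taken from real hardware.

def lif_simulate(inputs, leak=0.9, threshold=1.0):
    """Return a spike train (0/1 per step) for a stream of input currents.

    The membrane potential integrates each input, decays by `leak` every
    step, and resets to zero when it crosses `threshold` (a spike fires).
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A steady weak input accumulates until the neuron spikes, periodically:
print(lif_simulate([0.4] * 6))  # [0, 0, 1, 0, 0, 1]
```

Because nothing happens between spikes, computation is driven by events rather than a fixed clock, which is the source of the energy savings described above.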

This next generation of computing is crucial because it also enriches our understanding of the brain and cognition, while enabling further innovation towards ultra-low-power cognitive computing systems. In the future, low-power devices (e.g. smartphones, IoT sensors) will very likely be able to run powerful AI models, enabling edge computing and gradually reducing current dependencies on cloud resources.

Neuromorphic computing systems can learn from their environment, adapt to changes, and use event-driven processing, improving their performance over time through mechanisms such as synaptic plasticity. This ability to learn and adapt enables more sophisticated, capable, adaptive, and contextual artificial intelligence systems, with significant advances in pattern recognition, autonomous decision-making, and real-time event processing, especially when conditions become unpredictable or variable.
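One widely studied form of the synaptic plasticity mentioned above is spike-timing-dependent plasticity (STDP): a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, and weakens otherwise. The toy sketch below uses illustrative time constants and learning rates:

```python
# A toy STDP rule: the weight change depends on the timing difference
# between pre- and postsynaptic spikes. Constants are illustrative.

import math

def stdp_update(weight, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Return the updated weight for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:      # pre fires before post: potentiation (strengthen)
        weight += a_plus * math.exp(-dt / tau)
    elif dt < 0:    # post fires before pre: depression (weaken)
        weight -= a_minus * math.exp(dt / tau)
    return max(0.0, weight)

w = 0.5
w = stdp_update(w, dt=5.0)   # causal pairing strengthens the synapse
print(round(w, 3))           # 0.578
```

Repeated causal pairings gradually reinforce useful connections, which is how such systems can improve with experience without explicit retraining.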

When brains meet machines: How neuromorphic computing could reshape industries

As an emerging technology, neuromorphic computing is considered a critical enabler of potential business applications and use cases across various industries. In the era of artificial intelligence and machine learning, neuromorphic computing can contribute to optimal performance of deep learning tasks, allowing for the deployment of computer vision and sophisticated natural language processing capabilities. Such tasks require processing vast amounts of data at high speed and analysing complex data in real time for more informed decision-making.

Within financial services and capital markets, indicative neuromorphic use cases include the analysis and processing of large, complex financial datasets; fraud detection, with neuromorphic-based anomaly detection systems for transaction patterns, hidden correlations and inconsistencies, and customer data analysis; risk assessment and evaluation, enhanced by combining neuromorphic computing with machine learning for greater precision; optimisation of trading methodologies through potential synergies between intelligent neuromorphic computing and advanced machine learning algorithms, especially in times of financial market volatility; and real-time trading decision-making and market anomaly detection, all contributing to financial security and encouraging sustainable practices. Integrating intelligent neuromorphic computing capabilities with machine learning algorithms can potentially yield robust systems for processing and analysing complex, vast patterns in data, ensuring enhanced precision, scalability, flexibility of financial applications, and delivery of highly personalised services (e.g. investment advice, loan products, financial planning), towards achieving customer satisfaction and loyalty.
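The event-driven idea behind spiking anomaly detection on transaction streams can be illustrated with a conventional (non-neuromorphic) sketch: each transaction produces an "event" only when it deviates sharply from recent history. The window size and threshold below are hypothetical:

```python
# Illustrative event-driven anomaly flagging on a transaction stream.
# A transaction emits an event only when it deviates more than k standard
# deviations from the trailing window. Parameters are hypothetical.

from collections import deque
from statistics import mean, stdev

def anomaly_events(amounts, window=5, k=3.0):
    """Return indices of transactions deviating more than k std devs from the trailing window."""
    history = deque(maxlen=window)
    flagged = []
    for i, amount in enumerate(amounts):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(amount - mu) > k * sigma:
                flagged.append(i)  # emit an event only for outliers
        history.append(amount)
    return flagged

# Routine card spend with one sudden large transfer:
print(anomaly_events([20, 22, 19, 21, 20, 5000, 21]))  # [5]
```

A neuromorphic implementation would pursue the same sparse, event-only behaviour in hardware, spending energy only on the transactions that actually warrant attention.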

Other indicative neuromorphic computing applications include smart vision sensors, speech and image processing, myoelectric prosthetic limbs and their control, wearable healthcare systems and computational electronic skin (e-skin), gesture-control applications (smart home devices, offices, factories), autonomous and near-human touch-sensitive robots, self-driving vehicles, and drones.

All these applications show that machines require greater adaptability to perform complex tasks, processing sensor information in real time and making autonomous decisions by continuously learning from their environments. In the future, users will be able to interact with computers in more human-like, immersive ways, including virtual and augmented reality.

Shaping the future: Why business leaders need to pay attention to neuromorphic computing

Discussing neuromorphic computing in a business context goes beyond how computers are designed and used. It is about acknowledging and understanding that our ongoing relationship with technology is being reshaped and redefined, especially in light of the energy-related and physical limitations of traditional computing technologies.

Business leaders need to start thinking strategically about the applications and anticipated transformative impact of neuromorphic computing, appreciating the upcoming changes, the opportunities for multidisciplinary innovation, iterative small-scale experimentation, and the mobilisation of strategic preparedness and investment. Emerging applications, such as brain-machine interfaces, bioinformatics, and neuromorphic chips in IoT devices, pave the way towards a new era of computational innovation. Businesses need to ensure that they remain fit for purpose, maintaining a transient advantage in the future world of computing while reducing their environmental footprints in alignment with their sustainability goals and decarbonisation/net-zero endeavours.

It is critical for business leaders to stay informed about these technological advancements, invest in dedicated neuromorphic computing R&D, and consider strategic partnerships with tech companies, neuromorphic chip developers (e.g. Loihi by Intel Labs, TrueNorth by IBM, Akida by BrainChip), universities, and research institutions, in order to access cutting-edge neuromorphic computing capabilities, hardware, algorithms, and talent.

Boards of directors need to start asking the right questions and be prepared to lead the way as the technology matures, thinking through important ethical, societal, economic, and governance questions, especially in relation to data governance, privacy (gathering, storing, and processing private information), and bias mitigation (ensuring accuracy of data analysis, explainability of outcomes and recommendations, and detection of bias in machine learning algorithms and training data).

Neuromorphic computing allows us to get a glimpse into the future where machines can think and learn in ways that are more human-like compared to traditional computers, offering challenging but promising avenues towards developing the next generation of intelligent large-scale, accessible, reliable, and energy-efficient computing systems.

Dimitrios Salampasis, Assoc. Professor, Emerging Technologies & FinTech at Swinburne University of Technology
