The curse of dimensionality poses a significant challenge to modern multilayer perceptron-based architectures, often causing performance stagnation and scalability issues. Addressing this limitation typically requires vast amounts of data. In contrast, Kolmogorov-Arnold Networks have gained attention in the machine learning community for their bold claim of being unaffected by the curse of dimensionality. This paper explores the Kolmogorov-Arnold representation theorem and the mathematical principles underlying Kolmogorov-Arnold Networks, which enable their scalability and high performance in high-dimensional spaces. We begin with an introduction to the foundational concepts necessary to understand Kolmogorov-Arnold Networks, including interpolation methods and B-splines (basis splines), which form their mathematical backbone. We then give an overview of perceptron architectures and the universal approximation theorem, a key principle guiding modern machine learning, followed by the Kolmogorov-Arnold representation theorem, including its mathematical formulation and its implications for overcoming dimensionality challenges. Next, we review the architecture and error-scaling properties of Kolmogorov-Arnold Networks, demonstrating how these networks achieve true freedom from the curse of dimensionality. Finally, we discuss the practical viability of Kolmogorov-Arnold Networks, highlighting scenarios where their unique capabilities position them to excel in real-world applications. This review aims to offer insights into Kolmogorov-Arnold Networks' potential to redefine scalability and performance in high-dimensional learning tasks.
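For reference, the Kolmogorov-Arnold representation theorem that motivates these networks can be stated as follows; the Φ_q / φ_{q,p} notation is a standard convention rather than anything specific to this paper.

```latex
% Kolmogorov-Arnold representation theorem (standard statement):
% every continuous f : [0,1]^n -> R can be written exactly as
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right)
% where \Phi_q : R -> R and \varphi_{q,p} : [0,1] -> R are continuous
% univariate functions, so the multivariate function is composed entirely
% of univariate functions and addition.
```

Kolmogorov-Arnold Networks generalize this two-layer composition by replacing the fixed inner and outer functions with learnable univariate functions, typically parameterized as B-splines.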


Background: Environmental heat exposure is a public health concern. The impacts of environmental heat on mortality and morbidity at the population scale are well documented, but little is known about specific exposures that individuals experience.
Objectives: The first objective of this work was to catalyze discussion of the role of personal heat exposure information in research and risk assessment. The second objective was to provide guidance regarding the operationalization of personal heat exposure research methods.
Discussion: We define personal heat exposure as realized contact between a person and an indoor or outdoor environment that poses a risk of increases in body core temperature and/or perceived discomfort. Personal heat exposure can be measured directly with wearable monitors or estimated indirectly through the combination of time–activity and meteorological data sets. Complementary information to understand individual-scale drivers of behavior, susceptibility, and health and comfort outcomes can be collected from additional monitors, surveys, interviews, ethnographic approaches, and additional social and health data sets. Personal exposure research can help reveal the extent of exposure misclassification that occurs when individual exposure to heat is estimated using ambient temperature measured at fixed sites and can provide insights for epidemiological risk assessment concerning extreme heat.
Conclusions: Personal heat exposure research provides more valid and precise insights into how often people encounter heat conditions and when, where, to whom, and why these encounters occur. Published literature on personal heat exposure is limited to date, but existing studies point to opportunities to inform public health practice regarding extreme heat, particularly where fine-scale precision is needed to reduce health consequences of heat exposure.
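As a concrete illustration of the indirect-estimation approach described in the Discussion above, the sketch below joins a hypothetical time-activity diary with hourly fixed-site temperatures and compares exposure hours with and without a microenvironment adjustment. The column names, the 32 °C threshold, and the fixed indoor offsets are illustrative assumptions, not values from the paper.

```python
import pandas as pd

# Hypothetical time-activity diary: where the person was, hour by hour.
activity = pd.DataFrame({
    "hour": pd.date_range("2023-07-01", periods=6, freq="h"),
    "location": ["home", "commute", "work", "work", "outdoor", "home"],
})

# Hypothetical hourly ambient temperatures from a fixed monitoring site (deg C).
weather = pd.DataFrame({
    "hour": pd.date_range("2023-07-01", periods=6, freq="h"),
    "ambient_temp_c": [29.0, 31.5, 33.0, 34.2, 35.0, 33.5],
})

# Illustrative microenvironment adjustment: assume indoor settings are cooler
# than the fixed-site reading by a constant offset (a strong simplification).
INDOOR_OFFSET_C = {"home": -6.0, "work": -8.0, "commute": 0.0, "outdoor": 0.0}

exposure = activity.merge(weather, on="hour")
exposure["personal_temp_c"] = (
    exposure["ambient_temp_c"] + exposure["location"].map(INDOOR_OFFSET_C)
)

# Compare exposure estimated from ambient readings alone with the
# activity-adjusted estimate, counting hours above an illustrative 32 deg C threshold.
THRESHOLD_C = 32.0
ambient_hours = (exposure["ambient_temp_c"] > THRESHOLD_C).sum()
personal_hours = (exposure["personal_temp_c"] > THRESHOLD_C).sum()
print(f"hours > {THRESHOLD_C} C: ambient-only = {ambient_hours}, adjusted = {personal_hours}")
```

The gap between the two counts is one simple way to visualize the exposure misclassification that fixed-site estimates can introduce.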

A general consensus on the concept of rainfall intermittency has not yet been reached, and intermittency is often attributed to different aspects of rainfall variability, including the fragmentation of the rainfall support (i.e., the alternation of wet and dry intervals) and the strength of intensity fluctuations and bursts. To explore these different aspects, a systematic analysis of rainfall intermittency properties in the time domain is presented using high-resolution (1-min) data recorded by a network of 201 tipping-bucket gauges covering the entire island of Sardinia (Italy). Four techniques (spectral analysis, scale-invariance analysis, and the computation of clustering and intermittency exponents) are applied to quantify the contributions of the alternation of dry and wet intervals (i.e., the rainfall support fragmentation) and of the fluctuations of intensity amplitudes to the overall intermittency of the rainfall process. The presence of three scaling regimes between 1 min and ~45 days is first demonstrated. In accordance with past studies, these regimes can be associated with a range dominated by single storms, a regime typical of frontal systems, and a transition zone.
The positions of the breaking points separating these regimes change with the applied technique, suggesting that different tools explain different aspects of rainfall variability. Results indicate that the intermittency properties of rainfall support are fairly similar across the island, while metrics related to rainfall intensity fluctuations are characterized by significant spatial variability, implying that the local climate has a significant effect on the amplitude of rainfall fluctuations and minimal influence on the process of rainfall occurrence. In addition, for each analysis tool, evidence is shown of spatial patterns of the scaling exponents computed in the range of frontal systems. These patterns resemble the main pluviometric regimes observed on the island and, thus, can be associated with the corresponding synoptic circulation patterns. Last but not least, we demonstrate how the methodology adopted to sample the rainfall signal from the records of the tipping instants can significantly affect the intermittency analysis, especially at smaller scales. The multifractal scale invariance analysis is the only tool that is insensitive to the sampling approach. Results of this work may be useful to improve the calibration of stochastic algorithms used to downscale coarse rainfall predictions of climate and weather forecasting models, as well as the parameterization of intensity-duration-frequency curves, adopted for land planning and design of civil infrastructures.
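A minimal sketch of the spectral side of such an analysis is given below: it estimates the power spectral density of a 1-min rainfall intensity series and fits log-log slopes over two illustrative frequency bands to look for a break between scaling regimes. The synthetic series, the band limits, and the use of Welch's method are assumptions for illustration, not the gauges, scaling ranges, or estimators used in the study.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

# Synthetic stand-in for a 1-min rainfall intensity record (one year of minutes):
# intermittent (mostly zero) with wet minutes of random intensity.
n = 365 * 24 * 60
wet = rng.random(n) < 0.05
rain = np.where(wet, rng.exponential(scale=0.2, size=n), 0.0)  # mm/min

# Power spectral density; frequency is in cycles per minute.
freq, psd = welch(rain, fs=1.0, nperseg=2**14)
freq, psd = freq[1:], psd[1:]  # drop the zero frequency before taking logs

def spectral_slope(f_lo, f_hi):
    """Least-squares slope of log10(PSD) vs log10(frequency) within a band."""
    band = (freq >= f_lo) & (freq <= f_hi)
    slope, _ = np.polyfit(np.log10(freq[band]), np.log10(psd[band]), 1)
    return slope

# Illustrative bands on either side of a candidate breaking point.
print("high-frequency slope :", spectral_slope(1e-2, 5e-1))
print("low-frequency slope  :", spectral_slope(1e-4, 1e-2))
```

A change in slope between the two bands is the kind of signature used to place the breaking points separating scaling regimes.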

The human hand comprises complex sensorimotor functions that can be impaired by neurological diseases and traumatic injuries. Effective rehabilitation can bring the impaired hand back to a functional state because of the plasticity of the central nervous system, which can relearn and remodel the lost synapses in the brain. Current rehabilitation therapies that focus on strengthening motor skills, such as grasping, employ multiple objects of varying stiffness so that affected persons can experience a wide range of strength training. These devices have a limited range of stiffness due to the rigid mechanisms employed in their variable stiffness actuators. This paper presents a novel soft robotic haptic device for neuromuscular rehabilitation of the hand, which is designed to offer adjustable stiffness and can be utilized in both clinical and home settings. The device eliminates the need for multiple objects by employing a pneumatic soft structure made with highly compliant materials that acts as the actuator of the haptic interface. It is made with interchangeable sleeves that can be customized to include materials of varying stiffness to increase the upper limit of the stiffness range. The device is fabricated using existing 3D printing technologies and polymer molding and casting techniques, thus keeping the cost low and throughput high. The haptic interface is linked to either an open-loop system that allows pressure to be increased during use or a closed-loop system that regulates pressure according to the stiffness the user specifies. A preliminary evaluation is performed to characterize the effective controllable region of variance in stiffness. It was found that the region of controllable stiffness was between points 3 and 7, where the stiffness appeared to plateau with each increase in pressure. The two control systems are tested to derive relationships between internal pressure, grasping force exerted on the surface, and displacement using multiple probing points on the haptic device. An additional quantitative evaluation is performed with study participants and juxtaposed with a qualitative analysis to ensure adequate perception of variations in compliance. The qualitative evaluation showed that more than 60% of the trials resulted in the correct perception of stiffness in the haptic device.
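To make the closed-loop mode concrete, the sketch below implements a simple proportional pressure regulator that drives the measured internal pressure toward a setpoint derived from a user-specified stiffness level. The linear stiffness-to-pressure map, the gain, and the sensor/valve functions are hypothetical placeholders, not the controller reported in the paper.

```python
def stiffness_to_pressure(stiffness_level: float) -> float:
    """Hypothetical linear map from a user stiffness setting (0-10) to a
    target internal pressure in kPa; the real device's calibration differs."""
    return 20.0 + 8.0 * stiffness_level

def regulate_pressure(read_pressure, set_valve, stiffness_level, kp=0.5, steps=100):
    """Proportional (P-only) pressure regulation loop.

    read_pressure(): returns the measured internal pressure in kPa.
    set_valve(cmd):  positive cmd inflates, negative cmd vents (placeholder API).
    """
    target = stiffness_to_pressure(stiffness_level)
    for _ in range(steps):
        error = target - read_pressure()
        set_valve(kp * error)  # valve command proportional to the pressure error
    return target

# Minimal usage example with a toy first-order plant standing in for the actuator.
if __name__ == "__main__":
    state = {"p": 0.0}
    def read_pressure():
        return state["p"]
    def set_valve(cmd):
        state["p"] += 0.2 * cmd  # toy response: pressure follows the valve command
    target = regulate_pressure(read_pressure, set_valve, stiffness_level=5)
    print(f"target {target:.1f} kPa, reached {read_pressure():.1f} kPa")
```

An open-loop mode would simply command inflation without reading pressure back, which is why it allows pressure to grow during use rather than holding a stiffness setpoint.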

Urban transportation systems are vulnerable to congestion, accidents, weather, special events, and other costly delays. Whereas typical policy responses prioritize reduction of delays under normal conditions to improve the efficiency of urban road systems, analytic support for investments that improve resilience (defined as system recovery from additional disruptions) is still scarce. In this effort, we represent paved roads as a transportation network by mapping intersections to nodes and road segments between the intersections to links. We built road networks for 40 of the urban areas defined by the U.S. Census Bureau. We developed and calibrated a model to evaluate traffic delays using link loads. The loads may be regarded as traffic-based centrality measures, estimating the number of individuals using the corresponding road segments. Efficiency was estimated as the average annual delay per peak-period auto commuter, and modeled results were found to be close to observed data, with the notable exception of New York City. Resilience was estimated as the change in efficiency resulting from roadway disruptions and was found to vary between cities, with increased delays due to a 5% random loss of road linkages ranging from 9.5% in Los Angeles to 56.0% in San Francisco. The results demonstrate that many urban road systems that operate inefficiently under normal conditions are nevertheless resilient to disruption, whereas some more efficient cities are more fragile. The implication is that resilience, not just efficiency, should be considered explicitly in roadway project selection and can justify investments related to disasters and other disruptions.
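The sketch below illustrates the network representation and the resilience comparison on a toy grid: edge betweenness centrality stands in for the traffic-based link loads, a standard network-efficiency measure stands in for the delay-based metric, and a random 5% of links is removed to mimic disruption. The grid graph, the centrality proxy, and the efficiency formula are illustrative assumptions, not the calibrated delay model of the paper.

```python
import random
import networkx as nx

random.seed(42)

# Toy road network: intersections as nodes, road segments as links.
G = nx.grid_2d_graph(20, 20)

# Traffic-based proxy for link loads: edge betweenness centrality approximates
# how heavily each road segment is used by shortest-path trips.
loads = nx.edge_betweenness_centrality(G)
busiest = max(loads, key=loads.get)
print("most loaded segment:", busiest, f"load = {loads[busiest]:.4f}")

# Network efficiency before disruption (average inverse shortest-path length).
eff_before = nx.global_efficiency(G)

# Disrupt the system: remove a random 5% of road segments.
edges = list(G.edges())
removed = random.sample(edges, k=int(0.05 * len(edges)))
H = G.copy()
H.remove_edges_from(removed)

eff_after = nx.global_efficiency(H)
drop_pct = 100 * (eff_before - eff_after) / eff_before
print(f"efficiency before = {eff_before:.4f}, after = {eff_after:.4f} "
      f"({drop_pct:.1f}% loss from removing {len(removed)} segments)")
```

Running the removal step many times and averaging the efficiency loss gives a simple resilience score that can be compared across networks, analogous in spirit to the city-to-city comparison reported above.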

Delays are a major cause for concern in the construction industry in Saudi Arabia. This paper identifies the main causes of delay in infrastructure projects in Mecca, Saudi Arabia, and compares these with projects around the country and in other Gulf countries. Data were obtained from 49 infrastructure projects undertaken by the owner and analyzed quantitatively to understand the severity and causes of delay. Ten risk factors were identified and grouped into four categories. The average delay in infrastructure projects in Mecca was found to be 39%. The most severe cause of delay was found to be the land acquisition factor, which highlights the critical land ownership and acquisition issues prevailing in the city. Additional factors contributing to delay include contractors' lack of expertise, re-designing, and haphazard underground utilities (line services). It is concluded that the majority of project delays were caused by the owner's side rather than by contractors, consultants, and other project stakeholders. This finding matches the research findings reported in the Gulf Countries Construction (GCC) industry literature. This study fills an important practice and research gap for improving efficiency in delivering infrastructure projects in the holy city of Mecca and in the Gulf countries at large.
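As a small illustration of the delay metric implied above, the sketch below computes each project's percentage delay from planned and actual durations and averages across projects; the sample durations are invented for illustration and do not come from the 49-project data set.

```python
# Hypothetical (planned, actual) durations in months for a handful of projects;
# the study's 49-project data set is not reproduced here.
projects = [(18, 25), (24, 33), (12, 17), (30, 42), (20, 28)]

def delay_pct(planned: float, actual: float) -> float:
    """Percentage delay relative to the planned duration."""
    return 100.0 * (actual - planned) / planned

delays = [delay_pct(p, a) for p, a in projects]
average_delay = sum(delays) / len(delays)
print("per-project delays:", [f"{d:.0f}%" for d in delays])
print(f"average delay across projects: {average_delay:.1f}%")
```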