Researchers have developed a novel 6D pose dataset designed to improve robotic grasping accuracy and adaptability in industrial settings. The dataset, which integrates RGB and depth images, demonstrates significant potential to enhance the precision of robots performing pick-and-place tasks in dynamic environments.
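A minimal sketch of what such a sample might look like, assuming the common convention that a 6D pose is a 3D rotation plus a 3D translation paired with aligned RGB and depth images; the field names and placeholder values below are illustrative, not the dataset's actual schema.

```python
# Illustrative 6D-pose sample structure (assumed, not the dataset's real format).
from dataclasses import dataclass
import numpy as np

@dataclass
class GraspSample:
    rgb: np.ndarray          # H x W x 3 color image
    depth: np.ndarray        # H x W depth map in meters
    rotation: np.ndarray     # 3 x 3 rotation matrix (object -> camera frame)
    translation: np.ndarray  # 3-vector translation in meters

    def pose_matrix(self) -> np.ndarray:
        """Return the full 4x4 homogeneous transform for the object pose."""
        T = np.eye(4)
        T[:3, :3] = self.rotation
        T[:3, 3] = self.translation
        return T

# Example with placeholder data
sample = GraspSample(
    rgb=np.zeros((480, 640, 3), dtype=np.uint8),
    depth=np.zeros((480, 640), dtype=np.float32),
    rotation=np.eye(3),
    translation=np.array([0.1, 0.0, 0.5]),
)
print(sample.pose_matrix())
```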
Are humans or machines better at recognizing speech? A new study shows that in noisy conditions, current automatic speech recognition (ASR) systems achieve remarkable accuracy and sometimes even surpass human performance. However, the systems must be trained on vastly more data, while humans acquire comparable skills in less time.
A new initiative is challenging the conversation around the direction of artificial intelligence (AI). It charges that the current trajectory is inherently biased against non-Western modes of thinking about intelligence -- especially those originating from Indigenous cultures. Abundant Intelligences is an international, multi-institutional and interdisciplinary program that seeks to rethink how we conceive of AI. Its driving concept is the incorporation of Indigenous knowledge systems to create an inclusive, robust concept of intelligence and intelligent action, and to embed that concept into existing and future technologies.
Imagine a future where your phone, computer or even a tiny wearable device can think and learn like the human brain -- processing information faster, smarter and using less energy. A breakthrough approach brings this vision closer to reality by electrically 'twisting' a single nanoscale ferroelectric domain wall.
Facing high employee turnover and an aging population, nursing homes have increasingly turned to robots to complete a variety of care tasks, but few researchers have explored how these technologies impact workers and the quality of care. A new study on the future of work finds that robot use is associated with increased employment and employee retention, improved productivity and a higher quality of care.
Researchers have harnessed artificial intelligence to take a key step toward slashing the time and cost of designing new wireless chips and discovering new functionalities to meet expanding demands for better wireless speed and performance.
Bio-inspired wind sensing using strain sensors on flexible wings could revolutionize flight-control strategies for flapping-wing robots. Researchers have developed a method to detect wind direction with 99% accuracy using seven strain gauges on the flapping wing and a convolutional neural network model. This breakthrough, inspired by natural strain receptors in birds and insects, opens up new possibilities for improving the control and adaptability of flapping-wing aerial robots in varying wind conditions.
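A hedged sketch of the kind of model described, not the authors' code: a small 1D convolutional network that maps a window of 7-channel strain-gauge readings to a wind-direction class. The channel count (seven) comes from the article; the window length, layer sizes and number of direction classes are assumptions.

```python
# Toy 1D CNN for wind-direction classification from 7 strain-gauge channels.
import torch
import torch.nn as nn

class WindDirectionCNN(nn.Module):
    def __init__(self, n_channels=7, n_directions=8, window=200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),     # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_directions)

    def forward(self, x):                # x: (batch, 7, window)
        return self.classifier(self.features(x).squeeze(-1))

model = WindDirectionCNN()
dummy = torch.randn(4, 7, 200)           # four windows of strain data
print(model(dummy).shape)                # torch.Size([4, 8]) direction logits
```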
Artificial intelligence has the potential to improve the analysis of medical image data. For example, algorithms based on deep learning can determine the location and size of tumors. These are among the findings of autoPET, an international competition in medical image analysis, whose seven best teams report on how algorithms can detect tumor lesions in positron emission tomography (PET) and computed tomography (CT).
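As a rough illustration of the task setup, not any autoPET team's model: lesion detection is typically cast as voxel-wise segmentation of co-registered PET and CT volumes stacked as two input channels. The tiny network below only demonstrates the input and output shapes; real entries use far deeper 3D architectures.

```python
# Minimal PET/CT segmentation sketch (illustrative shapes only).
import torch
import torch.nn as nn

class TinyLesionSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),   # 2 channels: PET, CT
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),              # per-voxel lesion logit
        )

    def forward(self, x):            # x: (batch, 2, depth, height, width)
        return self.net(x)

pet_ct = torch.randn(1, 2, 32, 64, 64)        # one small PET/CT patch
lesion_logits = TinyLesionSegmenter()(pet_ct)
print(lesion_logits.shape)                    # torch.Size([1, 1, 32, 64, 64])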
Even highly realistic androids can cause unease when their facial expressions lack emotional consistency. Traditionally, a 'patchwork method' has been used for facial movements, but it comes with practical limitations. A team developed a new technology using 'waveform movements' to create real-time, complex expressions without unnatural transitions. This system reflects internal states, enhancing emotional communication between robots and humans, potentially making androids feel more humanlike.
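A heavily hedged sketch of the general idea rather than the team's implementation: continuous waveforms, modulated by an internal state, are superimposed to drive facial actuators smoothly instead of switching between pre-recorded expression patches. The frequencies, amplitudes and the "arousal" mapping below are illustrative assumptions.

```python
# Overlaying internal-state-modulated waveforms for smooth facial motion (assumed parameters).
import numpy as np

def facial_actuator_command(t, arousal):
    """Blend a slow 'breathing' waveform with a faster component scaled by arousal (0..1)."""
    breathing = 0.4 * np.sin(2 * np.pi * 0.25 * t)            # slow baseline motion
    flutter   = 0.1 * arousal * np.sin(2 * np.pi * 2.0 * t)   # faster motion when aroused
    return breathing + flutter

t = np.linspace(0, 8, 400)            # 8 seconds of motion
calm    = facial_actuator_command(t, arousal=0.1)
excited = facial_actuator_command(t, arousal=0.9)
print(calm[:3], excited[:3])
```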
Researchers developed a laser-based artificial neuron that fully emulates the functions, dynamics and information processing of a biological graded neuron, which could lead to new breakthroughs in advanced computing. With a processing speed a billion times faster than its biological counterpart, the chip-based laser neuron could help advance AI tasks such as pattern recognition and sequence prediction.
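To clarify what "graded" means here, a conceptual sketch and not the photonic device itself: a graded neuron produces a continuous analog response rather than all-or-none spikes, and a common abstraction is a leaky integrator whose state is passed through a saturating nonlinearity. Parameters are illustrative.

```python
# Leaky-integrator model of a graded (non-spiking) neuron, for illustration only.
import numpy as np

def graded_neuron_response(inputs, tau=5.0, dt=1.0):
    """Leaky integration of an input signal with a saturating, continuous output."""
    v, outputs = 0.0, []
    for x in inputs:
        v += dt / tau * (-v + x)       # leaky membrane dynamics
        outputs.append(np.tanh(v))     # graded output, no spike threshold
    return np.array(outputs)

stimulus = np.concatenate([np.zeros(20), np.ones(40), np.zeros(20)])  # step input
print(graded_neuron_response(stimulus)[::10])
```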
Scientists have developed swarms of tiny magnetic robots that work together like ants to achieve Herculean feats, including traversing and picking up objects many times their size. The findings suggest that these microrobot swarms -- operating under a rotating magnetic field -- could be used to take on difficult tasks in challenging environments that individual robots would struggle to handle, such as offering a minimally invasive treatment for clogged arteries and precisely guiding organisms.
Reinforcement learning, an artificial intelligence approach, has the potential to guide physicians in designing sequential treatment strategies for better patient outcomes, but it requires significant improvements before it can be applied in clinical settings, a new study finds.
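A minimal sketch of the underlying idea, not the study's method: tabular Q-learning on a toy "treatment" problem in which states are coarse patient conditions and actions are candidate treatments. The states, actions, rewards and transition probabilities below are invented purely for illustration.

```python
# Toy Q-learning for a sequential-treatment decision problem (all dynamics invented).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 2             # e.g. {stable, worsening, critical} x {treatment A, B}
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def simulate_step(state, action):
    """Toy environment: action 0 tends to improve the condition, action 1 is riskier."""
    improve_prob = 0.7 if action == 0 else 0.4
    next_state = max(state - 1, 0) if rng.random() < improve_prob else min(state + 1, n_states - 1)
    reward = 1.0 if next_state == 0 else -0.1
    return next_state, reward

for episode in range(500):
    state = int(rng.integers(n_states))
    for _ in range(10):
        action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward = simulate_step(state, action)
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q)   # learned action values per patient condition
```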
Infertility affects an estimated 186 million people worldwide, with fallopian tube obstruction contributing to 11%-67% of female infertility cases. Researchers have developed an innovative solution using a magnetically driven robotic microscrew to treat fallopian tube blockages. The microrobot is made from nonmagnetic photosensitive resin, coated with a thin iron layer to give it magnetic properties. By applying an external magnetic field, the robot rotates, generating translational motion that enables it to navigate through a glass channel simulating a fallopian tube.
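A back-of-the-envelope illustration of how rotation becomes translation, under an idealized no-slip assumption that is not taken from the paper: a rotating screw advances roughly one thread pitch per revolution, so forward speed is pitch times rotation frequency. The numbers below are invented.

```python
# Idealized screw kinematics: translational speed = pitch x rotation frequency (assumed values).
pitch_um = 200.0        # assumed thread pitch in micrometers
rotation_hz = 5.0       # assumed rotation frequency of the external magnetic field
speed_um_per_s = pitch_um * rotation_hz
print(f"Ideal forward speed: {speed_um_per_s:.0f} um/s")   # 1000 um/s = 1 mm/s
```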
A first-of-its-kind study uses computer vision to recognize American Sign Language (ASL) alphabet gestures. Researchers developed a custom dataset of 29,820 static images of ASL hand gestures. Each image was annotated with 21 key landmarks on the hand, providing detailed spatial information about its structure and position. By combining MediaPipe with a YOLOv8 deep learning model they trained, and fine-tuning hyperparameters for the best accuracy, the researchers arrived at a groundbreaking approach that has not been explored in previous research.
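A sketch of the landmark-extraction stage only, not the authors' full pipeline: MediaPipe Hands returns 21 (x, y, z) landmarks per detected hand, which is the kind of spatial annotation described for the dataset. The downstream YOLOv8 classification step is omitted, and the image path is a placeholder.

```python
# Extract 21 hand landmarks with MediaPipe Hands (placeholder image path).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

image = cv2.imread("asl_letter_sample.jpg")                  # placeholder path
results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    landmarks = results.multi_hand_landmarks[0].landmark     # 21 entries
    coords = [(lm.x, lm.y, lm.z) for lm in landmarks]        # normalized coordinates
    print(len(coords), coords[0])
```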
Pretrained large-scale AI models need to 'forget' specific information for privacy and computational efficiency, but no methods exist for doing so in black-box vision-language models, where internal details are inaccessible. Now, researchers have addressed this issue with a strategy based on latent context sharing, successfully getting an image classifier to forget multiple classes it was trained on. Their findings could expand the use cases of large-scale AI models while safeguarding end users' privacy.
A rapidly aging population will lead to a shortage of care providers in the future. While robotic technologies are a potential alternative, their widespread use is limited by poor acceptance. In a new study, researchers have taken a user-centric approach to understand the factors that influence willingness to use home-care robots among caregivers and care recipients in Japan, Ireland, and Finland. Users' perspectives can aid the development of home-care robots with better acceptance.
Imagine an artificial intelligence (AI) model that can watch and understand moving images with the subtlety of a human brain. Now, scientists have made this a reality by creating MovieNet: an innovative AI that processes videos much like how our brains interpret real-life scenes as they unfold over time.
Researchers have created the smallest walking robot yet. Its mission: to be tiny enough to interact with waves of visible light and still move independently, so that it can maneuver to specific locations -- in a tissue sample, for instance -- to take images and measure forces at the scale of some of the body's smallest structures.
Humans get a real buzz from the virtual worlds of gaming and augmented reality, and now scientists have trialled the use of these new-age technologies on small animals, testing the reactions of tiny hoverflies and even crabs. In a bid to comprehend the aerodynamic powers of flying insects and other little-understood animal behaviors, the study offers new perspectives on how invertebrates respond to, interact with and navigate virtual 'worlds' created by advanced entertainment technology.
A new article examines the convergence of physics, chemistry, and AI, highlighted by recent Nobel Prizes. It traces the historical development of neural networks, emphasizing the role of interdisciplinary research in advancing AI. The authors advocate for nurturing AI-enabled polymaths to bridge the gap between theoretical advancements and practical applications, driving progress toward artificial general intelligence.
The genome has space for only a small fraction of the information needed to control complex behaviors. So then how, for example, does a newborn sea turtle instinctually know to follow the moonlight? Neuroscientists have devised a potential explanation for this age-old paradox. Their ideas should lead to faster, more evolved forms of artificial intelligence.
Physical reservoir computing (PRC) utilizing synaptic devices shows significant promise for edge AI. Researchers from the Tokyo University of Science have introduced a novel self-powered device, based on a dye-sensitized solar cell and inspired by the eye's afterimage phenomenon, that mimics human synaptic behavior for efficient edge AI processing. The device's time constants can be controlled via light intensity, helping it achieve high performance on time-series data processing and motion recognition tasks. This work is a major step toward PRC with multiple time scales.
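A conceptual sketch of the reservoir-computing readout, not the device itself: in PRC the nonlinear dynamics are provided by the hardware, and only a simple linear readout is trained on the recorded reservoir states. The "reservoir" below is a stand-in leaky nonlinear filter with assumed time constants, solving a toy memory task.

```python
# Physical reservoir computing readout, with a software stand-in for the device.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 500)                 # input time series (e.g. light intensity)
target = np.roll(u, 2)                     # toy task: recall the input from 2 steps back

taus = np.array([0.2, 0.5, 0.8, 0.95])     # assumed leak rates ~ device time constants
states = np.zeros((len(u), len(taus)))
for t in range(1, len(u)):
    states[t] = taus * states[t - 1] + (1 - taus) * np.tanh(3 * u[t])  # leaky nonlinear response

readout = Ridge(alpha=1e-3).fit(states[:400], target[:400])  # train only the linear readout
print("test R^2:", readout.score(states[400:], target[400:]))
```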
To record and assess wildlife behavior and environmental conditions in remote locations, the GAIA Initiative developed an artificial intelligence (AI) algorithm that reliably and automatically classifies the behaviors of white-backed vultures from animal tag data. As scavengers, vultures are always on the lookout for the next carcass. With the help of tagged animals and a second AI algorithm, the scientists can now automatically locate carcasses across vast landscapes.
Researchers have developed a robot that identifies different plant species at various stages of growth by 'touching' their leaves with an electrode. The robot can measure properties such as surface texture and water content that cannot be determined using existing visual approaches. The robot identified ten different plant species with an average accuracy of 97.7% and identified leaves of the flowering bauhinia plant with 100% accuracy at various growth stages.
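An illustrative sketch, not the published system: once the electrode has measured leaf properties such as surface texture and water content, species identification reduces to a standard classification problem over those measurements. The feature values and species labels below are invented for demonstration.

```python
# Classify leaf measurements (texture- and moisture-related features) with k-NN; data invented.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Assumed features per leaf contact: [texture-related impedance, water-content index]
X_train = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2)) for c in ([0, 0], [2, 1], [1, 3])])
y_train = np.repeat(["species_A", "species_B", "bauhinia"], 30)    # hypothetical labels

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(clf.predict([[1.1, 2.8]]))    # classify a new leaf measurement
```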