Despite advances in machine vision, processing visual data requires substantial computing resources and energy, limiting deployment in edge devices. Now, researchers from Japan have developed a self-powered artificial synapse that distinguishes colors with high resolution across the visible spectrum, approaching the capabilities of the human eye. The device, which integrates dye-sensitized solar cells, generates its own electricity and can perform complex logic operations without additional circuitry, paving the way for capable computer vision systems integrated into everyday devices.
Large language models (LLMs) -- the advanced AI behind tools like ChatGPT -- are increasingly integrated into daily life, assisting with tasks such as writing emails, answering questions, and even supporting healthcare decisions. But can these models collaborate with others in the same way humans do? Can they understand social situations, make compromises, or establish trust? A new study reveals that while today's AI is smart, it still has much to learn about social intelligence.
Engineers have developed a real-life Transformer that has the 'brains' to morph in midair, allowing the drone-like robot to smoothly roll away and begin its ground operations without pause. The increased agility and robustness of such robots could be particularly useful for commercial delivery systems and robotic explorers.
Researchers have created a new type of insect cyborg that can navigate autonomously -- without wires, surgery, or stress-inducing electrical shocks. The system uses a small ultraviolet (UV) light helmet to steer cockroaches by taking advantage of their natural tendency to avoid bright light, especially in the UV range. This method not only preserves the insect's sensory organs but also maintains consistent control over time.
Is artificial intelligence (AI) capable of suggesting appropriate behavior in emotionally charged situations? A team put six generative AIs -- including ChatGPT -- to the test using emotional intelligence (EI) assessments typically designed for humans. The outcome: these AIs outperformed average human performance and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management.
Humans no longer have exclusive control over training social robots to interact effectively, thanks to a new study. The study introduces a new simulation method that lets researchers test their social robots without needing human participants, making research faster and scalable.
Researchers found that vision-language models, widely used to analyze medical images, do not understand negation words like 'no' and 'not.' This could cause them to fail unexpectedly when asked to retrieve medical images that contain certain objects but not others.
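The failure mode described above can be illustrated with a toy model. The sketch below is not the researchers' method: it uses a simple bag-of-words similarity in which negation words contribute nothing to the representation, which mimics how a negation-blind model can rank an image *containing* a finding highly for a query that explicitly excludes it. All captions and queries here are invented examples.

```python
from collections import Counter
import math

NEGATION_WORDS = {"no", "not", "without"}

def embed(text, negation_blind=True):
    # Toy bag-of-words "embedding". A negation-blind model effectively
    # drops words like "no", so "no pneumonia" looks like "pneumonia".
    tokens = text.lower().split()
    if negation_blind:
        tokens = [t for t in tokens if t not in NEGATION_WORDS]
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

captions = [
    "chest x-ray showing pneumonia",      # contains the finding
    "clear chest x-ray healthy lungs",    # finding absent
]
query = "chest x-ray with no pneumonia"

# Because "no" is dropped, the query matches the pneumonia image more
# strongly than the healthy one -- the opposite of the user's intent.
scores = [cosine(embed(query), embed(c)) for c in captions]
```

With real retrieval models the mechanism is learned rather than hard-coded, but the observable symptom is the same: queries with and without negation retrieve nearly identical results.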
Researchers have developed a digital laboratory (dLab) system that fully automates the synthesis of thin-film samples and the evaluation of their structural and physical properties. With dLab, the team can autonomously synthesize thin-film samples and measure their material properties, demonstrating advanced automatic and autonomous material synthesis for data- and robot-driven materials science.
Listen to the first notes of an old, beloved song. Can you name that tune? If you can, congratulations -- it's a triumph of your associative memory, in which one piece of information (the first few notes) triggers the memory of the entire pattern (the song), without you actually having to hear the rest of the song again. We use this handy neural mechanism to learn, remember, solve problems and generally navigate our reality.
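The article does not name a specific model, but the classic computational picture of this kind of pattern completion is a Hopfield network: store a pattern in pairwise connection weights, then let a partial or corrupted cue settle onto the full stored pattern. The following is a minimal illustrative sketch, not the system the researchers studied.

```python
def train(patterns):
    # Hebbian learning: w[i][j] grows when units i and j agree
    # across the stored +1/-1 patterns.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=3):
    # Repeatedly pull each unit toward the weighted vote of the
    # others; a partial cue settles onto the nearest stored pattern.
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

song = [1, 1, -1, -1, 1, -1, 1, 1]         # the whole "tune"
cue = song[:5] + [-x for x in song[5:]]    # first "notes" right, rest wrong
w = train([song])
recalled = recall(w, cue)                  # converges back to `song`
```

Hearing the opening notes corresponds to presenting the partial cue; the network dynamics fill in the rest, just as associative memory completes the song.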
Engineers built E-BAR, a mobile robot designed to physically support the elderly and prevent them from falling as they move around their homes. E-BAR acts as a set of robotic handlebars that follows a person from behind, allowing them to walk independently or lean on the robot's arms for support.
What makes people think an AI system is creative? New research shows that it depends on how much they see of the creative act. The findings have implications for how we research and design creative AI systems, and they also raise fundamental questions about how we perceive creativity in other people.
An edible robot leverages a combination of biodegradable fuel and surface tension to zip around the water's surface, creating a safe -- and nutritious -- alternative to environmental monitoring devices made from artificial polymers and electronics.
While service robots with male characteristics can be more persuasive when interacting with some women who have a low sense of decision-making power, 'cute' design features -- such as big eyes and raised cheeks -- affect both men and women similarly, according to new research.
Researchers developed a framework to enable decentralized artificial intelligence-based building automation with a focus on privacy. The system enables AI-powered devices like cameras and interfaces to cooperate directly, using a new form of device-to-device communication. In doing so, it eliminates the need for central servers and thus the need for centralized data retention, often seen as a potential security weak point and risk to private data.
Many products in the modern world are in some way fabricated using computer numerical control (CNC) machines, which use computers to automate machine operations in manufacturing. While simple in concept, instructing these machines is often complex in practice. A team of researchers has devised a system that demonstrates how to mitigate some of this complexity.
Humans are better than current AI models at interpreting social interactions and understanding social dynamics in moving scenes. Researchers believe this is because AI neural networks were inspired by the infrastructure of the part of the brain that processes static images, which is different from the area of the brain that processes dynamic social scenes.
A powerful clinical artificial intelligence tool developed by biomedical informatics researchers has demonstrated remarkable accuracy on all three parts of the United States Medical Licensing Exam (Step exams), according to a new article.
Inspired by the movements of a tiny parasitic worm, engineers have created a 5-inch soft robot that can jump as high as a basketball hoop. Their device, a silicone rod with a carbon-fiber spine, can leap 10 feet high even though it doesn't have legs. The researchers made it after watching high-speed video of nematodes pinching themselves into odd shapes to fling themselves forward and backward.
Researchers have developed a new robotic framework powered by artificial intelligence -- called RHyME (Retrieval for Hybrid Imitation under Mismatched Execution) -- that allows robots to learn tasks by watching a single how-to video.
Researchers have developed a new artificial intelligence (AI) technique that brings machine vision closer to how the human brain processes images. Called Lp-Convolution, this method improves the accuracy and efficiency of image recognition systems while reducing the computational burden of existing AI models.
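Based on the description above, the core idea can be sketched as a spatial mask that reshapes a square convolution kernel's receptive field. The code below is an illustrative approximation, not the published implementation: it weights each kernel tap by a p-generalized Gaussian of its offset from the centre, so moderate p gives a round, centre-weighted field (closer to biological receptive fields), while large p spreads weight toward the uniform box window of an ordinary convolution. Parameter names and values are my own.

```python
import math

def lp_mask(size, p, sigma=2.0):
    # Weight each tap of a size x size kernel by
    # exp(-(|dx/sigma|^p + |dy/sigma|^p)), then normalize to sum to 1.
    c = size // 2
    mask = [[math.exp(-((abs(x - c) / sigma) ** p
                        + (abs(y - c) / sigma) ** p))
             for x in range(size)]
            for y in range(size)]
    total = sum(sum(row) for row in mask)
    return [[v / total for v in row] for row in mask]

m2 = lp_mask(5, p=2)    # round, centre-weighted receptive field
m16 = lp_mask(5, p=16)  # nearly uniform box in the interior
```

In an actual network such a mask would multiply the learned kernel weights, biasing which spatial positions each filter attends to without changing the convolution operation itself.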
A groundbreaking open-source computer program uses artificial intelligence to analyze videos of patients with Parkinson's disease and other movement disorders. The tool, called VisionMD, helps doctors more accurately monitor subtle motor changes, improving patient care and advancing clinical research.
It's a game a lot of us played as children -- and maybe even later in life: unspooling measuring tape to see how far it would extend before bending. But to engineers, this game was an inspiration, suggesting that measuring tape could become a great material for a robotic gripper. The grippers would be a particularly good fit for agricultural applications, as their extremities are soft enough to grab fragile fruits and vegetables, researchers wrote. The devices are also low-cost and safe around humans.
American Sign Language (ASL) recognition systems often struggle with accuracy due to similar gestures, poor image quality and inconsistent lighting. To address this, researchers developed a system that translates gestures into text with 98.2% accuracy, operating in real time under varying conditions. Using a standard webcam and advanced tracking, it offers a scalable solution for real-world use, with MediaPipe tracking 21 keypoints on each hand and YOLOv11 classifying ASL letters precisely.
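A typical preprocessing step in keypoint-based pipelines like the one described -- though not necessarily the authors' exact approach -- is to make the 21 hand landmarks invariant to where the hand sits in the frame and how large it appears, before they are passed to a classifier. A minimal sketch (in MediaPipe's hand model, landmark index 0 is the wrist):

```python
import math

def normalize_landmarks(points):
    # Re-centre the 21 (x, y) keypoints on the wrist (index 0) and
    # divide by the largest distance from it, so the same handshape
    # yields the same features anywhere in the frame, at any scale.
    wx, wy = points[0]
    centred = [(x - wx, y - wy) for x, y in points]
    scale = max(math.hypot(x, y) for x, y in centred) or 1.0
    return [(x / scale, y / scale) for x, y in centred]

# Invented example: the same handshape detected at a different
# position and at twice the size normalizes to the same features.
hand = [(0.1 * i, 0.05 * i) for i in range(21)]
shifted = [(x + 0.3, y + 0.2) for x, y in hand]
scaled = [(2 * x, 2 * y) for x, y in hand]

a = normalize_landmarks(hand)
b = normalize_landmarks(shifted)
c = normalize_landmarks(scaled)
```

Feeding normalized features rather than raw pixel coordinates is one reason such systems can stay accurate across camera distances and framings.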