A stretchy electronic skin could equip robots and other devices with the same softness and touch sensitivity as human skin, opening up new possibilities to perform tasks that require a great deal of precision and control of force.
A new algorithm encourages robots to move more randomly so they collect more diverse data for learning. In tests, robots started with no knowledge, then learned and correctly performed tasks within a single attempt. The new model could improve the safety and practicality of self-driving cars, delivery drones and more.
One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across familiar-human, unfamiliar-human, and voice-assistant-directed speech conditions. Analysis of the recordings showed that the speakers made two consistent adjustments when talking to voice technology compared to talking to another person: a slower rate of speech and less pitch variation.
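The two measures the study compared are straightforward to compute from a recording: speech rate as words per second, and pitch variation as the spread of the fundamental-frequency (F0) track. A minimal sketch, assuming those two definitions; the sample numbers below are illustrative, not data from the study:

```python
# Sketch: comparing speech rate and pitch variation across speaking conditions.
# Assumed measures: rate = words per second; variation = std. dev. of F0 (Hz).
# All numbers are hypothetical examples, not values from the paper.

def speech_rate(word_count, duration_s):
    """Words per second for one recording."""
    return word_count / duration_s

def pitch_variation(f0_values):
    """Population standard deviation of voiced-frame F0 estimates (Hz)."""
    mean = sum(f0_values) / len(f0_values)
    return (sum((f - mean) ** 2 for f in f0_values) / len(f0_values)) ** 0.5

# Hypothetical recordings: same utterance, two addressees
friend    = {"words": 52, "dur": 14.0, "f0": [180, 210, 240, 195, 225]}
assistant = {"words": 40, "dur": 14.0, "f0": [190, 200, 205, 195, 198]}

for name, rec in [("friend", friend), ("assistant", assistant)]:
    print(name,
          round(speech_rate(rec["words"], rec["dur"]), 2), "words/s,",
          round(pitch_variation(rec["f0"]), 1), "Hz F0 spread")
```

With these toy values, the assistant-directed recording shows both a slower rate and a flatter pitch contour, matching the pattern the study reports.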
A four-legged robot trained with machine learning has learned to avoid falls by spontaneously switching between walking, trotting, and pronking -- a milestone for roboticists as well as biologists interested in animal locomotion.
The use of pliable soft materials to collaborate with humans and work in disaster areas has drawn much recent attention. However, controlling soft dynamics for practical applications has remained a significant challenge. Researchers developed a method to control pneumatic artificial muscles, which are soft robotic actuators. The rich dynamics of these drive components can be exploited as a computational resource.
Computer vision can be a valuable tool for anyone tasked with analyzing hours of footage because it can speed up the process of identifying individuals. For example, law enforcement may use it to perform a search for individuals with a simple query, such as 'Locate anyone wearing a red scarf over the past 48 hours.'
Robotics engineers have worked for decades and invested many millions of research dollars in attempts to create a robot that can walk or run as well as an animal. And yet, it remains the case that many animals are capable of feats that would be impossible for robots that exist today.
Soft skin coverings and touch sensors have emerged as a promising feature for robots that are both safer and more intuitive for human interaction, but they are expensive and difficult to make. A recent study demonstrates that soft skin pads doubling as sensors made from thermoplastic urethane can be efficiently manufactured using 3D printers.
A team of computer scientists working on two different problems -- how to quickly detect damaged buildings in crisis zones and how to accurately estimate the size of bird flocks -- recently announced an AI framework that can do both. The framework, called DISCount, blends the speed and massive data-crunching power of artificial intelligence with the reliability of human analysis to deliver estimates that quickly pinpoint and count specific features from very large collections of images.
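The core idea described -- fast AI counts made reliable by a small amount of human checking -- can be sketched as a calibration estimator: run the detector over every image, have a person re-count a random sample, and scale the detector's total by the human-to-detector ratio on that sample. A minimal sketch under that assumption (the function names and numbers are illustrative, not the authors' code):

```python
def calibrated_count(detector_counts, human_counts_sample, sample_ids):
    """Scale the detector's total by the human/detector ratio on a verified sample.

    detector_counts:     per-image counts from the AI detector (all images)
    human_counts_sample: human-verified counts for the sampled images
    sample_ids:          indices of the images a person actually re-counted
    """
    detector_total = sum(detector_counts)
    detector_on_sample = sum(detector_counts[i] for i in sample_ids)
    human_on_sample = sum(human_counts_sample)
    if detector_on_sample == 0:
        return detector_total  # no signal to calibrate against
    return detector_total * human_on_sample / detector_on_sample

# Hypothetical data: a detector that slightly over-counts birds per image
detector = [12, 8, 15, 9, 11, 7, 14, 10]
sample_ids = [1, 4, 6]        # images a human re-counted
human = [7, 10, 13]           # true counts for those images
print(calibrated_count(detector, human, sample_ids))
```

The appeal of this shape of estimator is that the expensive human effort grows with the sample size, not the collection size, while the correction it provides applies to every image the detector saw.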
A new technique can more effectively perform a safety check on an AI chatbot. Researchers enabled their model to prompt a chatbot to generate toxic responses, which are used to prevent the chatbot from giving hateful or harmful answers when deployed.
In a bid to restore privacy, researchers have created a new approach to designing cameras that process and scramble visual information before it is digitized so that it becomes obscured to the point of anonymity.
A research team has addressed the long-standing challenge of creating artificial olfactory sensors with arrays of diverse high-performance gas sensors. Their newly developed biomimetic olfactory chips (BOC) are able to integrate nanotube sensor arrays on nanoporous substrates with up to 10,000 individually addressable gas sensors per chip, a configuration that is similar to how olfaction works for humans and other animals.
Umwelt is a new system that enables blind and low-vision users to author accessible, interactive charts representing data in three modalities: visualization, textual description, and sonification.
What would you do if you walked up to a robot with a human-like head and it smiled at you first? You'd likely smile back and perhaps feel the two of you were genuinely interacting. But how does a robot know how to do this? Or, a better question: how does it know how to get you to smile back?
Engineers aim to give robots a bit of common sense when faced with situations that push them off their trained path, so they can self-correct after missteps and carry on with their chores. The team's method connects robot motion data with the common sense knowledge of large language models, or LLMs.
Artificial intelligence can spot COVID-19 in lung ultrasound images much like facial recognition software can spot a face in a crowd, new research shows. The findings boost AI-driven medical diagnostics and bring health care professionals closer to being able to quickly diagnose patients with COVID-19 and other pulmonary diseases with algorithms that comb through ultrasound images to identify signs of disease.
If it walks like a particle, and talks like a particle... it may still not be a particle. A topological soliton is a special type of wave or dislocation which behaves like a particle: it can move around but cannot spread out and disappear like you would expect from, say, a ripple on the surface of a pond. Researchers now demonstrate the atypical behavior of topological solitons in a robotic metamaterial, something which in the future may be used to control how robots move, sense their surroundings and communicate.
Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence (AI). A team has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a 'sister' AI, which in turn performed them.
Neural networks have been powering breakthroughs in artificial intelligence, including the large language models that are now being used in a wide range of applications, from finance to human resources to healthcare. But these networks remain a black box whose inner workings engineers and scientists struggle to understand. Now, a team has given neural networks the equivalent of an X-ray to uncover how they actually learn.
A new safety-check technique can prove with 100 percent accuracy that a planned robot motion will not result in a collision. The method can generate a proof in seconds and does so in a way that can be easily verified by a human.
Researchers applied deep-learning approaches from vehicle routing to streamline planning trajectories for robots in an e-commerce warehouse. Their method breaks the problem down into smaller chunks and then predicts the best chunks to solve with traditional algorithms.
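The decompose-and-predict idea -- break the warehouse problem into smaller chunks, then let a learned model pick which chunk a traditional solver should refine next -- can be sketched in a few lines. Everything here is an illustrative assumption (the chunking scheme, the congestion-based scorer standing in for the learned predictor, and the toy solver), not the authors' implementation:

```python
# Sketch: decompose a large multi-robot planning problem into chunks, score
# each chunk with a (stubbed) learned model, and hand the most promising
# chunk to a classical solver. All names and heuristics are hypothetical.

def make_chunks(robots, chunk_size):
    """Partition the robot set into fixed-size subproblems."""
    return [robots[i:i + chunk_size] for i in range(0, len(robots), chunk_size)]

def predicted_improvement(chunk):
    """Stand-in for the learned scorer: chunks where many robots share few
    grid cells (more congestion) are predicted to improve more if re-solved."""
    occupied = {r["cell"] for r in chunk}
    return len(chunk) - len(occupied)  # crude congestion proxy

def solve_chunk(chunk):
    """Stand-in for a traditional solver: assign robots to distinct cells."""
    for i, r in enumerate(chunk):
        r["cell"] = i
    return chunk

robots = [{"id": k, "cell": k % 3} for k in range(8)]  # hypothetical congestion
chunks = make_chunks(robots, 4)
best = max(chunks, key=predicted_improvement)   # model picks the chunk
solve_chunk(best)                               # solver fixes only that chunk
```

The design point the blurb describes is exactly this division of labor: the neural predictor only has to rank subproblems, which is cheap, while the exact-but-expensive solver is spent only where it is predicted to pay off.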
Ultraviolet-laser processing is a promising technique for developing intricate microstructures, enabling complex alignment of muscle cells, required for building life-like biohybrid actuators. Compared to traditional complex methods, this innovative technique enables easy and quick fabrication of microstructures with intricate patterns for achieving different muscle cell arrangements, paving the way for biohybrid actuators capable of complex, flexible movements.
Scientists introduce what they call 'simultaneous and heterogeneous multithreading' or SHMT. This system doubles computer processing speeds with existing hardware by simultaneously using graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), or digital signal processing units to process information.
In an image, estimating the distance between objects and the camera by using the blur in the image as a clue, also known as depth from focus/defocus, is essential in computer vision. However, model-based methods fail when texture-less surfaces are present, and learning-based methods require the same camera settings during training and testing. Now, researchers have come up with an innovative strategy for depth estimation that combines the best of both worlds to overcome these limitations, extending the applicability of depth from focus/defocus.
Many people are familiar with facial recognition systems that unlock smartphones and game systems or allow access to our bank accounts online. But the current technology can require boxy projectors and lenses. Now, researchers report on a sleeker 3D surface imaging system with flatter, simplified optics. In proof-of-concept demonstrations, the new system recognized the face of Michelangelo's David just as well as an existing smartphone system.
Artificial intelligence using neural networks performs calculations digitally with the help of microelectronic chips. Physicists have now created a type of neural network that works not with electricity but with so-called active colloidal particles. The researchers describe how these microparticles can be used as a physical system for artificial intelligence and the prediction of time series.