Recent AI News and Breakthroughs

  • Researchers develop framework to merge AI and human intelligence for process safety

    Artificial intelligence (AI) has grown rapidly in the last few years, enabling industries to automate operations and improve their efficiency.

  • Can consciousness exist in a computer simulation?

    Would it be desirable for artificial intelligence to develop consciousness? Not really, for a variety of reasons, according to Dr. Wanja Wiese from the Institute of Philosophy II at Ruhr University Bochum, Germany.

  • Enhancing adaptive radar with AI and an enormous open-source dataset

    The world around us is constantly being flash photographed by adaptive radar systems. From salt flats to mountains and everything in between, adaptive radar is used to detect, locate and track moving objects. Just because human eyes can't see these ultra-high frequency (UHF) ranges doesn't mean they're not taking pictures.

  • Study says blockchain could help combat AI misinformation

    Two Master of Accounting students in the UO's Lundquist College of Business are shaking up the academic publishing world with a new paper on a timely topic: artificial intelligence and blockchain.

  • Neural network learns to build maps using Minecraft

    Imagine you are in the middle of an unknown town. Even if your surroundings are initially unfamiliar, you can explore around and eventually create a mental map of your environment—where the buildings, streets, signs, and so on are in relation to one another. This ability to construct spatial maps in the brain is the basis of even more advanced types of cognition in humans: For example, it is theorized that language is encoded in a map-like structure in the brain.

  • New framework allows robots to learn via online human demonstration videos

    To be successfully deployed in real-world settings, robots should be capable of reliably completing various everyday tasks, ranging from household chores to industrial processes. Some of the tasks they could complete entail manipulating fabrics, for instance when folding clothes to put them in a wardrobe or helping older adults with mobility impairments to knot their ties before a social event.

  • Machine learning unlocks secrets to advanced alloys

    The concept of short-range order (SRO)—the arrangement of atoms over small distances—in metallic alloys has been underexplored in materials science and engineering. But the past decade has seen renewed interest in quantifying it, since decoding SRO is a crucial step toward developing tailored high-performing alloys, such as stronger or heat-resistant materials.

  • Engineers develop OptoGPT for improving solar cells, smart windows, telescopes and more

    Solar cell, telescope and other optical component manufacturers may be able to design better devices more quickly with AI. OptoGPT, developed by University of Michigan engineers, harnesses the computer architecture underpinning ChatGPT to work backward from desired optical properties to the material structure that can provide them. The paper is published in the journal Opto-Electronic Advances.

  • Free 3D-printing datasets enable analysis, confidence in printed parts

    The Department of Energy's Oak Ridge National Laboratory has publicly released a new set of additive manufacturing data that industry and researchers can use to evaluate and improve the quality of 3D-printed components. The breadth of the datasets can significantly boost efforts to verify the quality of additively manufactured parts using only information gathered during printing, without requiring expensive and time-consuming post-production analysis.

  • Artists are taking things into their own hands to protect their work from generative AI

    The oil painting depicts a woman standing on a podium, her arm aloft as she grasps a laurel crown in her hand. A scarlet cloak drapes across her chest as she stares at the viewer. To the naked eye, the painting looks like a normal piece in an online portfolio. But the version of the painting uploaded online belies a hidden defense system—a tool called Glaze that masks the artist's style and cloaks the art from use by generative AI.

  • Artificial intelligence meets cartography: Mapping tools can create satellite images from text prompts

    Most people interact with maps regularly, for example, when they're trying to get from point A to point B, track the weather or plan a trip. But beyond those daily activities, maps are also increasingly being combined with artificial intelligence to create powerful tools for urban modeling, navigation systems, natural hazard forecasting and response, climate change monitoring, virtual habitat modeling and other kinds of surveillance.

  • Study: Large language models are biased, but can still help analyze complex data

    In a pilot study, posted to the arXiv preprint server, researchers have found evidence that large language models (LLMs) have the ability to analyze controversial topics such as the Australian Robodebt scandal in similar ways to humans—and sometimes exhibit similar biases.

  • Unlocking dynamic systems: A new multiscale neural approach

    Dynamical systems describe the evolution of natural phenomena over time and space through mathematical frameworks, often using differential equations. Accurate predictions in these systems are crucial for various applications, yet traditional methods face challenges due to rigidity and complex dynamic behaviors.

  • Creating and verifying stable AI-controlled robotic systems in a rigorous and flexible way

    Neural networks have made a seismic impact on how engineers design controllers for robots, catalyzing more adaptive and efficient machines. Still, these brain-like machine-learning systems are a double-edged sword: Their complexity makes them powerful, but it also makes it difficult to guarantee that a robot powered by a neural network will safely accomplish its task.

  • Hong Kong is testing out its own ChatGPT-style tool as OpenAI takes extra steps to block access

    Hong Kong's government is testing the city's own ChatGPT-style tool for its employees, with plans to eventually make it available to the public, its innovation minister said after OpenAI took extra steps to block access from the city and other unsupported regions.

  • 'Extreme boosting' AI model can cut through social media 'noise'

    Social media offers a treasure trove of data for researchers to understand how organizations and individuals use the technology to communicate with and grow their base of followers. However, manually analyzing the content can be time-consuming or, in some cases, simply impossible due to the volume of data. While machine-learning models can help, they present their own set of challenges.

  • Machine learning framework maps global rooftop growth for sustainable energy and urban planning

    A novel machine learning framework developed by IIASA researchers to estimate global rooftop area growth from 2020 to 2050 can aid in planning sustainable energy systems, urban development, and climate change mitigation, and has potential for significant benefits in emerging economies.

  • Online experiment reveals people prefer AI to make redistributive decisions

    A new study has revealed that people prefer artificial intelligence (AI) over humans when it comes to redistributive decisions.

  • Sorry, I didn't get that: Evaluating usability issues with AI-assisted smart speakers

    With the rapid development of AI technology, voice-controlled smart speakers are becoming increasingly popular due to their convenience and ability to control compatible home devices. Despite this rise in use, smart speakers often lack screens and provide little of the visual feedback common to manually operated devices. This complicates their usability, leaving room for research and subsequent improvement.

  • Large language models make human-like reasoning mistakes, researchers find

    Large language models (LLMs) can complete abstract reasoning tasks, but they are susceptible to many of the same types of mistakes made by humans. Andrew Lampinen, Ishita Dasgupta, and colleagues tested state-of-the-art LLMs and humans on three kinds of reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task.

  • Microsoft unveils software that allows LLMs to work with spreadsheets

    A team of programmers and AI specialists at Microsoft has developed an AI tool called SpreadsheetLLM that applies large language model capabilities to spreadsheets. In their study, now posted on the arXiv preprint server, the group developed SheetCompressor, an encoding framework that compresses spreadsheets effectively for use by large language models (LLMs).

  • New technique to assess a general-purpose AI model's reliability before it's deployed

    Foundation models are massive deep-learning models that have been pretrained on an enormous amount of general-purpose, unlabeled data. They can be applied to a variety of tasks, like generating images or answering customer questions.

  • New system enables intuitive teleoperation of a robotic manipulator in real-time

    Imitation learning is a promising method to teach robots how to reliably complete everyday tasks, such as washing dishes or cooking. Despite their potential, imitation learning frameworks rely on detailed human demonstrations, which should include data that can help to reproduce specific movements using robotic systems.

  • Temporal shift for speech emotion recognition

    Humans can guess how someone on the other end of a phone call is feeling based on how they speak as well as what they say. Speech emotion recognition is the artificial intelligence version of this ability. Seeking to address the issue of channel alignment in downstream speech emotion recognition applications, a research group at East China Normal University in Shanghai developed a temporal shift module that outperforms state-of-the-art methods in fine-tuning and feature-extraction scenarios.

  • A new neural network makes decisions like a human would

    Humans make nearly 35,000 decisions every day, from whether it's safe to cross the road to what to have for lunch. Every decision involves weighing the options, remembering similar past scenarios, and feeling reasonably confident about the right choice. What may seem like a snap decision actually comes from gathering evidence from the surrounding environment. And often the same person makes different decisions in the same scenarios at different times.

  • Self-organizing drone flock demonstrates safe traffic solution for smart cities of the future

    After creating the world's first self-organizing drone flock, researchers at Eötvös Loránd University (ELTE), Budapest, Hungary have now also demonstrated the first large-scale autonomous drone traffic solution. This fascinating new system is capable of far more than what could be executed with human pilots.

  • OpenAI whistleblowers ask SEC to investigate the company's non-disclosure agreements with employees

    OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission and asked the agency to investigate whether the ChatGPT maker illegally restricted workers from speaking out about the risks of its artificial intelligence technology.

  • NASA cloud-based platform could help streamline, improve air traffic

    Just like your smartphone navigation app can instantly analyze information from many sources to suggest the best route to follow, a NASA-developed resource is now making data available to help the aviation industry do the same thing.

  • Smart diagnostics: Possible uses of generative AI to empower nuclear plant operators

    Imagine being able not only to detect a fault in a complex system but also to receive a clear, understandable explanation of its cause. Just like having a seasoned expert by your side. This is the promise of combining a large language model (LLM) such as GPT-4 with advanced diagnostic tools.

  • Training AI requires more data than we have—generating synthetic data could help solve this challenge

    The rapid rise of generative artificial intelligence like OpenAI's GPT-4 has brought remarkable advancements, but it also presents significant risks.
