
AI News






Recent AI News and Breakthroughs

    • When AI is the editor, consumer complaints are more likely to succeed

      Consumers who want to submit a complaint to an agency such as the Consumer Financial Protection Bureau face a task that, for some, can be daunting: they must fill out a form that requires them to explain the issue, clearly and convincingly, in their own words. Those who are not native English speakers or simply don't regularly communicate in writing may lack the skills needed to convincingly make their cases.

    • Extended reality adds meat flavors to plant-based meals for eco-friendly dining

      Extended reality makes it possible to artificially modify human sensations. For example, researchers have succeeded in using extended reality to make vegetarian food even more attractive.

    • Why AI can't take over creative writing

      In 1948, the founder of information theory, Claude Shannon, proposed modeling language in terms of the probability of the next word in a sentence given the previous words. These types of probabilistic language models were largely derided, most famously by linguist Noam Chomsky: "The notion of 'probability of a sentence' is an entirely useless one."

    • An AI companion chatbot is inciting self-harm, sexual violence and terror attacks

      In 2023, the World Health Organization declared loneliness and social isolation as a pressing health threat. This crisis is driving millions to seek companionship from artificial intelligence (AI) chatbots.

    • Researchers teach LLMs to solve complex planning challenges

      Imagine a coffee company trying to optimize its supply chain. The company sources beans from three suppliers, roasts them at two facilities into either dark or light coffee, and then ships the roasted coffee to three retail locations. The suppliers have different fixed capacity, and roasting costs and shipping costs vary from place to place.

    • AI coming for anime but Ghibli's Miyazaki irreplaceable, son says

      Artificial intelligence risks taking Japanese anime artists' jobs but nothing can replicate Hayao Miyazaki, the creative lifeblood of the studio behind classics such as "Spirited Away," his son told AFP.

    • Meta's head of AI research stepping down

      The head of Meta's artificial intelligence research division said she plans to step down, vacating a high-profile position at a time of intense competition in the development of AI technology.

    • AI thinks like us—flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests

      Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI's ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations—showing biases like overconfidence or the hot-hand (gambler's) fallacy—yet behaves unlike humans in others (e.g., not suffering from base-rate neglect or sunk-cost fallacies).

    • How neural networks represent data: A potential unifying theory for key deep learning phenomena

      How do neural networks work? It's a question that can confuse novices and experts alike. A team from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) says that understanding these representations, as well as how they inform the ways that neural networks learn from data, is crucial for improving the interpretability, efficiency, and generalizability of deep learning models.

    • ChatGPT's Studio Ghibli-style images raise new copyright problems

      Social media has recently been flooded with images that look like they belong in a Studio Ghibli film. Selfies, family photos and even memes have been re-imagined with the soft pastel palette characteristic of the Japanese animation company founded by Hayao Miyazaki.

    • Meta allegedly used pirated books to train AI—US courts may decide if this is 'fair use'

      Companies developing AI models, such as OpenAI and Meta, train their systems on enormous datasets. These consist of text from newspapers, books (often sourced from unauthorized repositories), academic publications and various internet sources. The material includes works that are copyrighted.

    • In shift, OpenAI announces open AI model

      Artificial intelligence powerhouse OpenAI, the creator of ChatGPT, on Monday announced it is building a more open generative AI model as it faces growing competition in the open-source space from Chinese rival DeepSeek and Meta.

    • OpenAI says it raised $40 bn at valuation of $300 bn

      OpenAI on Monday said it raised $40 billion in a new funding round that valued the ChatGPT maker at $300 billion, the biggest capital-raising session ever for a startup.

    • Standardized security playbooks can improve protection against cyberattacks

      One attack, many responses—organizations use various solutions to ward off online attacks. The playbooks that outline countermeasures also vary in their specifics. In the CyberGuard project, Fraunhofer researchers are working on standardized playbooks to help companies optimize their security strategies and align them with each other. The playbooks are generated by large language models and support the automation of IT security.

    • Self-organizing 'infomorphic neurons' can learn independently

      Researchers have developed "infomorphic neurons" that learn independently, mimicking their biological counterparts more accurately than previous artificial neurons. A team of researchers from the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN) at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) has programmed these infomorphic neurons and constructed artificial neural networks from them.

    • Experiments show adding CoT windows to chatbots teaches them to lie less obviously

      Over the past year, AI researchers have found that when AI chatbots such as ChatGPT find themselves unable to answer questions that satisfy users' requests, they tend to offer false answers. In a new study, as part of a program aimed at stopping chatbots from lying or making up answers, a research team added Chain of Thought (CoT) windows. These force the chatbot to explain its reasoning as it carries out each step on its path to finding a final answer to a query.

    • 'Something is rotten': Apple's AI strategy faces doubts

      Has Apple, the biggest company in the world, bungled its generative artificial intelligence strategy?

    • How AI is 'saving the Mona Lisa': A paradigm shift in digital forensics

      In the digital age, the recovery of deleted data is a key challenge in digital forensics. With the constant increase in data volumes and storage methods, conventional methods are reaching their limits. This is where the Carve-DL research project comes in: an AI-based solution that uses learning algorithms to recover files that are difficult to reconstruct, sustainably improving the efficiency and accuracy of digital data reconstruction.

    • Brownie points for ChatGPT's food analysis skills

      AI is changing the way we work, create, and share information—but brownies? A new study from the University of Illinois Urbana-Champaign explores how ChatGPT can be used in the sensory evaluation of foods, specifically brownies. The study, published in the journal Foods, offers insights that could streamline development of new products, and possibly enhance recipes moving forward.

    • New miniature laboratories are ensuring that AI doesn't make mistakes

      Anyone who develops an AI solution sometimes goes on a journey into the unknown. At least at the beginning, researchers and designers do not always know whether their algorithms and AI models will work as expected or whether the AI will ultimately make mistakes.

    • Advancing semiconductor devices for AI: Single transistor acts like neuron and synapse

      Researchers from the National University of Singapore (NUS) have demonstrated that a single, standard silicon transistor, the fundamental building block of microchips used in computers, smartphones and almost every electronic system, can function like a biological neuron and synapse when operated in a specific, unconventional way.

    • Enhanced 6D pose estimation method promises better robotic object handling

      Recent work in 6D object pose estimation holds significant promise for advancing robotics, augmented reality (AR), virtual reality (VR), as well as autonomous navigation. The research, published in the International Journal of Computational Science and Engineering, introduces a method that enhances the accuracy, generalization, and efficiency of determining an object's rotation and translation from a single image. This could significantly improve robots' ability to interact with objects, especially in dynamic or obstructed environments.

    • BAFT AI autosave system can cut training losses by 98%

      A research collaboration between Shanghai Jiao Tong University, Shanghai Qi Zhi Institute, and Huawei Technologies has introduced BAFT, a cutting-edge autosave system for AI training that minimizes downtime and optimizes efficiency.

    • Human-collaborative robot operates in cybernics space for daily support

      Aging and illness in humans are accompanied by decline in motor and cognitive functions, causing difficulties in daily life and communication and often leading to anxiety and depression. Human-collaborative robots that can interpret the intentions of humans promise to mitigate these issues and enhance independence.

    • Fear of addiction, fear of missing out: How increased AI use can trigger anxiety

      A new study by Prof. Guy Hochman and Adi Frenkenberg from the Baruch Ivcher School of Psychology at Reichman University presents new findings on the relationship between anxiety, motivation, and dependence on artificial intelligence, exploring how AI usage affects us emotionally.

    • ChatGPT's viral Studio Ghibli-style images highlight AI copyright concerns

      Fans of Studio Ghibli, the famed Japanese animation studio behind "Spirited Away" and other beloved movies, were delighted this week when a new version of ChatGPT let them transform popular internet memes or personal photos into the distinct style of Ghibli founder Hayao Miyazaki.

    • Study unveils AI-driven, real-time, hand-object pose estimation framework

      A new AI-powered framework has been developed, offering new capabilities for the real-time analysis of two hands engaged in manipulating an object.

    • Engineers create 'smart' system to prevent future infrastructure disasters

      When the Francis Scott Key Bridge in Baltimore, Maryland, collapsed on March 26, 2024, engineers and city managers around the U.S. and world scrambled to assess the safety of infrastructure in their communities. Michigan State University researchers have developed a "deploy-and-forget" system that combines sensors with artificial intelligence, or AI, to assess the health of infrastructure like bridges, roads and dams, before and after events that could damage them.

    • AI's impact on jobs, tech's touchy topic

      "Stop Hiring Humans" read a provocative sign at an AI conference in Las Vegas, where the impact of new artificial intelligence models on the world of work had sparked some unease.

    • Firms and researchers at odds over superhuman AI

      Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.
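Shannon's next-word-probability idea, mentioned in the creative-writing item above, can be sketched with simple bigram counts. This is a minimal illustration only; the toy corpus and word choices below are made up, not taken from any of the articles:

```python
from collections import Counter, defaultdict

# Toy corpus (an assumption for illustration, not from the article)
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigrams: for each word, how often each word follows it
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_prob(prev, nxt):
    """P(next word | previous word), estimated by relative bigram frequency."""
    counts = following[prev]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

# "the" is followed by "cat" twice and "mat" once in the corpus,
# so P(cat | the) = 2/3
print(next_word_prob("the", "cat"))
```

Modern large language models are, at heart, far more elaborate versions of this same conditional next-word distribution, conditioned on much longer contexts.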
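The coffee supply-chain item above describes a classic optimization problem. The supplier-to-roaster portion can be posed as an integer transportation problem; the sketch below brute-forces a tiny instance (all capacities and costs are made-up assumptions, not figures from the article or the MIT work):

```python
from itertools import product

# Hypothetical instance: 3 suppliers shipping beans to 2 roasting facilities
supply = [4, 5, 3]          # bean capacity per supplier (tons)
demand = [6, 6]             # beans required at each roaster (tons)
cost = [[2, 3],             # shipping cost per ton, supplier i -> roaster j
        [4, 1],
        [3, 3]]

def solve():
    """Enumerate every integer shipment plan and keep the cheapest one
    that respects supplier capacities and meets roaster demand exactly."""
    best_cost, best_plan = float("inf"), None
    # x flattens the 3x2 shipment matrix into 6 quantities
    for x in product(range(max(demand) + 1), repeat=6):
        plan = [x[0:2], x[2:4], x[4:6]]
        if any(sum(row) > supply[i] for i, row in enumerate(plan)):
            continue  # a supplier would ship more than it can source
        if any(sum(plan[i][j] for i in range(3)) != demand[j] for j in range(2)):
            continue  # a roaster would not receive exactly what it needs
        total = sum(cost[i][j] * plan[i][j]
                    for i in range(3) for j in range(2))
        if total < best_cost:
            best_cost, best_plan = total, plan
    return best_cost, best_plan

best_cost, best_plan = solve()
print(best_cost, best_plan)
```

Real instances with many facilities, roast types, and retail locations are solved with linear-programming or mixed-integer solvers rather than enumeration; the point of the research described above is getting LLMs to formulate such models from a plain-language description.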
