AI Pulse

Researchers find that retraining only small parts of AI models can cut costs and prevent forgetting

Enterprise efforts to fine-tune large language models (LLMs) can sometimes result in a phenomenon known as “catastrophic forgetting,” where models lose their ability to perform previously learned tasks. A study from the University of Illinois Urbana-Champaign introduces a new training method aimed at addressing this issue, with a particular focus on models that generate responses from images.


Self-improving language models are becoming reality with MIT's updated SEAL technique

Researchers at the Massachusetts Institute of Technology (MIT) have developed and open-sourced a technique called SEAL (Self-Adapting LLMs) that enables large language models (LLMs) to autonomously enhance their capabilities by generating synthetic data for fine-tuning. The method was detailed in an initial paper released in June, and an updated version was published last month.



Microsoft AI announces first image generator created in-house

Microsoft has introduced its first in-house text-to-image generator, named MAI-Image-1. The announcement follows the company’s recent release of its first internally developed AI models, and Microsoft describes MAI-Image-1 as a significant advancement in its ongoing AI initiatives. To improve the generator’s output quality, Microsoft worked with creative professionals during development.

