Research

MIT researchers unveil a method that lets AI models learn from their own notes

SEAL enables AI to create its own training data in the form of self-edits, promoting continual learning.

By Daniel Reed, AI Research, Safety & Ethics Analyst
Published: December 13, 2025
8 Min Read

Large language models (LLMs) have already amazed us by reading, writing, and answering questions with impressive skill. But once their initial training is done, their knowledge tends to stay frozen, making it tricky to teach them new facts or skills — especially when we don’t have much task-specific data for retraining.

I recently came across MIT's new SEAL framework, an approach that flips that limitation on its head. Instead of relying on pre-designed training data and fixed instructions, SEAL lets AI models generate their own study notes and decide how best to train themselves. It’s a bit like how we humans prepare for tests — by rewriting notes, summarizing key ideas, and testing ourselves repeatedly, instead of just rereading textbooks.


How SEAL lets AI learn like a student

The core idea behind SEAL (which stands for Self-Adapting Language Models) is that the AI produces short natural-language instructions called self-edits. These notes don’t just restate information but can infer new implications, summarize, or even suggest training tweaks like adjusting the learning rate. The AI then fine-tunes itself on these self-made notes, updating its internal parameters slightly.
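To make the idea concrete, here is a purely hypothetical self-edit sketched in Python. The field names, the passage content, and the training hints are all invented for illustration; they are not the format used in the MIT paper.

```python
# A hypothetical self-edit: implications inferred from a source passage,
# plus optional training hints, all generated by the model itself.
# (Field names and values are illustrative, not from the SEAL paper.)
self_edit = {
    "implications": [
        "The passage says the dam was completed in 1936.",
        "Therefore the reservoir could not have existed before 1936.",
    ],
    "training_hints": {"learning_rate": 1e-4, "epochs": 3},
}

# The model then fine-tunes on the notes rendered as plain text,
# rather than on the original passage.
finetune_text = "\n".join(self_edit["implications"])
print(finetune_text)
```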

Just like humans, complex AI systems can’t remain static for their entire lifetimes. They are constantly facing new inputs. SEAL aims to create models that keep improving themselves.

SEAL operates in two loops. In the inner loop, the model generates self-edits based on new readings and updates itself accordingly. Then it tests its own improvements by answering questions or solving puzzles. The outer loop uses reinforcement learning to keep only those self-edits that actually help performance — effectively teaching the AI how to write better notes over time.
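The two loops can be sketched in a few lines of Python. The functions `generate_self_edit`, `finetune`, and `evaluate` below are stand-ins I made up, not MIT's API; only the control flow of sampling edits, updating, testing, and keeping what helps is meant to mirror the description above.

```python
import random

def generate_self_edit(passage, rng):
    """Stand-in: the real model writes natural-language study notes."""
    return f"notes({passage})-v{rng.randint(0, 9)}"

def finetune(model, self_edit):
    """Stand-in for a small parameter update based on the self-edit text."""
    return model + [self_edit]

def evaluate(model, rng):
    """Stand-in reward: downstream question-answering accuracy."""
    return len(model) + rng.random()

def seal_round(model, passages, samples_per_passage=4, seed=0):
    """One outer-loop round: sample several self-edits per passage
    (inner loop), then keep only the best-scoring update, a simple
    rejection-sampling flavor of reinforcement learning."""
    rng = random.Random(seed)
    for passage in passages:
        candidates = []
        for _ in range(samples_per_passage):
            edit = generate_self_edit(passage, rng)
            updated = finetune(model, edit)        # inner loop: apply the edit
            candidates.append((evaluate(updated, rng), updated))
        best_reward, best_model = max(candidates)  # outer loop: filter by reward
        model = best_model
    return model

model = seal_round([], ["passage-1", "passage-2"])
```

The real system updates billions of weights and measures reward with actual test questions; the skeleton above only preserves the shape of the loop.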

Turning text into lasting knowledge

One of the coolest tests for SEAL was teaching the AI new factual knowledge. Instead of training directly on the original text, SEAL lets the model generate notes that highlight logical implications and key facts from a passage. Then the model trains on these notes using small updates.

How MIT's SEAL works. The AI writes “self-edits”, short instructions for how to change its own model, applies those changes, takes a test task, gets a score (reward), and repeats the loop to learn which self-edits help it improve. Image: MIT

Here’s where it gets interesting: without any adaptation, the model in the test answered about 33% of questions correctly. Training directly on the original passages barely bumped that up. But training on its own generated notes improved accuracy to nearly 40%. Even more impressive, notes generated by GPT-4.1 helped push accuracy to about 46%, while SEAL’s own self-learned notes nudged that further to 47%, surpassing the performance of a much larger model’s notes.


And this wasn’t just a fluke; SEAL kept its edge when learning from hundreds of passages simultaneously, suggesting it genuinely learned a general skill: how to write great study notes.


Adapting on the fly for problem solving

SEAL also shines on puzzle-like reasoning tasks that demand quick adaptation. Imagine a small AI given just a few examples to solve visual pattern puzzles with colored grids. Normally, without training, success was zero. With simple test-time training, it reached only 20%. After SEAL’s self-editing process rehearsed multiple study plans and picked the best, success jumped to over 70%!

How SEAL adds new knowledge. The model reads a new passage, writes its own “study notes” (key takeaways/implications), then fine-tunes on those notes. After that, it’s tested with questions about the passage without seeing the original text – and its score becomes the reward signal that guides the next round of learning. Image: MIT

This is a massive boost, showing how self-generated training strategies can help models adapt in real time to new challenges. While a human-designed ideal training plan still hits 100%, SEAL demonstrates that AI can develop its own clever study methods, cutting down the need for human-crafted solutions.
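The "rehearse multiple study plans and pick the best" step amounts to a best-of-N search over self-generated training configurations. The candidate plans and the scoring function below are invented for illustration; in the real setup, scoring means actually fine-tuning on each plan and checking the task's demonstration pairs.

```python
import random

# Hypothetical candidate "study plans" a model might write for itself:
# which data augmentations to generate and which training settings to use.
CANDIDATE_PLANS = [
    {"augment": ["rotate", "flip"], "lr": 1e-4, "epochs": 2},
    {"augment": ["rotate"],         "lr": 5e-5, "epochs": 4},
    {"augment": [],                 "lr": 1e-4, "epochs": 1},
]

def train_and_score(plan, rng):
    """Stand-in for fine-tuning under a plan and evaluating on held-out
    demonstrations. Here, richer augmentation and more epochs score higher."""
    return len(plan["augment"]) + 0.1 * plan["epochs"] + rng.random() * 0.01

def pick_best_plan(plans, seed=0):
    rng = random.Random(seed)
    scored = [(train_and_score(p, rng), i) for i, p in enumerate(plans)]
    _, best_idx = max(scored)
    return plans[best_idx]

best = pick_best_plan(CANDIDATE_PLANS)
```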

Figure 3: Learning from a few examples with SEAL. The model starts with a handful of example puzzles, then writes a “self-edit” that says how it should practice (like what extra training examples to create and what training settings to use). It fine-tunes itself using that plan, and then it’s tested on a new puzzle to see if it improved. Image: MIT

The challenges ahead and why this matters

Of course, SEAL isn’t perfect. One ongoing problem is catastrophic forgetting, where learning new information causes the model to gradually forget what it previously knew. The AI doesn’t crash outright, but older knowledge erodes as new self-edits overwrite it.

Also, running these self-edits requires fine-tuning and testing steps that take up to 45 seconds each, which could become expensive or slow with bigger models or massive datasets. Solutions like letting AIs generate their own tests to evaluate themselves might reduce this overhead in the future.
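Back-of-the-envelope arithmetic shows why this adds up. The 45-second figure is the article's upper bound; the passage and sample counts below are assumptions, not numbers from the paper.

```python
# Rough cost of one reinforcement-learning round over self-edits.
seconds_per_self_edit = 45   # fine-tune + test, the article's upper bound
passages = 50                # assumed corpus size
samples_per_passage = 4      # assumed self-edits sampled per passage

total_seconds = seconds_per_self_edit * passages * samples_per_passage
print(f"{total_seconds} s ≈ {total_seconds / 3600:.1f} h per round")  # 9000 s ≈ 2.5 h
```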

Forgetting after repeated self-updates. The model is updated on one new passage at a time, then re-tested on earlier passages. The heatmap shows that as it learns newer passages, its performance on older ones often drops (it “forgets”). Image: MIT

Despite the hurdles, SEAL points us toward a future where AI models don’t get stuck as static entities but instead keep growing, revising what they know and how they know it — much like how people learn throughout their lives. This capability would be a game changer for AI assistants that need to stay updated, scientific research bots that digest new papers, or educational tools that improve by catching their own mistakes and filling in gaps.

SEAL offers a concrete path toward language models that are not just trained once and frozen, but that continue to learn in a data-constrained world.

In other words, teaching AI to take and learn from its own notes might be the breakthrough needed for models that evolve continuously, making them more resilient, adaptable, and ultimately, smarter.


Key takeaways

  • SEAL enables AI models to generate self-edits—study notes that help them improve continuously without human-designed datasets.
  • Training on self-generated notes raised knowledge retention and reasoning success dramatically, showing models can learn how to learn.
  • Challenges like catastrophic forgetting and costly training remain, but the approach points toward adaptable, lifelong learning AI systems.

It’s exciting to watch AI inch closer to learning more like we do – revising knowledge, testing itself, and growing over time instead of just stopping after initial training. SEAL is a step in that direction, and I can’t wait to see where this idea leads next.

Tagged: AI, AI assistants, AI Models, AI research, MIT, puzzles
