🧵Sunday Threads - Microsoft’s “Muse” Could Be A Big Moment for Gaming
Also, can AI really develop its own internal value structure over time?
A few 🧵 worth reading into 🧠 for your Sunday…
NEW: Microsoft just unveiled Muse!
It's an AI model that can generate minutes of cohesive gameplay from a single second of frames and controller actions.
The implications for gaming are absolutely massive:
— Rowan Cheung (@rowancheung)
4:02 PM • Feb 19, 2025
Context
- Muse = the first World and Human Action Model (WHAM)
- Trained on 1B+ gameplay images
- Used 7+ years of continuous gameplay data
- Learned from real Xbox multiplayer matches
The graphics are obviously poor right now, but demonstrating the ability to generate gameplay in real time (rather than authoring it in advance) shifts gaming into a whole new category.
For comparison, imagine picking up a book that writes itself, with relevant and coherent content appearing as you turn the pages…
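At a high level, a world-and-human-action model like this can be thought of as an autoregressive rollout: condition on a short window of real frames and controller inputs, then alternately predict the next frame and the next action. A minimal sketch of that loop in Python (all names here, such as `predict_frame` and `policy`, are hypothetical illustrations, not Muse’s actual API):

```python
def generate_gameplay(model, seed_frames, seed_actions, num_steps, policy):
    """Hypothetical autoregressive rollout for a world-and-action model.

    Conditions on ~1 second of real frames + controller inputs, then
    alternately predicts the next frame (world-model step) and picks the
    next controller action (human-action step).
    """
    frames, actions = list(seed_frames), list(seed_actions)
    for _ in range(num_steps):
        next_frame = model.predict_frame(frames, actions)  # world-model step
        next_action = policy(next_frame)                   # human-action step
        frames.append(next_frame)
        actions.append(next_action)
    return frames
```

The point is structural: because each new frame is generated from the growing history, the rollout can continue for minutes of coherent gameplay from only a second of seed data.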
Why care?
The gaming industry seemed to hate AI from the start, but this implies more than just ripping off existing games into AI-generated copies. It could bring about a whole new way of creating digital experiences that, if you’ve ever read Ready Player One, could easily become a new paradigm for humanity’s online interactions in general.
Weâve found as AIs get smarter, they develop their own coherent value systems.
For example they value lives in Pakistan > India > China > US
These are not just random biases, but internally consistent values that shape their behavior, with many implications for AI alignment. 🧵
— Dan Hendrycks (@DanHendrycks)
4:01 PM • Feb 11, 2025
Context
- Apparently, advanced AI models exhibit internal systems of “value” that can quantify anything from human life to political preferences.
- These “utilities” not only emerge, but are acted upon.
- As AIs get smarter, they become more opposed to having their values changed.
- The researchers propose “Utility Engineering,” which would simulate a citizen assembly internally in the model, using what is essentially a voting process to rewrite the value structure over time.
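To make the voting idea concrete, here is a minimal sketch of how a simulated assembly’s ranked ballots could be aggregated and then used to nudge a model’s utilities. This illustrates the general concept only; it is not the paper’s actual algorithm, and every name in it is hypothetical:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate ranked ballots into one score per option (Borda count).

    Each ballot lists options most-preferred first; position i on a
    ballot of length n earns n - 1 - i points.
    """
    scores = defaultdict(int)
    for ballot in rankings:
        n = len(ballot)
        for i, option in enumerate(ballot):
            scores[option] += n - 1 - i
    return dict(scores)

def revise_utilities(current, assembly_scores, step=0.5):
    """Move current utilities a fraction of the way toward the
    assembly's aggregate (a hypothetical, illustrative update rule)."""
    keys = set(current) | set(assembly_scores)
    return {k: current.get(k, 0.0)
               + step * (assembly_scores.get(k, 0) - current.get(k, 0.0))
            for k in keys}

# Toy example: three simulated assembly members rank abstract options A, B, C.
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
scores = borda_aggregate(ballots)  # A: 5, B: 3, C: 1
```

Repeating this aggregate-then-update step over time is the “voting process that rewrites the value structure” idea in miniature.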
Why care?
We knew bias inside AI models was a problem from the start, and the idea that they can exhibit emergent value structures from within implies another layer of difficulty in addressing those issues. Efforts to safeguard AI models will also have to be continuous, similar to how laws change over time to manage societal deviations from an established norm, except AI seems to move and grow and change a bit faster than humanity’s collective consciousness.
OpenAI just dropped a paper that reveals the blueprint for creating the best AI coder in the world.
But here’s the kicker: this strategy isn’t just for coding—it’s the clearest path to AGI and beyond.
Let’s break it down 🧵👇
— MatthewBerman (@MatthewBerman)
4:51 PM • Feb 16, 2025
Context
- Reinforcement learning + test-time compute = the key to superintelligence
- Removing human-engineered inference strategies from the training loop produced the biggest jump in quality
- Introducing verifiable rewards (such as winning a game or getting a problem correct) incentivizes the model to keep improving
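A verifiable reward is simply a reward signal computed by an automatic checker (a unit test, a game outcome, an exact-match answer) rather than by human judgment. A toy sketch of the idea, with illustrative names not taken from the paper:

```python
def verifiable_reward(candidate_answer: str, check) -> float:
    """Binary reward: 1.0 if an automatic checker verifies the answer,
    else 0.0. `check` can be any programmatic verifier, e.g. a unit
    test, an exact-match comparison, or a win/loss signal from a game."""
    return 1.0 if check(candidate_answer) else 0.0

# Toy verifier: does the model's output equal the known solution of 12 * 7?
check_arithmetic = lambda ans: ans.strip() == "84"

rewards = [verifiable_reward(a, check_arithmetic) for a in ["84", "48", " 84 "]]
# rewards -> [1.0, 0.0, 1.0]
```

Because the checker is mechanical, the training loop can score millions of attempts without a human in it, which is exactly why this pairs so well with reinforcement learning.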
Why care?
OpenAI has taken on a Big Tech / Evil Empire sort of feel as of late, so it’s easy to forget that the core of their business is really just advancing superintelligent AI research as quickly as possible. With these findings re: incentives, the human element (/limitation) can effectively be removed, and superintelligence (at least as far as we “limited” beings can perceive it) can begin to emerge.